2306.01471
Guiding Text-to-Text Privatization by Syntax
Metric Differential Privacy is a generalization of differential privacy tailored to address the unique challenges of text-to-text privatization. By adding noise to the representation of words in the geometric space of embeddings, words are replaced with words located in the proximity of the noisy representation. Since embeddings are trained based on word co-occurrences, this mechanism ensures that substitutions stem from a common semantic context. Without considering the grammatical category of words, however, this mechanism cannot guarantee that substitutions play similar syntactic roles. We analyze the capability of text-to-text privatization to preserve the grammatical category of words after substitution and find that surrogate texts consist almost exclusively of nouns. Lacking the capability to produce surrogate texts that correlate with the structure of the sensitive texts, we encompass our analysis by transforming the privatization step into a candidate selection problem in which substitutions are directed to words with matching grammatical properties. We demonstrate a substantial improvement in the performance of downstream tasks by up to $4.66\%$ while retaining comparative privacy guarantees.
Stefan Arnold, Dilara Yesilbas, Sven Weinzierl
2023-06-02T11:52:21Z
http://arxiv.org/abs/2306.01471v1
# Guiding Text-to-Text Privatization by Syntax ###### Abstract _Metric Differential Privacy_ is a generalization of differential privacy tailored to address the unique challenges of text-to-text privatization. By adding noise to the representation of words in the geometric space of embeddings, words are replaced with words located in the proximity of the noisy representation. Since embeddings are trained based on word co-occurrences, this mechanism ensures that substitutions stem from a common semantic context. Without considering the grammatical category of words, however, this mechanism cannot guarantee that substitutions play similar syntactic roles. We analyze the capability of text-to-text privatization to preserve the grammatical category of words after substitution and find that surrogate texts consist almost exclusively of nouns. Lacking the capability to produce surrogate texts that correlate with the structure of the sensitive texts, we encompass our analysis by transforming the privatization step into a candidate selection problem in which substitutions are directed to words with matching grammatical properties. We demonstrate a substantial improvement in the performance of downstream tasks by up to \(4.66\%\) while retaining comparative privacy guarantees. ## 1 Introduction From compliance with stringent data protection regulations to building trust, privacy emerged as a formidable challenge to applications that build on user-generated data, and consensus exists regarding the need to safeguard user privacy. In the context of text analysis, privacy is typically protected by sanitizing personally identifiable information from the text via ad-hoc filtering or anonymization. The literature is replete with naive approaches that either redact words from the text or insert distractive words into the text. Using generalization and suppression on quasi-identifiers, an intuitive way of expressing privacy is presented by \(k\)-anonymity (Sweeney, 2002) and its notable adaptations for text data (Jiang et al., 2009; Sanchez and Batet, 2016). However, these approaches are fundamentally flawed. Incapable of anticipating an adversary's side knowledge, most anonymization schemes are vulnerable to re-identification and thus provably non-private. As text conveys seemingly innocuous information, researchers demonstrated that this information can be leveraged to identify authorship (Song and Shmatikov, 2019) or disclose identifiable information (Carlini et al., 2020; Pan et al., 2020; Song and Raghunathan, 2020; Thomas et al., 2020). Carlini et al. (2020), for instance, recovered verbatim text from the training corpus using black-box querying to a language model. Building upon noise calibration, _Differential Privacy_ (DP) (Dwork et al., 2006b) attracted considerable attention for their robust notion of privacy. For text analysis, DP is applied to the vector-valued representation of text data (Coavoux et al., 2018; Weggenmann and Kerschbaum, 2018; Vu et al., 2019). We focus on _Metric Differential Privacy_(Chatzikokolakis et al., 2013), in which data is processed independently, similar to the setting of randomized response (Kasiviswanathan et al., 2011). To avoid the curse of dimensionality of randomized response, noise is scaled by a general distance metric. For text-to-text privatization, Feyisetan et al. 
(2020) adopted a distance metric so that words that are close (_i.e._ more similar) to a word are assigned with a higher substitution probability than those that are more distant (_i.e._ less similar). This requires that the text is mapped onto a continuous embedding space (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017). Proceeding from the embedding, each word in the text is privatized by a three-step protocol: (1) retrieving the vector representation of the word, (2) perturbing the vector representation of the word with noise sampled from a multivariate distribution, and (3) projecting the noisy representation of the word back to the discrete vocabulary space. As the noisy representations are unlikely to exactly represent words in the embedding space, a nearest neighbor approximation is returned. Since text-to-text privatization operates directly on embeddings and words in the embedding space are mapped based on co-occurrences, words tend to be substituted by words that stem from a common semantic context. However, there is no guarantee that words are substituted by words that serve similar roles within the grammatical structure of a text. Motivated by the example of sentiment analysis, in which sentiment is typically expressed by adjectives and forms of adjectives [1], we hypothesize that substitutions strictly based on co-occurrences may degrade downstream performance. This hypothesis is in line with linguists finding repeated evidence for the relevance of grammatical properties for language understanding [11]. We summarize our contributions as follows: * We investigate text-to-text privatization via metric differential privacy in terms of its capability to preserve the grammatical properties of words after substitution. We find that privatization produces texts that consist to a large extent of incoherent nouns. * We incorporate grammatical categories into the privatization step in the form of a constraint to the candidate selection. We demonstrate that broadening the candidate pool to \(k>1\) (instead of \(k=1\)) and selecting a substitution with matching grammatical properties amplifies the performance in downstream tasks while maintaining an equivalent level of privacy. ## 2 Preliminaries ### Differential Privacy _Differential Privacy_ (DP) [13] emerged as a robust notion for privacy applied in privacy-preserving data mining and machine learning. Due to its composability and robustness to post-processing regardless of an adversary's side knowledge, it formalizes privacy without the critical pitfalls of previous anonymization schemes. To ensure a consistent understanding of the algorithmic foundation of differential privacy, we present a brief taxonomy and a formal definition of the variants used for text analysis. Formally, a randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{R}\) with domain \(\mathcal{D}\) and range \(\mathcal{R}\) satisfies \(\varepsilon\)-indistinguishability if any two adjacent inputs \(d,d^{\prime}\in\mathcal{D}\) and for any subset of outputs \(S\subseteq\mathcal{R}\) it holds that: \[\frac{\mathbb{P}[\mathcal{M}(d)\in S]}{\mathbb{P}[\mathcal{M}(d^{\prime})\in S ]}\leq e^{\varepsilon}. \tag{1}\] At a high level, a randomized mechanism is differentially-private if the output distributions from two adjacent datasets are (near) indistinguishable, where any two datasets are considered adjacent that differ in at most one record. 
An adversary seeing the output can therefore not discriminate whether a particular observation was used. This notion of indistinguishability is controlled by the parameter \(\varepsilon\) acting as a privacy budget. It defines the strength of the privacy guarantee (with \(\varepsilon\to 0\) representing strict privacy and \(\varepsilon\rightarrow\infty\) representing the lack of privacy). To enhance the accounting of the privacy budget, several relaxations exist [13, 14, 15]. Depending on the setting, DP can be categorized into _global_ DP [13] and _local_ DP [12].

Global DP addresses the setting in which privacy is defined with respect to aggregate statistics. It assumes a trusted curator who can collect and access raw user data. The randomized mechanism is applied to the collected dataset to produce differentially private output for downstream use. With noise drawn from a predetermined distribution, the design of the randomized mechanism builds upon an additive noise mechanism. Commonly used distributions for adding noise include the Laplace and Gaussian distributions [13]. The noise is further calibrated according to the function's sensitivity and the privacy budget. This technique is useful for controlling the disclosure of private information of records processed with real-valued and vector-valued functions.

Local DP addresses the setting in which privacy is defined with respect to individual records. In contrast to global DP, local DP does not rely on a trusted curator. Instead of a trusted curator that applies the randomized mechanism, the randomized mechanism is applied to all records independently to provide plausible deniability [10]. The randomized mechanism to achieve local DP is typically _Randomized Response_ (RR) (Warner, 1965), which protects private information by answering the sensitive query with a plausible response. Since we aim for text-to-text privatization, formulating DP in the local setting through RR appears to be a natural solution. However, the strong privacy guarantees constituted by RR impose requirements that render it impractical for text. That is, RR requires that a sentence \(s\) must have a non-negligible probability of being transformed into any other sentence \(s^{\prime}\), regardless of how unrelated \(s\) and \(s^{\prime}\) are. This indistinguishability constraint makes it virtually impossible to enforce that the semantics of a sentence \(s\) are approximately captured by a privatized sentence \(s^{\prime}\). Since the number of possible sentences grows exponentially with the length \(|s|\), the set of sentences semantically related to \(s\) receives a vanishingly small probability under RR (Feyisetan et al., 2020).

### Metric Differential Privacy

_Metric Differential Privacy_ (Chatzikokolakis et al., 2013) is a generalization of differential privacy that originated in the context of location-based privacy, where locations close to a user are assigned a high probability, while distant locations are given negligible probability. By using word embeddings as a corollary to geo-location coordinates, metric differential privacy was adopted from location analysis to textual analysis by Feyisetan et al. (2020). We follow the formulation of Xu et al. (2021) for metric differential privacy in the context of textual analysis.
Equipped with a discrete vocabulary set \(\mathcal{W}\), an embedding function \(\phi:\mathcal{W}\rightarrow\mathbb{R}\), where \(\mathbb{R}\) represents a high-dimensional embedding space, and a distance function \(d:\mathbb{R}\times\mathbb{R}\rightarrow[0,\infty)\) satisfying the axioms of a metric (_i.e._, identity of indiscernibles, symmetry, and triangle inequality), metric differential privacy is defined in terms of the distinguishability level between pairs of words. A randomized mechanism \(\mathcal{M}:\mathcal{W}\rightarrow\mathcal{W}\) satisfies metric differential privacy with respect to the distance metric \(d(\cdot)\) if for any \(w,w^{{}^{\prime}},\hat{w}\in\mathcal{W}\) the output distributions of \(\mathcal{M}(w)\) and \(\mathcal{M}(w^{{}^{\prime}})\) are bounded by Equation 2 for any privacy budget \(\varepsilon>0\): \[\frac{\mathbb{P}[\mathcal{M}(w)=\hat{w}]}{\mathbb{P}[\mathcal{M}(w^{{}^{\prime }})=\hat{w}]}\leq e^{\varepsilon d[\phi(w),\phi(w^{{}^{\prime}})]}. \tag{2}\] This probabilistic guarantee ensures that the log-likelihood ratio of observing any word \(\hat{w}\) given two words \(w\) and \(w^{\prime}\) is bounded by \(\varepsilon d\{\phi(w),\phi(w^{\prime})\}\) and provides plausible deniability Bindschaedler et al. (2017) with respect to all \(w\in\mathcal{W}\). We refer to Feyisetan et al. (2020) for a complete proof of privacy. For \(\mathcal{M}\) to provide plausible deniability, additive noise is in practice sampled from a multivariate distribution such as the _multivariate Laplace distribution_Feyisetan et al. (2020) or _truncated Gumbel distribution_Carvalho et al. (2021). We recall that differential privacy requires adjacent datasets that differ in at most one record. Since the distance \(d(\cdot)\) captures the notion of closeness between datasets, metric differential privacy instantiates differential privacy when Hamming distance is used, _i.e._, if \(\forall x,x^{\prime}:d\{\phi(w),\phi(w^{{}^{\prime}})\}=1\). Depending on the distance function \(d(\cdot)\), metric differential privacy is therefore generally less restrictive than differential privacy. Intuitively, words that are distant in metric space are easier to distinguish compared words that are in close proximity. Scaling the indistinguishability by a distance \(d(\cdot)\) avoids the curse of dimensionality that arises from a large vocabulary \(\mathcal{W}\) and allows the mechanism \(\mathcal{M}\) to produce similar substitutions \(\hat{w}\) for similar \(w\) and \(w^{{}^{\prime}}\). However, this scaling complicates the interpretation of the privacy budget \(\varepsilon\), as it changes depending on the metric employed. ### Related Work Grounded in metric differential privacy, text-to-text privatization implies that the indistinguishability of substitutions of any two words in the vocabulary is scaled by their distance. Fernandes et al. (2018) achieve this indistinguishability by generating a bag-of-words representation and applying the _Earth Mover's distance_ to obtain privatized bags. In contrast to a bag-of-words representation, Feyisetan et al. (2020) formalized text-to-text privatization to operate on continuous word embeddings. Word embeddings capture the level of semantic similarity between words and have been popularized by efficient embedding mechanisms Mikolov et al. (2013); Pennington et al. (2014). This mechanism was termed MADLIB. The issue with this mechanism is that the magnitude of the noise is proportional to the dimensionality of the vector representation. 
This translates into adding the same amount of noise to any word in the embedding space, regardless of whether this word is located in a dense or sparse region. For words in densely populated areas, adding noise that is large in magnitude renders it difficult for the mechanism to select reasonable substitutions, as nearby relevant words cannot be distinguished from other nearby but irrelevant words. For words in sparsely populated areas, adding noise of small magnitude renders the mechanism susceptible to reconstruction, as the word closest to a noisy representation is likely to be the original word. To tackle some of the severe shortcomings of MADLIB, a variety of distance metrics have been employed to scale the indistinguishability, including _Hamming distance_ (Carvalho et al., 2021), _Manhattan distance_ (Fernandes et al., 2019), _Euclidean distance_ (Fernandes et al., 2019; Feyisetan et al., 2020; Carvalho et al., 2021; Feyisetan and Kasiviswanathan, 2021), _Mahalanobis distance_ (Xu et al., 2020), and _Hyperbolic distance_ (Feyisetan et al., 2019). While related extensions have focused almost exclusively on geometric properties to enhance text-to-text privatization, we focus on linguistic properties. We extend MADLIB by a candidate selection that directs substitutions based on matching grammatical properties and demonstrate that multivariate perturbations supported by grammatical properties substantially improve the utility of the surrogate texts in downstream tasks.

## 3 Methodology

Since text-to-text privatization operates directly on the geometric space of embeddings, it is necessary to understand the structure of the embedding space. To get an understanding of the embedding space, we selected a subset of the \(1,000\) most frequent words from the \(100\)-dimensional GloVe embedding and projected them onto a two-dimensional representation. Enriched by grammatical properties derived from the universal part-of-speech tagset (Petrov et al., 2011), we chart a \(t\)-distributed stochastic neighbor embedding (Van der Maaten and Hinton, 2008) in Figure 1. We note that we derived each word's grammatical category without context, which may explain the general tendency towards _nouns_ (presumably misclassified _verbs_). Regardless of potentially misclassified grammatical categories, we can draw the following conclusions: while _nouns_, _verbs_, and _adjectives_ are distributed throughout the embedding space, we find distinct subspaces for _numerals_ and _punctuation_. This is because word embeddings are trained towards an objective that ensures that words occurring in a common context have similar embeddings, disregarding their syntactic roles within the structure of a text. Considering that text-to-text privatization typically selects the nearest approximate neighbor of the noisy representation as the substitution, we expect this mechanism to fall short in producing syntactically coherent texts.

We adopt the multivariate Laplace mechanism of MADLIB (Feyisetan et al., 2020). Aimed at preserving the grammatical category of a word after its substitution, we incorporate a constraint into the candidate selection that directs the randomized mechanism towards words with a matching grammatical category. This constraint is incorporated as follows: we create a dictionary that serves as a lookup table for the grammatical category of each word in the vocabulary and generalize the randomized mechanism to return a flexible number \(k\gg 1\) (instead of \(k=1\)) of approximate nearest neighbors.
If available, a word is replaced by the nearest word (measured from the noisy representation) that matches its grammatical category. Otherwise, the protocol reduces to canonical MADLIB. The computational overhead of the candidate selection is \(O(\log k)\). This modification introduces the size of the candidate pool \(k\) as an additional hyperparameter. Intuitively, \(k\) should be chosen based on the geometric properties of the embedding, _i.e._, \(k\) should be large enough to contain at least one other word with a matching grammatical category.

We investigate our modification to MADLIB in terms of its capability to preserve grammatical properties and its implications. For reasons of reproducibility, we base all experiments on the \(100\)-dimensional GloVe embedding. To keep the computational effort feasible, we formed a vocabulary that consists of \(24,525\) words reflecting a natural distribution of grammatical categories: \(26\) _pronouns_, \(5,000\) _nouns_, \(5,000\) _verbs_, \(5,000\) _adjectives_, \(4,341\) _adverbs_, \(92\) _adpositions_, \(5,000\) _numerals_, \(6\) _conjunctions_, \(2\) _particles_, \(39\) _determiners_, and \(19\) _punctuations_. Once we determined our sub-vocabulary, we calculated the necessary size of the candidate pool \(k\). We counted the number of steps required from each word in our subset until a neighbor with a matching category was found. Averaging this count revealed that each word is linked to another word with a matching category within a neighborhood of \(20\). We thus parameterized the candidate pool to a fixed \(k=20\) across all experiments.

Figure 1: Embedding space of the \(1,000\) most frequent words in \(100\)-dimensional GloVe, automatically encoded with their universal part-of-speech tags.

## 4 Experiments

We conduct a series of experiments at a strategically chosen set of privacy budgets \(\varepsilon=\{5,10,25\}\) to demonstrate the relevance of directing substitution to words that share similar syntactic roles rather than restricting substitution only to words that appear in a similar semantic context. These privacy budgets represent three privacy regimes: \(\varepsilon=5\) for high privacy, \(\varepsilon=10\) for moderate privacy, and \(\varepsilon=25\) for low privacy.

### Linguistic Analysis

We intend to assess the effectiveness of our constraint on the candidate selection in retaining grammatical properties of words after substitution. We query each word contained in the vocabulary \(100\) times and record the grammatical category for its surrogate word in the form of a frequency count. Given a moderate privacy budget of \(\varepsilon=10\), Figure 2 visualizes the calculated frequency counts similar to a confusion matrix. The diagonal represents the preservation capability of grammatical categories, _i.e._, universal part-of-speech tags. A comparison across \(\varepsilon\in\{5,10,25\}\) is deferred to Figure A.1 in Appendix A. We start with the examination of the baseline mechanism in Figure 2(a). Consistent with the independent and concurrent results of Mattern et al. (2022), our results indicate that the privatization mechanism is likely to cause grammatical errors. Mattern et al. (2022) estimate that the grammatical category changes in \(7.8\%\) of cases, whereas we calculated about \(45.1\%\) for an identical privacy budget. This difference arises from the fact that Mattern et al.
(2022) only consider the four most frequent categories of _nouns_, _verbs_, _adjectives_, and _adverbs_, while we consider eleven categories according to the universal part-of-speech tagset. In addition to the number of grammatical categories, we indicate the fluctuations between categories, while Mattern et al. (2022) only measure whether a category was changed. Owing to the tracking of the fluctuations, we find a disparate impact on the preservation of the grammatical categories. We find that the preservation of grammatical categories of words declines with growing guarantees for privacy, until the text after privatization consists almost entirely of nouns.

We compare these results to our constrained mechanism in Figure 2(b). With the introduction of a constrained candidate pool of size \(k=20\), we observe an increased likelihood that surrogate texts retain the grammatical structure of the original texts. This can be seen by the dominance of the vertical line in Figure 2(a) compared to initial signs of a diagonal line in Figure 2(b). Compared to the baseline value of \(45.1\%\), the preservation capability bounds at \(81.4\%\). We illustrate the alignment of grammatical properties between words from a sensitive text and their surrogate words with an example sentence in Figure 3. We note that our syntactic guidance prevents words from being misleadingly replaced by numbers (and vice versa), as in the case of _before_ being replaced by _1979_.

Figure 2: Approximated frequency counts by querying a subset of words and recording their universal part-of-speech tags before and after substitution. The diagonal represents the ideal preservation of grammatical properties.

### Geometric Analysis

Intuitive properties for analyzing a mechanism operating on embeddings include magnitude, direction, and orthogonality. Since embeddings capture word co-occurrences, we expect most substitutions to be located in the same region of an embedding space and in the same direction from the embedding origin. We measure the Euclidean distance between words and their corresponding substitutes generated by the baseline \(\mathcal{M}(w)\) and our constrained \(\mathcal{M}^{\prime}(w)\). The results capture \(\|w-\hat{w}\|\) and \(\|w-\hat{w}^{\prime}\|\), respectively. Since the distances are zero when \(w=\hat{w}\) or identical when \(\hat{w}=\hat{w}^{\prime}\), we are only interested in the distances when a substitution has occurred and the mechanisms decided on a distinct candidate for their substitution, _i.e._, \(\mathcal{M}(w)\neq\mathcal{M}^{\prime}(w)\neq w\). Figure 4 depicts the calculated distances for querying words from our subset \(100\) times. The distance approximation was carried out at a strategically chosen discrete set of values \(\varepsilon=\{5,10,25\}\). Since the distance is calculated as the difference between words and their substitutes, lower values indicate better substitutions. The distances depend on the amount of noise injected into the randomized mechanisms: the more noise, the larger the distances. Apparent across all privacy budgets, the distances between words and their substitutions are slightly shifted towards smaller distances. Since the distributions of distances are almost identical, we can take a principled guess that substitution in both mechanisms generally occurs within a similar region of the embedding space.
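For concreteness, the constrained substitution step from Section 3 can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the noise sampler (a uniformly random direction scaled by a Gamma-distributed magnitude) is our assumption about the multivariate Laplace mechanism of Feyisetan et al. (2020), and names such as `privatize_word` and `pos_of` are hypothetical.

```python
import numpy as np

def sample_noise(dim, epsilon, rng):
    # Assumed multivariate Laplace-style sampler: uniform direction on the
    # unit sphere, magnitude drawn from Gamma(dim, 1/epsilon).
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return magnitude * direction

def privatize_word(word, emb, vocab, pos_of, epsilon, k=20, rng=None):
    """Replace `word` by a noisy nearest neighbor with a matching POS tag.

    emb:    dict mapping word -> embedding vector (e.g. 100-d GloVe)
    vocab:  list of candidate words
    pos_of: dict mapping word -> universal part-of-speech tag (the lookup table)
    """
    rng = rng or np.random.default_rng()
    matrix = np.stack([emb[w] for w in vocab])                  # |V| x d
    noisy = emb[word] + sample_noise(matrix.shape[1], epsilon, rng)
    # k approximate nearest neighbors of the noisy representation
    order = np.argsort(np.linalg.norm(matrix - noisy, axis=1))[:k]
    candidates = [vocab[i] for i in order]
    # pick the closest candidate with a matching grammatical category;
    # otherwise fall back to canonical MADLIB (the single nearest neighbor)
    for candidate in candidates:
        if pos_of.get(candidate) == pos_of.get(word):
            return candidate
    return candidates[0]
```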
### Privacy Analysis

Confronted with a non-zero probability that the candidate pool contains the sensitive word and no other word exists in the candidate pool with matching grammatical properties, it could be argued that the privacy guarantees suffer from the increased risk of self-substitution. By calculating the plausible deniability (Bindschaedler et al., 2017), we evaluate the risk of self-substitution arising from our grammatically constrained candidate selection. In line with previous studies on text-to-text privatization (Feyisetan et al., 2019, 2020; Xu et al., 2021; Qu et al., 2021), we record the following statistics as proxies for plausible deniability.

* \(N_{w}=\mathbb{P}\{M(w)=w\}\) measures the probability that a word is not substituted by the mechanism. This is approximated by counting the number of times a word \(w\) is substituted by the same word after running the mechanism \(100\) times.
* \(S_{w}=|\mathbb{P}\{M(w)=w^{\prime}\}|\) measures the effective support in terms of the number of distinct substitutions produced for a word from the mechanism. This is approximated by the cardinality of the set of words \(w^{\prime}\) after running the mechanism \(100\) times.

Figure 3: Example of syntax-preserving capabilities of MADLIB with and without grammatical constraint.

Figure 4: Euclidean distance for word substitutions. We depict default MADLIB (\(k=1\)) in blue and MADLIB (\(k=20\)) with grammatical constraint in orange.

Since the noise is scaled by \(\nicefrac{{1}}{{\varepsilon}}\), we can make a connection between the proxy statistics and the privacy budget \(\varepsilon\). A smaller \(\varepsilon\) corresponds to a more stringent privacy guarantee. Adding more noise to the vector representation of a word results in fewer self-substituted words (lower \(N_{w}\)) and a more diverse set of distinct substitutions (higher \(S_{w}\)). A higher \(\varepsilon\) corresponds to a less stringent privacy guarantee. This translates into fewer substitutions (higher \(N_{w}\)) and a narrower set of distinct substitutions (lower \(S_{w}\)). From a distributional perspective, it follows that \(N_{w}\) (\(S_{w}\)) should be positively (negatively) skewed to provide reasonable privacy guarantees. For privacy budgets of \(\varepsilon=\{5,10,25\}\), we present the distribution of \(N_{w}\) and \(S_{w}\) over \(100\) independent queries in Figure 5. While lower values of \(\varepsilon\) are desirable from a privacy perspective, it is widely known that text-to-text privatization requires slightly larger privacy budgets to provide reasonable utility in practice. Values of \(\varepsilon\) up to 20 and 30 have been reported in related mechanisms (Feyisetan et al., 2020). The histograms serve as visual guidance for comparing (and selecting) the required privacy budget \(\varepsilon\).

As both mechanisms build upon the Euclidean distance as a metric, their privacy guarantees should match when using the same privacy budget \(\varepsilon\). Directing the substitution to words with a matching grammatical category results in marginal changes to the plausible deniability. This is visually recognizable by the distribution shift. The grammatical constraint risks slightly more self-substitutions and reduced effective support. This is because words are substituted (almost) only by words from the same grammatical category, reducing the pool of unique words that are appropriate for substitution and thus reducing the effective support of the multivariate mechanism.
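Both proxy statistics can be approximated by querying the mechanism repeatedly; the following is a minimal sketch, where the `privatize` argument stands in for either mechanism (e.g. the hypothetical `privatize_word` above with a fixed privacy budget) and the budget of \(100\) queries follows the setup described in the text.

```python
def deniability_stats(word, privatize, n_queries=100):
    """Approximate the plausible-deniability proxies N_w and S_w for one word.

    privatize: callable mapping a word to a surrogate word
    Returns (N_w, S_w) as counts over n_queries independent runs.
    """
    outputs = [privatize(word) for _ in range(n_queries)]
    n_w = sum(1 for out in outputs if out == word)  # self-substitutions
    s_w = len(set(outputs))                         # effective support
    return n_w, s_w
```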
Out of \(100\) words queried given a fixed privacy budget of \(\varepsilon=10\), self-substitution increases on average from about \(29\) to \(32\), while effective support decreases on average from about \(66\) to \(61\). The fact that both changes in \(N_{w}\) and \(S_{w}\) do not exceed or fall below \(50\) indicates that plausible deniability is assured for the average-case scenario. We conclude that the grammatically constrained candidate selection does not come at the expense of privacy and can therefore be incorporated into the privatization step without the need to recalibrate the proxies for plausible deniability.

Rather than compromising privacy, our constrained candidate selection can alternatively be viewed as a barrier against reconstruction attacks. Recall that the nearest neighbor search is generalized from \(k=1\) to \(k\gg 1\). This generalization may impede naive inversion attacks such as the one proposed in Song and Raghunathan (2020), in which an adversary attempts to recover a word by finding the nearest neighbor to the substitute word. Although this inversion attack is not comprehensive, it can be used as a reference point for investigations regarding the robustness of privacy attacks. We include the setup and the results of a membership inference attack in Appendix B.

### Utility Analysis

To evaluate whether the preservation of syntactic roles translates to better utility in downstream tasks, we conduct experiments with BERT (Devlin et al., 2018) on a subset of GLUE (Wang et al., 2019). Once for each mechanism under comparison, we privatize the training corpus of each dataset. Since the privacy guarantees do not exactly match, we calculate the available privacy budget for each mechanism such that the \(.90\) quantile of words is plausibly deniable. This resembles a practical scenario where we allow a negligible subset of words without provable privacy guarantees.

Figure 5: Plausible deniability statistics approximated for a carefully compiled sub-vocabulary of \(24,525\) words of varying lexical categories, with each word independently privatized over a total number of \(100\) queries. We present the baseline in blue and highlight the distribution shift induced by the grammatical constraint in orange.

We report the performance scores in Table 1. A baseline trained on unprotected data is listed as an upper bound on the performance. All trials mimic the training of the baseline. To privatize the texts in the datasets, we use our modification with a varying candidate pool of size \(k\in\{1,20\}\). Recall that \(k=1\) reduces our modification to the multivariate mechanism of Feyisetan et al. (2020). Although we focus our analysis on a worst-case scenario in which the \(.90\)-quantile of words is plausibly deniable, we included test results for an average-case scenario in which only a \(.50\)-quantile of words enjoys plausible deniability. On average, BERT bounds at \(81.46\%\) when trained on sensitive text. Compared to the baseline, BERT trained on surrogate texts attains \(55.45\%\) when the candidate pool is \(k=1\). By broadening the candidate pool to \(k=20\) and directing the substitution to words with matching grammatical categories, BERT trained on surrogate texts ranks at \(60.11\%\). This corresponds to narrowing down the performance loss by \(4.66\%\).
Contrary to our initial assumption that preserving the syntactic role of words is particularly relevant to sentiment analysis, we find evidence that accounting for syntactic information during privatization benefits a variety of downstream tasks. We conclude that linguistic guidance is a legitimate alternative perspective to previous extensions that focus on the geometric position of words in the embedding.

## 5 Conclusion

Privatizing written text is typically achieved through text-to-text privatization over the embedding space. Since text-to-text privatization scales the notion of indistinguishability of differential privacy by a distance in the geometric space of embeddings, prior studies focused on geometric properties (Feyisetan et al., 2019; Xu et al., 2020; Carvalho et al., 2021). Unlike prior studies on amplifying text-to-text privatization by accounting for the geometric position of words within the embedding space, we introduced a set of strategies for amplification from the perspective of grammatical properties, such as _category_, _number_, or _tense_. By incorporating grammatical properties in the form of part-of-speech tags into text-to-text privatization, we direct the privatization step towards preserving the syntactic role of a word in a text. We experimentally demonstrated that surrogate texts that conform to the structure of the sensitive text outperform surrogate texts that strictly rely on co-occurrences of words in the embedding space.

**Limitations.** We note that directing the substitution to candidates with matching grammatical categories incurs additional information leakage that is not accounted for by our modification. To remedy the unaccounted information leakage, one could recast the candidate selection through the exponential mechanism (McSherry and Talwar, 2007).

## Acknowledgment

We gratefully acknowledge that this research was supported in part by the _German Federal Ministry of Education and Research_ through the _Software Campus_ (ref. _01IS17045_).

| | **Level of Privacy** | CoLA (MCC) | SST2 (ACC) | QQP (ACC) | MRPC (ACC) | STSB (SCC) | MNLI (ACC) | QNLI (ACC) | RTE (ACC) | **Avg.** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **BERT** | - | 0.5792 | 0.9243 | 0.8879 | 0.8329 | 0.8854 | 0.8229 | 0.8912 | 0.6927 | 0.8146 |
| \(k=1\) | \(p=0.9\) | 0.0248 | 0.8127 | 0.6940 | 0.5603 | 0.6153 | 0.5304 | 0.6327 | **0.5663** | 0.5545 |
| \(k=1\) | \(p=0.5\) | 0.2303 | 0.8848 | 0.8181 | 0.6242 | 0.7951 | 0.7114 | 0.8339 | 0.6027 | 0.6875 |
| \(k=20\) | \(p=0.9\) | **0.0928** | **0.8510** | **0.7519** | **0.5946** | **0.6988** | **0.6251** | **0.7423** | 0.4525 | **0.6011** |
| \(k=20\) | \(p=0.5\) | 0.3493 | 0.9035 | 0.8397 | 0.6333 | 0.8011 | 0.7301 | 0.8627 | 0.5420 | 0.7077 |

Table 1: Results on a subset of GLUE (Wang et al., 2019), covering classification (CoLA, SST2), textual similarity (QQP, MRPC, STSB), and textual entailment (MNLI, QNLI, RTE). We report Matthews correlation for the CoLA dataset, Spearman correlation for the STSB dataset, and the accuracy score for all remaining datasets. The level of privacy increases with the quantile of words that are provably plausibly deniable. \(p=.90\) denotes an (almost) worst-case scenario. \(p=.50\) denotes an average-case scenario. We fixed the candidate pool to \(k=20\).
A candidate pool of \(k=1\) reduces to the randomized mechanism of Feyisetan et al. (2020). Bold font indicates the best result from three independent trials of the worst-case scenario.
2310.13793
A Unified View of Evaluation Metrics for Structured Prediction
We present a conceptual framework that unifies a variety of evaluation metrics for different structured prediction tasks (e.g. event and relation extraction, syntactic and semantic parsing). Our framework requires representing the outputs of these tasks as objects of certain data types, and derives metrics through matching of common substructures, possibly followed by normalization. We demonstrate how commonly used metrics for a number of tasks can be succinctly expressed by this framework, and show that new metrics can be naturally derived in a bottom-up way based on an output structure. We release a library that enables this derivation to create new metrics. Finally, we consider how specific characteristics of tasks motivate metric design decisions, and suggest possible modifications to existing metrics in line with those motivations.
Yunmo Chen, William Gantt, Tongfei Chen, Aaron Steven White, Benjamin Van Durme
2023-10-20T20:02:02Z
http://arxiv.org/abs/2310.13793v1
# A Unified View of Evaluation Metrics for Structured Prediction

###### Abstract

We present a conceptual framework that unifies a variety of evaluation metrics for different structured prediction tasks (e.g. event and relation extraction, syntactic and semantic parsing). Our framework requires representing the outputs of these tasks as objects of certain data types, and derives metrics through _matching of common substructures_, possibly followed by _normalization_. We demonstrate how commonly used metrics for a number of tasks can be succinctly expressed by this framework, and show that new metrics can be naturally derived in a bottom-up way based on an output structure. We release a library that enables this derivation to create new metrics.1 Finally, we consider how specific characteristics of tasks motivate metric design decisions, and suggest possible modifications to existing metrics in line with those motivations.

Footnote 1: [https://github.com/wannok/metametric](https://github.com/wannok/metametric).

## 1 Introduction

A wide range of tasks in NLP can be considered as forms of _structured prediction_. Syntactic and semantic parsing produces a tree or graph2 based on text. Information extraction (IE) aims to produce structured representations of data extracted from unstructured sources, often in the form of relations that may be used to populate a database (Grishman, 2019). Such relations may be typed or untyped, may have different numbers of arguments, and may relate objects of different kinds (e.g. mentions, entities, events, or even images).

Footnote 2: Often in the form of a _directed acyclic graph_ (DAG), as in the task of AMR parsing.

The structural complexity of these representations varies considerably between tasks. On the simpler end, problems like _binary relation extraction_ require identifying relationships between pairs of entity mentions. On the more complex end are tasks like _template extraction_, which requires populating various types of slots with _sets_ of mentions, categorical values, or even whole event structures, and _AMR parsing_ (Langkilde and Knight, 1998; Banarescu et al., 2013), which requires generating a DAG of entities and values representing their semantic relations.

A wide array of evaluation metrics have been proposed across this spectrum of tasks. For simpler ones, researchers have generally converged to a standardized set of metrics (e.g. trigger and argument F\({}_{1}\) for event extraction). However, for more complex tasks like template extraction, researchers have often proposed bespoke metrics tailored to the problem at hand, complicating comparison with prior work on similar problems (Chinchor, 1991; Du et al., 2021; Chen et al., 2023). Given the common goal of predicting structured objects, our aim is to present a similarly unified, high-level picture of evaluation. We observe that a variety of metrics can be viewed as computing scores over a _matching between substructures_ of predicted and reference objects, where this score decomposes as a _normalized sum over matched pairs_.

Figure 1: Our generic framework, with the CEAF\({}_{\phi_{4}}\) metric (Luo, 2005) for coreference resolution as an example. Here the task output is a set of entities, where each entity is a set of coreferent mentions identified in the document. Computing CEAF\({}_{\phi_{4}}\) thus amounts to calculating the matching similarity between the predicted (\(P\)) and reference (\(R\)) sets of entities.
The process of computing metrics can thus be abstracted to a framework as shown in Figure 1. On the one hand, this observation drives a contribution to structured prediction _theory_, clarifying the relationships among numerous metrics proposed over the years by identifying their core components. On the other, it drives a contribution to NLP _practice_, offering a bottom-up process for designing _new_ metrics based on an output structure. Our contributions can be summarized as follows:

* We present a _unified framework_ for expressing structured prediction metrics;
* We demonstrate how to derive various classic metrics using this framework, given a specification of a task's output structure;
* We consider how different problem features may recommend particular design decisions within the framework -- often different decisions from those realized by existing metrics;
* We release a library that enables bottom-up creation of new metrics based on the predefined output data structure of a given task.

Throughout, we emphasize both how evaluation of substructures (e.g. mentions) composes in the evaluation of superstructures (e.g. relations, templates), as well as the different notions of similarity employed for different structures. Our discussion starts with simpler tasks and proceeds to more complex ones, interleaving with examples throughout our exposition.

Figure 2: Output structure of common IE tasks discussed in this paper, with examples of their outputs.

## 2 Records and Sets

We begin by focusing on records3 with non-nested, fixed-named _fields_ or _slots_. Specifically, for \(P\), \(R\in X\) of _Predicted_ and _Reference_ objects of record type \(X\), we induce a _similarity function_ over \(X\).

Footnote 3: In the context of relational databases, a _record_ is a _row_ (also _named tuple_) describing structured data in a table.

**Definition 1**.: _A **similarity** over \(X\) is a function \(\phi\colon X\times X\rightarrow[0,1]\) such that \(\forall x,y\in X\), \(\phi(x,y)\leq\phi(x,x)=1\), i.e. an object is at least as similar to itself as to any other. A relaxed version is an **unnormalized** similarity, where \(\phi\colon X\times X\rightarrow[0,+\infty)\)._

**Discrete Similarity**4 Equality is a trivial but important notion of similarity, which can be expressed by the Kronecker delta or the Iverson bracket5 as

Footnote 4: Akin to _discrete metric_ and _discrete topology_. Throughout this work we use the word _metric_ as it's commonly used in NLP literature: a score for evaluation purposes, rather than the formal mathematical notion that generalizes _distances_.

Footnote 5: The Iverson bracket \(\llbracket p\rrbracket\) is 1 if \(p\) is true; otherwise 0.

\[\delta_{X}(x,y)=\llbracket x=y\rrbracket=\begin{cases}1&\text{if }x=y;\\ 0&\text{if }x\neq y.\end{cases} \tag{1}\]

**Product Similarity for Records** Given two similarities \(\phi\) and \(\psi\) over sets \(X\) and \(Y\), we can define a _product similarity_ \(\phi\times\psi\) for tuples of \(X\times Y\):

\[(\phi\times\psi)\;((x,y),(x^{\prime},y^{\prime}))=\phi(x,x^{\prime})\cdot\psi(y,y^{\prime}) \tag{2}\]

Clearly, the product similarity of two similarities is also a similarity.6 This generalizes to \(n\)-tuples, or record/class types7 if a similarity function is defined for each field in the record.

Footnote 7: _Product types_ in programming languages literature.

**Set Intersection and Normalization** Sets are commonly compared with Jaccard similarity, or F\({}_{1}\) score.
Note that the core of such comparison is the **overlap** between two sets \(P\), \(R\subseteq X\), namely

\[\Sigma_{\delta}(P,R)=|P\cap R| \tag{3}\]

if we consider the elements of \(X\) as discrete (using \(\delta_{X}\) as their similarity). This overlap score \(\Sigma_{\delta}\) is an _unnormalized_ similarity under our definition. There are multiple ways to _normalize_ this \(\Sigma\) score so that the result is a (proper) similarity. We consider a few common choices: precision (Eq. 4), recall (Eq. 5), and F\({}_{1}\) (or _Dice score_; Eq. 6):

\[p=\mathsf{P}(P,R)=\frac{|P\cap R|}{|P|}=\frac{\Sigma(P,R)}{\Sigma(P,P)}; \tag{4}\]
\[r=\mathsf{R}(P,R)=\frac{|P\cap R|}{|R|}=\frac{\Sigma(P,R)}{\Sigma(R,R)}; \tag{5}\]
\[\mathsf{F}(P,R)=\frac{2pr}{p+r}; \tag{6}\]

And the Jaccard similarity:

\[\mathsf{J}(P,R)=\frac{|P\cap R|}{|P\cup R|}=\frac{\Sigma(P,R)}{\Sigma(P,P)+\Sigma(R,R)-\Sigma(P,R)}. \tag{7}\]

Note that all these normalizers can be expressed solely with the _overlap scoring function_ \(\Sigma\). Let \(\mathsf{N}\in\{\mathsf{P},\mathsf{R},\mathsf{F},\mathsf{J}\}\) be a normalizer over objects of type \(X\). Hence we arrive at a normalized similarity over sets of \(X\): \(\mathsf{N}[\delta](P,R)=\mathsf{N}(\Sigma_{\delta}(P,R))\). We have created the basic tools needed to derive metrics for many simple tasks. Next, we illustrate how to do so for two common NLP tasks.

### Binary Relation Extraction

Binary relation extraction (RE) focuses on typed relations (e.g. is-capital-of) with two arguments, a _subject_ and an _object_. Traditionally, both the subject and the object are text spans (i.e. _mentions_). Given a text passage, the objective is to output a set of binary relations. To ground our discussion of concrete structured prediction tasks, we specify relevant output data structure(s) in a Python dataclass-like syntax. For binary RE, these are as follows:

```
class Mention:
    left: int   # left span offset (inclusive)
    right: int  # right span offset (inclusive)

class Relation:
    type: RelationType  # is-capital-of
    subj: Mention       # London
    obj: Mention        # United Kingdom

class RelationSet:      # task output
    relations: Set[Relation]
```

We will now derive a metric bottom-up. A standard similarity for mentions is exact offset match8, where two mentions are considered the same if and only if both the left and right boundaries match. This is an instance of product similarity:9

Footnote 8: For consistent presentation, we define \(\phi_{\mathsf{Mention}}\) in terms of offsets, but other string similarities could be substituted w.l.o.g., e.g. based on string value of the tokens (e.g. bag-of-tokens F\({}_{1}\) employed in MRC/QA (Rajpurkar et al., 2016)).

Footnote 9: Given a class cls with fld as a field, we write \(\phi_{\mathsf{cls.fld}}(x,y)\) (or \(\phi_{\mathsf{fld}}\) if it is not ambiguous) where \(x,y\in\mathsf{cls}\) to mean \(\phi_{\mathsf{cls.fld}}(x,y)=\phi(x.\mathsf{fld},y.\mathsf{fld})\).

\[\phi_{\mathsf{Mention}}=\delta_{\mathsf{left}}\times\delta_{\mathsf{right}} \tag{8}\]

A relation is then considered correct iff its type, subject, and object all match:

\[\phi_{\mathsf{Relation}}=\delta_{\mathsf{type}}\times\delta_{\mathsf{subj}}\times\delta_{\mathsf{obj}} \tag{9}\]

Finally, precision, recall, and F\({}_{1}\) score are the most common metrics to evaluate predicted relations. Practically, this requires finding the intersection between predicted and reference relations:10

Footnote 10: For concision, we present only F in our metric definitions, but precision and recall are defined analogously, substituting P or R for F as appropriate.
\[\mathrm{RelF}_{1}=\phi_{\mathsf{RelationSet}}=\mathsf{F}_{\mathsf{relations}}[\phi_{\mathsf{Relation}}] \tag{10}\]

### Dependency Parsing

Our next example is dependency parsing, where dependencies are relations between a _governor_ and its _dependent_. The output structure is as follows:

```
class Dependency:
    gov: int
    dep: int             # index of the word
    rel: DependencyType  # nsubj, advmod, ...

class DependencyParse:   # task output
    edges: Set[Dependency]
```

Dependency parsing is evaluated using unlabeled (UAS) and labeled (LAS) attachment scores (Buchholz and Marsi, 2006), which are simply F\({}_{1}\) scores over dependency edges:

\[\mathrm{UAS}=\mathsf{F}_{\mathsf{edges}}\left[\delta_{\mathsf{gov}}\times\delta_{\mathsf{dep}}\right] \tag{11}\]
\[\mathrm{LAS}=\mathsf{F}_{\mathsf{edges}}\left[\delta_{\mathsf{gov}}\times\delta_{\mathsf{dep}}\times\delta_{\mathsf{rel}}\right] \tag{12}\]

## 3 Matching of Sets

In the previous section, we derived \(\Sigma_{\delta}\), a similarity for sets whose elements are _discrete_. However, elements of sets can be equipped with their own similarity. For example, in coreference resolution, the output of a system is a _set_ of _entities_, where each entity is in turn a _set_ of _mentions_ that may _partially_ overlap. We develop the idea of a _matching of sets_ to express these cases. We derive a similarity \(\phi_{\mathcal{P}(X)}\) over _sets_ of elements of \(X\) (i.e. elements of the power set \(\mathcal{P}(X)\)) using _bipartite graphs_. Assuming that elements in \(X\) are compared with a custom similarity \(\phi_{X}\), given two sets \(P,R\subseteq X\), we can construct a bipartite similarity graph \(G=(P,R,E)\) between \(P\) and \(R\), where \(E\subseteq P\times R\) is the set of edges, and the weight on each edge \(\phi_{X}(u,v)\) corresponds to the value of the similarity (\(\phi_{X}\)) between nodes \(u\) and \(v\). We then determine a _matching_ \(M^{\diamond}\subseteq E\) on this bipartite graph \(G\). An unnormalized **matching score** between \(P\) and \(R\) is defined to be the maximum sum of weights of all edges in a matching, subject to some constraint:

\[\Sigma^{\diamond}[\phi_{X}](P,R)=\max_{M^{\diamond}}\sum_{(u,v)\in M^{\diamond}}\phi_{X}(u,v), \tag{13}\]

where \(\diamond\in\{\leftrightarrow,\rightarrow,\leftarrow,\sim\}\) is the **matching constraint**. Specifically we have the following:

* **1:1 (\(\leftrightarrow\)):** Each element of \(P\) can be matched to at most one element of \(R\), and vice versa. This corresponds to the _unbalanced assignment problem_, and can be solved efficiently with the Hungarian algorithm [15, 16]. We denote this \(M^{\leftrightarrow}\) since the matching is a (partial) bijection between \(P\) and \(R\).
* **\(N\):1 (\(\rightarrow\)) / 1:\(N\) (\(\leftarrow\)):** Each element of \(P\) can be matched to at most one element of \(R\), but each element of \(R\) can be matched to multiple elements of \(P\). We denote this \(M^{\rightarrow}\) since the matching is a (partial) function from \(P\) to \(R\). A flipped version \(M^{\leftarrow}\) obviously follows.
* **\(N\):\(N\) (\(\sim\)):** Every element of \(P\) may be matched with multiple elements of \(R\), and vice versa, without constraints. We denote this \(M^{\sim}=E\), as the matching may be any relation between \(P\) and \(R\).

Note that the _overlap score_ developed in §2 is a special case of the 1:1 _matching score_ here, since

\[\Sigma_{\delta}(P,R)=|P\cap R|=\Sigma^{\leftrightarrow}[\delta](P,R). \tag{14}\]

Thus we arrive at a generalization of our original overlap score.
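The matching scores of Equation 13 can be computed with an off-the-shelf assignment solver. The following is a minimal sketch (not the interface of the released library), assuming non-negative similarities and using `scipy.optimize.linear_sum_assignment` for the 1:1 constraint; the 1:\(N\) case is symmetric to \(N\):1 and omitted.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_score(P, R, phi, constraint="1:1"):
    """Unnormalized matching score Sigma[phi](P, R) under a matching constraint."""
    if not P or not R:
        return 0.0
    sim = np.array([[phi(p, r) for r in R] for p in P])  # |P| x |R| edge weights
    if constraint == "1:1":   # partial bijection, solved by the Hungarian algorithm
        rows, cols = linear_sum_assignment(-sim)          # negate to maximize
        return float(sim[rows, cols].sum())
    if constraint == "N:1":   # each prediction keeps only its best reference
        return float(sim.max(axis=1).sum())
    if constraint == "N:N":   # no constraint: sum over all edges
        return float(sim.sum())
    raise ValueError(f"unknown constraint: {constraint}")

def f_normalized(P, R, phi, constraint="1:1"):
    """F-normalization (Eq. 6) applied to the matching score."""
    overlap = matching_score(P, R, phi, constraint)
    p = overlap / matching_score(P, P, phi, constraint) if P else 0.0
    r = overlap / matching_score(R, R, phi, constraint) if R else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

A nested metric such as CEAF\({}_{\phi_{4}}\) (Figure 1) would simply pass another such normalized matching as the `phi` argument.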
We denote the \(\mathsf{N}\)-normalized (\(\mathsf{N}\in\{\mathsf{P},\mathsf{R},\mathsf{F},\mathsf{J}\}\)) matching score \(\Sigma^{\diamond}[\phi_{X}]\) simply as \(\mathsf{N}^{\leftrightarrow}[\phi_{X}]\). Such (normalized) matching scores are sometimes _kernels_, which have additional nice properties. For discussion, see Appendix B. With the notion of matching of sets, we next consider metrics for several more complex tasks. ### Event Extraction Our first such task is event extraction.11 We imagine that events and arguments are represented using the following data structures: Footnote 11: This also covers _semantic role labeling_ (SRL), which is evaluated in the same way, and _event argument extraction_ (EAE), which differs only in considering arguments occurring outside the sentence containing the trigger. class Trigger: mention: Mention # defined in SS2.1 type: EventType class Argument: mention: Mention role: RoleType class Event: trig: Trigger args: Set[Argument] class EventSet: # task output events: Set[Event] The canonical metrics for event extraction are labeled precision, recall, and \(\mathsf{F}_{1}\) score for both event triggers and arguments [11, _i.a._). An event trigger is considered correct iff both the event type and the trigger mention offsets exactly match those of the reference (i.e. \(\delta_{\texttt{Trigger}}=\delta_{\texttt{mention}}\times\delta_{\texttt{ type}}\)). An event argument is considered correct iff the argument mention offsets and role exactly match the reference (i.e. \(\delta_{\texttt{Argument}}=\delta_{\texttt{mention}}\times\delta_{\texttt{ role}}\)) _and_ the associated trigger is correct.12 Given these, we can express trigger and argument \(\mathsf{F}_{1}\) scores as Footnote 12: Unlabeled scores, in which event type and argument role are ignored, are also commonly reported. \[\text{TrigF}_{1}=\mathsf{F}^{\leftrightarrow}_{\texttt{events}} \left[\delta_{\texttt{trig}}\right]; \tag{15}\] \[\text{ArgF}_{1}=\mathsf{F}^{\leftrightarrow}_{\texttt{events}} \left[\delta_{\texttt{trig}}\times\Sigma^{\leftrightarrow}_{\texttt{args}}[ \delta_{\texttt{Argument}}]\right]. \tag{16}\] Note that the definition of \(\text{ArgF}_{1}\) suggests that the metric can be viewed as a _nested_ matching, in which we first compute an _unnormalized_ optimal argument matching score (\(\Sigma^{\leftrightarrow}_{\texttt{args}}\), i.e., a raw count of matched arguments) based only on role type and argument boundaries, and then use this score to identify the optimal matching and score conditioned on the trigger. As with \(\mathsf{F}^{*}_{\mathsf{relations}}\) in SS2.1, \(\delta_{\mathsf{trig}}\) renders \(\mathsf{F}^{*}_{\mathsf{events}}\) trivial to compute, as an aligned event pair receives no credit if the triggers do not match. However, this nested matching view articulates a key aspect of our framework, evidenced by other metrics discussed in this section -- namely, that _evaluation of complex structures depends on an optimal matching of their substructures._ ### Coreference Resolution Event extraction deals only with trigger and argument _mentions_, but IE also deals with coreference resolution, where systems predict a _set_ of entities, which in turn are _sets_ of coreferent mentions:13 Footnote 13: We focus on entity coreference here, though the same metrics can be used for event coreference. class Entity: mentions: Set[Mention] class EntitySet: # task output entities: Set[Entity] A variety of metrics have been proposed for coreference resolution. 
Commonly used are CEAF (Luo, 2005), MUC (Vilain et al., 1995) and B3 (Bagga and Baldwin, 1998).

**CEAF** We start with CEAF since it explicitly evaluates coreferences as sets of mentions. CEAF computes entity precision, recall, and \(\mathsf{F}_{1}\) by finding a partial bijection between predicted and reference entities that maximizes an entity similarity. Luo (2005) considers several functions - denoted \(\phi_{\{1,2,3,4\}}\) - ultimately preferring \(\phi_{3}\) and \(\phi_{4}\):

\[\phi_{3}=\Sigma^{\leftrightarrow}_{\mathsf{mentions}}\left[\delta_{\mathsf{Mention}}\right]; \tag{17}\]
\[\phi_{4}=\mathsf{F}^{\leftrightarrow}_{\mathsf{mentions}}\left[\delta_{\mathsf{Mention}}\right]; \tag{18}\]

Both correspond to intuitive notions of entity similarity, with \(\phi_{3}\) simply counting the number of mentions a pair of entities have in common, while \(\phi_{4}\) \(\mathsf{F}\)-normalizes this value.14 In contrast to the identity similarities (\(\delta\)'s) typically used for mentions, the similarity used in coreference resolution is _gradient_: entities can be more or less correct based on their constituent mentions. Coreference resolution researchers have often used \(\phi_{4}\) (Moosavi and Strube, 2016; Joshi et al., 2020, _i.a._), where \(\text{CEAF}_{\phi_{4}}\) is just the \(\mathsf{F}\)-normalized total score under a \(\phi_{4}\)-optimal entity matching:

Footnote 14: Note that \(\phi_{3}\) is an _unnormalized_ similarity function.

\[\text{CEAF}_{\phi_{4}}=\mathsf{F}^{\leftrightarrow}_{\mathsf{entities}}\left[\phi_{4}\right]=\mathsf{F}^{\leftrightarrow}_{\mathsf{entities}}\left[\mathsf{F}^{\leftrightarrow}_{\mathsf{mentions}}\left[\delta_{\mathsf{Mention}}\right]\right]. \tag{19}\]

CEAF offers a nice illustration of the expressiveness of our framework, computing a matching score between sets (of entities), where the internal metric over elements (entities) is _also_ a matching score over sets (of mentions).

**MUC** The main step of MUC scoring is to create (separate) partitions of the predicted and reference entities (Pradhan et al., 2014). Assume that the predicted and reference entity sets are \(\mathcal{P}\) and \(\mathcal{R}\), and the _partition_ of each reference entity \(R\in\mathcal{R}\) created by intersecting it with predicted entities \(\mathcal{P}\) is \(\text{Part}_{\mathcal{P}}(R)\): i.e. \(\cup_{I\in\text{Part}_{\mathcal{P}}(R)}I=R\). MUC recall is computed as

\[\mathsf{R}_{\text{MUC}}=\frac{\sum_{R\in\mathcal{R}}\left(|R|-|\text{Part}_{\mathcal{P}}(R)|\right)}{\sum_{R\in\mathcal{R}}\left(|R|-1\right)}. \tag{20}\]

Note that \(|R|-|\text{Part}_{\mathcal{P}}(R)|=\sum_{I\in\text{Part}_{\mathcal{P}}(R)}(|I|-1)\): we can define an unnormalized similarity (number of shared links that link mentions to form a coreference chain) between entities:

\[\phi_{\text{link}}(X,Y)=\max\{0,|X\cap Y|-1\}. \tag{21}\]

Using this, we see that \(\sum_{R\in\mathcal{R}}\left(|R|-|\text{Part}_{\mathcal{P}}(R)|\right)=\Sigma^{\sim}_{\mathsf{entities}}[\phi_{\text{link}}](\mathcal{P},\mathcal{R})\) with the \(N\):\(N\) (\(\sim\)) matching constraint, and \(|\text{Part}_{\mathcal{R}}(R)|=1\). Thus we have

\[\mathsf{R}_{\text{MUC}}=\mathsf{R}^{\sim}_{\mathsf{entities}}[\phi_{\text{link}}]. \tag{22}\]

Precision can be defined similarly.

### B3

Different from MUC and CEAF, B3 assigns a score to each mention instead of each entity.
Here, we need a slightly different data structure, where we pair each mention with the entity it belongs to: class Membership: # An instance of a relation mention: Mention entity: Entity class CorefOutputForB3: rels: Set[Membership] # membership relations The recall of B3 assigns to each reference mention a score equal to the ratio of the number of correct mentions in the predicted entity containing the reference mention to the size of the reference entity to which that mention belongs (Pradhan et al., 2014). Under our new data structure this ratio is just \(\mathsf{R}^{\leftrightarrow}_{\mathsf{entity}}[\delta]\). Precision is computed similarly by switching the role of predicted and reference entities. Thus, B3 can be succinctly expressed as \[\mathsf{R}_{\text{B3}} =\mathsf{R}^{\leftrightarrow}_{\mathsf{rels}}[\delta_{\mathsf{ mention}}\times\mathsf{R}^{\leftrightarrow}_{\mathsf{entity}}[\delta_{\mathsf{ mention}}]] \tag{23}\] \[\mathsf{P}_{\text{B3}} =\mathsf{P}^{\leftrightarrow}_{\mathsf{rels}}[\delta_{\mathsf{ mention}}\times\mathsf{P}^{\leftrightarrow}_{\mathsf{entity}}[\delta_{\mathsf{ mention}}]]. \tag{24}\] Our framework thus captures all three of the standard coreference resolution metrics. ### Role-Filler Entity Extraction & _N_-ary Relation Extraction Relation and event extraction generally take mentions as arguments, but some tasks take entities as arguments (_fillers_) of roles (_slots_) in some relation: class RoleFillerEntity: role: RoleType entity: Entity class MaryRelation: type: RelationType args: Set[RoleFillerEntity] class MaryRelationSet: relations: Set[MaryRelation] Tasks of this form have been instantiated in various ways in prior work, which we discuss below. Role-filler Entity ExtractionOne such instance is _role-filler entity extraction_ (REE), a subtask of template extraction in which one must populate the subset of slots (roles) of a _single_ identified template that takes entities as fillers Du et al. (2021); Huang et al. (2021), _i.a._). Since the task deals with a single template, the output is a single MaryRelation.15 Footnote 15: Some work has evaluated at the mention level Paturshan and Riloff (2009); Huang and Riloff (2011); Du and Cardie (2020), essentially doing named entity recognition (NER). Du et al. (2021) introduced the CEAF-REE metric for REE which differs from CEAF only in requiring matching entities to share a role type and in using a different \(\phi\) for entities: \[\phi_{\subseteq}(P,R):=\llbracket P\subseteq R\rrbracket \tag{25}\] where \(P\) and \(R\) are predicted and reference entities (sets of mentions). CEAF-REE is then defined as: \[\text{CEAF-REE}=\mathsf{F}^{\leftrightarrow}_{\text{args}}\left[\delta_{ \mathsf{role}}\times\phi_{\subseteq}\right] \tag{26}\] Whereas \(\phi_{3}\) and \(\phi_{4}\) award partial credit to predicted entities that contain at least one correct mention, \(\phi_{\subseteq}\) is much stricter, awarding _no_ credit in cases where even one mention is incorrect, while simultaneously awarding _full_ credit to any non-empty subset of the reference entity. This may make sense in some settings, but in most, it is unduly harsh (see SS6). Responding to this observation, Chen et al. 
(2023) suggest a pair of alternatives to CEAF-REE, CEAF-RME\({}_{\phi_{\subseteq}}\) and CEAF-RME\({}_{\phi_{3}}\), that treat predicted _mentions_ as singleton entities and relax the two-sided matching constraints to one-sided: \[\text{CEAF-RME}_{\phi_{\subseteq}}=\mathsf{F}^{\leftrightarrow}_{ \text{args}}\left[\delta_{\mathsf{role}}\times\phi_{\subseteq}\right] \tag{27}\] \[\text{CEAF-RME}_{\phi_{3}}=\mathsf{F}^{\leftrightarrow}_{\text{args }}\left[\delta_{\mathsf{role}}\times\phi_{3}\right] \tag{28}\] _N_-ary Relation Extraction_N_-ary RE generalizes binary RE to relations among _N entity_ or _mention_ arguments. Here, we will assume we are dealing with entities; the case of mention arguments is comparatively straightforward. Often, work on _N_-ary RE assumes gold entities are given to the model as input, along with a set of candidate relations, and results are reported as relation type classification accuracy or F\({}_{1}\). This is true of much work on a number of recent, popular _N_-ary RE benchmarks, including SciERC Luan et al. (2018), DocRED Yao et al. (2019), and the dataset released by Peng et al. (2017). In a more comprehensive task setting, entities or mentions must also be predicted, along with the relations. We highlight the SciREX benchmark Jain et al. (2020), an extension of SciERC, as an example of evaluation in this setting. SciREX requires extraction of quaternary (dataset, method, task, metric) relations over entities extracted from ML papers. We formulate the SciREX metric in our framework below. For this task, mentions are represented as index ranges: class Mention:#alternativetodefinitionin SS2.1 indices:range#setoftokenindices A predicted mention is considered to match a reference mention iff their Jaccard similarity (considered as bag-of-integer offsets) exceeds 0.5: \[\phi_{\mathsf{Mention}}=\llbracket\mathsf{J}^{\leftrightarrow}_{\text{indices}} [\delta_{\mathsf{int}}]>0.5\rrbracket \tag{29}\] Jain et al. propose computing a role-filling entity matching based on mention and role matching: \[\phi_{\mathsf{RFE}}=\llbracket\mathsf{P}^{\leftrightarrow}_{\text{ mentions}}[\delta_{\mathsf{role}}\times\phi_{\mathsf{Mention}}]>0.5\rrbracket. \tag{30}\] In other words, a pair of entities \(E_{P}\) and \(E_{R}\) will be matched iff more than half of \(E_{P}\)'s mentions appear in \(E_{R}\), and their role matches. Given this matching, predicted 4-ary relations are then evaluated against reference ones using \(\mathsf{F}^{\leftrightarrow}[\phi_{\mathsf{MaryRelation}}]\), where \[\phi_{\mathsf{MaryRelation}}=\llbracket\mathsf{F}^{\leftrightarrow}_{\text{args }}[\phi_{\mathsf{RFE}}]=1\rrbracket. \tag{31}\] \(\mathsf{F}^{\leftrightarrow}_{\text{args}}[\phi_{\mathsf{RFE}}]=1\) means that all four role-filler entities must match under \(\phi_{\mathsf{RFE}}\) to receive credit. \(\mathsf{F}^{\leftrightarrow}_{\text{relations}}[\phi_{\mathsf{MaryRelation}}]\) further illustrates how matching superstructures depends on matching substructures, with optimal matching of relations depending on optimal matching of entities, which in turn depends on optimal matching of mentions. ### Template Extraction We now turn to _template extraction_, which arguably features the most complex outputs of any IE task. It generalizes \(N\)-ary RE by allowing roles in the relation be filled by any number of arguments \(N\geq 0\), which may be of any type T: class SlotFiller[T]: slot: SlotType value: T # Mention, Entity, Event, bool, etc. 
class Template: type: TemplateType filters: Set[SlotFiller[Any]] class TemplateSet: # task output templates: Set[Template] where a distinct similarity function \(\phi_{\mathsf{T}}\) may be needed for each T. Constraints on template matchings are traditionally two-sided. Below, we consider the metrics employed for the classic MUC-4 task. In Appendix D, we also consider the more recent BETTER Granular benchmark. The MUC-4 dataset (MUC, 1992; Sundheim, 1992) features 6 template types, which concern varieties of terrorist act (e.g. bombing, kidnapping) and which all contain the same slots. Some are "string-fill" slots, which take entity mentions as fillers, and others are "set-fill" slots, which take a categorical value. Although the official MUC-4 evaluation reported several metrics,16 the _overall score_ was \(\mathsf{F}_{1}\) over slot fillers: Footnote 16: See Chinchor (1992) for details. \[\mathsf{F}_{\mathsf{templates}}^{\leftrightarrow}\left[\delta_{\mathsf{type}} \times\Sigma_{\mathsf{filters}}^{\leftrightarrow}\left[\delta_{\mathsf{slot}} \times\phi_{\mathsf{T}}\right]\right] \tag{32}\] where \(\phi_{\mathsf{T}}\in\{\phi_{\mathsf{set}},\phi_{\mathsf{str}}\}\) is the type-appropriate filler similarity function. Both \(\phi_{\mathsf{set}}\) and \(\phi_{\mathsf{str}}\) are somewhat complex and, similar to \(\phi_{3}\) and \(\phi_{4}\), allow for partial credit. For some of the set-fill slots, the possible values are hierarchical; i.e., some values are more specific, and thus considered more accurate, than others. Suppose a set-fill slot \(s\) takes values from a set \(\mathcal{V}\), and we write \(P<:R\) to denote \(P\) is a subtype of \(R\). Then \(P<:R\) iff \(P\) is a descendant of \(R\) according to some hierarchy for \(P,R\in\mathcal{V}\). MUC-4 defines \(\phi_{\mathsf{set}}\) as: \[\phi_{\mathsf{set}}(P,R):=\begin{cases}1&\text{if }P=R;\\ \frac{1}{2}&\text{if }P<:R;\\ 0&\text{otherwise}\end{cases} \tag{33}\] This choice of \(\phi_{\mathsf{set}}\) is notable for suggesting a means of handling hierarchical sub-ontologies of the template ontology itself; such ontologies have seen considerable interest in many recent IE benchmarks, including RAMS (Ebner et al., 2020), WikiEvents (Li et al., 2021), and BETTER (Mckinnon and Rubino, 2022). We return to this in SS6. String-fill slots were evaluated based on maximum lexical overlap between a predicted mention and _all_ mentions in a reference _entity_. We provide more detailed discussion in Appendix C. ## 4 Sets with Latent Variables Next, we consider Abstract Meaning Representation (AMR) parsing (Langkilde and Knight, 1998; Banarescu et al., 2013), which involves outputs with _latent variables_. AMR describes the semantics of a sentence as a rooted, directed graph represented by a set of neo-Davidsonian triples, each with a subject, an object, and a relation. Subjects are variables and objects can be variables or concepts (e.g. from PropBank (Palmer et al., 2005)): classProp: # logical propositions rel: Relation # instance, ARG0, ARG1,... subj: Var # x, y,... obj: Var | Concept # z, want-01, boy,... classAmR: props: Set[Prop] Following the metrics for relation extraction, a _prima facie_ appealing metric for AMR graphs would be just like Eq. 
10 for binary RE: \[\mathsf{F}_{\mathsf{props}}^{\leftrightarrow}[\delta_{\mathsf{rel}}\times \phi_{\mathsf{subj}}\times\phi_{\mathsf{obj}}]\] However, this poses a problem, as we cannot know whether two variables \(x\) and \(y\) refer to the same object: instance(\(x\), boy) and instance(\(y\), boy) could match if there is no constraint enforcing that \(x\neq y\). Thus, it is not immediately clear what the similarity function for variables (\(\phi_{\mathsf{var}}\)) should be. The commonly used Smatch metric solves this problem. Smatch is defined to be the _maximum F-score obtainable via a one-to-one matching of variables between two AMRs_(Cai and Knight, 2013). That is, it looks for an optimal partial bijection \(M_{V}^{\leftrightarrow}\subseteq V_{P}\times V_{R}\) between the variables of the predicted and reference AMRs (\(V_{P}\) and \(V_{R}\), respectively). Given \(M_{V}^{\leftrightarrow}\), we can define \[\tilde{\phi}_{\mathsf{Var}}(x,y)=\llbracket(x,y)\in M_{V}^{\leftrightarrow} \rrbracket, \tag{34}\] where \(\tilde{\phi}\) denotes a similarity conditioned on the variables in its arguments being matched. Hence \(\textsc{Smatch}\) is given by \[\textsc{Smatch}=\max_{M_{V}^{\leftrightarrow}}\mathsf{F}_{\textsc{props}}^{ \leftrightarrow}[\delta_{\mathsf{rel}}\times\tilde{\phi}_{\mathsf{subj}} \times\tilde{\phi}_{\mathsf{obj}}]. \tag{35}\] We generalize the spirit of \(\textsc{Smatch}\) to any set of \(X\) with _latent variables_ yet to be matched. The _matching score_ of \(P\), \(R\) with latent variables \(V_{P},V_{R}\) is defined to be \[\Sigma^{\diamond}(P,R)=\max_{M_{V}^{\leftrightarrow},\,M^{\diamond}}\sum_{(u, v)\in M}\tilde{\phi}_{X}(u,v), \tag{36}\] where \(M_{V}^{\leftrightarrow}\) is an one-to-one matching between the variable set \(V_{P}\) and \(V_{R}\); and \(M^{\diamond}\) is an matching between objects in \(P\) and \(R\) under constraint \(\diamond\). Computing this constrained optimization problem requires solving \(M_{V}^{\leftrightarrow}\), which can be done via an integer linear programming (ILP) solver (Cai and Knight, 2013). See Appendix A for more details. ## 5 Matching of Other Structures In the past few sections we developed tools to obtain _matching of sets_. We can extend this to match more complex structures such as _sequences_, _DAGs_, and arbitrary _directed graphs_. Recall the matching score in Eq. 13: we computed a sum of similarities based on _matched pairs_. In the matching of structures, the matching should preserve the structure of the object being matched. Elements of a _sequence_ form a _total order_ where earlier elements _precede_ later elements. Given two sequences \(P\), \(R\) whose elements are of type \(X\), each is equipped with a total order: \((P,\leq_{P}),(R,\leq_{R})\). To compute the matching score of two sequences, we define \[\Sigma^{\diamond}(P,R)=\max_{M^{\diamond}}\sum_{(u,v)\in M^{\diamond}}\phi_{X }(u,v) \tag{37}\] \[\textsc{s.t.}\ \forall(u,v),(u^{\prime},v^{\prime})\in M^{\diamond},u\preceq_{P }u^{\prime}\Longleftrightarrow v\preceq_{R}v^{\prime}.\] That is, we seek a _maximum monotonic matching_ between \(P\) and \(R\) that preserves the total order. For example, the matching score between \((1,2,3,4,5)\) and \((1,3,5,7,9)\) is \(3\) since \(1,3,5\) are monotonically matched. The sequence matching problem given by Eq. (37) is a weighted _longest common subsequence_ (LCS) problem, and thus can be solved with dynamic programming. 
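The dynamic program for Eq. (37) is standard and fits in a few lines; in the following sketch the function name and the exact-match similarity used in the check are illustrative, and the element similarity \(\phi\) is supplied by the caller.

```python
# Sketch of the sequence matching score in Eq. (37): a maximum monotonic
# matching, computed as a weighted longest-common-subsequence DP.
def sequence_matching_score(P, R, phi):
    m, n = len(P), len(R)
    # dp[i][j] = best total similarity using the prefixes P[:i] and R[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j],
                           dp[i][j - 1],
                           dp[i - 1][j - 1] + phi(P[i - 1], R[j - 1]))
    return dp[m][n]

# the worked example above: matching (1,2,3,4,5) against (1,3,5,7,9) under an
# exact-match similarity gives 3, from the monotonically matched 1, 3, 5
delta = lambda u, v: float(u == v)
assert sequence_matching_score([1, 2, 3, 4, 5], [1, 3, 5, 7, 9], delta) == 3.0
```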
We can further generalize this matching score to DAGs and graphs by noting that the total order \(\preceq\) of sequence elements is relaxed to a _partial order_ in DAGs and a _preorder_ in arbitrary directed graphs. The constrained optimization problem in Eq. 37 can be solved via ILP, see Appendix A. ## 6 Discussion We have seen that a diverse set of structured prediction metrics can be framed as computing a normalized total matching score for an optimal matching, given some similarity, which may _itself_ reflect a score over an optimal matching of the relevant substructures. We now consider how different problem settings may motivate particular design decisions within this framework. We also highlight a couple of cases in which the actual metrics used for a task might be modified to better fit the problem setting. Partial CreditFor many tasks, we want to award some credit for partially correct responses. In applications where precision is paramount, it may be appropriate to insist on exact matches, but less so when some modest tradeoff with recall is desired. Moreover, many IE objects intuitively admit gradient notions of correctness. Perhaps the most obvious example of this is mentions. Exact match on mentions, whether string- or offset-based, remains surprisingly common despite the possibility for variation in how they are annotated (e.g. disagreements about the extent of NPs). More relaxed mention similarities -- such as head word matching or Jaccard score -- are typically more appropriate. Recently, there has also been greater interest in the _informativity_ of entity mentions (Li et al., 2021; Chen et al., 2023), where, e.g., names > nominal expressions > pronouns, and where scores may need to vary according to a mention's informativity. All of these can be captured by different choices of \(\phi_{\mathsf{Nention}}\). REE offers another example. Earlier (SS3.3), we saw that CEAF-REE uses the \(\phi_{\subseteq}\) entity similarity, which awards no credit at all to entities containing even one _incorrect_ mention, but full credit to entities containing just one _correct_ mention. A more natural extension of the CEAF metric to the REE setting, and one that permits partial credit, would be to replace \(\phi_{\subseteq}\) with \(\phi_{3}\) or \(\phi_{4}\). Hierarchical OntologiesType hierarchies are another common feature of IE problems: both events and entities may have types, subtypes, and even sub-subtypes. This is true of the event ontologies for FrameNet Baker et al. (1998), RAMS Ebner et al. (2020), WikiEvents Li et al. (2021), and even MUC-4.17 Yet, the standard evaluation metrics for these datasets do not take the hierarchy into account, instead treating the ontology as flat. Footnote 17: The attack template type is considered the parent type of all other template types in MUC-4. Following the discussion above, it may thus often be appropriate to replace exact type matches (\(\delta_{\text{type}}\)) with similarities that award partial credit for correct ancestor type prediction. One possibility is a level-based partial scoring: Given a \(D\)-level type ontology with types specified as a \(D\)-tuple \(P=(p_{1},\ldots,p_{D})\), we could, for instance, award credit based on the depth \(d\in\{0,\ldots,D\}\) of the most specific correctly predicted type, e.g.: \[\phi_{\text{type}}(P,R)=\begin{cases}2^{d-D}&\text{if }d>0;\\ 0&\text{otherwise,}\end{cases} \tag{38}\] where \(d=0\) iff even the most general type is incorrectly predicted. 
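As an illustration of the level-based scheme in Eq. (38), and before the alternative discussed next, the following sketch assumes types are encoded as tuples ordered from most general to most specific; the helper name and the example ontology values are invented for illustration.

```python
# Sketch of Eq. (38): award 2^(d - D) where D is the ontology depth and d is
# the depth of the deepest correctly predicted type (0 if even the most
# general type is wrong).  Types are assumed to be tuples ordered from most
# general to most specific.
def phi_type(pred: tuple, ref: tuple) -> float:
    D = len(ref)
    d = 0
    for p, r in zip(pred, ref):
        if p != r:
            break
        d += 1
    return 2.0 ** (d - D) if d > 0 else 0.0

# two of three levels correct in a 3-level ontology: 2^(2 - 3) = 0.5
print(phi_type(("Conflict", "Attack", "Bombing"), ("Conflict", "Attack", "Stabbing")))
```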
Or, one could adopt practices from related work in fine-grained entity typing Ling and Weld (2012); Chen et al. (2020), i.a.), which use the F\({}_{1}\) score of the set of all possible supertypes of predicted / reference types \(S(P)=\{t|p<:t,p\in P\}\): \[\phi_{\text{type}}(P,R)=\mathsf{F}^{\leftrightarrow}[\delta_{\text{type}}](S(P ),S(R)). \tag{39}\] There is some precedent for schemes like this, but proper analysis of performance on tasks with hierarchical ontologies requires metrics that account for that hierarchy, and the field of IE as a whole would benefit from adopting them more widely. One-Sided vs. Two-Sided ConstraintsIn general, metrics impose constraints on the matching between the predictions and the reference. Overwhelmingly, these tend to be two-sided (bijective) constraints, as systems usually try to generate just one predicted object for each one in the reference. But this is not always the case. The CEAF-RME metrics (Eqs. 27 and 28) proposed by Chen et al. (2023), which use one-sided constraints, are motivated in part by a need to evaluate a model that predicts _mention_ fillers against references that contain _entity_ fillers. This suggests a more general motivation for one-sided constraints -- namely, for cases where the reference outputs are _sets_, but where predictions take the form of _members_ of those sets. ## 7 Conclusion We have presented a framework that unifies a variety of structured prediction metrics as normalized scores over (possibly hierarchical) constrained optimal matchings of structured objects. On the side of _theory_, our framework elucidates the relationships among tasks by defining the core components of their metrics. On the side of _practice_, it offers a compositional toolkit for the design of new metrics (aided by our library) and for critically evaluating existing ones, showing where they may inadequately capture important task features (SS6). We intend this work to help the NLP community converge both on a common language for metric design and on more standardized metric implementations. ## Ethics Statement As this work principally describes a conceptual framework and presents a survey of evaluation metrics, we do not believe it raises ethical concerns. ## Limitations While this work aims to give a unified treatment of a variety of different metrics, our coverage of existing metrics is not exhaustive, and is intended rather to convey the expressiveness of the framework. Our framework for evaluation is based on _matching_ of substructures -- thus metrics based on structure _editing_ (e.g. string or tree edit distances; word error rate (WER) in speech recognition) cannot be expressed naturally in our formulation. One can of course define a \(\phi\) based on edit distances over sequences, but that has to be an atomic definition and cannot be derived naturally under our bottom-up approach.
2308.09581
Phase transition for the smallest eigenvalue of covariance matrices
In this paper, we study the smallest non-zero eigenvalue of the sample covariance matrices $\mathcal{S}(Y)=YY^*$, where $Y=(y_{ij})$ is an $M\times N$ matrix with iid mean $0$ variance $N^{-1}$ entries. We prove a phase transition for its distribution, induced by the fatness of the tail of $y_{ij}$'s. More specifically, we assume that $y_{ij}$ is symmetrically distributed with tail probability $\mathbb{P}(|\sqrt{N}y_{ij}|\geq x)\sim x^{-\alpha}$ when $x\to \infty$, for some $\alpha\in (2,4)$. We show the following conclusions: (i). When $\alpha>\frac83$, the smallest eigenvalue follows the Tracy-Widom law on scale $N^{-\frac23}$; (ii). When $2<\alpha<\frac83$, the smallest eigenvalue follows the Gaussian law on scale $N^{-\frac{\alpha}{4}}$; (iii). When $\alpha=\frac83$, the distribution is given by an interpolation between Tracy-Widom and Gaussian; (iv). In case $\alpha\leq \frac{10}{3}$, in addition to the left edge of the MP law, a deterministic shift of order $N^{1-\frac{\alpha}{2}}$ shall be subtracted from the smallest eigenvalue, in both the Tracy-Widom law and the Gaussian law. Overall speaking, our proof strategy is inspired by \cite{ALY} which is originally done for the bulk regime of the L\'{e}vy Wigner matrices. In addition to various technical complications arising from the bulk-to-edge extension, two ingredients are needed for our derivation: an intermediate left edge local law based on a simple but effective matrix minor argument, and a mesoscopic CLT for the linear spectral statistic with asymptotic expansion for its expectation.
Zhigang Bao, Jaehun Lee, Xiaocong Xu
2023-08-18T14:22:25Z
http://arxiv.org/abs/2308.09581v4
# Phase transition for the smallest eigenvalue of covariance matrices ###### Abstract In this paper, we study the smallest non-zero eigenvalue of the sample covariance matrices \(\mathcal{S}(Y)=YY^{*}\), where \(Y=(y_{ij})\) is an \(M\times N\) matrix with iid mean \(0\) variance \(N^{-1}\) entries. We consider the regime \(M=M(N)\) and \(M/N\to c_{\infty}\in(0,\infty)\setminus\{1\}\) as \(N\to\infty\). It is known that for the extreme eigenvalues of Wigner matrices and the largest eigenvalue of \(\mathcal{S}(Y)\), a weak 4th moment condition is necessary and sufficient for the Tracy-Widom law [51, 22]. In this paper, we show that the Tracy-Widom law is more robust for the smallest eigenvalue of \(\mathcal{S}(Y)\), by discovering a phase transition induced by the fatness of the tail of \(y_{ij}\)'s. More specifically, we assume that \(y_{ij}\) is symmetrically distributed with tail probability \(\mathbb{P}(|\sqrt{N}y_{ij}|\geq x)\sim x^{-\alpha}\) when \(x\to\infty\), for some \(\alpha\in(2,4)\). We show the following conclusions: (i). When \(\alpha>\frac{8}{3}\), the smallest eigenvalue follows the Tracy-Widom law on scale \(N^{-\frac{2}{3}}\); (ii). When \(2<\alpha<\frac{8}{3}\), the smallest eigenvalue follows the Gaussian law on scale \(N^{-\frac{\alpha}{4}}\); (iii). When \(\alpha=\frac{8}{3}\), the distribution is given by an interpolation between Tracy-Widom and Gaussian; (iv). In case \(\alpha\leq\frac{10}{3}\), in addition to the left edge of the MP law, a deterministic shift of order \(N^{1-\frac{\alpha}{2}}\) shall be subtracted from the smallest eigenvalue, in both the Tracy-Widom law and the Gaussian law. Overall speaking, our proof strategy is inspired by [5] which is originally done for the bulk regime of the Levy Wigner matrices. In addition to various technical complications arising from the bulk-to-edge extension, two ingredients are needed for our derivation: an intermediate left edge local law based on a simple but effective matrix minor argument, and a mesoscopic CLT for the linear spectral statistic with asymptotic expansion for its expectation. ## 1. Introduction ### Main results As one of the most classic models in random matrix theory, the sample covariance matrices have been widely studied. When considering the high-dimensional setting, it is well-known that the empirical spectral distribution converges to the Marchenko-Pastur law (MP law). Inspired by problems such as PCA, the extreme eigenvalues have also been extensively studied. Among the most well-known results in this direction are probably the Bai-Yin law [8] on the first order limit and the Tracy-Widom law [39, 40] on the second order fluctuation of the extreme eigenvalues. More specifically, let \(Y=(y_{ij})\in\mathbb{R}^{M\times N}\) be a random matrix with i.i.d. mean \(0\) and variance \(N^{-1}\) entries, and assume that \(\sqrt{N}y_{ij}\)'s are i.i.d. copies of a random variable \(\Theta\) which is independent of \(N\). The sample covariance matrix associated with the data matrix \(Y\) is defined as \(\mathcal{S}(Y)=YY^{*}\). Let \(\lambda_{1}(\mathcal{S}(Y))\geq\ldots\geq\lambda_{M}(\mathcal{S}(Y))\) be the ordered eigenvalues of \(\mathcal{S}(Y)\). We denote by \(\mu_{N}=\frac{1}{M}\sum_{i=1}^{M}\delta_{\lambda_{i}}\) the empirical spectral distribution.
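As a purely illustrative aside, this ensemble can be simulated along the following lines; the sampler below uses a symmetric Pareto-type variable as a stand-in for the entry distribution and is not claimed to satisfy every requirement of Assumption 1.1 below (for instance, its density vanishes near \(0\)).

```python
# Illustrative sampler (not from the paper): symmetric entries with a
# Pareto-type tail of index alpha, normalized to variance 1, used to form
# S(Y) = Y Y^* and to look at its smallest eigenvalue numerically.
import numpy as np

def sample_entries(shape, alpha, rng):
    # sign * T with T classical Pareto, P(T > x) = x^(-alpha) for x >= 1,
    # so E[T^2] = alpha / (alpha - 2) for alpha > 2
    signs = rng.choice([-1.0, 1.0], size=shape)
    t = rng.pareto(alpha, size=shape) + 1.0
    return signs * t / np.sqrt(alpha / (alpha - 2.0))

M, N, alpha = 200, 400, 3.0
rng = np.random.default_rng(0)
Y = sample_entries((M, N), alpha, rng) / np.sqrt(N)
eigs = np.linalg.eigvalsh(Y @ Y.T)
print("smallest eigenvalue of S(Y):", eigs.min())
```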
In the regime \(M=M(N)\), \(c_{N}\coloneqq M/N\to c_{\infty}\in(0,\infty)\) as \(N\to\infty\), it is well known since [54] that \(\mu_{N}\) is weakly approximated by the MP law \[\rho^{\mathsf{mp}}(\mathrm{d}x)=\frac{1}{2\pi c_{N}x}\sqrt{[(\lambda_{+}^{\mathsf{mp}}-x)(x-\lambda_{-}^{\mathsf{mp}})]_{+}}\mathrm{d}x+(1-\frac{1}{c_{N}})_{+}\delta_{0}(x),\qquad\lambda_{\pm}^{\mathsf{mp}}=(1\pm\sqrt{c_{N}})^{2}. \tag{1.1}\] The Stieltjes transform of \(\rho^{\mathsf{mp}}\) is denoted by \(\mathsf{m}_{\mathsf{mp}}(z)\), which satisfies the following equation: \[zc_{N}\mathsf{m}_{\mathsf{mp}}^{2}(z)+\big{(}z-(1-c_{N})\big{)}\mathsf{m}_{\mathsf{mp}}(z)+1=0. \tag{1.2}\] Equivalently, \[\mathsf{m}_{\mathsf{mp}}(z)=\frac{1-c_{N}-z+\mathrm{i}\sqrt{(\lambda_{+}^{\mathsf{mp}}-z)(z-\lambda_{-}^{\mathsf{mp}})}}{2zc_{N}}, \tag{1.3}\] where the square root is taken with a branch cut on the negative real axis. Throughout the paper, we will be interested in the regime \(c_{\infty}\neq 1\). In this case, both \(\lambda_{\pm}^{\text{mp}}\) are called soft edges of the spectrum. Regarding the extreme eigenvalues, the Bai-Yin law [8] states that \[\lambda_{1}(\mathcal{S}(Y))-\lambda_{+}^{\text{mp}}\xrightarrow{a.s.}0,\qquad\lambda_{M\wedge N}(\mathcal{S}(Y))-\lambda_{-}^{\text{mp}}\xrightarrow{a.s.}0,\] as long as \(\mathbb{E}|\sqrt{N}y_{ij}|^{4}<\infty\) is additionally assumed. It is also shown in [8] that \(\mathbb{E}|\sqrt{N}y_{ij}|^{4}<\infty\) is necessary and sufficient for the convergence of \(\lambda_{1}(\mathcal{S}(Y))\) to \(\lambda_{+}^{\text{mp}}\). It had been widely believed that the convergence of the smallest eigenvalue \(\lambda_{M\wedge N}(\mathcal{S}(Y))\) to \(\lambda_{-}^{\text{mp}}\) requires a weaker moment condition, and indeed it was shown in [62] that the condition of mean 0 and variance 1 for the \(\sqrt{N}y_{ij}\)'s is already sufficient. On the level of the second order fluctuation, as an extension of the seminal work on Wigner matrices [51], it was shown in [22] that the sufficient and necessary condition for the Tracy-Widom law of \(\lambda_{1}(\mathcal{S}(Y))\) is the existence of a weak 4-th moment, \[\lim_{s\to\infty}s^{4}\mathbb{P}(|\sqrt{N}y_{11}|\geq s)=0. \tag{1.4}\] Similarly to the first order result in [62], it has been believed that the Tracy-Widom law shall hold for the smallest eigenvalue \(\lambda_{M\wedge N}(\mathcal{S}(Y))\) under a weaker condition. In this work, we are going to show that the smallest eigenvalue counterpart of (1.4) is \[\lim_{s\to\infty}s^{\frac{8}{3}}\mathbb{P}(|\sqrt{N}y_{11}|\geq s)=0,\] under Assumption 1.1 below. Moreover, when the tail \(\mathbb{P}(|\sqrt{N}y_{11}|\geq s)\) becomes heavier, the distribution of \(\lambda_{M\wedge N}(\mathcal{S}(Y))\) exhibits a phase transition from Tracy-Widom to Gaussian. For technical reasons, we make the following assumptions on \(\mathcal{S}(Y)\). **Assumption 1.1**.: _We make the following assumptions on the covariance matrix \(\mathcal{S}(Y)\)._ _(i). (On matrix entries) We suppose that the \(\sqrt{N}y_{ij}\)'s are all iid copies of a random variable \(\Theta\) which is independent of \(N\). Suppose that \(\mathbb{E}\Theta=0\) and \(\mathbb{E}\Theta^{2}=1\). We further assume that \(\Theta\) is symmetrically distributed, absolutely continuous with a positive density at 0, and as \(s\to\infty\),_ \[\left|\mathbb{P}(\Theta>s)+\frac{\mathsf{c}}{\Gamma(1-\alpha/2)}s^{-\alpha}\right|\lesssim s^{-(\alpha+\varrho)}\] _for some \(\alpha\in(2,4)\), some constant \(\mathsf{c}>0\) and some small \(\varrho>0\)._ _(ii).
(On dimension) We assume that \(M\coloneqq M(N)\) and as \(N\to\infty\)_ \[c_{N}\coloneqq\frac{M}{N}\to c_{\infty}\in(0,\infty)\setminus\{1\}.\] Our results are collected in the following main theorem. For brevity, we assume \(M<N\) throughout this paper. Analogous results can be easily obtained by switching the role of \(M\) and \(N\) when \(M>N\). **Theorem 1.2**.: _Suppose that Assumption 1.1 holds. There exists a random variable \(\mathcal{X}_{\alpha}\), such that the following statements hold when \(N\to\infty\)._ _(i):_ \[\frac{-M^{\frac{2}{3}}}{\sqrt{c_{N}}(1-\sqrt{c_{N}})^{4/3}}\big{(}\lambda_{M \wedge N}(\mathcal{S}(Y))-\lambda_{-}^{\text{mp}}-\mathcal{X}_{\alpha}\big{)} \Rightarrow\mathrm{T}\mathrm{W}_{1}.\] _(ii):_ \[\frac{N^{\frac{\alpha}{4}}\big{(}\mathcal{X}_{\alpha}-\mathbb{E}\mathcal{X}_{ \alpha}\big{)}}{\sigma_{\alpha}}\Rightarrow N(0,1),\qquad\sigma_{\alpha}^{2}= \frac{\mathsf{c}c_{N}^{(4-\alpha)/4}(1-\sqrt{c_{N}})^{4}(\alpha-2)}{2}\Gamma \Big{(}\frac{\alpha}{2}+1\Big{)}.\] _(iii):_ \[\mathbb{E}\mathcal{X}_{\alpha}=-N^{1-\frac{\alpha}{2}}\frac{\mathsf{c}(1- \sqrt{c_{N}})^{2}}{c_{N}^{(\alpha-2)/4}}\Gamma\Big{(}\frac{\alpha}{2}+1\Big{)} +\mathsf{o}(N^{1-\frac{\alpha}{2}}),\] _._ * _In case_ \(\alpha=8/3\)_, the following convergence holds:_ \[\frac{-M^{\frac{2}{3}}}{\sqrt{c_{N}}(1-\sqrt{c_{N}})^{\frac{4}{3}}}\left(\lambda _{M\wedge N}(\mathcal{S}(Y))-\lambda_{-}^{\text{mp}}-\mathbb{E}\mathcal{X}_{ \alpha}\right)\Rightarrow\operatorname{TW}_{1}+\mathcal{N}(0,\tilde{\sigma}^{2 }),\quad\tilde{\sigma}^{2}=\frac{c\epsilon_{\infty}^{\frac{2}{3}}(1-\sqrt{c_{ \infty}})^{\frac{4}{3}}}{3}\Gamma\left(\frac{7}{3}\right).\] _where_ \(\operatorname{TW}_{1}\) _and_ \(\mathcal{N}(0,\tilde{\sigma})\) _in the RHS of the above convergence are independent._ _Remark 1_.: From the above theorem, we can see that a phase transition occurs at \(\alpha=8/3\). When \(\alpha>8/3\), the fluctuation of \(\lambda_{M\wedge N}(\mathcal{S}(Y))\) is governed by \(\operatorname{TW}_{1}\) on scale \(N^{-2/3}\). When \(2<\alpha<8/3\), the fluctuation is dominated by that of \(\mathcal{X}_{\alpha}\), and thus it is Gaussian on scale \(N^{-\alpha/4}\). In the case \(\alpha=8/3\), the limiting distribution is given by the convolution of a Tracy-Widom and Gaussian. When \(\alpha\leq 10/3\), a shift of order \(N^{1-\alpha/2}\) is created by \(\mathbb{E}\mathcal{X}_{\alpha}\). We remark here that a natural further direction is to exploit the expansion of \(\mathbb{E}\mathcal{X}_{\alpha}\) up to an order smaller than the fluctuation. But due to technical reason, we do not pursue this direction in the current paper. ### Related References The Tracy-Widom distribution in random matrices was first obtained for GOE and GUE in [64, 65] and was later extended to Wishart matrices in [39] and [40]. In the past few decades, the universality of the Tracy-Widom law has been extensively studied. The extreme eigenvalues of many random matrices with general distributions and structures have been proven to follow the Tracy-Widom distribution. We refer to the following literature [59, 61, 31, 55, 56, 58, 30, 46, 43, 10, 51, 49, 48, 6, 57, 24] for related developments. Although the Tracy-Widom distribution is very robust, some phase transitions may occur when considering heavy-tailed matrices or sparse matrices. For example, for sparse Erdos-Renyi graphs \(G(N,p)\), it is known from [36] that a phase transition from Tracy-Widom to Gaussian will occur when \(p\) crosses \(N^{-2/3}\). 
We also refer to [26, 50, 37, 32, 47] for related study. For heavy-tailed Wigner matrices or sample covariance matrices, as we mentioned, according to [51] and [22], the largest eigenvalue follows the Tracy Widom distribution if and only if a weak 4-th moment condition is satisfied. From [60, 7, 21], we also know the distribution of the largest eigenvalue when the matrix entries have heavier tail. We would also like to mention the recent research on the mobility edge of Levy matrix with \(\alpha<1\) in [3]. On the other hand, if we focus on bulk statistics, universality will be very robust. For any \(\alpha>0\), it is proved in [5, 2] that the bulk universality is valid. An extension of [5] to the hard edge of the covariance matrix in case \(M=N\) is considered in [52]. In our current work, we focus on the regime \(\alpha\in(2,4)\) for the left edge of the covariance matrices. According to [12], even the global law will no longer be MP law in case \(\alpha<2\), and thus we expect a significantly different analysis is needed in this regime. Regarding other works on the behaviour of the spectrum for heavy-tailed matrices, we refer to [13, 14, 18, 19, 15, 34, 33, 41] for instance. ### Proof strategy Our starting point is a decomposition of \(Y\), or more precisely a resampling of \(Y\), from the work [5]. Consider the Bernoulli \(0-1\) random variables \(\psi_{ij}\) and \(\chi_{ij}\) defined by \[\mathbb{P}[\psi_{ij}=1]=\mathbb{P}[|y_{ij}|\geq N^{-\epsilon_{b}}],\quad \mathbb{P}[\chi_{ij}=1]=\frac{\mathbb{P}[|y_{ij}|\in[N^{-1/2-\epsilon_{a}},N^{ -\epsilon_{b}}]]}{\mathbb{P}[|y_{ij}|<N^{-\epsilon_{b}}]} \tag{1.5}\] for some small positive constants \(\epsilon_{a},\epsilon_{b}\). In the sequel, we shall first choose \(\epsilon_{b}\) and then choose \(\epsilon_{a}=\epsilon_{a}(\epsilon_{b},\alpha)\) to be sufficiently small. Specifically, throughout the discussion, we can make the following choice \[0<\epsilon_{b}<(\alpha-2)/10\alpha,\qquad 0<\epsilon_{a}<\min\{\epsilon_{b},4- \alpha\}/10000. \tag{1.6}\] Let \(a,b\), and \(c\) be random variables such that \[\mathbb{P}[a_{ij}\in I] =\frac{\mathbb{P}[y_{ij}\in(-N^{-1/2-\epsilon_{a}},N^{-1/2- \epsilon_{a}})\cap I]}{\mathbb{P}[|y_{ij}|\leq N^{-1/2-\epsilon_{a}}]},\] \[\mathbb{P}[b_{ij}\in I] =\frac{\mathbb{P}[y_{ij}\in\left((-N^{-\epsilon_{b}},-N^{-1/2- \epsilon_{a}}]\cup[N^{-1/2-\epsilon_{a}},N^{-\epsilon_{b}})\right)\cap I]}{ \mathbb{P}[|y_{ij}|\in[N^{-1/2-\epsilon_{a}},N^{-\epsilon_{b}})]},\] \[\mathbb{P}[c_{ij}\in I] =\frac{\mathbb{P}[y_{ij}\in\left((-\infty,-N^{-\epsilon_{b}})\cup (N^{-\epsilon_{b}},\infty)\right)\cap I]}{\mathbb{P}[|y_{ij}|\geq N^{-\epsilon _{b}}]}.\] For each \((i,j)\in[M]\times[N]\), we set \[\mathsf{A}_{ij}=(1-\psi_{ij})(1-\chi_{ij})a_{ij},\quad\mathsf{B}_{ij}=(1-\psi _{ij})\chi_{ij}b_{ij},\quad\mathsf{C}_{ij}=\psi_{ij}c_{ij}\] where \(a,b,c,\psi,\chi\)-variables are all mutually independent. Sample \(Y\) and \(X\) by setting \[Y=\mathsf{A}+\mathsf{B}+\mathsf{C},\qquad X=\mathsf{B}+\mathsf{C}. \tag{1.7}\] The dependence among \(\mathsf{A},\mathsf{B}\) and \(\mathsf{C}\) is then governed by the \(\psi\) and \(\chi\) variables. The purpose of the above decomposition, especially the separation of part \(\mathsf{A}\), is to view our model as a deformed model. We hope that the light-tailed part \(\mathsf{A}\) can regularize the spectrum of the heavy-tailed part \(X=\mathsf{B}+\mathsf{C}\), leading to the emergence of the edge universality. This idea is rooted in the dynamic approach developed in the last decade. 
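As a schematic illustration (not the construction used in the proofs), the decomposition \(Y=\mathsf{A}+\mathsf{B}+\mathsf{C}\) and \(X=\mathsf{B}+\mathsf{C}\) in (1.7) can be realized numerically by routing the entries of a single draw of \(Y\) according to the two magnitude thresholds; this masking of one draw has the same joint law as the \(\psi,\chi\)-resampling described above, and the numerical constants in the snippet are illustrative rather than the choices in (1.6).

```python
# Illustrative sketch of the split in (1.7): entries of one draw of Y are
# routed to A, B or C by the thresholds N^(-1/2 - eps_a) and N^(-eps_b).
import numpy as np

def decompose(Y, eps_a, eps_b):
    N = Y.shape[1]
    lo, hi = N ** (-0.5 - eps_a), N ** (-eps_b)
    absY = np.abs(Y)
    A = Y * (absY < lo)                     # small-entry, light-tailed part
    B = Y * ((absY >= lo) & (absY < hi))    # intermediate, bounded-support part
    C = Y * (absY >= hi)                    # sparse large-entry part
    return A, B, C

rng = np.random.default_rng(1)
M, N = 200, 400
Y = rng.standard_t(df=3, size=(M, N)) / np.sqrt(N)   # heavy-tailed stand-in for Y
A, B, C = decompose(Y, eps_a=0.002, eps_b=0.03)      # illustrative constants, not (1.6)
X = B + C
assert np.allclose(Y, A + B + C)
print("entries routed to C:", int(np.count_nonzero(C)))
```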
We refer to the monograph [29] for a detailed introduction of this powerful approach, and also refer to [46, 45, 27, 20, 44, 35, 1, 28, 4] for instance. On a more specific level, our proof strategy is inspired by [5] where the authors consider the bulk statistics of the Levy Wigner matrices in the regime \(\alpha\in(0,2)\), which we will denote by \(H\) in the sequel. In [5], the main idea to prove the bulk universality of the local statistics is to compare the Levy Wigner matrix \(H=\mathsf{A}_{H}+\mathsf{B}_{H}+\mathsf{C}_{H}\) with the Gaussian divisible model \(H_{t}=\sqrt{t}W_{H}+\mathsf{B}_{H}+\mathsf{C}_{H}\), where \(\mathsf{A}_{H},\mathsf{B}_{H}\) and \(\mathsf{C}_{H}\) are defined similarly to \(\mathsf{A},\mathsf{B},\mathsf{C}\) above, and \(W_{H}\) is a GOE independent of \(H\). Here \(t\) is chosen in such a way that \(\sqrt{t}(W_{H})_{ij}\) matches \((\mathsf{A}_{H})_{ij}\) up to the third moment, conditioning on \((\psi_{H})_{ij}=0\), where \(\psi_{H}\) is defined similarly to \(\psi\). Roughly speaking, the proof strategy of [5] is as follows. First, one needs to prove that the spectrum of \(\mathsf{B}_{H}+\mathsf{C}_{H}\) satisfies an intermediate local law, which shows that the spectral density of \(\mathsf{B}_{H}+\mathsf{C}_{H}\) is bounded below and above at a scale \(\eta_{*}\leq N^{-\delta}t\). This control of the spectral density is also called \(\eta_{*}\)-regularity. Next, with the \(\eta_{*}\)-regularity established, one can use the results from [46] to prove that the \(\sqrt{t}W_{H}\) component can improve the spectral regularity to the optimal (bulk) scale \(\eta\geq N^{-1+\delta}\), and further obtain the bulk universality of \(H_{t}\). Finally, one can prove that the bulk local eigenvalue statistics of \(H\) and \(H_{t}\) have the same asymptotic distribution by comparing the Green functions of \(H\) and \(H_{t}\). However, the main difficulty here is that, unlike in \(H_{t}\), the small part \(\mathsf{A}_{H}\) and the major part \(\mathsf{B}_{H}+\mathsf{C}_{H}\) in \(H\) are not independent. They are coupled by the \(\psi\) and \(\chi\) variables. Despite this dependence being explicit, great effort has been made to carry out the comparison in [5]. At a high level, our proof strategy involves adapting the approach from [5] for the bulk regime to the left edge of the covariance matrices. However, this adaptation is far from being straightforward. We summarize some major ideas as follows. 1. (_Intermediate local law_) Similar to many previous DBM works, if we want to initiate the analysis, we need an intermediate local law for the \(X=\mathsf{B}+\mathsf{C}\) part. More precisely, we require an \(\eta_{*}\)-regularity of the eigenvalue density for \(\mathcal{S}(X)=XX^{*}\) at the left edge of the MP law, for some \(\eta_{*}\ll 1\). According to [7], such a regularity cannot be true at the right edge of the spectrum. In order to explain heuristically the difference between the largest and smallest eigenvalues under the heavy-tailed assumption, we recall the variational definition of the smallest and largest singular values of \(X\), which are also the square roots of the corresponding eigenvalues of \(\mathcal{S}(X)\), \[\sigma_{M}(X)=\inf_{v\in S^{M-1}}\left\|X^{*}v\right\|_{2},\qquad\sigma_{1}(X) =\sup_{v\in S^{M-1}}\left\|X^{*}v\right\|_{2}. \tag{1.8}\] Denote by \(v_{M}\) and \(v_{1}\) the right singular vectors of \(X^{*}\) corresponding to \(\sigma_{M}(X)\) and \(\sigma_{1}(X)\), respectively. 
From the variational representation, it is clear that \(v_{1}\) favors the large entry of \(X^{*}\), and thus \(\sigma_{1}(X)\) will be large as long as there is a big entry in \(X\). This is indeed the case when the weak \(4\)-th moment condition is not satisfied. In contrast, in (1.8), since \(v_{M}\) is the minimizer, it tries to avoid the big entries of \(X^{*}\), i.e., it tends to live in the null space of \(\mathsf{C}^{*}\). Hence, heuristically, we can believe that removing the \(\mathsf{C}\) entries will not significantly change the smallest singular value, as long as the null space of \(\mathsf{C}\) is sufficiently big. This will be true if \(\operatorname{rank}(\mathsf{C})=o(N)\), which indeed holds when \(\alpha>2\). This simple heuristic explains why the first order behaviour of the smallest singular value of \(X\), is more robust under the weak moment condition, in contrast to the largest singular value. It also indicates the following strategy for obtaining an intermediate local law for \(X\). Let \(\Psi=(\psi_{ij})\). We define the index sets \[\mathcal{D}_{r}\coloneqq\mathcal{D}_{r}(\Psi)\coloneqq\Big{\{}i\in[M]:\sum_{ j=1}^{N}\psi_{ij}\geq 1\Big{\}},\qquad\mathcal{D}_{c}\coloneqq\mathcal{D}_{c}( \Psi)\coloneqq\Big{\{}j\in[N]:\sum_{i=1}^{M}\psi_{ij}\geq 1\Big{\}} \tag{1.9}\] which are the index set of rows/columns in which one can find at least one nonzero \(\psi_{ij}\). For any matrix \(A\in\mathbb{C}^{M\times N}\), let \(A^{(\mathcal{D}_{r})}\) and \(A^{[\mathcal{D}_{c}]}\) be the minors of \(A\) with the \(\mathcal{D}_{r}\) rows and \(\mathcal{D}_{c}\) columns removed, respectively, and we also use \(\mathcal{S}(\mathcal{B})=\mathcal{B}\mathcal{B}^{*}\) for any rectangle matrix \(\mathcal{B}\) in the sequel. By Cauchy interlacing, we can easily see that \[\lambda_{M}(\mathcal{S}(X^{[\mathcal{D}_{c}]}))\leq\lambda_{M}(\mathcal{S}(X) )\leq\lambda_{M-|\mathcal{D}_{r}|}(\mathcal{S}(X^{(\mathcal{D}_{r})}))\] Further notice that \(X^{(\mathcal{D}_{r})}=\mathsf{B}^{(\mathcal{D}_{r})}\) and \(X^{[\mathcal{D}_{c}]}=\mathsf{B}^{[\mathcal{D}_{c}]}\), and thus we have \[\lambda_{M}(\mathcal{S}(\mathsf{B}^{[\mathcal{D}_{c}]}))\leq\lambda_{M}( \mathcal{S}(X))\leq\lambda_{M-|\mathcal{D}_{r}|}(\mathcal{S}(\mathsf{B}^{( \mathcal{D}_{r})})). \tag{1.10}\] Conditioning on the matrix \(\Psi\), we notice that both \(\mathcal{S}(\mathsf{B}^{[\mathcal{D}_{c}]})\) and \(\mathcal{S}(\mathsf{B}^{(\mathcal{D}_{r})})\) are random matrices with bounded support, since \(|b_{ij}|\leq N^{-\epsilon_{b}}\). For such matrices, one has a local law with precision \(N^{-2\epsilon_{b}}\); see [38]. This local law together with (1.10) will give a rigidity estimate of \(\lambda_{M}(\mathcal{S}(X))\) on scale \(\eta_{*}=N^{-\epsilon_{b}}\) according to our choice in (1.6). Similarly applying the above row and column minor argument, one can derive an intermediate local law for \(X\), which implies that \(X\) satisfies the \(\eta_{*}\)-regularity at the left edge. We remark here that in our regime \(\alpha\in(2,4)\), a weak intermediate local law, or alternatively, a weak regularity with \(\eta_{*}\sim N^{-\varepsilon}\) for some small \(\varepsilon>0\) would be sufficient. This is always possible if we choose a suitable \(\epsilon_{b}\). In contrast, in the work [5], in the regime \(\alpha\in(0,2)\), a stronger regularity with a more carefully chosen \(\eta_{*}\) is actually needed. 2. 
(_Gaussian divisible ensemble_) We then consider the Gaussian divisible model \[V_{t}\coloneqq\sqrt{t}W+\mathsf{B}+\mathsf{C}=\sqrt{t}W+X,\qquad\mathcal{S}( V_{t})=V_{t}V_{t}^{*}, \tag{1.11}\] where \(W=(w_{ij})\in\mathbb{R}^{M\times N}\) is a Gaussian matrix with \(\mathrm{iid}\ N(0,N^{-1})\) entries, and \(t=N\mathbb{E}|\mathsf{A}_{ij}|^{2}\) (slightly different from the choice in [5] for convenience). With the \(\eta_{*}\)-regularity of \(\mathcal{S}(X)\), we then choose \(1\gg t\gg\sqrt{\eta_{*}}\). Actually, our \(t\) would be order \(N^{-2\epsilon_{a}}\). By choosing \(\epsilon_{a}\) sufficiently small in light of (1.6), our \(t\) can be sufficiently close to \(1\). By conditioning on the matrix \(X\), the following edge universality can be achieved for the Gaussian divisible model \(\mathcal{S}(V_{t})\) by extending the result in [46] and [24] to the left edge of the sample covariance matrices \[N^{\frac{2}{3}}\gamma\big{(}(\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{-,t}) \Rightarrow\mathrm{TW}_{1}, \tag{1.12}\] for some constant \(\gamma\), where \(\lambda_{-,t}\) can be approximated by a mesoscopic statistic of the spectrum of \(\mathcal{S}(X)\). Specifically, \[\lambda_{-,t}=(1-c_{N}tm_{X}(\zeta_{-,t}))^{2}\zeta_{-,t}+(1-c_{N})t(1+c_{N}tm _{X}(\zeta_{-,t})), \tag{1.13}\] where \(m_{X}\) is the Stieltjes transform of the spectral distribution of \(\mathcal{S}(X)\), and \(\zeta_{-,t}\) is a random parameter defined through (2.3). We remark here that even though \(\zeta_{-,t}\) is random, it can be proven that with a high probability, \(\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t}\sim t^{2}\). Hence, regarding the Stieltjes transform \(m_{X}(\zeta_{-,t})\), we are at a (random) mesoscopic energy scale of order \(t^{2}\). From the work [16, 53], one already knows that the global statistic \(m_{X}(z)-\mathbb{E}m_{X}(z)\) follows a CLT on scale \(N^{-\alpha/4}\) for a fixed \(z\) with \(\operatorname{Im}z>0\). Due to the randomness of our parameter \(\zeta_{-,t}\), a further expansion of it around a deterministic parameter \(\zeta_{\mathsf{e}}\) will be needed to adapt the argument in [16, 53]. Consequently, after the expansion, we will need to control the fluctuations of \(m_{X}^{(k)}(\zeta_{\mathsf{e}})\) for \(k=0,\ldots,K\) with a sufficiently large \(K\). Studying the fluctuations of these mesoscopic statistics eventually leads to a CLT \[N^{\frac{\alpha}{4}}(\lambda_{-,t}-\mathbb{E}\lambda_{-,t})\Rightarrow N(0, \sigma_{\alpha}^{2}).\] In addition to the above CLT, we need one more step to study the expansion of \(\mathbb{E}\lambda_{-,t}\). It turns out that \[\mathbb{E}\lambda_{-,t}=\lambda_{-}^{\mathsf{mp}}-N^{1-\frac{\alpha}{2}}s_{ \alpha}+\mathfrak{o}(N^{1-\frac{\alpha}{2}}).\] 3. (_Green function comparison_) Finally, we shall extend the result (1.12) from the Gaussian divisible model to our original matrix \(\mathcal{S}(Y)\), using a Green function comparison inspired by [5]. It is now well-understood that one can compare certain functionals of the Green functions of two matrices instead of their eigenvalue distributions. 
Recall \(V_{t}\) from (1.11), and we define the interpolations \[Y^{\gamma}=\gamma\mathsf{A}+t^{1/2}(1-\gamma^{2})^{1/2}W+\mathsf{B}+\mathsf{C},\qquad S^{\gamma}=Y^{\gamma}(Y^{\gamma})^{*},\] \[G^{\gamma}(z)=(S^{\gamma}-z)^{-1},\qquad\mathcal{G}^{\gamma}(z)=((Y^{\gamma})^{*}Y^{\gamma}-z)^{-1},\qquad m^{\gamma}(z)=\frac{1}{M}\text{Tr}G^{\gamma}(z). \tag{1.14}\] In order to extend (1.12) from \(S^{0}=\mathcal{S}(V_{t})\) to \(S^{1}=\mathcal{S}(Y)\), from [56] for instance, we know that it suffices to establish the following result for some smooth bounded \(F:\mathbb{R}\to\mathbb{R}\) with bounded derivatives: \[\Big{|}\mathbb{E}F\Big{(}N\int_{E_{1}}^{E_{2}}\mathrm{d}E\,\operatorname{Im}m^{1}(\lambda_{-,t}+E+\mathrm{i}\eta_{0})\Big{)}-\mathbb{E}F\Big{(}N\int_{E_{1}}^{E_{2}}\mathrm{d}E\,\operatorname{Im}m^{0}(\lambda_{-,t}+E+\mathrm{i}\eta_{0})\Big{)}\Big{|}\leq N^{-\delta}, \tag{1.15}\] where \(E_{1}<E_{2}\), and \(|E_{i}|\leq N^{-\frac{2}{3}+\varepsilon}\) for \(i=1,2\), and \(\eta_{0}=N^{-\frac{2}{3}-\varepsilon}\), if we have the rigidity estimate \[|\lambda_{M}(S^{a})-\lambda_{-,t}|\prec N^{-\frac{2}{3}},\qquad a=0,1. \tag{1.16}\] The estimate is easily available for the case \(a=0\) (Gaussian divisible model) by a straightforward extension of [46] and [24]. This rigidity estimate for the case \(a=0\) is actually a technical input for getting (1.12). Hence, before the comparison in (1.15), we shall first prove (1.16) for \(a=1\), again by a Green function comparison. We claim that it suffices to show for all \(z_{-,t}=\lambda_{-,t}+\kappa+\mathrm{i}\eta\), with \(|\kappa|\leq N^{-\epsilon_{b}/2}\), and \(\eta\in[N^{-\frac{2}{3}-\varepsilon},N^{-\varepsilon}]\) with some small \(\varepsilon>0\), \[\mathbb{E}\Big{|}N\eta(\operatorname{Im}m^{1}(z_{-,t})-\operatorname{Im}\tilde{m}^{0}(z_{-,t}))\Big{|}^{2k}\leq(1+o(1))\mathbb{E}\Big{|}N\eta(\operatorname{Im}m^{0}(z_{-,t})-\operatorname{Im}\tilde{m}^{0}(z_{-,t}))\Big{|}^{2k}+N^{-\delta k}. \tag{1.17}\] A similar estimate also holds when one replaces \(\operatorname{Im}\) with \(\operatorname{Re}\). Here we introduced a copy of \(m^{0}(z)\), \[\tilde{m}^{0}(z)=\frac{1}{M}\text{Tr}\big{(}(\sqrt{t}\tilde{W}+X)(\sqrt{t}\tilde{W}+X)^{*}-z\big{)}^{-1},\] where \(\tilde{W}\) is an iid copy of \(W\). Actually, for the Gaussian divisible model, conditioning on \(X\) and extending Theorem 3 in [23] on the deformed rectangle matrices from the right edge to the left edge, one can get the estimate \[\big{|}\operatorname{Im}m^{0}(z_{-,t})-\operatorname{Im}m_{t}(z_{-,t})\big{|}\prec\left\{\begin{array}{ll}\frac{1}{N\eta},&\text{if }\kappa\geq 0,\\ \frac{1}{N(|\kappa|+\eta)}+\frac{1}{(N\eta)^{2}\sqrt{|\kappa|+\eta}},&\text{if }\kappa\leq 0,\end{array}\right. \tag{1.18}\] where \(m_{t}\) is defined in (2.2). Apparently, the above estimates also hold with \(m^{0}\) replaced by \(\tilde{m}^{0}\). Combining these estimates with (1.17) leads to the bounds \(|\operatorname{Im}m^{1}(z_{-,t})-\operatorname{Im}m_{t}(z_{-,t})|\prec 1/(N\eta)\) when \(\kappa>-N^{-\frac{2}{3}+\varepsilon}\) and \(|\operatorname{Im}m^{1}(z_{-,t})-\operatorname{Im}m_{t}(z_{-,t})|\ll 1/(N\eta)\) (w.h.p.) when \(\kappa\leq-N^{-\frac{2}{3}+\varepsilon}\). Such estimates together with the real part analogue of the former will finally lead to the rigidity estimate in (1.16). The proofs of (1.15) and (1.17) are similar.
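Both comparisons run along the interpolating family (1.14). Purely as an illustration of the objects involved, a small numerical sketch is given below; the placeholder inputs stand in for the actual \(\mathsf{A},\mathsf{B},\mathsf{C}\) and are not the matrices used in the proofs.

```python
# Sketch of the interpolating family in (1.14):
#   Y^gamma = gamma*A + sqrt(t)*(1 - gamma^2)^(1/2)*W + B + C,
# together with S^gamma, its Green function G^gamma(z) and m^gamma(z).
import numpy as np

def interpolate(A, B, C, W, t, gamma):
    return gamma * A + np.sqrt(t * (1.0 - gamma ** 2)) * W + B + C

def m_gamma(A, B, C, W, t, gamma, z):
    Yg = interpolate(A, B, C, W, t, gamma)
    S = Yg @ Yg.T                                   # S^gamma = Y^gamma (Y^gamma)^*
    G = np.linalg.inv(S - z * np.eye(S.shape[0]))   # G^gamma(z)
    return np.trace(G) / S.shape[0]                 # m^gamma(z) = Tr G^gamma(z) / M

# placeholder inputs, only to exercise the functions; gamma = 0 corresponds to
# the Gaussian divisible model S^0 = S(V_t) and gamma = 1 to S^1 = S(Y)
rng = np.random.default_rng(2)
M, N, t = 50, 100, 0.3
A, B, C, W = (rng.standard_normal((M, N)) / np.sqrt(N) for _ in range(4))
print(m_gamma(A, B, C, W, t, 0.5, z=0.1 + 0.01j))
```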
We can turn to bound \[\mathrm{d}\ \mathbb{E}F\Big{(}N\int_{E_{1}}^{E_{2}}\mathrm{Im}\,m^{\gamma}(z_{-,t}^{0 })\mathrm{d}E\Big{)}/\mathrm{d}\gamma \tag{1.19}\] for \(z_{-,t}^{0}\coloneqq\lambda_{-,t}+E+\mathrm{i}\eta_{0}\) with \(\eta_{0}=N^{-\frac{2}{3}-\varepsilon}\), and \[\mathrm{d}\ \mathbb{E}|N\eta(\mathrm{Im}\,m^{\gamma}(z_{-,t})-\mathrm{Im}\, \tilde{m}^{0}(z_{-,t}))|^{2k}/\mathrm{d}\gamma \tag{1.20}\] for \(z_{-,t}=\lambda_{-,t}+E+\mathrm{i}\eta\), where \(E\in(-N^{-\epsilon_{b}/2},N^{-\frac{2}{3}+\epsilon})\) and \(\eta=N^{-\frac{2}{3}}\). Actually, we shall first condition on \(\Psi\), and then first estimate \(\mathbb{E}_{\Psi}\) and then use a law of total expectation to estimate the full expectation. When one try to take the derivatives in (1.19)-(1.20) and estimate the resulting terms, we will need a priori bounds for the Green function entries \[G^{\gamma}_{ij}(z),\quad\mathcal{G}^{\gamma}_{uv}(z),\quad((Y^{\gamma})^{*}G^ {\gamma}(z))_{ui} \tag{1.21}\] in the domain \[\mathsf{D}=\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}) \coloneqq\{z=\lambda_{-}^{\mathsf{mp}}+E+\mathrm{i}\eta:|E|\leq N^{- \varepsilon_{1}},\eta\in[N^{-\frac{2}{3}-\varepsilon_{2}},\varepsilon_{3}]\} \tag{1.22}\] with appropriately chosen small constants \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}\). We shall show that most of these entries are stochastically dominated by \(1\) while a small amount of them are stochastically dominated by \(1/t^{2}\). These bounds are not even known for the Gaussian divisible case, i.e., \(\gamma=0\), at the edge. The idea is to first prove the desired bounds of the quantities in (1.21) for \(\gamma=0\), and then prove another comparison result for the Green functions \[\left|\mathbb{E}|G^{\gamma}_{ij}(z)|^{2k}-\mathbb{E}|G^{0}_{ij}(z)|^{2k}\right| \leq N^{-\delta} \tag{1.23}\] for all \(z\in\mathsf{D}\). Here we refer to [5] and [43] for similar strategy of using comparison to prove Green function bounds on local scale. Hence, based on the above discussion, the proof route is as following \[\boxed{\text{bounds of (\ref{eq:21}) for $\gamma=0$}} \tag{1.23}\] (1.17) which requires a three steps of Green function comparison with different observables. In contrast, in [5], one Green function comparison for the observable \(F(\mathrm{Im}\,G_{a_{1}b_{1}}(z),\cdots,\mathrm{Im}\,G_{a_{m}b_{m}}(z))\) (and its real part analogoue) with a deterministic parameter \(z\) in the bulk regime would be sufficient. Also notice that our parameter \(z_{-,t}\) in (1.15) is random, which further complicates the comparison. Specifically, when we do expansions of the Green function entries w.r.t. the matrix entries, we shall also keep tracking the derivatives of \(\lambda_{-,t}\) w.r.t to these entries. The estimates of these derivatives involve delicate analysis of the subordination equations. Regarding the bounds of (1.21) for \(\gamma=0\), here we shall explain the argument for \(G_{ij}\) only for simplicity. The other two kinds of entries in (1.21) can be handled similarly. 
For the Gaussian divisible model, conditioning on \(X\), by extending the Theorem 3 in [23] on the deformed rectangle matrices from the right edge to the left edge, with the \(\eta_{*}\)-regularity of the spectrum of \(\mathcal{S}(X)\), we have for \(z\in\mathsf{D}\) \[|G^{0}_{ij}(z)-\Pi_{ij}(z)|\prec\Big{(}t\Big{(}\sqrt{\frac{\mathrm{Im}\,m_{t }(z)}{N\eta}}+\frac{1}{N\eta}\Big{)}+\frac{t^{1/2}}{N^{1/2}}\Big{)}\varpi^{-2} (z),\] where \(\varpi\) is of order \(t^{2}+\eta\), and \[\Pi_{ij}=(1+c_{N}tm_{t})\big{(}XX^{*}-\zeta_{t}(z)\big{)}_{ij}^{-1}=:(1+c_{N}tm _{t})G_{ij}(X,\zeta_{t}(z)).\] which is simply a multiple of a Green function entries of \(\mathcal{S}(X)\), but evaluated at a random parameter \(\zeta_{t}(z)\). By the facts \(t\sim N^{-2\epsilon_{a}}\), \(\eta\gtrsim N^{-\frac{2}{3}-\varepsilon_{2}}\) and \(|m_{t}(z)|\leq(ct|z|)^{-1/2}\) (cf. Lemma 2.1 (iv)), one can easily get \(|G^{0}_{ij}(z)-\Pi_{ij}(z)|\prec 1\). Hence, what remains is to bound \(\Pi_{ij}\), i.e., to bound \(G_{ij}(X,\zeta_{t}(z))\), the Green function entry of the heavy-tailed covariance matrix \(\mathcal{S}(X)\), in the regime \(2<\alpha<4\). We notice that such a bound has been obtained in [2] for the heavy-tailed Wigner matrices in the same regime of \(\alpha\), but in the bulk. Extending such a bound to edge could be difficult due to the deterioration of the stability of self-consistent equation of the Stieltjes transform. However, we notice that with the \(\eta_{*}\)-regularity of the left edge of \(\mathcal{S}(X)\) spectrum, one can show that the parameter \(\zeta_{t}(z)\) is away from the left edge of the \(\mathcal{S}(X)\) spectrum by a distance of order \(t^{2}\). Hence, we are away from the edge by a mesoscopic distance, which allow us the conduct the argument similarly to the bulk case in [2] to get the desired bound for \(\Pi_{ij}(z)\). ### Organization The rest of the paper will be organized as follows. In Section 2, we will state the main results for the Gaussian divisible model, whose proofs will be stated in Section 3. Section 4 is devoted to the statements of the Green function comparisons and prove our main theorem based on the comparisons. In Section 5, we prove these comparison results. Some technical estimates are stated in the appendix. ### Notation Throughout this paper, we regard \(N\) as our fundamental large parameter. Any quantities that are not explicit constant or fixed may depend on \(N\); we almost always omit the argument \(N\) from our notation. We use \(\|u\|_{\alpha}\) to denote the \(\ell^{\alpha}\)-norm of a vector \(u\). We further use \(\|A\|\) for the operator norm of a matrix \(A\). We use \(C\) to denote some generic (large) positive constant. The notation \(a\sim b\) means \(C^{-1}b\leq|a|\leq Cb\) for some positive constant \(C\). Similarly, we use \(a\lesssim b\) to denote the relation \(|a|\leq Cb\) for some positive constant \(C\). \(\mathcal{O}\) and \(\mathfrak{o}\) denote the usual big and small O notation, and \(\mathcal{O}_{p}\) and \(\mathfrak{o}_{p}\) denote the big and small O notation in probability. When we write \(a\ll b\) and \(a\gg b\) for possibly \(N\)-dependent quantities \(a=a(N)\) and \(b=b(N)\), we mean \(|a|/b\to 0\) and \(|a|/b\to\infty\) when \(N\to\infty\), respectively. For any positive integer \(n\), let \([n]=[1:n]\) denote the set \(\{1,\dots,n\}\). For \(a,b\in\mathbb{R}\), \(a\lor b=\max\{a,b\}\) and \(a\wedge b=\min\{a,b\}\). 
For a square matrix \(A\in\mathbb{R}^{n\times n}\), we let \(A_{\mathrm{diag}}=(A_{ij}\delta_{ij})\in\mathbb{R}^{n\times n}\). We adopt the following Green function notation for any rectangle matrix \(A\in\mathbb{R}^{m\times n}\), \(G(A,z)=(AA^{\top}-z)^{-1}\). ## 2. Gaussian divisible model In this section, we state the main results for a Gaussian divisible model, and leave the detailed proofs to the next section. ### Some definitions Recall that \(W=(w_{ij})\in\mathbb{R}^{M\times N}\) is a Gaussian matrix with iid \(N(0,N^{-1})\) entries, and \(t=N\mathbb{E}|\mathsf{A}_{ij}|^{2}\). Consider the standard signal-plus-noise model \[V_{t}\coloneqq X+\sqrt{t}W. \tag{2.1}\] In this section, we will establish several spectral properties of \(\mathcal{S}(V_{t})\) that will be extended to \(\mathcal{S}(Y)\) later. For most of the discussion in this part, we will condition on \(X\) and regard it as given, and work with the randomness of \(W\). In light of this, we introduce the asymptotic eigenvalue density of \(\mathcal{S}(V_{t})\), denoted by \(\rho_{t}\), through its corresponding Stieltjes transform \(m_{t}\coloneqq m_{t}(z)\). For any \(t>0\), \(m_{t}\) is known to be the unique solution to the following equation: \[m_{t}=\frac{1}{M}\sum_{i=1}^{M}\frac{1+c_{N}tm_{t}}{\lambda_{i}(\mathcal{S}(X) )-\zeta_{t}}, \tag{2.2}\] subject to the condition that \(\operatorname{Im}m_{t}>0\) for any \(z\in\mathbb{C}_{+}\). Here \[\zeta_{t}\coloneqq\zeta_{t}(z)\coloneqq(1+c_{N}tm_{t}(z))^{2}z-t(1-c_{N}t)(1- c_{N}tm_{t}(z)). \tag{2.3}\] In the context of free probability theory, \(\rho_{t}\) corresponds to the rectangular free convolution of the spectral distribution of \(\mathcal{S}(X)\) with the MP law on scale \(t\), and \(\zeta_{t}\) is the so-called subordination function for the rectangular free convolution. The following lemma provides a precise description of the existence and uniqueness of the asymptotic density. The following result holds for any realization of \(X\). **Lemma 2.1** (Existence and uniqueness of asymptotic density, Lemma 2 of [23]).: _For any \(t>0\), the following properties hold._ 1. _There exists a unique solution_ \(m_{t}\) _to equation (_2.2_) satisfying that_ \(\operatorname{Im}m_{t}(z)>0\) _and_ \(\operatorname{Im}zm_{t}(z)>0\) _if_ \(z\in\mathbb{C}^{+}\)_._ 2. _For all_ \(E\in\mathbb{R}\setminus\{0\}\)_,_ \(\lim_{\eta\downarrow 0}m_{t}(E+\mathrm{i}\eta)\) _exits, and we denote it as_ \(m_{t}(E)\)_. The function_ \(m_{t}\) _is continuous on_ \(\mathbb{R}\setminus\{0\}\)_, and_ \(\rho_{t}(E)\coloneqq\pi^{-1}\mathrm{Im}\,m_{t}(E)\) _is a continuous probability density function on_ \(\mathbb{R}^{+}\coloneqq\{E\in\mathbb{R}:E>0\}\)_. Moreover,_ \(m_{t}\) _is the Stieltjes transform of_ \(\rho_{t}\)_. Finally,_ \(m_{t}(E)\) _is a solution to (_2.2_) for_ \(z=E\)_._ 3. _For all_ \(E\in\mathbb{R}\setminus\{0\}\)_,_ \(\lim_{\eta\downarrow 0}\zeta_{t}(E+\mathrm{i}\eta)\) _exits, and we denote it as_ \(\zeta_{t}(E)\)_. Moreover, we have_ \(\operatorname{Im}\zeta_{t}(z)>0\) _if_ \(z\in\mathbb{C}^{+}\)_._ 4. _We have_ \(\operatorname{Re}\left(1+c_{N}tm_{t}(z)\right)>0\) _for all_ \(z\in\mathbb{C}^{+}\) _and_ \(|m_{t}(z)|\leq(c_{N}t|z|)^{-1/2}\)_._ For a realization of \(X\), we can check if it satisfies the following regularity condition on \(m_{X}(z)\). Such a condition is crucial for the edge universality of DBM; see [46, 23] for instance. 
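Before stating the regularity condition, we remark that the system (2.2)-(2.3) is explicit enough to be iterated numerically. The sketch below transcribes the two displayed equations into a naive damped fixed-point loop; it is purely illustrative, and no claim is made about convergence, especially near the edge where more careful methods are needed.

```python
# Naive numerical sketch of the subordination system (2.2)-(2.3): given the
# eigenvalues of S(X), iterate the defining equation of m_t(z).
import numpy as np

def m_t(z, eigs, c, t, iters=500):
    m = 1j                                          # start in the upper half plane
    for _ in range(iters):
        zeta = (1 + c * t * m) ** 2 * z - t * (1 - c * t) * (1 - c * t * m)   # (2.3)
        m_new = np.mean((1 + c * t * m) / (eigs - zeta))                      # (2.2)
        m = 0.5 * m + 0.5 * m_new                   # damped update
    return m

# toy usage with a surrogate spectrum standing in for S(X)
rng = np.random.default_rng(3)
M, N, t = 200, 400, 0.3
X = rng.standard_normal((M, N)) / np.sqrt(N)
eigs = np.linalg.eigvalsh(X @ X.T)
print("m_t(z) at z = 0.1 + 0.1i:", m_t(0.1 + 0.1j, eigs, M / N, t))
```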
**Definition 2.2** (\(\eta_{*}\)- regularity).: _Let \(\eta_{*}\) be a parameter satisfying \(\eta_{*}\coloneqq N^{-\tau_{*}}\) for some constant \(0<\tau_{*}\leq 2/3\). For an \(M\times N\) matrix \(H\), we say \(\mathcal{S}(H)\) is \(\eta_{*}\)-regular around the left edge \(\lambda_{-}=\lambda_{M}(\mathcal{S}(H))\) if there exist constants \(c_{H}>0\) and \(C_{H}>1\) such that the following properties hold:_ 1. _For_ \(z=E+\mathrm{i}\eta\) _with_ \(\lambda_{-}\leq E\leq\lambda_{-}+c_{H}\) _and_ \(\eta_{*}+\sqrt{\eta_{*}|E-\lambda_{-}|}\leq\eta\leq 10\)_, we have_ \[\frac{1}{C_{H}}\sqrt{|E-\lambda_{-}|+\eta}\leq\operatorname{Im}m_{H}(E+\mathrm{ i}\eta)\leq C_{H}\sqrt{|E-\lambda_{-}|+\eta}.\] _For_ \(z=E+\mathrm{i}\eta\) _with_ \(\lambda_{-}-c_{H}\leq E\leq\lambda_{-}\) _and_ \(\eta_{*}\leq\eta\leq 10\)_, we have_ \[\frac{1}{C_{H}}\frac{\eta}{\sqrt{|E-\lambda_{-}|+\eta}}\leq \operatorname{Im}m_{H}(E+\mathrm{i}\eta)\leq C_{H}\frac{\eta}{\sqrt{|E- \lambda_{-}|+\eta}}.\] 2. _We have_ \(c_{H}/2\leq\lambda_{-}\leq 2C_{H}\)_._ 3. _We have_ \(\|\mathcal{S}(H)\|\leq N^{C_{H}}\)_._ The following lemma is a direct implication of \(\eta_{*}\)- regularity. **Lemma 2.3** (Lemma 6 of [23]).: _Suppose (a realization of) \(\mathcal{S}(X)\) is \(\eta_{*}\)-regular in the sense of Definition 2.2. Let \(\mu_{X}\) be the measure associated with \(m_{X}(z)\). For any fixed integer \(k\geq 2\), and any \(z\in\mathcal{D}\) with_ \[\mathcal{D} \coloneqq\big{\{}z=E+\mathrm{i}\eta:\lambda_{-}-\frac{3}{4} \tilde{c}\leq E\leq\lambda_{-},2\eta_{*}\leq\eta\leq 10\big{\}}\] \[\quad\cup\big{\{}z=E+\mathrm{i}\eta:\lambda_{-}\leq E\leq\lambda_ {-}+\frac{3}{4}\tilde{c},\eta_{*}+\sqrt{\eta_{*}(E-\lambda_{-})}\leq\eta\leq 10 \big{\}}\] \[\quad\cup\big{\{}z=E+\mathrm{i}\eta:\lambda_{-}-\frac{3}{4} \tilde{c}\leq E\leq\lambda_{-}-2\eta_{*},0\leq\eta\leq 10\big{\}}.\] _Then we have_ \[\int\frac{\mathrm{d}\mu_{X}(x)}{|x-E-\mathrm{i}\eta|^{k}}\sim\frac{\sqrt{|E- \lambda_{-}|+\eta}}{\eta^{k-1}}\mathbf{1}_{E\geq\lambda_{-}}+\frac{1}{(|E- \lambda_{-}|+\eta)^{k-3/2}}\mathbf{1}_{E<\lambda_{-}}.\] The following notion of stochastic domination which originated from [25] will be used throughout the paper. **Definition 2.4** (Stochastic domination).: _Let \(\mathsf{X}=(\mathsf{X}^{(N)}(u):N\in\mathbb{N},u\in\mathsf{U}^{(N)}),\mathrm{Y}= (\mathsf{Y}^{(N)}(u):N\in\mathbb{N},u\in\mathsf{U}^{(N)})\) be two families of random variables, where \(\mathsf{Y}\) is nonnegative, and \(\mathsf{U}^{(N)}\) is a possibly \(N\)-dependent parameter set. We say that \(\mathrm{X}\) is stochastically dominated by \(\mathsf{Y}\), uniformly in \(u\), if for all small \(\epsilon>0\) and large \(D>0\),_ \[\sup_{u\in\mathsf{U}^{(N)}}\mathbb{P}\left(\left|\mathsf{X}^{(N)}(u)\right|>N^{ \varepsilon}\mathsf{Y}^{(N)}(u)\right)\leqslant N^{-D}\] _for large enough \(N>N_{0}(\epsilon,D)\). If \(\mathsf{X}\) is stochastically dominated by \(\mathsf{Y}\), uniformly in \(u\), we use the notation \(\mathsf{X}\prec\mathsf{Y}\), or equivalently \(\mathsf{X}=O_{\prec}(\mathsf{Y})\). 
Note that in the special case when \(\mathsf{X}\) and \(\mathsf{Y}\) are deterministic, \(\mathsf{X}\prec\mathsf{Y}\) means that for any given \(\epsilon>0\), \(|\mathsf{X}^{(N)}(u)|\leq N^{\epsilon}\mathsf{Y}^{(N)}(u)\) uniformly in \(u\), for all sufficiently large \(N\geq N_{0}(\epsilon)\)._ ### \(\eta_{*}\)- regularity of \(\mathcal{S}(X)\): A matrix minor argument In this subsection, we state that with high probability \(\eta^{*}\)-regularity holds for \(\mathcal{S}(X)\) with \(\eta^{*}=N^{-\epsilon_{b}}\). Recall that \(X\) defined in (1.7). Let us recall \(\Psi=(\psi_{ij})\in\mathbb{R}^{M\times N}\), a random matrix with entries \(\psi_{ij}\) as defined in (1.5). By setting \[\epsilon_{\alpha}=(\alpha-2)/5\alpha, \tag{2.4}\] we call a \(\Psi\)_good_ if it has at most \(N^{1-\epsilon_{\alpha}}\) entries equal to \(1\). The following lemma indicates that \(\Psi\) is, indeed, good with high probability. **Lemma 2.5**.: _For any large \(D>0\), we have \(\mathbb{P}(\Omega_{\Psi}=\{\Psi\text{ is good}\})\geq 1-N^{-D}\)._ Proof.: Observe that \(\mathbb{P}(\Omega_{\Psi}=\{\Psi\text{ is good}\})=1-\mathbb{P}(\#\{(i,j):x_{ij}>N^{- \epsilon_{b}}\}>N^{1-\epsilon_{\alpha}}).\) By Assumption 1.1 (i), we have \[\mathbb{P}(\#\{(i,j):x_{ij}>N^{-\epsilon_{b}}\}>N^{1-\epsilon_{\alpha}}) \lesssim\sum_{j=N^{1-\epsilon_{\alpha}}}^{N^{2}}\binom{N^{2}}{j}N^{-\alpha(1 /2-\epsilon_{b})j}\lesssim\sum_{j=N^{1-\epsilon_{\alpha}}}^{N^{2}}N^{-(\alpha -2)j/2}\lesssim N^{-D}.\] The claim now follows by possibly adjusting the constants. Given any \(\Psi\) is good, the following proposition shows that \(\mathcal{S}(X)\) is \(\eta_{*}\)-regular with \(\eta_{*}=N^{-\tau_{*}}\) for some \(\tau_{*}>0\). Actually, we shall work with a truncation of \(X\), \(X^{\mathcal{C}}\coloneqq(x_{ij}\mathbf{|}_{x_{ij}|\leq N^{100}})_{i\in[M],j \in[N]}\), in order to guarantee Definition 2.2 (iii). Apparently, \(\|\mathcal{S}(X^{\mathcal{C}})\|\leq N^{102}\) and \(\mathbb{P}(X=X^{\mathcal{C}})=1-\mathfrak{o}(1)\). **Proposition 2.6** (\(\eta_{*}\)- regularity of \(\mathcal{S}(X)\)).: _Suppose that \(\Psi\) is good. Let \(\eta_{*}=N^{-\epsilon_{b}}\). Then \(\mathcal{S}(X)\) is \(\eta_{*}\)-regular around its smallest eigenvalue \(\lambda_{M}(\mathcal{S}(X))\) in the sense of Definition 2.2 with high probability._ The proof of Proposition 2.6 is based on the following two lemmas. For notational simplicity, we define \(\mathsf{m}_{\mathsf{mp}}^{(t)}(z)\coloneqq(1-t)^{-1}\mathsf{m}_{\mathsf{mp}}( z/(1-t))\) for any \(t>0\). **Lemma 2.7**.: _Fix \(C>0\). Let us consider \(z\in\{E+i\eta:C^{-1}\lambda_{-}^{\mathsf{mp}}\leq E\leq\lambda_{+}^{\mathsf{mp }}+1,0<\eta<3\}.\) We have_ \[|m_{\mathsf{B}}(z)-\mathsf{m}_{\mathsf{mp}}^{(t)}(z)|\prec N^{-\epsilon_{b}}+( N\eta)^{-1}. \tag{2.5}\] _In addition,_ \[|\lambda_{M}(\mathcal{S}(\mathsf{B}))-(1-t)\lambda_{-}^{\mathsf{mp}}|\prec N ^{-2\epsilon_{b}}+N^{-2/3}. \tag{2.6}\] Proof.: We further denote by \(\tilde{t}:=1-N\mathbb{E}|\mathsf{B}_{ij}|^{2}\). It is easy to show that \(|\tilde{t}-t|=\mathfrak{o}(N^{-1})\), and thus we have \(|\mathsf{m}_{\mathsf{mp}}^{(t)}(z)-\mathsf{m}_{\mathsf{mp}}^{(\tilde{t})}(z)| \leq(N\eta)^{-1}\). Hence, it suffices to show the following estimates \[|m_{\mathsf{B}}(z)-\mathsf{m}_{\mathsf{mp}}^{(\tilde{t})}(z)|\prec N^{- \epsilon_{b}}+(N\eta)^{-1},\qquad|\lambda_{M}(\mathcal{S}(\mathsf{B}))-(1- \tilde{t})\lambda_{-}^{\mathsf{mp}}|\prec N^{-2\epsilon_{b}}+N^{-2/3}. \tag{2.7}\] Notice that \(\mathsf{B}\) is a so-called random matrix with bounded support. 
The first estimate in (2.7) is given by [38, Theorem 2.7]. We can show the second estimate in (2.7) adapting the proof of [38, Theorem 2.9] from the right edge to the left edge, in a straightforward way, given a crude lower bound of \(\lambda_{M}(\mathcal{S}(\mathsf{B}))\) which is guaranteed by [63]. We omit the details. **Lemma 2.8**.: _Suppose \(\Psi\) is good. Then, we have \(|\lambda_{M}(\mathcal{S}(X))-(1-t)\lambda_{-}^{\mathsf{mp}}|\prec N^{-2\epsilon _{b}}\)._ Proof.: Denote by \(\mathfrak{N}(\mathsf{C})\) the number of nonzero columns of \(\mathsf{C}\). Since \(\Psi\) is good, \(|\mathfrak{N}(\mathsf{C})|\leq N^{1-\epsilon_{\alpha}},\) with high probability. By Cauchy interlacing, we can easily see that \[\lambda_{M}(\mathcal{S}(X^{[\mathcal{D}_{c}]}))\leq\lambda_{M}(\mathcal{S}(X)) \leq\lambda_{M-|\mathcal{D}_{r}|}(\mathcal{S}(X^{(\mathcal{D}_{r})}))\] Further notice that \(X^{(\mathcal{D}_{r})}=\mathsf{B}^{(\mathcal{D}_{r})}\) and \(X^{[\mathcal{D}_{c}]}=\mathsf{B}^{[\mathcal{D}_{c}]}\), and thus we have \[\lambda_{M}(\mathcal{S}(\mathsf{B}^{[\mathcal{D}_{c}]}))\leq\lambda_{M}( \mathcal{S}(X))\leq\lambda_{M-|\mathcal{D}_{r}|}(\mathcal{S}(\mathsf{B}^{( \mathcal{D}_{r})})).\] Applying (2.6) to \(\mathcal{S}(\mathsf{B}^{[\mathcal{D}_{c}]})\) and \(\mathcal{S}(\mathsf{B}^{(\mathcal{D}_{r})})\) with the modified parameter \(c_{N}\), i.e., \(M/(N-|\mathcal{D}_{c}|)\) and \((M-|\mathcal{D}_{r}|)/N\) respectively, in the definition of \(\lambda_{-}^{\mathsf{mp}}\), we can prove the conclusion with the fact \(\epsilon_{\alpha}>2\epsilon_{b}\). Now we show the proof of Proposition 2.6. Proof of Proposition 2.6.: We shall show three properties (i), (ii) and (iii) (as in Definition 2.2) holds with high probability. Suppose that \(\Psi\) is good. (i). Let \(\mu_{X}\) and \(\mu_{\mathsf{B}}\) be the empirical spectral distributions of \(\mathcal{S}(X)\) and \(\mathcal{S}(\mathsf{B})\), respectively. By the rank inequality [9, Theorem A.44], \(|\mu_{X}-\mu_{\mathsf{B}}|\leq 2\text{rank}(\mathsf{C})/N.\) Then, \[|\mathrm{Im}\,m_{X}(z)-\mathrm{Im}\,m_{\mathsf{B}}(z)|\leq\int\Big{|}\frac{ \eta}{(\lambda-E)^{2}+\eta^{2}}(\mu_{X}-\mu_{\mathsf{B}})(d\lambda)\Big{|}.\] It follows from \(\eta((\lambda-E)^{2}+\eta^{2})^{-1}\leq\eta^{-1}\) that \(|\mathrm{Im}\,m_{X}(z)-\mathrm{Im}\,m_{\mathsf{B}}(z)|\lesssim\text{rank}( \mathsf{C})/(N\eta)=N^{-\epsilon_{\mathsf{B}}}\eta^{-1},\) where we use the assumption that \(\Psi\) is good. This together with Lemma 2.7 give \[|\mathrm{Im}\,m_{X}(z)-\mathrm{Im}\,\mathsf{m}_{\mathsf{mp}}^{(t)}(z)|\prec N ^{-\epsilon_{\mathsf{B}}}\eta^{-1}+N^{-\epsilon_{\mathsf{B}}}+(N\eta)^{-1}.\] For \(E\in[\lambda_{M}(\mathcal{S}(X)),\lambda_{M}(\mathcal{S}(X))+\eta_{*}]\), by Lemma 2.8, we have with high probability that, \[|E-(1-t)\lambda_{-}^{\mathsf{mp}}|\leq|E-\lambda_{M}(\mathcal{S}(X))|+| \lambda_{M}(\mathcal{S}(X))-(1-t)\lambda_{-}^{\mathsf{mp}}|\leq 2\eta_{*}.\] Thus, for \(\eta\geq\eta_{*}\), we have \(\mathrm{Im}\,\mathsf{m}_{\mathsf{mp}}^{(t)}(z)\sim\sqrt{\eta}\), which implies that \(\mathrm{Im}\,m_{X}(z)\sim\sqrt{|E-\lambda_{M}(\mathcal{S}(X))|+\eta}\). Similarly, for \(E\in[\lambda_{M}(\mathcal{S}(X))-\eta_{*},\lambda_{M}(\mathcal{S}(X))]\) and \(\eta\geq\eta_{*}\), we can show that \(\mathrm{Im}\,\,m_{X}(z)\sim\eta/\sqrt{|E-\lambda_{M}(\mathcal{S}(X))|+\eta}\). 
If \(E\geq\lambda_{M}(\mathcal{S}(X))+\eta_{*}\), we can use the fact \(|\lambda_{M}(\mathcal{S}(X))-(1-t)\lambda_{-}^{\mathsf{mp}}|\ll\eta_{*}\) to obtain that \(E\geq(1-t)\lambda_{-}^{\mathsf{mp}}\) and \[\sqrt{|E-\lambda_{M}(\mathcal{S}(X))|+\eta}\sim\sqrt{|E-(1-t)\lambda_{-}^{ \mathsf{mp}}|+\eta}.\] Similarly, if \(E\leq\lambda_{M}(\mathcal{S}(X))-\eta_{*}\), we obatin \(E\leq(1-t)\lambda_{-}^{\mathsf{mp}}\) and \[\frac{\eta}{\sqrt{|E-\lambda_{M}(\mathcal{S}(X))|+\eta}}\sim\frac{\eta}{\sqrt{ |E-(1-t)\lambda_{-}^{\mathsf{mp}}|+\eta}}.\] (ii). It holds with high probability by Lemma 2.7. (iii). See Remark 2 below. Therefore, we conclude the proof. _Remark 2_.: Rigorously speaking, in order to have the above proposition, we shall work with \(X^{\mathsf{C}}\) instead of \(X\). Since these two matrices are identical with probability \(1-\mathfrak{o}(1)\), any spectral statistics of these two matrices are identical with probability \(1-\mathfrak{o}(1)\). For our main theorem, it would be sufficient to work with \(X^{\mathsf{C}}\) instead of \(X\) in the sequel. However, for convenience, we will still work with \(X\) as if the above proposition is also true for \(\mathcal{S}(X)\). In this case, the reader may simply assume that the entries of \(X\) are bounded by \(N^{100}\) (say). We can anyway recover the result without this additional boundedness assumption by comparing the matrix with its truncated version. Let \(\lambda_{-,t}\) be the left edge of \(\rho_{t}\). The Gaussian part in model (2.1) can further improve the scale of the square root behavior of \(\rho_{t}\) around \(\lambda_{-,t}\) on the event that \(\mathcal{S}(X)\) satisfies certain \(\eta_{*}\)-regularity. The following theorem makes this precise. **Theorem 2.9** (Lemma 1 of [23]).: _On \(\Omega_{\Psi}\), we have_ \[\rho_{t}\sim\sqrt{(E-\lambda_{-,t})_{+}}\quad\text{for}\quad\lambda_{-,t}- \frac{3}{4}\tilde{c}\leq E\leq\lambda_{-,t}+\frac{3}{4}\tilde{c},\] _and for \(z=E+\mathrm{i}\eta\in\mathbb{C}^{+}\),_ \[\mathrm{Im}\,m_{t}(z)\sim\begin{cases}\sqrt{|E-\lambda_{-,t}|+\eta},&\lambda_ {-,t}\leq E\leq\lambda_{-,t}+\frac{3}{4}\tilde{c}\\ \frac{\eta}{\sqrt{|E-\lambda_{-,t}|+\eta}},&\lambda_{-,t}-\frac{3}{4}\tilde{c} \leq E\leq\lambda_{-,t}\end{cases}. \tag{2.8}\] Next, we recall the definition in (1.22). The following theorem provide bounds on the Green function entries for the Gaussian divisible model. Further recall the notation in (1.9), we set \(\mathcal{T}_{r}\coloneqq[M]\setminus\mathcal{D}_{r}\), \(\mathcal{T}_{c}\coloneqq[N]\setminus\mathcal{D}_{c}\). **Theorem 2.10**.: _Suppose that \(\Psi\) is good. Let \(z\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\) with \(10\epsilon_{a}\leq\varepsilon_{1}\leq\epsilon_{b}/500\) and sufficiently small \(\varepsilon_{2},\varepsilon_{3}\). The following estimates hold w.r.t. the probability measure \(\mathbb{P}_{\Psi}\)._ 1. \[|G_{ij}(V_{t},z)|\prec\mathbf{1}_{i\in\mathcal{T}_{r}\text{ or }j\in\mathcal{T}_{r}}+t ^{-2}(1-\mathbf{1}_{i\in\mathcal{T}_{r}\text{ or }j\in\mathcal{T}_{r}}),\] 2. \[|G_{uv}(V_{t}^{\top},z)|\prec\mathbf{1}_{u\in\mathcal{T}_{c}\text{ or }v\in \mathcal{T}_{c}}+t^{-2}(1-\mathbf{1}_{u\in\mathcal{T}_{c}\text{ or }v\in \mathcal{T}_{c}}),\] 3. \[|[G(V_{t},z)V_{t}]_{iu}|\prec N^{-\epsilon_{b}/2}\mathbf{1}_{i\in\mathcal{T}_ {r}\text{ or }u\in\mathcal{T}_{c}}+t^{-2}(1-\mathbf{1}_{i\in\mathcal{T}_{r} \text{ or }u\in\mathcal{T}_{c}}).\] The proof of Theorem 2.10 is based on the following results. 
**Lemma 2.11**.: _Suppose that the assumptions in Theorem 2.10 hold. There exist constants \(c,C>0\) such that for the domain \(\mathsf{D}_{\zeta}=\mathsf{D}_{\zeta}(c,C)\subset\mathbb{C}_{+}\) defined by_ \[\mathsf{D}_{\zeta}\coloneqq\mathsf{D}_{1}\cup\mathsf{D}_{2}, \tag{2.9}\] _where_ \[\mathsf{D}_{1} \coloneqq\{\zeta=E+i\eta:E\leq(1-t)\lambda_{-}^{\mathsf{mp}}-ct^{ 2},\eta\geq ctN^{-2/3-\varepsilon_{2}}\}\] \[\mathsf{D}_{2} \coloneqq\{\zeta=E+i\eta:\eta\geq c(\log N)^{-C}t^{2}\}\] _we have \(\zeta_{t}(z)\in\mathsf{D}_{\zeta}\) with high probability._ Proof.: The proof relies on the definition of \(\zeta_{t}(z)\) as well as the square root behaviour of \(\rho_{t}\) as stated in Theorem 2.9; see Appendix A.1 for the detailed proof. **Proposition 2.12**.: _Let \(\mathsf{D}_{\zeta}\) be as in (2.9). Consider \(\zeta\in\mathsf{D}_{\zeta}\). Suppose that \(\Psi\) is good. The following estimates hold w.r.t. the probability measure \(\mathbb{P}_{\Psi}\). There exists a constant \(c=c(\epsilon_{a},\epsilon_{a},\epsilon_{b})>0\) such that_ \[|G_{ij}(X,\zeta)-\delta_{ij}\mathsf{m}_{\mathsf{mp}}^{(t)}(\zeta )|\prec N^{-c}\mathbf{1}_{i,j\in\mathcal{T}_{r}}+t^{-2}(1-\mathbf{1}_{i,j\in \mathcal{T}_{r}}),\] \[|G_{uv}(X^{\top},\zeta)-\delta_{uv}\underline{\mathsf{m}}_{ \mathsf{mp}}^{(t)}(\zeta)|\prec N^{-c}\mathbf{1}_{u,v\in\mathcal{T}_{c}}+t^{-2 }(1-\mathbf{1}_{u,v\in\mathcal{T}_{c}}),\] _where \(\underline{\mathsf{m}}_{\mathsf{mp}}^{(t)}(\zeta)=c_{N}\mathsf{m}_{\mathsf{mp }}^{(t)}(\zeta)-(1-c_{N})/\zeta\)._ Proof.: The proof of Proposition 2.12 is similar to the light-tailed case proved in [56], but here we shall apply large deviation formula for heavy-tailed random variables; see Appendix A.2 for the detailed proof. Proof of Theorem 2.10.: Given the previous results, the proof strategy for this theorem is briefly introduced in the last paragraph of the Introduction, Section 1, with the detailed proof found in Appendix A.3. The above theorems provide strong evidence supporting the validity of the Tracy-Widom law for \(\lambda_{M}(\mathcal{S}(V_{t}))\) around \(\lambda_{-,t}\). In fact, we are able to establish the following theorem regarding the convergence of the distribution. Before stating the result, we define the function \[\Phi_{t}(\zeta)\coloneqq(1-c_{N}tm_{X}(\zeta))^{2}\zeta+(1-c_{N})t(1-c_{N}tm_ {X}(\zeta)), \tag{2.10}\] and the scaling parameter \[\gamma_{N}\coloneqq\gamma_{N}(t)\coloneqq-\Big{(}\frac{1}{2}\big{[}4\lambda_{ -,t}\zeta_{t}(\lambda_{-,t})+(1-c_{N})^{2}t^{2}\big{]}c_{N}^{2}t^{2}\Phi_{t}^{ \prime\prime}(\zeta_{t}(\lambda_{-,t}))\Big{)}^{-1/3}. \tag{2.11}\] **Theorem 2.13**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) be a test function satisfying \(\|f\|_{\infty}\leq C\) and \(\|\nabla f\|_{\infty}\leq C\) for a constant \(C\). Then we have for any \(X\) whose corresponding \(\Psi\) is good,_ \[\lim_{N\to\infty}\mathbb{E}\big{[}f\big{(}\gamma_{N}M^{2/3}(\lambda_{M}( \mathcal{S}(V_{t}))-\lambda_{-,t})\big{)}|X]=\lim_{N\to\infty}\mathbb{E}\big{[} f\big{(}M^{2/3}(\mu_{M}^{\mathsf{GOE}}+2)\big{)}\big{]}. 
\tag{2.12}\] _This further implies that if \(\Psi\) is good,_ \[\lim_{N\to\infty}\mathbb{E}_{\Psi}\big{[}f\big{(}\gamma_{N}M^{2/3}(\lambda_{M}( \mathcal{S}(V_{t}))-\lambda_{-,t})\big{)}\big{]}=\lim_{N\to\infty}\mathbb{E} \big{[}f\big{(}M^{2/3}(\mu_{M}^{\mathsf{GOE}}+2)\big{)}\big{]}, \tag{2.13}\] _where \(\mu_{M}^{\mathsf{GOE}}\) denotes the least eigenvalue of a \(M\) by \(M\) Gaussian Orthogonal Ensemble (GOE) with \(N(0,M^{-1})\) off-diagonal entries._ _Remark 3_.: The proof of the above theorem is essentially an adapt of the edge universality for the DBM in [46] and the analogue for the rectangle DBM in [23, 24]. More specifically, we shall extend the analysis in [23, 24] from the right edge of the covariance type matrix to the left edge. Based on the \(\eta_{*}\)-regularity, the proof is nearly the same as [23, 24], and thus we do not reproduce the details and only provide some remarks in the Appendix A.4. ### Distribution of \(\lambda_{-,t}\) **Theorem 2.14**.: _There exists a deterministic quantity \(\lambda_{\text{shift}}>0\) depending on \(N\) such that the following two properties hold._ 1. \[\frac{N^{\alpha/4}(\lambda_{-,t}-\lambda_{\text{shift}})}{\sigma_{\alpha}} \Rightarrow\mathcal{N}(0,1),\quad\sigma_{\alpha}^{2}=\frac{\mathsf{c}c_{N}^{(4 -\alpha)/4}(1-\sqrt{c_{N}})^{4}(\alpha-2)}{2}\Gamma\Big{(}\frac{\alpha}{2}+1 \Big{)}.\] 2. \[\lambda_{\text{shift}}=\lambda_{-}^{\mathsf{mp}}-\frac{\mathsf{c}N^{1-\alpha/2 }(1-\sqrt{c_{N}})^{2}}{c_{N}^{(\alpha-2)/4}}\Gamma\Big{(}\frac{\alpha}{2}+1 \Big{)}+\mathfrak{o}(N^{1-\alpha/2}).\] _Remark 4_.: Note that the leading order of \(\lambda_{\text{shift}}\) only depends on \(\alpha\). The size of the fluctuation of \(\lambda_{-,t}\) is also determined by \(\alpha\). The proof of Theorem 2.14 is given in the next section. ## 3. Proofs for Gaussian divisible model ### Preliminary estimates Before providing the preliminary estimates for the expansion of the least eigenvalue of \(\mathcal{S}(V_{t})\), we first state the following lemma, which characterizes the support of \(\rho_{t}\) and its edges using the local extrema of \(\Phi_{t}(\zeta)\) on \(\mathbb{R}\). **Lemma 3.1** (Proposition 3 of [66]).: _Fix any \(t>0\). The function \(\Phi_{t}(x)\) on \(\mathbb{R}\setminus\{0\}\) admits \(2q\) positive local extrema counting multiplicities for some integer \(q\geq 1\). The preimages of these extrema are denoted by \(0<\zeta_{1,-}(t)\leq\zeta_{1,+}(t)\leq\zeta_{2,-}(t)\leq\zeta_{2,+}(t)\leq \cdots\leq\zeta_{q,-}(t)\leq\zeta_{q,+}(t),\) and they belong to the set \(\{\zeta\in\mathbb{R}:1-c_{N}tm_{X}(\zeta_{t})>0\}.\) Moreover, \(\lambda_{-,t}=\Phi_{t}(\zeta_{1,-}(t))\), and \(\zeta_{1,-}(t)<\lambda_{M}(\mathcal{S}(X))<\zeta_{1,+}(t).\)_ _Remark 5_.: Here we remark that the model considered in [66] is slightly different in the sense that the model therein contains many \(0\) eigenvalues, which will force \(\zeta_{1,-}(t)\) to be negative. In our case, going through the same analysis as [66] will simply give \(0<\zeta_{1,-}(t)\). Next, we shall introduce the deterministic counterpart of \(\zeta_{-,t}\) (to be denoted by \(\bar{\zeta}_{-,t}\)). First, we notice that the MP law holds for both the matrix \(V_{t}\) and \(X\), but with slightly different scaling factors. Specifically, we have \(m_{V_{t}}(z)-\mathsf{m}_{\mathsf{mp}}(z)=\mathfrak{o}_{p}(1)\) and \(m_{X}(z)-\mathsf{m}_{\mathsf{mp}}^{(t)}(z)=\mathfrak{o}_{p}(1).\) Recall the definitions of \(\zeta_{t}(z)\) in (2.3) and \(\Phi_{t}(\zeta)\) in (2.10). 
It is important to note that these two quantities are random, and we can also define their deterministic counterparts using the Stieltjes transform of the MP Law. We denote them as follows: \[\bar{\zeta}_{t}(z)\coloneqq(1+c_{N}t\mathsf{m}_{\mathsf{mp}}(z))^ {2}z-t(1-c_{N}t)(1-c_{N}t\mathsf{m}_{\mathsf{mp}}(z)), \tag{3.1}\] \[\bar{\Phi}_{t}(\zeta)\coloneqq(1-c_{N}t\mathsf{m}_{\mathsf{mp}} ^{(t)}(\zeta))^{2}\zeta+(1-c_{N}t(1-c_{N}t\mathsf{m}_{\mathsf{mp}}^{(t)}( \zeta)). \tag{3.2}\] To further simplify the notation, we let \(\zeta_{-,t}=\zeta_{t}(\lambda_{-,t})\) and \(\bar{\zeta}_{-,t}=\bar{\zeta}_{t}(\lambda_{-}^{\mathsf{mp}})\). Let \(\beta=(\alpha-2)/24\). **Lemma 3.2**.: _The following preliminary estimates hold:_ 1. \(\zeta_{-,t}-\lambda_{M}(\mathcal{S}(X))\leq 0\)_, and_ \(\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t}\sim t^{2}\) _holds on_ \(\Omega_{\Psi}\)_._ 2. _There exist some sufficiently small constant_ \(\tau>0\)_, such that for any_ \(z\in\mathbb{C}^{+}\) _satisfying_ \(|z-\zeta_{-,t}|\leq\tau t^{2}\)_, we have on_ \(\Omega_{\Psi}\) _that_ \[m_{X}(z)-\mathsf{m}_{\mathsf{mp}}^{(t)}(z)\prec N^{-\beta},\quad|m_{X}^{(k)}( \zeta)|\lesssim t^{-2k+1},\quad m_{X}^{(k)}(\zeta_{-,t})\sim t^{-2k+1},\quad k \geq 1.\] _._ * \(\bar{\zeta}_{-,t}-\zeta_{-,t}\prec N^{-\beta}t\)_._ Proof.: See the Appendix A.5. We also compute the following limits. **Lemma 3.3**.: _For any \(t=\mathfrak{o}(1)\), we have the following approximations:_ 1. \(\mathsf{m}_{\mathsf{mp}}^{(t)}(\bar{\zeta}_{-,t})=(\sqrt{c_{N}}-c_{N})^{-1}-tc _{N}^{-1/2}(1-\sqrt{c_{N}})^{-2}+\mathcal{O}(t^{3/2})\)_._ 2. \(t(\mathsf{m}_{\mathsf{mp}}^{(t)}(\bar{\zeta}_{-,t}))^{\prime}=c_{N}^{-1}(1- \sqrt{c_{N}})^{-2}/2+\mathcal{O}(t^{1/2})\)_._ 3. \(t^{3}(\mathsf{m}_{\mathsf{mp}}^{(t)}(\bar{\zeta}_{-,t}))^{\prime\prime}=c_{N}^ {-3/2}(1-\sqrt{c_{N}})^{-2}/4+\mathcal{O}(t^{1/2})\)_._ 4. \(\gamma_{N}-c_{N}^{-1/2}(1-\sqrt{c_{N}})^{-4/3}=\mathfrak{o}_{p}(1)\)_._ Proof.: It is easy to solve \(\bar{\zeta}_{-,t}=(1-t)\lambda_{-}^{\mathsf{mp}}-\sqrt{c_{N}}t^{2}\) from (3.1) and (1.3). The calculation is then elementary by the explicit formula (1.3). ### Proof of Theorem 2.14 Before giving the proof, we need the following pre-process. First, note that we have the following deterministic upper bound when \(\Psi\) is good: \[\zeta_{-,t}\cdot\mathbf{1}_{\Omega_{\Psi}}\leq\lambda_{M}(\mathcal{S}(X)) \cdot\mathbf{1}_{\Omega_{\Psi}}\leq\lambda_{M-|\mathcal{D}_{\mathsf{r}}|}( \mathcal{S}(\mathsf{B}^{(\mathcal{D}r)}))\cdot\mathbf{1}_{\Omega_{\Psi}}\leq N ^{2-2\epsilon_{b}}.\] This indicates that \(\mathbb{E}(\zeta_{-,t}\cdot\mathbf{1}_{\Omega_{\Psi}})\) is well-defined. We define \[\zeta_{\mathsf{e}}\coloneqq\mathbb{E}(\zeta_{-,t}\cdot\mathbf{1}_{\Omega_{ \Psi}}),\quad\Delta_{\zeta}\coloneqq\zeta_{-,t}-\zeta_{\mathsf{e}}. \tag{3.3}\] We also write for \(z\in\mathbb{C}^{+}\) and an integer \(k\geq 0\), \(\Delta_{m}(z)\coloneqq m_{X}(z)-\mathbb{E}m_{X}(z)\) and \(\Delta_{m}^{(k)}(z)\coloneqq m_{X}^{(k)}(z)-\mathbb{E}m_{X}^{(k)}(z)\), where we remark that \(\Delta_{m}(z)=\Delta_{m}^{(0)}(z)\). It is noteworthy that \(\mathbb{E}m_{X}(z)\) is well-defined when \(z\) possesses a non-zero imaginary part. To ensure that the expectation of \(m_{X}(\zeta_{\mathsf{e}})\) exist, we add a small imaginary part to \(\zeta_{\mathsf{e}}\), and define for any \(K_{\zeta}>0\), \(\hat{\zeta}_{\mathsf{e}}=\hat{\zeta}_{\mathsf{e}}(K_{\zeta})\coloneqq\zeta_{ \mathsf{e}}+1N^{-100K_{\zeta}}\). 
We will begin by stating some preliminary bounds useful to estimate \(\mathbb{E}\lambda_{-,t}\). **Lemma 3.4**.: _Recall that \(\beta=(\alpha-2)/24\). There exists some small \(\tau>0\), such that for any \(z\in\mathbb{C}^{+}\) satisfies \(|z-\zeta_{\mathsf{e}}|\leq\tau t^{2}\) and \(\operatorname{Im}z\geq N^{-100K_{\zeta}}\), the following a priori high probability bounds:_ \[\Delta_{m}^{(k)}(z)\prec N^{-\beta}t^{-2k},\quad\text{and}\quad\Delta_{\zeta} \prec N^{-\beta/2}t^{2} \tag{3.4}\] _Furthermore, we have the following a priori variance bounds:_ \[\operatorname{Var}(\Delta_{m}^{(k)}(z))\leq N^{-1+\epsilon}t^{-2k-4},\quad \text{and}\quad\operatorname{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}}) \leq N^{-1+\epsilon}. \tag{3.5}\] We postpone the proof of Lemma 3.4 to the end of this subsection. Let us prove Theorem 2.14 equipped with Lemma 3.4. Proof of Theorem 2.14.: Recall the expression of \(\lambda_{-,t}\) in (1.13). We shall switch \(\zeta_{-,t}\) and \(m_{X}(\zeta_{-,t})\) with \(\zeta_{\mathsf{e}}\) and \(\mathbb{E}m_{X}(\hat{\zeta}_{\mathsf{e}})\) respectively. First, expanding \(m_{X}(\zeta_{-,t})\) around \(m_{X}(\zeta_{\mathsf{e}})\), we have for sufficiently large \(s>0\), \[\lambda_{-,t}=\zeta_{-,t}\Big{(}1-\sum_{k=0}^{s}\frac{c_{N}t}{k!}m_{X}^{(k)}( \zeta_{\mathsf{e}})\Delta_{\zeta}^{k}\Big{)}^{2}+(1-c_{N})t\Big{(}1-\sum_{k=0} ^{s}\frac{c_{N}t}{k!}m_{X}^{(k)}(\zeta_{\mathsf{e}})\Delta_{\zeta}^{k}\Big{)} +\mathcal{O}_{\prec}(N^{-\alpha/4-\epsilon}).\] Note that for any integer \(k\geq 0\), it can be easily verified that w.h.p., \(|m_{X}^{(k)}(\zeta_{\mathsf{e}})-m_{X}^{(k)}(\hat{\zeta}_{\mathsf{e}})|\leq N^{- 50s}\), by chooinsg \(K_{\zeta}>0\) large enough. This means that we can replace \(m_{X}^{(k)}(\zeta_{\mathsf{e}})\) with \(m_{X}^{(k)}(\hat{\zeta}_{\mathsf{e}})\). Through an elementary calculation, we have \[\lambda_{-,t}=\lambda_{\text{shift}}-\Big{(}2c_{N}t\big{(}1-c_{N}t\mathbb{E}m_ {X}(\hat{\zeta}_{\mathsf{e}})\big{)}\zeta_{\mathsf{e}}-c_{N}t^{2}(1-c_{N}) \Big{)}\,\Delta_{m}(\hat{\zeta}_{\mathsf{e}})+\mathcal{Z}\mathsf{OT}_{\zeta} \Delta_{\zeta}+\mathcal{P}(\Delta_{\zeta},\{\Delta_{m}^{(k)}(\hat{\zeta}_{\mathsf{ e}})\}_{k\geq 0}).\] where \(\lambda_{\text{shift}}\coloneqq\big{(}1-c_{N}t\mathbb{E}m_{X}(\hat{\zeta}_{\text{ e}})\big{)}^{2}\zeta_{\text{e}}+(1-c_{N})t\big{(}1-c_{N}t\mathbb{E}m_{X}(\hat{\zeta}_{ \text{e}})\big{)}\), and we denote by \(\text{ZOT}_{\zeta}\) the collection of zero-th order terms, i.e., \[\text{ZOT}_{\zeta}\coloneqq\big{(}1-c_{N}t\mathbb{E}m_{X}(\hat{ \zeta}_{\text{e}})\big{)}\big{(}1-c_{N}t\mathbb{E}m_{X}(\hat{\zeta}_{\text{e}} )-2c_{N}t\zeta_{\text{e}}\mathbb{E}m_{X}^{\prime}(\hat{\zeta}_{\text{e}}) \big{)}-c_{N}(1-c_{N})t^{2}\mathbb{E}m_{X}^{\prime}(\hat{\zeta}_{\text{e}}), \tag{3.6}\] and \(\mathcal{P}(\Delta_{\zeta},\{\Delta_{m}^{(k)}(\hat{\zeta}_{\text{e}})\}_{k\geq 1})\) collects all the high order terms. We need to bound the last two terms. It can be easily obtained by prior bounds in Lemma 3.4 that \(\mathcal{P}(\Delta_{\zeta},\{\Delta_{m}^{(k)}(\hat{\zeta}_{\text{e}})\}_{k\geq 0 })=\mathcal{O}_{p}(N^{-\alpha/4-(4-\alpha)/8})\). Moreover, due to Remark 6 below, we find that \(\text{ZOT}_{\zeta}=\mathcal{O}(N^{-\alpha/4-(4-\alpha)/8})\). The following two propositions complete the proof. **Proposition 3.5**.: _Let \(\sigma_{\alpha}\) be as in Theorem 2.14. 
We have_ \[2c_{N}\big{(}1-c_{N}t\mathbb{E}m_{X}(\hat{\zeta}_{\text{e}})\big{)}\zeta_{ \text{e}}\cdot\left(\frac{t\Delta_{m}(\hat{\zeta}_{\text{e}})}{\sigma_{\alpha }}\right)\Rightarrow\mathcal{N}(0,1).\] **Proposition 3.6**.: _We have_ \[\lambda_{\text{shift}}=\lambda_{-}^{\text{mp}}-\frac{\mathsf{c}N^{1-\alpha/2}( 1-\sqrt{c_{N}})^{2}}{c_{N}^{(\alpha-2)/4}}\Gamma\Big{(}\frac{\alpha}{2}+1\Big{)} +\mathfrak{o}(N^{1-\alpha/2}).\] We shall prove the above propositions in the next subsections. Proof of Lemma 3.4.: Using Lemma 3.2 (ii), we can obtain that \(\Delta_{m}(z)=m_{X}(z)-\mathsf{m}_{\text{mp}}^{(t)}(z)+\mathbb{E}(\mathsf{m}_{ \text{mp}}^{(t)}(z)-m_{X}(z))\prec N^{-\beta}.\) The bound for \(\Delta_{m}^{(k)}(z)\) follows by a simple application of Cauchy integral formula. In order to bound \(\Delta_{\zeta}\), we first observe that \[\zeta_{\text{e}}-\bar{\zeta}_{-,t}=\mathbb{E}\big{[}(\zeta_{-,t}-\bar{\zeta} _{-,t})\cdot\mathbf{1}_{\Omega_{\Phi}}\big{]}-\bar{\zeta}_{-,t}\cdot\mathbb{P} (\Omega_{\Psi}^{c})\leq N^{-\beta/2}t^{2}. \tag{3.7}\] where the last step follows from Lemmas 2.5 and 3.2 (iii). Therefore, by Lemma 3.2 (iii) again, we can get the desired bound for \(\Delta_{\zeta}\). Next we consider \(\text{Var}(\Delta_{m}(z))\). We first let \(\mathcal{F}_{k}\) be the \(\sigma\)-field generated by the first \(k\) columns of \(X\). Then we define \(D_{k}^{+}\coloneqq\mathbb{E}\big{[}M^{-1}(\mathrm{Tr}G(X,z)-\mathrm{Tr}G(X^{( k)},z))\big{|}\mathcal{F}_{k}\big{]},D_{k}^{-}\coloneqq\mathbb{E}\big{[}M^{-1}( \mathrm{Tr}G(X^{(k)},z)-\mathrm{Tr}G(X,z))\big{|}\mathcal{F}_{k-1}\big{]}\), and \(D_{k}\coloneqq D_{k}^{+}+D_{k}^{-}\). By the Efron-Stein inequality, we have \[\mathrm{Var}(m_{X}(z))=\sum_{i=1}^{N}\mathbb{E}(|D_{i}|^{2})\leq 2\sum_{i=1}^{N} \mathbb{E}\big{(}|D_{i}^{+}|^{2}\big{)}+\mathbb{E}\big{(}|D_{i}^{-}|^{2}\big{)}.\] Using the resolvent expansion, we can obtain \[\mathbb{E}\big{(}|D_{k}^{+}|^{2}\big{)}\!\leq\!\frac{1}{M^{2}}\mathbb{E} \Big{[}\Big{|}\frac{x_{k}^{\top}G^{2}(X^{(k)},z)x_{k}}{1+x_{k}^{\top}G(X^{(k) },z)x_{k}}\Big{|}^{2}\cdot\mathbf{1}_{|z-\lambda_{M}(\mathcal{S}(X^{(k)}))| \geq ct^{2}}\Big{]}+N^{-D}\lesssim\frac{N^{\epsilon}}{N^{2}t^{4}},\] where in the first step, we used Lemma 3.2 (i) to derive, with high probability, that for \(|z-\zeta_{\text{e}}|\leq\tau t^{2}\) with sufficiently small \(\tau>0\), there exists some sufficiently small \(c>0\), \[|z-\lambda_{M}(\mathcal{S}(X^{(k)}))| \geq|\zeta_{-,t}-\lambda_{M}(\mathcal{S}(X))|-|z-\zeta_{\text{e}}| -|\Delta_{\zeta}|\] \[\quad-|\lambda_{M}(\mathcal{S}(X))-(1-t)\lambda_{-}^{\text{mp}}| -|\lambda_{M}(\mathcal{S}(X^{(k)}))-(1-t)\lambda_{-}^{\text{mp}}|\geq ct^{2}, \tag{3.8}\] which gives \(\mathbb{P}(|z-\lambda_{M}(\mathcal{S}(X^{(k)}))|\geq ct^{2})<N^{-D}\) for arbitrary large \(D>0\), and \(z\) has non-zero imaginary part which yields deterministic upper bound for the random variable. Similarly, we have \(\mathbb{E}\big{(}|D_{k}^{-}|^{2}\big{)}\lesssim N^{\epsilon}/(N^{2}t^{4}).\) This establishes the bound for \(\mathrm{Var}(\Delta_{m}(z))\). The bound for \(\mathrm{Var}(\Delta_{m}^{(k)}(z))\) follows by an application of Cauchy integral formula. Note that, since the contour of the Cauchy integral will cross real line, the integrand may not be well defined deterministically due to the possible singularity (although with tiny probability) of the Green function. Hence, we will need to cut off the part of the integral when the imaginary part of the variable is small. 
To elucidate the procedure, we will outline how to do the cutoff for the Cauchy integral representation of \(\mathbb{E}(m_{X}^{(k)}(z))\) only. The one for variance can be done similarly. Consider \(z\) that satisfies \(|z-\zeta_{\sf e}|\leq\tau t^{2}/2\) and \(\operatorname{Im}z\geq N^{-100K_{\zeta}}\), we first define \(\Omega_{z}\coloneqq\{|z-\lambda_{M}(\mathcal{S}(X))|\geq ct^{2}\}\). A similar argument as (3.8) leads to \(\mathbb{P}(\Omega_{z}^{c})\leq N^{-D}\) for arbitrary large \(D>0\). Then we may choose a contour \(\omega_{z}\coloneqq\{z^{\prime}:|z^{\prime}-z|=\tau t^{2}/10\}\) with sufficiently small \(\tau\), and set \(\mathfrak{w}\coloneqq\{z^{\prime}:|\operatorname{Im}z^{\prime}|\geq N^{-100K_{ \zeta}}\}\). Then we obtain \[\mathbb{E}(m_{X}^{(k)}(z))=\mathbb{E}(m_{X}^{(k)}(z)\cdot\mathbf{ 1}_{\Omega_{z}})+\mathbb{E}(m_{X}^{(k)}(z)\cdot\mathbf{1}_{\Omega_{z}^{c}})= \frac{k!}{2\pi\mathrm{i}}\mathbb{E}\Big{[}\oint_{\omega}\frac{m_{X}(a)}{(a-z) ^{k+1}}\mathrm{d}a\cdot\mathbf{1}_{\Omega_{z}}\Big{]}+N^{-D}\] \[=\frac{k!}{2\pi\mathrm{i}}\Big{(}\mathbb{E}\Big{[}\oint_{\omega \cap\mathfrak{w}}\frac{m_{X}(a)}{(a-z)^{k+1}}\mathrm{d}a\cdot\mathbf{1}_{ \Omega_{z}}\Big{]}+\mathbb{E}\Big{[}\oint_{\omega\cap\mathfrak{w}^{c}}\frac{m_ {X}(a)}{(a-z)^{k+1}}\mathrm{d}a\cdot\mathbf{1}_{\Omega_{z}}\Big{]}\Big{)}+N^{-D}\] \[=\frac{k!}{2\pi\mathrm{i}}\Big{(}\mathbb{E}\Big{[}\oint_{\omega \cap\mathfrak{w}}\frac{m_{X}(a)}{(a-z)^{k+1}}\mathrm{d}a\Big{]}+\mathbb{E} \Big{[}\oint_{\omega\cap\mathfrak{w}^{c}}\frac{m_{X}(a)}{(a-z)^{k+1}}\mathrm{d }a\cdot\mathbf{1}_{\Omega_{z}}\Big{]}\Big{)}+N^{-D}\] \[=\frac{k!}{2\pi\mathrm{i}}\mathbb{E}\Big{[}\oint_{\omega\cap \mathfrak{w}}\frac{m_{X}(a)}{(a-z)^{k+1}}\mathrm{d}a\Big{]}+\mathcal{O}(N^{-50 K_{\zeta}})+N^{-D}. \tag{3.9}\] For the remaining term, the effective imaginary part of \(a\) within \(\omega\cap\mathfrak{w}\) allows us to interchange \(\mathbb{E}\) with the contour integral. Then, the upper bound for \(\mathbb{E}(m_{X}(a))\) can be directly applied to estimate this term. Using the same cutoff of the contours, the bound for \(\operatorname{Var}(\Delta_{m}^{(k)}(z))\) is obtained through a double integral representation together with the Cauchy-Schwarz inequality. We omit further details for brevity. Lastly, we shall bound \(\operatorname{Var}(\Delta_{\zeta})\). Since \((\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t})\cdot\mathbf{1}_{\Omega_{\Psi}}\sim t ^{2}\cdot\mathbf{1}_{\Omega_{\Psi}}\) and \(\Delta_{\zeta}\prec N^{-\beta/2}t^{2}\), on the event \(\Omega_{\Psi}\), \(\lambda_{M}(\mathcal{S}(X))-\zeta_{\sf e}=\lambda_{M}(\mathcal{S}(X))-\zeta_{ -,t}+\Delta_{\zeta}\sim t^{2}\) with high probability. Using Lemma 2.3, the bound in the above display also implies that on the event \(\Omega_{\Psi}\), \[m_{X}^{(k)}(\zeta_{\sf e})\sim t^{-2k+1},\quad k\geq 1. \tag{3.10}\] Recall that \(\Phi_{t}^{\prime}(\zeta_{-,t})=0\), which reads \[(1-c_{N}tm_{X}(\zeta_{-,t}))^{2}-2c_{N}tm_{X}^{\prime}(\zeta_{-,t})\cdot \zeta_{-,t}\left(1-c_{N}tm_{X}(\zeta_{-,t})\right)-c_{N}(1-c_{N})t^{2}m_{X}^{ \prime}(\zeta_{-,t})=0. 
\tag{3.11}\] Replacing \(\zeta_{-,t}\) and \(m_{X}(\zeta_{-,t})\) with \(\zeta_{\sf e}\) and \(\mathbb{E}[m_{X}(\hat{\zeta}_{\sf e})]\), as in the proof of Theorem 2.14, it follows from (3.11) that \[\mathsf{ZOT}_{\zeta}+\mathsf{FOT}_{\zeta}+\mathcal{P}_{\zeta}(\Delta_{\zeta}, \{\Delta_{m}^{(k)}\}_{k\geq 0})=0, \tag{3.12}\] where the term \(\mathsf{ZOT}_{\zeta}\) is defined as in (3.6), \[\mathsf{FOT}_{\zeta} \coloneqq\big{(}2c_{N}^{2}t^{2}\zeta_{\sf e}\mathbb{E}m_{X}^{ \prime}(\hat{\zeta}_{\sf e})-2\mathsf{f}_{m}\big{)}\Delta_{m}(\hat{\zeta}_{\sf e })-\big{(}c_{N}(1-c_{N})t^{2}+2\mathsf{f}_{m}\zeta_{\sf e}\big{)}\Delta_{m}^{(1 )}(\hat{\zeta}_{\sf e})\] \[\quad-\big{(}4\mathsf{f}_{m}\mathbb{E}m_{X}^{\prime}(\hat{\zeta}_ {\sf e})+c_{N}(1-c_{N})t^{2}\mathbb{E}m_{X}^{(2)}(\hat{\zeta}_{\sf e})+2c_{N}^{ 2}t^{2}\zeta_{\sf e}(\mathbb{E}m_{X}^{\prime}(\hat{\zeta}_{\sf e}))^{2}+2 \mathsf{f}_{m}\zeta_{\sf e}\mathbb{E}m_{X}^{(2)}(\hat{\zeta}_{\sf e})\big{)} \Delta_{\zeta}\] with \(\mathsf{f}_{m}\coloneqq c_{N}t\big{(}1-c_{N}t\mathbb{E}m_{X}(\hat{\zeta}_{\sf e })\big{)}\), and \(\mathcal{P}_{\zeta}(\Delta_{\zeta},\Delta_{m}^{(k)})\) is the collection of high order terms. Note that \(\mathsf{f}_{m}\sim t\) and \(\mathcal{P}_{\zeta}(\Delta_{\zeta},\Delta_{m}^{(k)})\) is a polynomial in \(\Delta_{\zeta}\) and \(\Delta_{m}^{(k)}\)'s, containing monomials of order no smaller than \(2\). Hence, by Cauchy Schwarz and bounds in (3.4), one can get the following bounds \[\operatorname{Var}\Bigl{(}\mathcal{P}(\Delta_{\zeta},\Delta_{m}^{ (k)}(\hat{\zeta}_{\sf e}))\mathbf{1}_{\Omega_{\Psi}}\Bigr{)}\lesssim N^{-1/2- \beta/4}\sqrt{\operatorname{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})}+N^{- \beta/4}\operatorname{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})+N^{-D}, \tag{3.13}\] \[\mathbb{E}\Bigl{(}\mathcal{P}(\Delta_{\zeta},\Delta_{m}^{(k)}( \hat{\zeta}_{\sf e}))\mathbf{1}_{\Omega_{\Psi}}\Bigr{)}\lesssim N^{-1/2+\epsilon/ 2}t^{-3}\sqrt{\operatorname{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})}+t^{-2} \operatorname{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})+N^{-D}. \tag{3.14}\] Using (3.10), we can see that the leading order term of the coefficient of \(\Delta_{\zeta}\) in \(\mathsf{FOT}_{\zeta}\) is \(-2\mathsf{f}_{m}\zeta_{\sf e}\mathbb{E}(m_{X}^{(2)}(\hat{\zeta}_{\sf e}))\sim t ^{-2}\). Therefore, we can derive from (3.12) that \[C_{1}(t)\Delta_{\zeta}=C_{2}(t)\Delta_{m}(\hat{\zeta}_{\sf e})+C_{3}(t)\Delta_{m}^{ (1)}(\hat{\zeta}_{\sf e})+\frac{\mathsf{ZOT}_{\zeta}+\mathcal{P}_{\zeta}( \Delta_{\zeta},\Delta_{m}^{(k)}(\hat{\zeta}_{\sf e}))}{2\mathsf{f}_{m}\zeta_{ \sf e}\mathbb{E}m_{X}^{(2)}(\hat{\zeta}_{\sf e})}, \tag{3.15}\] where \(C_{i}(t),i=1,2,3\) are deterministic quantities satisfying \(C_{1}(t)=1+\mathcal{O}(t)\), \(C_{2}(t)=\mathcal{O}(t^{3})\), and \(C_{3}(t)=\mathcal{O}(t^{3})\). 
Multiplying \(\mathbf{1}_{\Omega_{\Psi}}\) at both sides and then compute the variance: \[\mathrm{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}}) \lesssim t^{6}\mathrm{Var}(\Delta_{m}(\hat{\zeta}_{\mathsf{e}}))+t ^{6}\mathrm{Var}(\Delta_{m}^{(1)}(\hat{\zeta}_{\mathsf{e}}))+t^{4}\mathrm{Var} \big{(}\mathcal{P}_{\zeta}(\Delta_{\zeta},\Delta_{m}^{(k)}(\hat{\zeta}_{ \mathsf{e}}))\big{)}\] \[\lesssim N^{-1+\epsilon}+N^{-1/2-\beta/4}\sqrt{\mathrm{Var}( \Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})}, \tag{3.16}\] Solving the above inequality for \(\mathrm{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})\) gives \(\mathrm{Var}(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}})\lesssim N^{-1+\epsilon},\) which completes the proof of Lemma 3.4. _Remark 6_ (Bound \(\mathsf{ZOT}_{\zeta}\)).: We start with (3.15). Multiplying \(\mathbf{1}_{\Omega_{\Psi}}\) at both sides and then taking expectation, we have \((\mathsf{ZOT}_{\zeta}+\mathbb{E}[\mathcal{P}_{\zeta}(\Delta_{\zeta},\Delta_{ m}^{(k)}(\hat{\zeta}_{\mathsf{e}}))\cdot\mathbf{1}_{\Omega_{\Psi}}])/(2t_{m}\zeta_{ \mathsf{e}}\mathbb{E}m_{X}^{(2)}(\hat{\zeta}_{\mathsf{e}}))+\mathcal{O}(N^{-D })=0\). Using (3.14) together with the variance bound for \(\Delta_{\zeta}\mathbf{1}_{\Omega_{\Psi}}\) in Lemma 3.4, we can obtain that \(\mathsf{ZOT}_{\zeta}=\mathcal{O}(N^{-1+\epsilon}t^{-3})\). By the fact \(t\gg N^{(\alpha-4)/32}\), it follows that \(\mathsf{ZOT}_{\zeta}=\mathcal{O}(N^{-\alpha/4-(4-\alpha)/8})\). ### Proof of Proposition 3.5 Proposition 3.5 follows from Lemma 3.3 and the following theorem together with some simple algebraic calculation. Recall that \(\Delta_{m}(\hat{\zeta}_{\mathsf{e}})=m_{X}(\hat{\zeta}_{\mathsf{e}})-\mathbb{E }(m_{X}(\hat{\zeta}_{\mathsf{e}}))\). **Theorem 3.7** (CLT of the linear eigenvalue statistics of \(\mathcal{S}(X)\)).: _For any \(2<\alpha<4\),_ \[\frac{N^{\alpha/4}t\Delta_{m}(\hat{\zeta}_{\mathsf{e}})}{\sigma_{m}}\Rightarrow \mathcal{N}(0,1),\] _where_ \[\sigma_{m}^{2} \coloneqq\mathsf{c}t^{2}c_{N}\int_{0}^{\infty}\int_{0}^{\infty} \partial_{z}\partial_{z^{\prime}}\Big{\{}\frac{e^{-s-s^{\prime}-sc_{N}\mathsf{ m}_{\mathsf{mp}}^{(t)}(z)-s^{\prime}c_{N}\mathsf{m}_{\mathsf{mp}}^{(t)}(z^{ \prime})}}{ss^{\prime}}\] \[\quad\times\Big{(}\big{(}s\mathsf{m}_{\mathsf{mp}}^{(t)}(z)+s^{ \prime}\mathsf{m}_{\mathsf{mp}}^{(t)}(z^{\prime})\big{)}^{\alpha/2}-\big{(}s \mathsf{m}_{\mathsf{mp}}^{(t)}(z)\big{)}^{\alpha/2}-\big{(}s^{\prime}\mathsf{ m}_{\mathsf{mp}}^{(t)}(z^{\prime})\big{)}^{\alpha/2}\Big{)}\Big{\}}\Big{|}_{z=z^{ \prime}=\hat{\zeta}_{\mathsf{e}}}\mathrm{d}s\mathrm{d}s^{\prime}.\] To prove Theorem 3.7, we will work on the truncated matrix \(\tilde{X}=(\tilde{x}_{ij})\) with \(\tilde{x}_{ij}=x_{ij}\mathbf{1}_{\sqrt{N}|x_{ij}|\leq N^{\delta}}\) and \(\vartheta=1/4+1/\alpha+\epsilon_{\vartheta}\) such that \(N^{-\alpha\epsilon_{\vartheta}}\ll t\) and \(\epsilon_{\vartheta}<(3\alpha-5)/(4\alpha)\). It will become clear from the following lemma that the fluctuations of \(m_{X}\) and \(m_{\tilde{X}}\) are asymptotically the same. **Lemma 3.8**.: _We have \(N^{\alpha/4}t\big{(}m_{X}(\hat{\zeta}_{\mathsf{e}})-m_{\tilde{X}}(\hat{\zeta}_ {\mathsf{e}})\big{)}=\mathfrak{o}_{p}(1)\)._ Proof.: This lemma simply follows from the rank inequality and Bennett's inequality together with Lemma 3.2 (i). 
Proof of Theorem 3.7.: By Lemma 3.8, it is enough to consider the convergence (in distribution) of \(\mathcal{M}_{N}(\tilde{X})\coloneqq N^{\alpha/4}t\big{(}m_{\tilde{X}}(\hat{ \zeta}_{\mathsf{e}})-\mathbb{E}m_{\tilde{X}}(\hat{\zeta}_{\mathsf{e}})\big{)}.\) We will use the Martingale approach. To this end, we define \(\mathcal{F}_{k}\) as the sigma-algebra generated by the first \(k\) columns of \(\tilde{X}\). Denoting conditional expectation w.r.t. \(\mathcal{F}_{k}\) by \(\mathbb{E}_{k}\), we obtain the following martingale difference decomposition of \(\mathcal{M}_{N}(\tilde{X})\) \[\mathcal{M}_{N}(\tilde{X})=\sum_{k=1}^{N}N^{\alpha/4}t(\mathbb{E}_{k}-\mathbb{E }_{k-1})\big{(}m_{\tilde{X}}(\hat{\zeta}_{\mathsf{e}})-m_{\tilde{X}^{(k)}}( \hat{\zeta}_{\mathsf{e}})\big{)}.\] Our aim is to show that \(\mathcal{M}_{N}(\tilde{X})\) converges in distribution to a Gaussian distribution \(\mathcal{N}(0,\sigma_{m}^{2})\) via the martingale CLT. **Theorem 3.9** (Martingale CLT, Theorem A.3 of [16]).: _Let \((\mathcal{F}_{k})_{k\geq 0}\) be a filtration such that \(\mathcal{F}_{0}=\{\emptyset,\Omega\}\) and let \((\mathcal{W}_{k})_{k\geq 0}\) be a square-integrable complex-valued martingale starting at zero w.r.t. this filtration. For \(k\geq 1\), we define the random variables \(Y_{k}\coloneqq\mathcal{W}_{k}-\mathcal{W}_{k-1}\), \(v_{k}\coloneqq\mathbb{E}_{k}[|Y_{k}|^{2}]\), \(\tau_{k}\coloneqq\mathbb{E}_{k}[Y_{k}^{2}]\), and we also define \(v(N)\coloneqq\sum_{k\geq 1}v_{k}\), \(\tau(N)\coloneqq\sum_{k\geq 1}\tau_{k}\), \(\sum_{k\geq 1}\mathbb{E}[|Y_{k}^{2}|\mathbf{1}_{|Y_{k}|\geq\varepsilon}]\). Suppose that for some constants \(v\geq 0\), \(\tau\in\mathbb{C}\), and for each \(\varepsilon>0\), \(v(N)\stackrel{{\mathbb{P}}}{{\to}}v\), \(\tau(N)\stackrel{{\mathbb{P}}}{{\to}}\tau\), \(L(\varepsilon,N)\to 0\). Then, the martingale \(\mathcal{W}_{N}\) converges in distribution to a centered complex Gaussian variable \(\mathcal{Z}\) such that \(\mathbb{E}(|\mathcal{Z}|^{2})=v\) and \(\mathbb{E}(\mathcal{Z}^{2})=\tau\) as \(N\to\infty\)._ We want to apply Theorem 3.9 with setting \(\mathcal{W}_{N}=\mathcal{M}_{N}(\hat{X})\). Using the resolvent identity, \[\mathcal{M}_{N}(\tilde{X})=\sum_{k=1}^{N}Y_{k}(\hat{\zeta}_{\mathsf{e}})\coloneqq \sum_{k=1}^{N}\frac{t}{N^{1-\alpha/4}}(\mathbb{E}_{k}-\mathbb{E}_{k-1})\frac{ \tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)},\hat{\zeta}_{\mathsf{e}}))^{2}\tilde{x} _{k}}{1+\tilde{x}_{k}^{\top}G(\tilde{X}^{(k)},\hat{\zeta}_{\mathsf{e}})\tilde{ x}_{k}}.\] First note that \(|Y_{k}(\hat{\zeta}_{\mathsf{e}})|\lesssim N^{-1+\alpha/4}t^{-1}\) with high probability. We also have the deterministic upper bound for \(Y_{k}(\hat{\zeta}_{\mathsf{e}})\) since \(\hat{\zeta}_{\mathsf{e}}\) possesses effective imaginary part. Combining these two facts, we can verify that the \(L(\varepsilon,N)\) goes to 0. In order to conclude the proof via Theorem 3.9, we need to check convergences of \(v(N)\) and \(\tau(N)\). This follows from Propositions 3.10 and 3.11 below. 
**Proposition 3.10**.: _Let_ \[\tilde{Y}_{k}(\zeta)\coloneqq\frac{t}{N^{1-\alpha/4}}(\mathbb{E}_{k}-\mathbb{ E}_{k-1})f_{k}(\zeta)\coloneqq\frac{t}{N^{1-\alpha/4}}(\mathbb{E}_{k}-\mathbb{E}_{k-1 })\frac{\tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)},\zeta))_{\mathrm{diag}}^{2} \tilde{x}_{k}}{1+\tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)},\zeta))_{\mathrm{diag} }\tilde{x}_{k}}.\] _Then there exists some constant \(\tau\), such that for any \(\zeta,\zeta^{\prime}\in\Xi(\tau)=\{\xi\in\mathbb{C}:|\xi-\hat{\zeta}_{ \mathsf{e}}|\leq\tau t^{2},|\mathrm{Im}\,\xi|\geq N^{-100}\}\), the summation \(\sum_{k=1}^{N}\mathbb{E}_{k-1}[Y_{k}(\zeta)Y_{k}(\zeta^{\prime})]-\mathbb{E}_ {k-1}[\tilde{Y}_{k}(\zeta)\tilde{Y}_{k}(\zeta^{\prime})]\) converges in probability to \(0\)._ Proof.: The proof is similar to the counterpart in [16]; see the Appendix A.6 for details. **Proposition 3.11**.: _For any \(k\in[N]\), there exists some constant \(\tau\), such that for any \(z,z^{\prime}\in\{\zeta\in\mathbb{C}:|\zeta-\hat{\zeta}_{\mathsf{e}}|\leq\tau t ^{2},|\mathrm{Im}\,\zeta|\geq N^{-100}\}\),_ \[\frac{N^{-1+\alpha/2}t^{2}\mathbb{E}_{k-1}\big{(}(\mathbb{E}_{k}-\mathbb{E}_{k -1})f_{k}(z)(\mathbb{E}_{k}-\mathbb{E}_{k-1})f_{k}(z^{\prime})\big{)}}{ \mathcal{K}(z,z^{\prime})}\stackrel{{\mathbb{P}}}{{\to}}1,\] _as \(N\to\infty\). The kernel \(\mathcal{K}(z,z^{\prime})\) is defined as_ \[\mathcal{K}(z,z^{\prime})\coloneqq \ \mathsf{c}N^{1-\alpha/2}t^{2}c_{N}\int_{0}^{\infty}\int_{0}^{ \infty}\partial_{z}\partial_{z^{\prime}}\Big{\{}\frac{e^{-s-s^{\prime}-sc_{N \mathsf{mp}}(t)}(z)-s^{\prime}c_{N\mathsf{mp}}(t)^{(t)}}{ss^{\prime}}\] \[\times\Big{(}\big{(}s\mathsf{m}_{\mathsf{mp}}^{(t)}(z)+s^{\prime} \mathsf{m}_{\mathsf{mp}}^{(t)}(z^{\prime})\big{)}^{\alpha/2}-\big{(}s\mathsf{ m}_{\mathsf{mp}}^{(t)}(z)\big{)}^{\alpha/2}-\big{(}s^{\prime}\mathsf{m}_{ \mathsf{mp}}^{(t)}(z^{\prime})\big{)}^{\alpha/2}\Big{)}\Big{\}}\mathrm{d}s \mathrm{d}s^{\prime}.\] Before giving the proof of Proposition 3.11, let us introduce the parameter \(\sigma_{N}\coloneqq\sqrt{N\mathbb{E}\hat{x}_{ij}^{2}}\) and Lemma 3.12 below. Note that \[\mathbb{E}\big{(}N\tilde{x}_{ij}^{2}\mathbf{1}_{\sqrt{N}x_{ij}>N^{\theta}} \big{)}=\int_{N^{2\theta}}^{\infty}\mathbb{P}(|\sqrt{N}x_{ij}|^{2}>x)\mathrm{d }x\sim N^{\theta(2-\alpha)}, \tag{3.17}\] which gives \(\sigma_{N}^{2}-(1-t)=\mathcal{O}(N^{\vartheta(2-\alpha)})\). The following lemma collects some useful properties of \(\tilde{x}_{ij}\) and the expansion for the characteristic function of \(x_{ij}\). **Lemma 3.12**.: _Then there exists constant \(C>0\), such that_ * \(\tilde{x}_{ij}\)_'s are i.i.d. centered, with variance_ \(\sigma_{N}^{2}/N\)_, third moment bound_ \(N^{3/2}\mathbb{E}[|\tilde{x}_{ij}|^{3}]\leq CN^{\vartheta(3-\alpha)_{+}}\)_, and fourth moment bound_ \(N^{2}\mathbb{E}[|\tilde{x}_{ij}|^{4}]\leq CN^{\vartheta(4-\alpha)}\)_,_ * _for any_ \(\lambda\in\mathbb{C}\) _such that_ \(\mathrm{Im}\,\lambda\leq 0\)_,_ \[\phi_{N}(\lambda)\coloneqq\mathbb{E}\big{(}e^{-\mathrm{i}\lambda|x_{ij}|^{2}} \big{)}=1-\frac{\mathrm{i}(1-t)\lambda}{N}+c\frac{(\mathrm{i}\lambda)^{\frac{ \alpha}{2}}}{N^{\frac{\alpha}{2}}}+\varepsilon_{N}(\lambda),\quad\text{and} \quad\varepsilon_{N}(\lambda)=\mathcal{O}\Big{(}\frac{|\lambda|^{(\alpha+\varrho) /2}}{N^{(\alpha+\varrho)/2}}\vee\frac{|\lambda|^{2}}{N^{2}}\Big{)}.\] Proof.: The proof of (i) is elementary. 
To prove (ii), we observe \[1-\phi_{N}(\lambda)=\int_{0}^{\infty}\big{(}\exp(-\mathrm{i}\lambda u/N)-1) \mathrm{d}F^{c}(u)=\frac{\mathrm{i}\lambda}{N}\int_{0}^{\infty}\exp(-\mathrm{i} \lambda u/N)F^{c}(u)\mathrm{d}u,\] where \(F\) be the distribution function of \(Nx_{ij}^{2}\) and let \(F^{c}=1-F\). Since \(\int_{0}^{\infty}F^{c}(u)\mathrm{d}u=1-t\), we notice \[1-\phi_{N}(\lambda)=\frac{\mathrm{i}\lambda(1-t)}{N}+\frac{\mathrm{i}\lambda}{N} \int_{0}^{\infty}\big{(}\exp(-\mathrm{i}\lambda u/N)-1\big{)}F^{c}(u)\mathrm{d}u.\] The estimate (ii) can be obtained using the tail density assumption on \(\sqrt{N}y_{ij}\) (cf. Assumption 1.1 (i)). Proof of Proposition 3.11.: Let \(\hat{f}_{k}\) be defined as \(f_{k}\), but with the matrix \(\tilde{X}\) replaced by a matrix \(\hat{X}\). The columns \(\hat{X}_{i}\) of \(\hat{X}\) are the same as those of \(\tilde{X}\) if \(i\leq k\), but are independent random vectors with the same distribution as the columns of \(\tilde{X}\) if \(i>k\). It is still valid to use the notation \(\mathbb{E}_{k}\) since \(\tilde{X}\) and \(\hat{X}\) share the same first \(k\) columns. By the following elementary identity \[\mathbb{E}_{k-1}\big{(}(\mathbb{E}_{k}-\mathbb{E}_{k-1})(f_{k}(z) )(\mathbb{E}_{k}-\mathbb{E}_{k-1})(f_{k}(z^{\prime}))\big{)}\] \[=\mathbb{E}_{k}\big{(}\mathbb{E}_{\tilde{x}_{k}}(f_{k}(z)\hat{f}_ {k}(z^{\prime}))\big{)}-\big{(}\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}f_{k}( z)\big{)}\big{(}\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}\hat{f}_{k}(z^{\prime}) \big{)}, \tag{3.18}\] it suffices to study the approximation for \(\mathbb{E}_{\tilde{x}_{k}}f_{k}(z)\) and \(\mathbb{E}_{\tilde{x}_{k}}f_{k}(z)\hat{f}_{k}(z^{\prime})\). In the sequel, we write \(f_{k}=f_{k}(z)\), \(\hat{f}_{k}^{\prime}=\hat{f}_{k}(z^{\prime})\), \(G_{k}=G(\tilde{X}^{(k)},z)\) and \(G_{k}^{\prime}=G(\tilde{X}^{(k)},z^{\prime})\) for simplicity. By a minor process argument, for any \(D>0\), there exists constant \(C_{k}>0\) such that \(|\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)}))-\hat{\zeta}_{\mathbf{e}}|\geq C_{k }t^{2}\), with probability at least \(1-N^{-D}\). This implies that there exists some constant \(C_{k}>0\) such that for any arbitrary large \(D>0\), \[\mathbb{P}(\tilde{\Omega}_{k}=\{\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)}))- \bar{\zeta}_{-,t}\geq C_{k}t^{2}\})\geq 1-N^{-D}.\] Then it is readily seen that \(\operatorname{Re}\big{[}G(\tilde{X}^{(k)},z)]_{jj}\cdot\mathbf{1}_{\tilde{ \Omega}_{k}}\geq 0\) for any \(z\in\{|z-\bar{\zeta}_{-,t}|\leq C_{k}t^{2}/10,|\operatorname{Im}z|\geq N^{-100}\}\). Since \(\tilde{\Omega}_{k}\) is independent of \(\tilde{x}_{k}\), we can write \(\mathbb{E}_{\tilde{x}_{k}}(f_{k})=\mathbb{E}_{\tilde{x}_{k}}(f_{k})\mathbf{1} _{\tilde{\Omega}_{k}}+\mathbb{E}_{\tilde{x}_{k}}(f_{k})\mathbf{1}_{\tilde{ \Omega}_{k}^{c}}\). Using the facts that \(|\tilde{x}_{jk}|\leq N^{1/\alpha+1/4+\epsilon_{\vartheta}}\) and \(|[G_{kj}]_{jj}|\leq|\operatorname{Im}z|^{-1}\leq N^{101}\), we have for some large constant \(K>0\) such that \(|\mathbb{E}_{\tilde{x}_{k}}(f_{k})\mathbf{1}_{\tilde{\Omega}_{k}^{c}}|\leq N^{ K}\mathbf{1}_{\tilde{\Omega}_{k}^{c}}\). Next, we will mainly focus on the estimation for \(\mathbb{E}_{\tilde{x}_{k}}(f_{k})\mathbf{1}_{\tilde{\Omega}_{k}}\). In the sequel, we omit the indicate function \(\mathbf{1}_{\tilde{\Omega}_{k}}\) from the display for simplicity, and keep in mind that all the estimates are done on the event \(\tilde{\Omega}_{k}\). 
Using the identity that for \(w\) with \(\operatorname{Re}w>0\), \(w^{-1}=\int_{0}^{\infty}e^{-sw}\mathrm{d}s\), we have \[\mathbb{E}_{\tilde{x}_{k}}f_{k}=\mathbb{E}_{\tilde{x}_{k}}\Big{(}\int_{0}^{ \infty}\sum_{j}\tilde{x}_{jk}^{2}[G_{k}^{2}]_{jj}e^{-s\big{(}1+\sum_{j}\tilde {x}_{jk}^{2}[G_{k}]_{jj}\big{)}}\mathrm{d}s\Big{)}=-\int_{0}^{\infty}\frac{e^{- s}}{s}\partial_{z}\Big{\{}\mathbb{E}_{\tilde{x}_{k}}\big{(}e^{-s\sum_{j}\tilde{x}_{ jk}^{2}[G_{k}]_{jj}}\big{)}\Big{\}}\mathrm{d}s.\] Recall \(\tilde{\phi}_{N}\) and \(\phi_{N}\) in Lemma 3.12. We have \[\mathbb{E}_{\tilde{x}_{k}}f_{k}=-\int_{0}^{\infty}\frac{e^{-s}}{s}\partial_{z} \Big{\{}\prod_{j}\phi_{N}\big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}\Big{\}} \mathrm{d}s+\text{Diff},\] where \(\text{Diff}\coloneqq\int_{0}^{\infty}\frac{e^{-s}}{s}\partial_{z}\Big{\{} \prod_{j}\phi_{N}\big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}-\prod_{j}\tilde{ \phi}_{N}\big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}\Big{\}}\mathrm{d}s.\) Note by the definition of \(\tilde{x}_{jk}\)'s for any \(j\in[N]\), the following estimate holds uniformly for all \(\lambda\) with \(\operatorname{Im}\lambda\leq 0\), \[\big{|}\phi_{N}(\lambda)-\tilde{\phi}_{N}(\lambda)\big{|}=\Big{|}\mathbb{E} \Big{[}\big{(}e^{-\mathrm{i}\lambda|x_{ij}|^{2}}-1\big{)}\cdot\mathbf{1}_{ \sqrt{N}|x_{ij}|>N^{\vartheta}}\Big{]}\Big{|}\leq 2\mathbb{P}\big{(}\sqrt{N}|x_{ ij}|>N^{\vartheta}\big{)}\lesssim N^{-\alpha\vartheta}.\] Therefore, by a Cauchy integral argument with contour radius equals to \(ct^{2}\) for some sufficiently small \(c>0\), we have for sufficiently large \(K\), \[\Big{|}\int_{N^{-K}}^{\infty}\frac{e^{-s}}{s}\partial_{z}\Big{\{}\prod_{j} \phi_{N}\big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}-\prod_{j}\tilde{\phi}_{N} \big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}\Big{\}}\mathrm{d}s\Big{|}\lesssim t^{-2} N^{-\alpha\vartheta}\int_{N^{-K}}^{\infty}\frac{e^{-s}}{s}\mathrm{d}s\lesssim N^{1- \alpha/2-\epsilon}.\] With the prescribe \(K\), we also have \[\Big{|}\int_{0}^{N^{-K}}\frac{e^{-s}}{s}\partial_{z}\Big{\{}\prod_{j}\tilde{ \phi}_{N}\big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}\Big{\}}\mathrm{d}s\Big{|}= \Big{|}\mathbb{E}_{\tilde{x}_{k}}\Big{(}\int_{0}^{N^{-K}}\sum_{j}\tilde{x}_{jk}^{2 }[G_{k}^{2}]_{jj}e^{-s\big{(}1+\sum_{j}\tilde{x}_{jk}^{2}[G_{k}]_{jj}\big{)}} \mathrm{d}s\Big{)}\Big{|}\lesssim N^{-K/2},\] and similar estimate holds if we replace \(\tilde{\phi}_{N}\) by \(\phi_{N}\). Combining the above two displays, we can obtain that \[\mathbb{E}_{\tilde{x}_{k}}f_{k}=-\int_{0}^{\infty}\frac{e^{-s}}{s}\partial_{z} \Big{\{}\prod_{j}\Big{(}1+\frac{1-t}{N}u_{j}(z,s)\Big{)}\Big{\}}\mathrm{d}s+ \mathcal{O}_{\prec}(N^{1-\alpha/2-\epsilon}),\] where \[u_{j}(z,s)=\frac{N}{1-t}\Big{(}\phi\big{(}-\mathrm{i}s[G_{k}]_{jj}\big{)}-1\Big{)} =-s[G_{k}]_{jj}+\mathsf{c}\frac{(s[G_{k}]_{jj})^{\frac{\alpha}{2}}}{N^{\frac{ \alpha-2}{2}}(1-t)}+\frac{N}{1-t}\varepsilon_{N}(-\mathrm{i}s[G_{k}]_{jj}).\] We introduce the approximation \(\mathsf{K}_{1}(z,s)\) for the integrand as follows: \[\mathsf{K}_{1}(z,s):=\frac{e^{-s-s(1-t)\mathrm{Tr}G_{k}/N}}{s}\Big{(}1+\frac{ \mathsf{c}}{N^{\alpha/2}}\sum_{j=1}^{M}\big{(}s[G_{k}]_{jj}\big{)}^{\alpha/2} \Big{)}.\] Then our goal is to show on the event \(\tilde{\Omega}_{k}\) \[\int_{0}^{\infty}\partial_{z}\delta(z,s)\mathrm{d}s\lesssim N^{1-\alpha/2- \epsilon}, \tag{3.19}\] where \(\delta(z,s)=\frac{e^{-s}}{s}\prod_{j}\big{(}1+\frac{1-t}{N}u_{j}(z,s)\big{)}- \mathsf{K}(z,s)\), and \(\epsilon>0\) is a small constant. 
By the Cauchy integral formula, we have \(\Big{|}\int_{0}^{\infty}\partial_{z}\delta(z,s)\mathrm{d}s\Big{|}\lesssim t ^{-2}\int_{0}^{\infty}|\delta(z_{s},s)|\mathrm{d}s\), where \(z_{s}\) is the maximizer of \(|\delta(z,s)|\) on the contour \(\{z^{\prime}:|z^{\prime}-z|=C_{k}t^{2}/50\}\). To estimate the RHS of this inequality, we divide it into two parts, \[\frac{1}{t^{2}}\int_{0}^{\infty}|\delta(z_{s},s)|\mathrm{d}s=\frac{1}{t^{2}} \int_{0}^{N^{\varsigma}}|\delta(z_{s},s)|\mathrm{d}s+\frac{1}{t^{2}}\int_{N^{ \varsigma}}^{\infty}|\delta(z_{s},s)|\mathrm{d}s=I_{1}+I_{2},\] with \(\varsigma\) being chosen later. Using the fact that \([G_{k}]_{jj}\lesssim t^{-2}\) on the event \(\tilde{\Omega}_{k}\), we can obtain that \[I_{2}\lesssim\frac{1}{t^{2+\alpha}}\int_{N^{\varsigma}}^{\infty}s^{\alpha/2-1 }e^{-s}\mathrm{d}s\leq e^{-N^{\varsigma}/3}.\] For \(I_{1}\), we further decompose it into three parts, \[I_{1} =\frac{1}{t^{2}}\int_{0}^{N^{\varsigma}}\Big{|}\frac{e^{-s}}{s} \Big{(}\prod_{j}\Big{(}1+\frac{1-t}{N}u_{j}(z_{s},s)\Big{)}-e^{\sigma_{N}^{2}/ N\sum_{j}u_{j}(z_{s},s)}\Big{)}\Big{|}\mathrm{d}s\] \[\quad+\frac{1}{t^{2}}\int_{0}^{N^{\varsigma}}\Big{|}\frac{e^{-s}}{ s}\Big{(}e^{(1-t)/N\sum_{j}u_{j}(z_{s},s)}-e^{-s(1-t)\mathrm{Tr}G_{k}/N}\Big{(}1+ \sum_{j}\mathsf{c}\frac{(s[G_{k}]_{jj})^{\frac{\alpha}{2}}}{N^{\frac{\alpha}{ 2}}}+\sum_{j}\varepsilon_{N}(-\mathrm{i}s[G_{k}]_{jj})\Big{)}\Big{)}\Big{|} \mathrm{d}s\] \[\quad+\frac{1}{t^{2}}\int_{0}^{N^{\varsigma}}\Big{|}\frac{e^{-s}}{ s}\Big{(}e^{-s(1-t)\mathrm{Tr}G_{k}/N}\sum_{j}\varepsilon_{N}(-\mathrm{i}s[G_{k}]_{jj}) \Big{)}\Big{|}\mathrm{d}s=I_{11}+I_{12}+I_{13}.\] Notice that on the event \(\tilde{\Omega}_{k}\), \(M_{s}=\max_{j}|u_{j}(z_{s},s)|\sigma_{N}^{2}\lesssim st^{-2}\), and \(\mathrm{Re}\,u_{j}(z_{s},s)=N(\mathrm{Re}\,\phi_{N}(-\mathrm{i}s[G_{k}]_{jj})-1)\leq 0\). Then using [16, Lemma 4.5], we have on the event \(\tilde{\Omega}_{k}\), \[I_{11}\leq\frac{1}{t^{2}}\int_{0}^{N^{\varsigma}}\frac{e^{-s}}{s}\cdot\frac{s^ {2}}{Nt^{4}}e^{s^{2}/(Nt^{4})+\sum_{j}\mathrm{Re}\,((1-t)u_{j}(z_{s},s))/N} \mathrm{d}s\lesssim\frac{1}{Nt^{6}}\int_{0}^{N^{\varsigma}}e^{-s}se^{s^{2}/( Nt^{4})}\mathrm{d}s\lesssim\frac{1}{Nt^{6}}.\] By choosing \(\varsigma<1/3\) (say), we can obtain that \(I_{11}\lesssim N^{-1}t^{-6}\). Applying the simple inequality that \(|e^{x}-(1+x)|\leq 2|x|^{2}\) for \(|x|\leq 1/2\), we have \[I_{12} \lesssim\frac{1}{t^{2}}\int_{0}^{N^{\varsigma}}\frac{e^{-s}}{s} \Big{|}\sum_{j}\mathsf{c}\frac{(s[G_{k}]_{jj})^{\frac{\alpha}{2}}}{N^{\frac{ \alpha}{2}}}+\sum_{j}\varepsilon_{N}(-\mathrm{i}s[G_{k}]_{jj})\Big{|}^{2} \mathrm{d}s\] \[\lesssim\frac{1}{N^{\alpha-2}t^{2+2\alpha}}\int_{0}^{N^{\varsigma}} s^{\alpha-1}\mathrm{d}s\lesssim N^{\alpha-\alpha+2}t^{-2-2\alpha}\lesssim N^{-3( \alpha-2)/5},\] where in the last step, we chose \(\varsigma<(\alpha-2)/(4\alpha)\). Finally, for \(I_{13}\), we can use Lemma 3.12 (ii) to obtain that, \(I_{13}\lesssim N^{-(\alpha-2)\vartheta}t^{-2-2\alpha}\lesssim N^{-3(\alpha-2)/4}\). Now we may conclude the proof of (3.19) by combining the above estimates and possibly adjusting the constants. 
This gives \[\mathbb{E}_{\tilde{x}_{k}}(f_{k})=-\int_{0}^{\infty}\partial_{z}\mathsf{K}_{1}(z,s)\mathrm{d}s+O_{\prec}(N^{1-\alpha/2-\epsilon}).\] Similarly, we can obtain that \[\mathbb{E}_{\bar{x}_{k}}(f_{k}\hat{f}^{\prime}_{k})=\int_{0}^{\infty}\int_{0}^{ \infty}\partial_{z}\partial_{z^{\prime}}\mathsf{K}_{2}(z,z^{\prime},s,s^{\prime })\mathrm{d}s\mathrm{d}s^{\prime}+O_{\prec}(N^{1-\alpha/2-\epsilon}),\] where \[\mathsf{K}_{2}(z,z^{\prime},s,s^{\prime})\coloneqq\frac{e^{-s-s^{\prime}-s(1-t )\mathrm{Tr}G_{k}/N-s(1-t)\mathrm{Tr}G^{\prime}_{k}/N}}{ss^{\prime}}\Big{(}1+ \frac{\mathsf{c}}{N^{\alpha/2}}\sum_{j=1}^{M}\big{(}s[G_{k}]_{jj}+s^{\prime}[G^ {\prime}_{k}]_{jj}\big{)}^{\alpha/2}\Big{)}.\] Notice that the estimate in Proposition 2.12 can be obtained for our \(G(\tilde{X}^{(k)},z)\) as well in the same manner. Suppose that \(\max\{|\mathcal{D}_{r}|,|\mathcal{D}_{\mathsf{c}}|\}\leq N^{1-\epsilon_{d}}\) for some \(\epsilon_{d}\). Notice that \(\epsilon_{d}\geq\epsilon_{\alpha}\) by definition. Hence, the claim now follows by (i) employing (3.18), then substituting \(\sigma_{N}^{2}[G_{k}]_{jj}(z)\) with \(\mathsf{m}_{\mathsf{mp}}(z/\sigma_{N}^{2})\) for \(j\in\mathcal{T}_{r}\), and utilizing the bound \([G_{k}]_{jj}\sim 1/t\) for \(j\in\mathcal{D}_{r}\) with the fact \(t\gg N^{-\epsilon_{d}/4}\); (ii) considering the estimates \(\sigma_{N}^{2}-(1-t)=\mathcal{O}(N^{\vartheta(2-\alpha)})\) (refer to Eqn. (3.17)), \(\partial_{z}\mathsf{m}_{\mathsf{mp}}(z/\sigma_{N}^{2})\sim t^{-1}\), and \(\partial_{z}^{2}\mathsf{m}_{\mathsf{mp}}(z/\sigma_{N}^{2})\sim t^{-3}\) for \(z\) within the specified domain. This enables us to further replace \(\mathsf{m}_{\mathsf{mp}}(z/\sigma_{N}^{2})\) and \(\mathsf{m}_{\mathsf{mp}}(z/\sigma_{N}^{2})\) with \(\mathsf{m}_{\mathsf{mp}}^{(t)}(z)\) and \(\mathsf{m}_{\mathsf{mp}}^{(t)}(z^{\prime})\), respectively. ### Proof of Proposition 3.6 Let us first define \[\mathfrak{p}(z) :=\mathsf{c}N^{1-\alpha/2}c_{N}\int_{0}^{\infty}e^{-s-sc_{N} \mathsf{m}_{\mathsf{mp}}^{(t)}(z)}\big{(}s\mathsf{m}_{\mathsf{mp}}^{(t)}(z) \big{)}^{\alpha/2}\mathrm{d}s,\] \[m_{\mathsf{shift}}(z) \coloneqq\frac{\mathrm{i}(\frac{z}{1-t}-c_{N}+1)\mathfrak{p}(z)}{ 2c_{N}z\sqrt{(\frac{z}{1-t}-\lambda_{-}^{\mathsf{mp}})}(\lambda_{+}^{\mathsf{ mp}}-\frac{z}{1-t})}.\] Then we have the following proposition concerning the expansion of \(\mathbb{E}m_{X}(z)\). **Proposition 3.13**.: _There exists some sufficiently small constant \(\tau>0\), such that for any \(z\in\{\zeta:|\zeta-\bar{\zeta}_{-,t}|\leq\tau t^{2},|\mathrm{Im}\,\zeta|\geq N ^{-100}\}\), we have \(m_{\mathsf{shift}}(z)=\mathcal{O}(t^{-1}N^{1-\alpha/2})\) and_ \[\mathbb{E}m_{X}(z)=\mathsf{m}_{\mathsf{mp}}^{(t)}(z)+m_{\mathsf{shift}}(z)- \frac{\mathfrak{p}(z)}{2c_{N}z}+\mathcal{O}(N^{1-\alpha/2-\epsilon(4-\alpha)( \alpha-2)/50}). \tag{3.20}\] _Furthermore, for any \(z\in\{\zeta:|\zeta-\bar{\zeta}_{-,t}|\ll t^{2},|\mathrm{Im}\,\zeta|\geq N^{-100}\}\),_ \[tm_{\mathsf{shift}}(z)=\frac{\mathsf{c}N^{1-\alpha/2}\int_{0}^{\infty}e^{-s- sc_{N}\mathsf{m}_{\mathsf{mp}}(\lambda_{-}^{\mathsf{mp}})}\big{(}s\mathsf{m}_{ \mathsf{mp}}(\lambda_{-}^{\mathsf{mp}})\big{)}^{\alpha/2}\mathrm{d}s}{2\sqrt {c_{N}}(1-\sqrt{c_{N}})}+\mathfrak{o}(N^{1-\alpha/2}). \tag{3.21}\] Proof.: By the resolvent expansion, we have for any \(z\in\{\zeta:|\zeta-\bar{\zeta}_{-,t}|\leq\tau t^{2},|\mathrm{Im}\,\zeta|\geq N ^{-100}\}\), \[\big{[}G(X^{\top},z)\big{]}_{ii}=-\big{(}z+zx_{i}^{\top}G(X^{(i)},z)x_{i} \big{)}^{-1}. 
\tag{3.22}\] Let \(Q=Q_{\mathsf{diag}}+Q_{\mathsf{off}}\) with \(Q_{\mathsf{diag}}:=\sum_{j=1}^{M}x_{ji}^{2}\big{[}G(X^{(i)},z)\big{]}_{jj}\) and \(Q_{\mathsf{off}}:=\sum_{k\neq\ell}x_{ki}x_{\ell i}\big{[}G(X^{(i)},z)\big{]}_{k\ell}\). Then, we can rewrite (3.22) as: \[\big{[}G(X^{\top},z)\big{]}_{ii}=-\frac{1}{z(1+Q_{\mathsf{diag}})}+\frac{Q_{\mathsf{off}}}{z(1+Q_{\mathsf{diag}})^{2}}-\frac{Q_{\mathsf{off}}^{2}}{z(1+Q_{\mathsf{diag}})^{2}(1+Q)}. \tag{3.23}\] Taking expectation on both sides gives \[\mathbb{E}\big{[}G(X^{\top},z)\big{]}_{ii}=-\frac{1}{z}\mathbb{E}\Big{[}\frac{1}{1+Q_{\mathsf{diag}}}\Big{]}-\frac{1}{z}\mathbb{E}\Big{[}\frac{Q_{\mathsf{off}}^{2}}{(1+Q_{\mathsf{diag}})^{2}(1+Q)}\Big{]}=I_{1}+I_{2},\] where the second term on the right-hand side of (3.23) vanishes due to symmetry. Notice that when \(\Psi^{(i)}\) is good, we have w.h.p. that \[|\lambda_{M}(\mathcal{S}(X^{(i)}))-z| \geq|(1-t)\lambda_{-}^{\mathsf{mp}}-\bar{\zeta}_{-,t}|-|\lambda_{M}(\mathcal{S}(X^{(i)}))-(1-t)\lambda_{-}^{\mathsf{mp}}|-|\bar{\zeta}_{-,t}-z|\] \[\geq\sqrt{c_{N}}t^{2}-\sqrt{c_{N}}t^{2}/4-\tau t^{2}\geq\sqrt{c_{N}}t^{2}/2,\] where in the last step we used the fact that \(|\lambda_{M}(\mathcal{S}(X^{(i)}))-(1-t)\lambda_{-}^{\mathsf{mp}}|\leq N^{-\epsilon_{b}}\) w.h.p., and we also chose \(\tau<\sqrt{c_{N}}t^{2}/4\). This together with the fact that \(\Psi^{(i)}\) is good w.h.p. gives \(\mathbb{P}(\Omega_{i})\geq 1-N^{-D}\), where \(\Omega_{i}:=\big{\{}|\lambda_{M}(\mathcal{S}(X^{(i)}))-z|\geq\sqrt{c_{N}}t^{2}/2\big{\}}\). Notice that \(\operatorname{Re}Q_{\text{diag}}\geq 0\) and \(\operatorname{Re}Q\geq 0\) hold on \(\Omega_{i}\). Then for \(I_{2}\), with the smallness of \(\mathbb{P}(\Omega_{i}^{c})\), we have \(I_{2}=\mathbb{E}[Q_{\text{off}}^{2}\mathbf{1}_{\Omega_{i}}/[(1+Q_{\text{diag}})^{2}(1+Q)]]+\mathcal{O}(N^{-D})\). We then bound \(I_{2}\) as \[|I_{2}|\leq\mathbb{E}|Q_{\text{off}}|^{2}\mathbf{1}_{\Omega_{i}}+\mathcal{O}(N^{-D})=2N^{-2}\mathbb{E}\Big{[}\mathrm{Tr}G(X^{(i)},z)\overline{G(X^{(i)},z)}\mathbf{1}_{\Omega_{i}}\Big{]}+\mathcal{O}(N^{-1})=\mathcal{O}_{\prec}(t^{-4}N^{-1}).\] Next, we estimate \(I_{1}\). Due to the smallness of \(\mathbb{P}(\Omega_{i}^{c})\), we only have to do the estimation on the event \(\Omega_{i}\). Specifically, we have \(I_{1}=-\mathbb{E}[\mathbf{1}_{\Omega_{i}}/(z+zQ_{\text{diag}})]+\mathcal{O}(N^{-D})\). Notice that \(\operatorname{Re}Q_{\text{diag}}\geq 0\) on the event \(\Omega_{i}\).
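For the reader's convenience, we record the elementary algebra behind (3.23); the following is a minimal verification, using only the splitting \(Q=Q_{\mathsf{diag}}+Q_{\mathsf{off}}\) and no new notation: \[\frac{1}{1+Q}-\frac{1}{1+Q_{\mathsf{diag}}}=-\frac{Q_{\mathsf{off}}}{(1+Q_{\mathsf{diag}})(1+Q)}=-\frac{Q_{\mathsf{off}}}{(1+Q_{\mathsf{diag}})^{2}}+\frac{Q_{\mathsf{off}}^{2}}{(1+Q_{\mathsf{diag}})^{2}(1+Q)},\] so that multiplying through by \(-1/z\) and using \(\big{[}G(X^{\top},z)\big{]}_{ii}=-1/(z(1+Q))\) from (3.22) recovers (3.23).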
Using the identity that for \(w\) with \(\operatorname{Re}w>0\), \(w^{-1}=\int_{0}^{\infty}e^{-sw}\mathrm{d}s\) and setting \(w=1+Q_{\text{diag}}\), we have \[I_{1} =-\frac{1}{z}\mathbb{E}\Big{[}\int_{0}^{\infty}e^{-s(1+Q_{\text{diag}})}\mathrm{d}s\cdot\mathbf{1}_{\Omega_{i}}\Big{]}+\mathcal{O}(N^{-D})=-\frac{1}{z}\mathbb{E}\Big{(}\mathbb{E}_{x_{i}}\Big{[}\int_{0}^{\infty}e^{-s(1+Q_{\text{diag}})}\mathrm{d}s\Big{]}\cdot\mathbf{1}_{\Omega_{i}}\Big{)}+\mathcal{O}(N^{-D})\] \[=-\frac{1}{z}\mathbb{E}\Big{(}\int_{0}^{\infty}e^{-s}\prod_{j}\phi_{N}\big{(}-\mathrm{i}s\big{[}G(X^{(i)},z)\big{]}_{jj}\big{)}\mathrm{d}s\cdot\mathbf{1}_{\Omega_{i}}\Big{)}+\mathcal{O}(N^{-D}).\] Then we may proceed as in the estimation in the proof of Proposition 3.11 to obtain that \[I_{1}=-\frac{1}{z}\mathbb{E}\Big{[}\frac{1}{1+(1-t)\mathrm{Tr}G(X^{(i)},z)/N}\cdot\mathbf{1}_{\Omega_{i}}\Big{]}-\frac{\mathfrak{p}(z)}{z}+\mathcal{O}(N^{-3(\alpha-2)/5}).\] Further using the \(\mathcal{O}_{\prec}(t^{-4}N^{-1})\) bound for \(\mathrm{Var}(M^{-1}\mathrm{Tr}G(X^{(i)},z))\) and the fact \(\mathrm{Tr}G(X^{(i)},z)-\mathrm{Tr}G(X,z)\prec t^{-4}\), we arrive at \[I_{1}=-\frac{1}{z}\Big{(}\frac{1}{1+(1-t)\mathbb{E}\mathrm{Tr}G(X,z)/N}\Big{)}-\frac{\mathfrak{p}(z)}{z}+\mathcal{O}(N^{-3(\alpha-2)/5})+\mathcal{O}_{\prec}(t^{-4}N^{-1}).\] Collecting the estimates for \(I_{1}\) and \(I_{2}\), and then summing over \(i\), we have \[N^{-1}\mathbb{E}\mathrm{Tr}G(X^{\top},z)=-\frac{1}{z}\Big{(}\frac{1}{1+(1-t)\mathbb{E}\mathrm{Tr}G(X,z)/N}\Big{)}-\frac{\mathfrak{p}(z)}{z}+\mathcal{O}(N^{-3(\alpha-2)/5}).\] Using the simple equation \(\mathrm{Tr}G(X,z)-\mathrm{Tr}G(X^{\top},z)=(N-M)/z\), the above equation can be rewritten as: \[c_{N}\mathbb{E}m_{X}(z)=-\frac{1}{z}\Big{(}\frac{1}{1+(1-t)c_{N}\mathbb{E}m_{X}(z)}\Big{)}-\frac{\mathfrak{p}(z)+1-c_{N}}{z}+\mathcal{O}(N^{-3(\alpha-2)/5}). \tag{3.24}\] Notice that for \(z=\bar{\zeta}_{-,t}+\mathrm{i}N^{-100K_{\zeta}}\), we have \((\frac{z}{1-t}+c_{N}-1)^{2}-\frac{4c_{N}z}{1-t}=\frac{c_{N}t^{2}(2-t)^{2}}{(1-t)^{2}}+\mathcal{O}(N^{-90K_{\zeta}})\). Then by continuity, we may choose \(\tau\) sufficiently small such that for any \(z\in\{\zeta:|\zeta-\bar{\zeta}_{-,t}|\leq\tau t^{2},|\mathrm{Im}\,\zeta|\geq N^{-100K_{\zeta}}\}\), we have \((\frac{z}{1-t}+c_{N}-1)^{2}-\frac{4c_{N}z}{1-t}\sim t^{2}\). Having this bound, we may solve the quadratic equation (3.24) and then compare it with (1.2) to obtain that \[\mathbb{E}m_{X}(z)=\mathsf{m}_{\text{mp}}^{(t)}(z)+m_{\text{shift}}(z)-\frac{\mathfrak{p}(z)}{2c_{N}z}+\mathcal{O}(N^{-11(\alpha-2)/20}),\] which proves (3.20). Using the fact that \(\mathsf{m}_{\text{mp}}^{(t)}(\bar{\zeta}_{-,t}+\mathrm{i}N^{-100K_{\zeta}})-\mathsf{m}_{\text{mp}}(\lambda_{-}^{\mathsf{mp}})\leq t\), we may further derive that \[tm_{\text{shift}}(\bar{\zeta}_{-,t}+\mathrm{i}N^{-100})=\frac{\mathsf{c}N^{1-\alpha/2}\int_{0}^{\infty}e^{-s-sc_{N}\mathsf{m}_{\text{mp}}(\lambda_{-}^{\mathsf{mp}})}\big{(}s\mathsf{m}_{\text{mp}}(\lambda_{-}^{\mathsf{mp}})\big{)}^{\alpha/2}\mathrm{d}s}{2\sqrt{c_{N}}(1-\sqrt{c_{N}})}+\mathcal{O}(tN^{1-\alpha/2}).\] This together with the crude bound \(\mathsf{m}_{\text{mp}}^{(t)}(\bar{\zeta}_{-,t}+\mathrm{i}N^{-100K_{\zeta}})-\mathsf{m}_{\text{mp}}^{(t)}(\bar{\zeta}_{-,t})=\mathcal{O}(N^{-90K_{\zeta}})\) proves (3.21), which completes the proof of Proposition 3.13. The following corollary is a direct consequence of Proposition 3.13. **Corollary 3.14**.: _Let \(\tau\) be chosen as in Proposition 3.13.
Then for any \(z\in\{\zeta:|\zeta-\bar{\zeta}_{-,t}|\leq\tau t^{2}/2,|\mathrm{Im}\,\zeta|\geq N^{-1 00}\}\), we have \(\mathbb{E}m_{X}^{(k)}(z)-(\mathsf{m}_{\text{mp}}^{(t)}(z))^{(k)}=\mathcal{O}(t^{ -(2k+1)}N^{1-\alpha/2}),\)_ Proof.: The claim follows from Proposition 3.13 with Cauchy integral. We omit further details. Proof of Proposition 3.6.: Replacing \(\mathbb{E}[m_{X}(\hat{\zeta}_{\mathbf{e}})]\) by \(\mathsf{m}_{\mathsf{mp}}^{(t)}(\hat{\zeta}_{\mathbf{e}})\) in the expression of \(\lambda_{\mathsf{shift}}\), we can obtain \[\lambda_{\mathsf{shift}}=\bar{\Phi}_{t}(\zeta_{\mathbf{e}})+\big{(}2c_{N}t \lambda_{-}^{\mathsf{mp}}+O(t^{2})\big{)}\cdot\big{(}\mathsf{m}_{\mathsf{mp}}^{ (t)}(\hat{\zeta}_{\mathbf{e}})-\mathbb{E}[m_{X}(\hat{\zeta}_{\mathbf{e}})] \big{)}+\mathcal{O}(|\zeta_{\mathbf{e}}-\hat{\zeta}_{\mathbf{e}}|). \tag{3.25}\] Expanding \(\bar{\Phi}_{t}(\zeta_{\mathbf{e}})\) around \(\bar{\zeta}_{-,t}\) and using the fact that \(\bar{\Phi}_{t}^{\prime}(\bar{\zeta}_{-,t})=0\), we have that there exists \(\tilde{\zeta}\in[\bar{\zeta}_{-,t},\zeta_{\mathbf{e}}]\) such that \(\bar{\Phi}_{t}(\zeta_{\mathbf{e}})=\bar{\Phi}_{t}(\bar{\zeta}_{-,t})+\bar{ \Phi}_{t}^{\prime\prime}(\tilde{\zeta})(\zeta_{\mathbf{e}}-\bar{\zeta}_{-,t}) ^{2}=\lambda_{-}^{\mathsf{mp}}+\bar{\Phi}_{t}^{\prime\prime}(\tilde{\zeta})( \zeta_{\mathbf{e}}-\bar{\zeta}_{-,t})^{2}.\) Substituting this expansion back into (3.25), and using the bound in Corollary 3.14, (3.25) becomes \[\lambda_{\mathsf{shift}}=\lambda_{-}^{\mathsf{mp}}+2c_{N}t\lambda_{-}^{\mathsf{ mp}}\big{(}\mathsf{m}_{\mathsf{mp}}^{(t)}(\hat{\zeta}_{\mathbf{e}})-\mathbb{E}[m_{X} (\hat{\zeta}_{\mathbf{e}})]\big{)}+\bar{\Phi}_{t}^{\prime\prime}(\tilde{\zeta })(\zeta_{\mathbf{e}}-\bar{\zeta}_{-,t})^{2}+\mathfrak{o}(N^{1-\alpha/2}).\] Note by considering that \(\tilde{\zeta}-(1-t)\lambda_{-}^{\mathsf{mp}}\sim t^{2}\), it can be easily verified that \(\bar{\Phi}_{t}^{\prime\prime}(\tilde{\zeta})\sim t^{-2}\). By employing Corollary 3.14 along with the variance bounds for \(m_{X}^{(k)}(\hat{\zeta}_{\mathbf{e}})\) in Lemma 3.4, we can conclude that \[\bar{\Delta}_{m}^{(k)}(\bar{\zeta}_{-,t})\coloneqq m_{X}^{(k)}(\bar{\zeta}_{ -,t})-(\mathsf{m}_{\mathsf{mp}}^{(t)}(\bar{\zeta}_{-,t}))^{(k)}=\mathcal{O}_{p }(N^{-1/2+\epsilon/2}t^{-2-k}+N^{1-\alpha/2}t^{-2k-1}).\] With the above probabilistic bounds in place, we may now proceed to follow the expansion detailed in the proof of Lemma 3.4, but this time substitute \(\zeta_{\mathbf{e}}\) with \(\bar{\zeta}_{-,t}\) and \(\mathbb{E}(m_{X}^{(k)}(\hat{\zeta}_{\mathbf{e}}))\) with \(\bar{m}_{X}^{(k)}(\bar{\zeta}_{-,t})\) (cf. (3.11)-(3.16)). It becomes evident that the \(\mathsf{ZOT}_{\zeta}\) therein vanishes due to the fact that \(\bar{\Phi}_{t}^{\prime}(\bar{\zeta}_{-,t})=0\). This eventually leads to \(\bar{\Delta}_{\zeta}\coloneqq\zeta_{-,t}-\bar{\zeta}_{-,t}=\mathcal{O}_{p}(N^ {-1/2+\epsilon/2}+N^{1-\alpha/2})\). Therefore, with \(\Delta_{\zeta}=\mathcal{O}_{p}(N^{-1/2+\epsilon/2}t^{6})\), we have \(\bar{\Phi}_{t}^{\prime\prime}(\tilde{\zeta})(\zeta_{\mathbf{e}}-\bar{\zeta}_ {-,t})^{2}\sim t^{-2}(\bar{\Delta}_{\zeta}-\Delta_{\zeta})^{2}=\mathfrak{o}(N^ {1-\alpha/2})\). 
Consequently, we arrive at \[\lambda_{\mathsf{shift}}=\lambda_{-}^{\mathsf{mp}}+2c_{N}t\lambda_{-}^{\mathsf{mp}}\big{(}\mathsf{m}_{\mathsf{mp}}^{(t)}(\hat{\zeta}_{\mathbf{e}})-\mathbb{E}m_{X}(\hat{\zeta}_{\mathbf{e}})\big{)}+\mathfrak{o}(N^{1-\alpha/2}).\] Recalling from (3.7) that \(\bar{\zeta}_{-,t}-\zeta_{\mathbf{e}}\prec N^{-\beta/2}t^{2}\), we can deduce that \(\bar{\zeta}_{-,t}-\hat{\zeta}_{\mathbf{e}}\prec N^{-\beta/2}t^{2}\). The claim now follows by (3.21) in Proposition 3.13 and the fact \(\mathsf{m}_{\mathsf{mp}}(\lambda_{-}^{\mathsf{mp}})=(\sqrt{c_{N}}-c_{N})^{-1}\). ## 4. Beyond Gaussian divisible model In this section, we present three Green function comparison results, as mentioned in Section 1. Their proofs will be postponed to the next section. Recall the notations in (1.14). ### Entry-wise bound We first introduce the following shorthand notation: for any \(a,b\in[M]\) and \(u,v\in[N]\), \[\mathfrak{X}_{ab}=\mathfrak{X}_{ab}(\Psi)\coloneqq\begin{cases}1&\text{if }a\text{ or }b\in\mathcal{T}_{r},\\ t^{2}&\text{if }a\in\mathcal{D}_{r},b\in\mathcal{D}_{r}\end{cases},\quad\mathfrak{Y}_{uv}=\mathfrak{Y}_{uv}(\Psi)\coloneqq\begin{cases}1&\text{if }u\text{ or }v\in\mathcal{T}_{c},\\ t^{2}&\text{if }u\in\mathcal{D}_{c},v\in\mathcal{D}_{c}\end{cases},\] \[\mathfrak{Z}_{au}=\mathfrak{Z}_{au}(\Psi)\coloneqq\begin{cases}1&\text{if }a\in\mathcal{T}_{r}\text{ or }u\in\mathcal{T}_{c}\\ t^{2}&\text{if }a\in\mathcal{D}_{r},u\in\mathcal{D}_{c}\end{cases}.\] **Proposition 4.1** (Entry-wise bound).: _Recall \(\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\) defined in (1.22). Let \(\mathsf{D}_{\leq}=\{z=E+\mathrm{i}\eta\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}):\eta\leq N^{-\varepsilon}\}\). Set \(10\epsilon_{a}\leq\varepsilon_{1}\leq\epsilon_{b}/500\), set \(\varepsilon_{2},\varepsilon_{3}\) sufficiently small, and set \(3\varepsilon_{1}<\varepsilon\leq\epsilon_{b}/100\). Let \(\mathbb{P}_{\Psi}\) be the probability conditioned on the event that the \((\psi_{ij})\) matrix equals a given \(\Psi\). Suppose that \(\Psi\) is good (cf. (2.4)). Then for each \(\delta>0\) and \(D>0\), there exists a large constant \(C>0\) such that_ \[\mathbb{P}_{\Psi}\Big{(}\sup_{0\leq\gamma\leq 1}\sup_{z\in\mathsf{D}_{\leq}}\sup_{a,b\in[M]} |\mathfrak{X}_{ab}[G^{\gamma}(z)]_{ab}|\vee\sup_{0\leq\gamma\leq 1}\sup_{z\in\mathsf{D}_{\leq}}\sup_{u,v\in[N]}|\mathfrak{Y}_{uv}[G^{\gamma}(z)]_{uv}|\] \[\vee\sup_{0\leq\gamma\leq 1}\sup_{z\in\mathsf{D}_{\leq}}\sup_{a\in[M],u\in[N]}|\mathfrak{Z}_{au}[G^{\gamma}(z)Y^{\gamma}]_{au}|\geq N^{\delta}\Big{)}\leq CN^{-D}.\] The proof of Proposition 4.1 follows a similar approach to the one demonstrated in [5, Proposition 3.17]. It relies on the entry-wise bounds for the Green functions of \(Y^{0}\) as provided in Theorem 2.10, which serve as an input for the subsequent comparison theorem. We defer the proof to Section 5.2. **Theorem 4.2**.: _Let \(F:\mathbb{R}\to\mathbb{R}\) be a function such that_ \[\sup_{0\leq\mu\leq d}|F^{(\mu)}(x)|\leq(|x|+1)^{C_{0}},\qquad\sup_{\begin{subarray}{c}0\leq\mu\leq d\\ |x|\leq 2N^{2}\end{subarray}}|F^{(\mu)}(x)|\leq N^{C_{0}},\] _for some real numbers \(C_{0},d>0\).
For any \(0-1\) matrix \(\Psi\) and complex number \(z\), we define for any \(a,b\in[M]\) and \(u,v\in[N]\),_ \[\mathfrak{I}_{0,ab} =\mathfrak{I}_{0,ab}(\Psi,z)\coloneqq\max_{0\leq\mu\leq d}\sup_{ 0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\big{(}|F^{(\mu)}(\mathfrak{X}_{ab} \mathrm{Im}\,[G^{\gamma}(z)]_{ab})|\big{)},\] \[\mathfrak{I}_{1,uv} =\mathfrak{I}_{1,uv}(\Psi,z)\coloneqq\max_{0\leq\mu\leq d}\sup_{ 0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\big{(}|F^{(\mu)}(\mathfrak{I}_{uv} \mathrm{Im}\,[G^{\gamma}(z)]_{uv})|\big{)},\] \[\mathfrak{I}_{2,au} =\mathfrak{I}_{2,au}(\Psi,z)\coloneqq\max_{0\leq\mu\leq d}\sup_{ 0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\big{(}|F^{(\mu)}(\mathfrak{I}_{au} \mathrm{Im}\,[G^{\gamma}(z)Y^{\gamma}]_{au})|\big{)},\] _and \(\Omega=\Omega_{0}\cap\Omega_{1}\cap\Omega_{2}\cap\Omega_{w}\), \(Q_{0}=Q_{0}(\varepsilon,z)\coloneqq 1-\mathbb{P}_{\Psi}(\Omega)\) with_ \[\Omega_{0}=\Omega_{0}(\varepsilon,z)\coloneqq\Big{\{}\sup_{ \begin{subarray}{c}a,b\in[M]\\ 0\leq\gamma\leq 1\end{subarray}}|\mathfrak{X}_{ab}[G^{\gamma}(z)]_{ab}|\leq N ^{\varepsilon}\Big{\}},\Omega_{1}=\Omega_{1}(\varepsilon,z)\coloneqq\Big{\{} \sup_{\begin{subarray}{c}u,v\in[N]\\ 0\leq\gamma\leq 1\end{subarray}}|\mathfrak{I}_{uv}[\mathcal{G}^{\gamma}(z)]_{ uv}|\leq N^{\varepsilon}\Big{\}},\] \[\Omega_{2}=\Omega_{2}(\varepsilon,z)\coloneqq\Big{\{}\sup_{ \begin{subarray}{c}a\in[M],u\in[N]\\ 0\leq\gamma\leq 1\end{subarray}}|\mathfrak{I}_{au}[G^{\gamma}(z)Y^{\gamma}]_{au}| \leq N^{\varepsilon}\Big{\}},\Omega_{w}=\Omega_{w}(\varepsilon)\coloneqq \Big{\{}\sup_{i\in[M],j\in[N]}|w_{ij}|\leq N^{-1/2+\varepsilon}t\Big{\}}.\] _Suppose that \(\Psi\) is good. There exist sufficiently small positive constants \(\varepsilon\leq\epsilon_{b}/100\) and \(\omega\), and a large constant \(C>0\) such that for_ \[(\#_{1},\#_{2},\#_{3})\in\{ (\mathfrak{X}_{ab}\mathrm{Im}\,[G^{\gamma}(z)]_{ab},\ \mathfrak{X}_{ab}\mathrm{Im}\,[G^{0}(z)]_{ab},\ \mathfrak{I}_{0,ab}),\] \[(\mathfrak{I}_{uv}\mathrm{Im}\,[G^{\gamma}(z)]_{uv},\ \mathfrak{I}_{uv} \mathrm{Im}\,[\mathcal{G}^{0}(z)]_{uv},\ \mathfrak{I}_{1,uv}),\] \[(\mathfrak{I}_{au}\mathrm{Im}\,[G^{\gamma}(z)Y^{\gamma}]_{au},\ \mathfrak{I}_{au} \mathrm{Im}\,[G^{0}(z)Y^{0}]_{au},\ \mathfrak{I}_{2,au})\},\] _we have_ \[\sup_{0\leq\gamma\leq 1}\big{|}\mathbb{E}_{\Psi}\big{(}F(\#_{1}) \big{)}-\mathbb{E}_{\Psi}\big{(}F(\#_{2})\big{)}\big{|}<CN^{-\omega}(\#_{3}+1)+ CQ_{0}N^{C+C_{0}}, \tag{4.1}\] _for any \(a,b\in[M]\) and \(u,v\in[N]\). The same estimates hold if \(\mathrm{Im}\,\)'s are replaced by \(\mathrm{Re}\,\)'s._ ### Average local law In this section, we write \(m^{\gamma}(z)=m_{Y^{\gamma}}(z)\), \(G^{\gamma}(z)=G(Y^{\gamma},z)\), and \(\tilde{G}^{\gamma}(z)=G(Y^{\gamma},\bar{z})\) for simplicity. Let \(z_{t}:=\lambda_{-,t}+E+\mathrm{i}\eta\). Then we have the following theorem. **Theorem 4.3**.: _Suppose that \(\Psi\) is good. Let us define \(z_{t}\coloneqq\lambda_{-,t}+E+\mathrm{i}\eta\). We assume that \(\eta\in[N^{-\frac{2}{3}-\epsilon},N^{-\frac{2}{3}}]\), \(E\in[-N^{-\varepsilon_{1}},N^{-\frac{2}{3}+\epsilon}]\) for a sufficiently small \(\epsilon>0\). 
Then there exists a constant \(\delta_{0}>0\) such that for all integer \(p\geq 3\),_ \[\sup_{0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\big{(}\big{|}N\eta\big{(}\mathrm{Im} \,m^{\gamma}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\big{)} \leq(1+\mathfrak{o}(1))\mathbb{E}_{\Psi}\big{(}\big{|}N\eta\big{(}\mathrm{Im} \,m^{0}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\big{)}+N^ {-\delta_{0}p},\] _where \(\tilde{m}^{0}(z)=m_{X+t^{1/2}\tilde{W}}(z)\). Here \(\tilde{W}\) is an i.i.d. copy of \(W\) and it is also independent of \(X\). Further, the same estimate holds if \(\mathrm{Im}\,\)'s are replaced by \(\mathrm{Re}\,\)'s._ The above comparison inequality directly leads to the following theorem, which is crucial for the rigidity estimate for the \(\lambda_{M}(\mathcal{S}(Y))\), serving as a key component in proving the universality result. **Theorem 4.4** (Rigidity estimate).: _Suppose \(\Omega_{\Psi}\) holds. Then, with high probability,_ \[|\lambda_{M}(\mathcal{S}(Y))-\lambda_{-,t}|\leq N^{-2/3+\epsilon}.\] Proof.: By Markov's inequality, Theorem 4.3 and the following local law for \(m^{0}\) \[|m^{0}(\lambda_{-,t}+E+\mathrm{i}\eta)-m_{t}(\lambda_{-,t}+E+ \mathrm{i}\eta)|\prec\left\{\begin{array}{ll}\frac{1}{N\eta},&E\geq 0,\\ \frac{1}{N(|E|+\eta)}+\frac{1}{(N\eta)^{2}\sqrt{|E|+\eta}},&E\leq 0,\end{array}\right. \tag{4.2}\] we can obtain (4.2) with \(m^{0}\) replaced by \(m^{1}\) and further for the case \(E\leq 0\) the following \[\operatorname{Im}m^{1}(\lambda_{-,t}+E+\mathrm{i}\eta)-\operatorname{Im}m_{t}( \lambda_{-,t}+E+\mathrm{i}\eta)\prec\frac{1}{N(|E|+\eta)}+\frac{1}{(N\eta)^{2} \sqrt{|E|+\eta}}+\frac{1}{N^{1+\delta_{0}/2}\eta}. \tag{4.3}\] We remark here that the local law in (4.2) has been proved in [23] around the right edge for the deformed rectangular matrices, under the assumption that the original rectangular matrices satisfy the \(\eta_{*}\)-regularity. The argument can be adapted to our model, but around the left edge, again with the \(\eta_{*}\)-regularity as the input. The derivation is almost the same, and thus we do not reproduce it here. Further, similarly to Lemma 2.8, we can prove \(|\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{-}^{\mathsf{mp}}|\prec N^{-2\epsilon _{b}}\) and \(|\lambda_{M}(\mathcal{S}(Y))-\lambda_{-}^{\mathsf{mp}}|\prec N^{-2\epsilon_{b}}\). By (4.2), and the crude lower bound on \(\lambda_{M}(\mathcal{S}(V_{t}))\) implied by [63], we also have \(|\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{-,t}|\prec N^{-\frac{2}{3}+\epsilon}\). Hence, we have \[|\lambda_{M}(\mathcal{S}(Y))-\lambda_{-,t}|\prec N^{-2\epsilon_{b}}. \tag{4.4}\] With the aid of the \(m^{1}\) analogue of (4.2), (4.3) and (4.4), the remaining reasoning is routine and thus we omit it; see the proof of Theorem 1.4 in [36], for instance. ### Green function comparison for edge universality **Theorem 4.5** (Green function comparison).: _Let \(F:\mathbb{R}\to\mathbb{R}\) be a function whose derivatives satisfy_ \[\max_{x}|F^{\alpha}(x)|(|x|+1)^{-C_{1}}\leq C_{1},\quad\alpha=1,\cdots,d\] _for some constant \(C_{1}>0\) and sufficiently large integer \(d>0\). Let \(\Psi\) be good. 
Then there exist \(\epsilon_{0}>0\), \(N_{0}\in\mathbb{N}\) and \(\delta_{1}>0\) depending on \(\epsilon_{a}\) such that for any \(\epsilon<\epsilon_{0}\), \(N\geq N_{0}\) and real numbers \(E,E_{1}\) and \(E_{2}\) satisfying \(|E|,\ |E_{1}|,\ |E_{2}|\ \leq N^{-2/3+\epsilon}\), and \(\eta_{0}=N^{-2/3-\epsilon}\), we have_ \[\Big{|}\mathbb{E}_{\Psi}\Big{[}F\Big{(}N\int_{E_{1}}^{E_{2}}\operatorname{Im}m^{1}(\lambda_{-,t}+y+\mathrm{i}\eta_{0})\,\mathrm{d}y\Big{)}\Big{]}-\mathbb{E}_{\Psi}\Big{[}F\Big{(}N\int_{E_{1}}^{E_{2}}\operatorname{Im}m^{0}(\lambda_{-,t}+y+\mathrm{i}\eta_{0})\,\mathrm{d}y\Big{)}\Big{]}\Big{|}\leq CN^{-\delta_{1}}, \tag{4.5}\] _for some constant \(C>0\); in the case \(\alpha=8/3\), (4.5) holds with \(\lambda_{-,t}\) replaced by \(\lambda_{\text{shift}}\)._ Employing the above comparison inequality along with the rigidity estimate in Theorem 4.4, we can deduce the following universality result around the random edge \(\lambda_{-,t}\) (and the deterministic edge \(\lambda_{\text{shift}}\) if \(\alpha=8/3\)), whose proof is given in Appendix B.2. **Corollary 4.6**.: _For all \(s\in\mathbb{R}\), we have_ \[\lim_{N\to\infty}\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{-,t})\leq s\Big{)}=\lim_{N\to\infty}\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(Y))-\lambda_{-,t})\leq s\Big{)}. \tag{4.6}\] _Moreover, if \(\alpha=8/3\), we have_ \[\lim_{N\to\infty}\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{\text{shift}})\leq s\Big{)}=\lim_{N\to\infty}\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(Y))-\lambda_{\text{shift}})\leq s\Big{)}. \tag{4.7}\] Now we can prove our main theorem, Theorem 1.2. Proof of Theorem 1.2.: The conclusions (i)-(iii) in Theorem 1.2 follow from (4.6) in Corollary 4.6 and Theorem 2.14. To prove the critical case \(\alpha=8/3\), i.e., (iv), note that by (2.12) in Theorem 2.13, it is easy to show that the distribution of \(\lambda_{-,t}\) is asymptotically independent of the fluctuation of \(\lambda_{M}\big{(}\mathcal{S}(V_{t})\big{)}-\lambda_{-,t}\), since the former is a function of \(X\) only. It can then be shown by a standard characteristic function argument that for any \(s\in\mathbb{R}\), \[\lim_{N\to\infty}\mathbb{P}\Big{(}\gamma_{N}M^{2/3}\big{(}\lambda_{M}\big{(}\mathcal{S}(V_{t})\big{)}-\lambda_{\text{shift}}\big{)}\leq s\Big{)}=\lim_{N\to\infty}\mathbb{P}\Big{(}M^{2/3}(\mu_{M}^{\text{GOE}}+2+\gamma_{N}\mathcal{X}_{\alpha})\leq s\Big{)},\] where on the RHS \(\mathcal{X}_{\alpha}\) is independent of the GOE. Then, together with the comparison (4.7), we conclude (iv). Hence, we complete the proof of Theorem 1.2. ## 5. Proofs for the Green function comparisons In this section, we will mainly prove the Green function comparisons stated in the last section. We will show the details for Theorems 4.2 and 4.3 only. The proof of Theorem 4.5 is similar to that of Theorem 4.3, and thus will only be discussed briefly here; the details are given in Appendix B.3. ### Some further notations Let us introduce some additional notation. We denote by \(\mathsf{E}_{(ij)}\) the standard basis for \(\mathbb{R}^{M\times N}\), i.e., \([\mathsf{E}_{(ij)}]_{ab}\coloneqq\delta_{ia}\delta_{jb}\). Replacement matrix notation: For any \(A\in\mathbb{R}^{M\times N}\), the replacement matrix \(A^{\lambda}_{(ij)}=A_{(ij)}(\lambda)\in\mathbb{R}^{M\times N}\) is defined as \[\big{[}A_{(ij)}(\lambda)\big{]}_{ab}\coloneqq\begin{cases}\lambda&\text{if }(i,j)=(a,b)\\ A_{ab}&\text{if }(i,j)\neq(a,b)\end{cases},\quad a\in[M],\quad b\in[N].
\tag{5.1}\] Let \(G^{\gamma,\lambda}_{(ij)}(z):=(\mathcal{S}(Y^{\gamma,\lambda}_{(ij)})-z)^{-1}\) be the resolvent of \(\mathcal{S}(Y^{\gamma,\lambda}_{(ij)})\) with \(Y^{\gamma,\lambda}_{(ij)}=(Y^{\gamma})_{(ij)}(\lambda)\). We define \[d_{ij}(\gamma,w_{ij}) :=\gamma(1-\chi_{ij})a_{ij}+\chi_{ij}b_{ij}+(1-\gamma^{2})^{1/2} t^{1/2}w_{ji},\] \[e_{ij}(\gamma,w_{ij}) :=c_{ij}+(1-\gamma^{2})^{1/2}t^{1/2}w_{ij},\qquad i\in[M],\quad j \in[N]. \tag{5.2}\] In the sequel, for brevity, we also write \(\sum_{i,j}=\sum_{i=1}^{M}\sum_{j=1}^{N}\). ### Proof of Proposition 4.1 Let us prove Proposition 4.1 assuming that Theorem 4.2 holds. The proof of Theorem 4.2 is deferred to the next subsection. For \(\delta>0\) and \(z=E+\mathrm{i}\eta\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\) (cf. (1.22)), we define \[\mathfrak{P}_{0}(\delta,z,\Psi) \coloneqq\mathbb{P}_{\Psi}\Big{(}\sup_{0\leq\gamma\leq 1} \sup_{a,b\in[M]}|z^{1/2}\mathfrak{X}_{ab}[G^{\gamma}(z)]_{ab}|>N^{\delta}\Big{)},\] \[\mathfrak{P}_{1}(\delta,z,\Psi) \coloneqq\mathbb{P}_{\Psi}\Big{(}\sup_{0\leq\gamma\leq 1} \sup_{u,v\in[N]}|z^{1/2}\mathfrak{Y}_{uv}[G^{\gamma}(z)]_{uv}|>N^{\delta}\Big{)},\] \[\mathfrak{P}_{2}(\delta,z,\Psi) \coloneqq\mathbb{P}_{\Psi}\Big{(}\sup_{0\leq\gamma\leq 1} \sup_{a\in[M],u\in[N]}|\mathfrak{J}_{au}[G^{\gamma}(z)Y^{\gamma}]_{au}|>N^{ \delta}\Big{)}.\] The following monotonicity lemma will be a useful tool. **Lemma 5.1**.: _Suppose that \(\Psi\) is good. Fix \(\varepsilon\) and \(\omega\) as in Theorem 4.2. For all \(z=E+\mathrm{i}\eta\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{ 3})\), we set \(z^{\prime}=E^{\prime}+\mathrm{i}\eta^{\prime}\) by_ \[E^{\prime}=E+\frac{(1-N^{\varepsilon/3})(\sqrt{E^{2}+\eta^{2}}-E)}{2},\quad \eta^{\prime}=N^{\varepsilon/6}\eta. \tag{5.3}\] _Then for any \(\delta>0\) and \(D>0\), there exists a large constant \(C>0\) such that_ \[\max_{k\in\{0,1,2\}}\mathfrak{P}_{k}(\delta,z,\Psi)\leq CN^{C}\max_{k\in\{0,1,2\}}\mathfrak{P}_{k}(\varepsilon/2,z^{\prime},\Psi)+CN^{-D}. \tag{5.4}\] Proof.: This is a minor modification of [5, Lemma 4.3]. The proof requires Theorem 4.2. For brevity, the detail is provided in the Appendix B.1. With the above lemma, we can prove Proposition 4.1. Proof of Proposition 4.1.: The proof is similar to the proof of Proposition 3.17 in [5]. Let \(\varepsilon\) be as in Theorem 4.2. It follows from Lemma 5.1 that for any \(z_{0}=\lambda_{-}^{\mathsf{mp}}+E_{0}+\mathrm{i}\eta_{0}\in\mathsf{D}(2 \varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\) and \(\eta_{0}\leq N^{-\varepsilon}\), we may find \(z_{1}=\lambda_{-}^{\mathsf{mp}}+E_{1}+\mathrm{i}\eta_{1}\) defined through (5.3) such that for any \(\delta>0\), \[\max_{k\in[0:2]}\mathfrak{P}_{k}(\delta,z_{0},\Psi)\leq C_{1}N^{C_{1}}\max_{k \in[0:2]}\mathfrak{P}_{k}(\varepsilon/2,z_{1},\Psi)+C_{1}N^{-D}. \tag{5.5}\] Now it suffices to bound \(\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon/2,z_{1},\Psi)\). Notice that for \(\varepsilon>3\varepsilon_{1}\) \[|E_{1}|\lesssim|E_{0}|+N^{\varepsilon/3}|\eta_{0}|\lesssim N^{-2\varepsilon_{1} }+N^{-2/3\varepsilon}\ll N^{-\varepsilon_{1}}.\] This means that \(z_{1}\in(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\). 
Applying Lemma 5.1 again with \(\delta=\varepsilon/2\), we can find \(z_{2}=\lambda_{-}^{\mathsf{mp}}+E_{2}+\mathrm{i}\eta_{2}\) such that \[\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon/2,z_{1},\Psi)\leq C_{2}N^{C_{2}}\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon/2,z_{2},\Psi)+C_{2}N^{-D},\] where \(\eta_{2}=N^{\varepsilon/6}\eta_{1}\) and \(|E_{2}|\ll N^{-\varepsilon_{1}}\). We may now repeat the above procedure until \(z_{m}=\lambda_{-}^{\mathsf{mp}}+E_{m}+\mathrm{i}\eta_{m}\) with \(\eta_{m}\geq KN^{-\varepsilon/2}\) for some sufficiently large \(K\). It can be computed that \[\eta_{m}\lesssim N^{-\varepsilon/2}N^{\varepsilon/6}=N^{-\varepsilon/3},\quad\text{and}\quad|E_{m}|\lesssim|E_{0}|+\sum_{i=1}^{m-1}N^{\varepsilon/3}\eta_{i},\quad\eta_{i}=N^{\varepsilon i/6}\eta_{0}.\] This implies that \(|E_{m}|\lesssim|E_{0}|+N^{-\varepsilon/2}\ll N^{-\varepsilon_{1}}\). Then using the fact that \(\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon/2,z_{m},\Psi)=0\), we can obtain that \[\max_{k\in[0:2]}\mathfrak{P}_{k}(\delta,z_{0},\Psi)\leq C_{1}N^{C_{1}}\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon/2,z_{1},\Psi)+C_{1}N^{-D}\leq C_{m}N^{-D}.\] The claim now follows by adjusting constants. ### Proof of Theorem 4.2 We need the following elementary resolvent expansion formula. **Lemma 5.2**.: _For any deterministic matrix \(A\in\mathbb{R}^{M\times N}\), let its linearisation \(\mathcal{L}(A)\) be defined as_ \[\mathcal{L}(A)=\left(\begin{array}{cc}0&A\\ A^{\top}&0\end{array}\right). \tag{5.6}\] _Let \(\mathcal{R}(A,z)=(z^{1/2}\mathcal{L}(A)-z)^{-1}\) be the resolvent of \(\mathcal{L}(A)\). The Schur complement formula also gives_ \[\mathcal{R}(A,z)=\left(\begin{array}{cc}G(A,z)&z^{-1/2}G(A,z)A\\ z^{-1/2}A^{\top}G(A,z)&G(A^{\top},z)\end{array}\right).\] _Then for any \(B=A+\Delta\in\mathbb{R}^{M\times N}\), we have for any integer \(s\geq 0\)_ \[\mathcal{R}(A,z)=\sum_{j=0}^{s}\big{(}\mathcal{R}(B,z)\mathcal{L}(z^{1/2}\Delta)\big{)}^{j}\mathcal{R}(B,z)+\big{(}\mathcal{R}(B,z)\mathcal{L}(z^{1/2}\Delta)\big{)}^{s+1}\mathcal{R}(A,z).\] Proof of Theorem 4.2.: During the proof, we omit the \(z\) dependence and write \(d_{ij}=d_{ij}(\gamma,w_{ij})\) and \(e_{ij}=e_{ij}(\gamma,w_{ij})\) for simplicity. We only show the proof for \((\#_{1},\#_{2},\#_{3})=(\mathfrak{X}_{ab}\mathrm{Im}\,[G^{\gamma}(z)]_{ab},\ \mathfrak{X}_{ab}\mathrm{Im}\,[G^{0}(z)]_{ab},\ \mathfrak{I}_{0,ab})\), since the other two cases can be handled analogously.
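As a quick consistency check on the block formula in Lemma 5.2, one may multiply \(z^{1/2}\mathcal{L}(A)-z\) against the claimed block form; the only input is the swap identity \(G(A,z)A=AG(A^{\top},z)\), valid whenever \(z\) avoids the spectra of \(AA^{\top}\) and \(A^{\top}A\): \[\left(\begin{array}{cc}-z&z^{1/2}A\\ z^{1/2}A^{\top}&-z\end{array}\right)\left(\begin{array}{cc}G(A,z)&z^{-1/2}G(A,z)A\\ z^{-1/2}A^{\top}G(A,z)&G(A^{\top},z)\end{array}\right)=\left(\begin{array}{cc}(AA^{\top}-z)G(A,z)&z^{1/2}\big{(}AG(A^{\top},z)-G(A,z)A\big{)}\\ 0&A^{\top}G(A,z)A-zG(A^{\top},z)\end{array}\right),\] where the upper-left block is \(I\), the lower-left block vanishes identically, and by the swap identity the upper-right block vanishes while the lower-right block equals \((A^{\top}A-z)G(A^{\top},z)=I\).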
Let us consider \((J_{2})_{ij}\) first. Applying Gaussian integration by parts on \(w_{ij}\), we have \[|(J_{2})_{ij}|=\Big{|}\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}N }\mathbb{E}_{\Psi}\Big{[}\partial_{w_{ij}}f_{(ij)}(e_{ij})\Big{]}\mathbf{1}_{ \psi_{ij}=1}\Big{|}\] \[\leq\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}N}\mathbb{E}_{\Psi} \Big{[}|\partial_{w_{ij}}f_{(ij)}(e_{ij})\mathbf{1}_{\Omega}|\Big{]}\mathbf{1 }_{\psi_{ij}=1}+\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}N}\mathbb{E}_{\Psi} \Big{[}|\partial_{w_{ij}}f_{(ij)}(e_{ij})\mathbf{1}_{\Omega^{c}}|\Big{]} \mathbf{1}_{\psi_{ij}=1}.\] Notice that \[\partial_{w_{ij}}f_{(ij)}(e_{ij})=U_{(ij)}(e_{ij})\cdot\partial_{w_{ij}}V_{( ij)}(e_{ij})+V_{(ij)}(e_{ij})\cdot\partial_{w_{ij}}U_{(ij)}(e_{ij}), \tag{5.8}\] and \[\partial_{w_{ij}}U_{(ij)}(e_{ij})=-(1-\gamma^{2})^{1/2}t^{1/2}F^{ (2)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ab}\big{)}\big{(} V_{(ij)}(e_{ij})+\tilde{V}_{(ij)}(e_{ij})\big{)},\] \[\partial_{w_{ij}}V_{(ij)}(e_{ij})=-(1-\gamma^{2})^{1/2}\mathrm{Im }\left(\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ai}\big{(}(Y_{(ij)}^{\gamma,e_{ ij}})^{\top}G_{(ij)}^{\gamma,e_{ij}}\big{)}_{jb}[G_{(ij)}^{\gamma,e_{ij}}Y_{(ij)}^{ \gamma,e_{ij}}]_{aj}\right.\] \[\quad+\left[G_{(ij)}^{\gamma,e_{ij}}Y_{(ij)}^{\gamma,e_{ij}} \right]_{ij}\big{[}G_{(ij)}^{\gamma,e_{ij}}\big{]}_{ib}[G_{(ij)}^{\gamma,e_{ij }}Y_{(ij)}^{\gamma,e_{ij}}]_{aj}-[G_{(ij)}^{\gamma,e_{ij}}]_{ib}[G_{(ij)}^{ \gamma,e_{ij}}]_{ai}\] \[\quad+\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ib}[G_{(ij)}^{\gamma, e_{ij}}]_{ai}[(Y_{(ij)}^{\gamma,e_{ij}})^{\top}G_{(ij)}^{\gamma,e_{ij}}Y_{(ij)}^{ \gamma,e_{ij}}]_{jj}+[G_{(ij)}^{\gamma,e_{ij}}]_{ib}[G_{(ij)}^{\gamma,e_{ij}}Y _{(ij)}^{\gamma,e_{ij}}]_{aj}[G_{(ij)}^{\gamma,e_{ij}}Y_{(ij)}^{\gamma,e_{ij}}]_ {ij}\Big{)}. \tag{5.9}\] When \(\psi_{ij}=1\), we have \(i\in\mathcal{D}_{r}\) and \(j\in\mathcal{D}_{c}\). Then \(\mathbf{1}_{\Omega}\mathbf{1}_{\psi_{ij}=1}|V_{(ij)}(e_{ij})|\leq N^{2\varepsilon }t^{-2}\) and \(\mathbf{1}_{\Omega}\mathbf{1}_{\psi_{ij}=1}|\partial_{w_{ij}}V_{(ij)}(e_{ij})| \leq N^{3\varepsilon}t^{-7/2}\). Therefore, we may find a large constant \(K_{1}>0\) such that \[|(J_{2})_{ij}| \lesssim\frac{1}{N^{1-3\varepsilon}t^{3}}\mathbb{E}_{\Psi}\Big{[}| F^{(1)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ab}\big{)}|\Big{]}+ \frac{1}{N^{1-4\varepsilon}t^{3}}\mathbb{E}_{\Psi}\Big{[}|F^{(2)}\big{(}\mathrm{ Im}\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ab}\big{)}|\Big{]}\] \[\lesssim\frac{1}{N^{1-3\varepsilon}t^{3}}\mathbb{E}_{\Psi}\Big{[}| F^{(1)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ab}\big{)}|\Big{]}+ \frac{1}{N^{1-4\varepsilon}t^{3}}\mathbb{E}_{\Psi}\Big{[}|F^{(2)}\big{(}\mathrm{ Im}\left[G_{(ij)}^{\gamma,e_{ij}}\right]_{ab}\big{)}|\Big{]}+N^{K_{1}}Q_{0} \mathbf{1}_{\psi_{ij}=1},\] where in the second step, we used the crude bound that \(|\partial_{w_{ij}}f_{(ij)}(e_{ij})|\leq N^{K_{1}}\) for some sufficiently large \(K_{1}\), which can be obtained by the fact that \(\mathrm{Im}\,z>N^{-1}\). 
By the facts that \(\sum_{i,j}\mathbf{1}_{\psi_{ij}=1}\leq N^{1-\epsilon_{\alpha}}\) and \(t\gg N^{-\epsilon_{\alpha}/4}\), we can choose \(\varepsilon<\epsilon_{\alpha}/16\) to obtain that \[|(J_{2})_{ij}|\lesssim\frac{\mathbf{1}_{\psi_{ij}=1}}{N^{1-\epsilon_{\alpha}/2}} \mathbb{E}_{\Psi}\Big{[}|F^{(1)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,e_{ij}} \right]_{ab}\big{)}|+\big{|}F^{(2)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,e_{ ij}}\right]_{ab}\big{)}|\Big{]}+N^{K_{1}}Q_{0}\mathbf{1}_{\psi_{ij}=1}.\] Next, we consider \((J_{1})_{ij}\). Recall that \(d_{ij}=\gamma(1-\chi_{ij})a_{ij}+\chi_{ij}b_{ij}+(1-\gamma^{2})^{1/2}t^{1/2}w_{ji}\). Applying Taylor expansion on \(f(d_{ij})\) around \(0\), for an \(s_{1}\) to be chosen later, we have \[(J_{1})_{ij} =\sum_{k=0}^{s_{1}}\frac{\mathbf{1}_{\psi_{ij}=0}}{k!}\mathbb{E} _{\Psi}\Big{[}(d_{ij})^{k}f_{(ij)}^{(k)}(0)\Big{(}\mathsf{A}_{ij}-\frac{\gamma t ^{1/2}w_{ij}}{(1-\gamma^{2})^{1/2}}\Big{)}\Big{]}\] \[\quad+\frac{\mathbf{1}_{\psi_{ij}=0}}{(s_{1}+1)!}\mathbb{E}_{\Psi }\Big{[}(d_{ij})^{s_{1}+1}f_{(ij)}^{(s_{1}+1)}(\tilde{d}_{ij})\Big where \(\tilde{d}_{ij}\in[0,d_{ij}]\). Before proceeding to the estimation of \((J_{1})_{ij,k}\) and \(\mathsf{Rem}\), we first establish perturbation bounds for the entries of the resolvents, which are useful for the estimation of \(f^{(k)}_{(ij)}(0)\) and \(f^{(k)}_{(ij)}(\tilde{d}_{ij})\). Using Lemma 5.2 and the notation therein, we have for any \(\mathsf{u},\mathsf{v}\in[M+N]\), \[\mathbf{1}_{\Omega}\big{[}R(Y^{\gamma,d_{ij}}_{(ij)},z)-R(Y^{ \gamma,0}_{(ij)},z)\big{]}_{\mathsf{uv}} =\sum_{j=0}^{s}\mathbf{1}_{\Omega}\big{[}\big{(}R(Y^{\gamma,d_{ ij}}_{(ij)},z)\mathcal{L}(z^{1/2}d_{ij}\mathsf{E}_{ij})\big{)}^{j}R(Y^{\gamma,d_{ ij}}_{(ij)},z)\big{]}_{\mathsf{uv}}\] \[\quad+\mathbf{1}_{\Omega}\big{[}\big{(}R(Y^{\gamma,d_{ij}}_{(ij)}, z)\mathcal{L}(z^{1/2}d_{ij}\mathsf{E}_{ij})\big{)}^{s+1}R(Y^{\gamma,0}_{(ij)},z) \big{]}_{\mathsf{uv}}.\] Further using the fact that \(\mathbf{1}_{\Omega}|d_{ij}|\leq N^{-\epsilon_{b}}\), \(\mathbf{1}_{\Omega}|\big{[}R(Y^{\gamma,d_{ij}}_{(ij)},z)\big{]}_{\mathsf{uv}}| \leq N^{\varepsilon}/t^{2}\), and the crude bound \(\|R(Y^{\gamma,0}_{(ij)},z)\|\leq N\) when \(\mathrm{Im}\,z\geq N^{-1}\), we may choose \(s\) large enough to obtain that \[\mathbf{1}_{\Omega}\big{|}\big{[}R(Y^{\gamma,d_{ij}}_{(ij)},z)-R(Y ^{\gamma,0}_{(ij)},z)\big{]}_{\mathsf{uv}}\big{|}\lesssim\sum_{j=1}^{s}\Big{(} \frac{N^{2\varepsilon}}{t^{4}N^{\epsilon_{b}}}\Big{)}^{j}+\Big{(}\frac{N^{ \varepsilon}}{t^{2}N^{\epsilon_{b}}}\Big{)}^{s+1}N\lesssim 1, \tag{5.10}\] which yields directly a control of \(G^{\gamma,0}_{(ij)}\), \(G^{\gamma,0}_{(ij)}Y^{\gamma,0}_{(ij)}\), and \((Y^{\gamma,0}_{(ij)})^{\top}G^{\gamma,0}_{(ij)}Y^{\gamma,0}_{(ij)}\) on the event \(\Omega\). Here we used the fact that \(Y^{\top}GY\) can be written in terms of \(\mathcal{G}\), which can be seen easily by singular value decomposition. Similar estimates hold if \(Y^{\gamma,0}_{(ij)}\) is replaced by \(Y^{\gamma,\tilde{d}_{ij}}_{(ij)}\), we omit repetitive details. By taking derivatives repeatedly similar to (5.8) and (5.9), it can be easily seen that for any integer \(k\geq 0\), \[f^{(k)}_{(ij)}(d_{ij})\cdot\mathbf{1}_{\psi_{ij}=0}\cdot\mathbf{ 1}_{\Omega}\lesssim\frac{N^{(C_{0}+2k+2)\varepsilon}}{t^{2k+2}}. \tag{5.11}\] Combining the above estimate with the perturbation bounds in (5.10), we have for any \(x\in[0,d_{ij}]\), \[f^{(k)}_{(ij)}(x)\cdot\mathbf{1}_{\psi_{ij}=0}\cdot\mathbf{1}_{ \Omega}\lesssim\frac{N^{(C_{0}+2k+2)\varepsilon}}{t^{2k+2}}. 
\tag{5.12}\] Now we may start to estimate \((J_{1})_{ij,k}\) and \(\mathsf{Rem}\). Using the above perturbation bounds on the event \(\Omega\), we have that there exists some large \(K_{2}>0\), such that \[|\mathsf{Rem}| \leq\frac{\mathbf{1}_{\psi_{ij}=0}}{(s_{1}+1)!}\mathbb{E}_{\Psi} \Big{[}\Big{|}(d_{ij})^{s_{1}+1}f^{(s_{1}+1)}_{(ij)}(\tilde{d}_{ij})\Big{(} \mathbf{A}_{ij}-\frac{\gamma t^{1/2}w_{ij}}{(1-\gamma^{2})^{1/2}}\Big{)}\Big{|} \cdot\mathbf{1}_{\Omega}\Big{]}\] \[\quad+\frac{\mathbf{1}_{\psi_{ij}=0}}{(s_{1}+1)!}\mathbb{E}_{\Psi }\Big{[}\Big{|}(d_{ij})^{s_{1}+1}f^{(s_{1}+1)}_{(ij)}(\tilde{d}_{ij})\Big{(} \mathbf{A}_{ij}-\frac{\gamma t^{1/2}w_{ij}}{(1-\gamma^{2})^{1/2}}\Big{)} \Big{|}\cdot\mathbf{1}_{\Omega^{c}}\Big{]}\] \[\lesssim\frac{N^{(C_{0}+2s_{1}+4)\varepsilon}\mathbf{1}_{\psi_{ ij}=0}}{t^{2s_{1}+4}N^{1/2+\epsilon_{a}+(s_{1}+1)\epsilon_{b}}}+N^{K_{2}}Q_{0} \mathbf{1}_{\psi_{ij}=0}. \tag{5.13}\] Therefore, with the fact that \(t\gg N^{-\epsilon_{b}/8}\), we may choose \(\varepsilon<\epsilon_{b}/8\) and \(s_{1}>C_{0}/4+6/\epsilon_{b}\) to obtain that \[|\mathsf{Rem}|\lesssim N^{-3}\cdot\mathbf{1}_{\psi_{ij}=0}+N^{K_{ 2}}Q_{0}\cdot\mathbf{1}_{\psi_{ij}=0}.\] We estimate \((J_{1})_{ij,k}\) for different \(k\) separately. For the case when \(k\) is even, it follows from the symmetric condition that \((J_{1})_{ij,k}=0\). Thus we mainly focus on the estimation for \(k\) is odd. **Case 1:**\(k\geq 5\). First note by symmetry condition, we can obtain \[|(J_{1})_{ij,k}|\lesssim\sum_{\begin{subarray}{c}u_{1}+u_{2}\geq 1,u_{3} \geq 0\\ u_{1}+u_{2}+u_{3}=(k+1)/2\end{subarray}}\mathbb{E}_{\Psi}\Big{[}|\mathbf{A}_{ij }|^{2u_{1}}|t^{1/2}w_{ij}|^{2u_{2}}|b_{ij}|^{2u_{3}}|f^{(k)}_{(ij)}(0)|\Big{]} \mathbf{1}_{\psi_{ij}=0}\lesssim\frac{t\mathbf{1}_{\psi_{ij}=0}\mathbb{E}_{\Psi} [|f^{(k)}_{(ij)}(0)|]}{N^{2+(k-3)\epsilon_{b}}},\] where in the last step we also used the fact that \(\mathbb{E}_{\Psi}(b^{k}_{ij})\leq N^{-\epsilon_{b}(k-2)}\mathbb{E}_{\Psi}(x^{2}_{ ij})\lesssim N^{-1-\epsilon_{b}(k-2)}\) for \(k\geq 2\). We need to estimate \(f^{(k)}_{(ij)}(0)\) again by Taylor expansion. For an \(s_{2}\) to be chosen later, there exists \(\hat{d}_{ij}\in[0,d_{ij}]\) such that \[|(J_{1})_{ij,k}|\lesssim\sum_{\ell=0}^{s_{2}}\frac{t\mathbf{1}_{\psi_{ij}=0} \mathbb{E}_{\Psi}[|f^{(k+\ell)}_{(ij)}(d_{ij})|]}{N^{2+(k+\ell-3)\epsilon_{b}} }+\frac{t\mathbf{1}_{\psi_{ij}=0}\mathbb{E}_{\Psi}[|f^{(k+s_{2}+1)}_{(ij)}( \hat{d}_{ij})|]}{N^{2+(k+s_{2}-2)\epsilon_{b}}}. \tag{5.14}\] On the event \(\Omega^{c}\), we may estimate the RHS in the above display as in the last step in (5.13), which gives \[\Big{(}\sum_{\ell=0}^{s_{2}}\frac{t\mathbb{E}_{\Psi}[|f^{(k+\ell)}_{(ij)}(d_{ ij})|\mathbf{1}_{\Omega^{c}}]}{N^{2+(k+\ell-3)\epsilon_{b}}}+\frac{t\mathbb{E}_{ \Psi}[|f^{(k+s_{2}+1)}_{(ij)}(\hat{d}_{ij})|\mathbf{1}_{\Omega^{c}}]}{N^{2+(k +s_{2}-2)\epsilon_{b}}}\Big{)}\mathbf{1}_{\psi_{ij}=0}\lesssim N^{K_{3}}Q_{0} \mathbf{1}_{\psi_{ij}=0}, \tag{5.15}\] for some large \(K_{3}>0\). 
On the event \(\Omega\), we may choose \(s_{2}>C_{0}+30+4/\epsilon_{b}\) and \(\varepsilon<\epsilon_{b}/8\) to obtain that \[\Big{(}\sum_{\ell=0}^{s_{2}}\frac{t}{N^{2+(k+\ell-3)\epsilon_{b} }}\mathbb{E}_{\Psi}\Big{[}|f^{(k+\ell)}_{(ij)}(d_{ij})|\mathbf{1}_{\Omega} \Big{]}+\frac{t}{N^{2+(k+s_{2}-2)\epsilon_{b}}}\mathbb{E}_{\Psi}\Big{[}|f^{(k+ s_{2}+1)}_{(ij)}(\hat{d}_{ij})|\mathbf{1}_{\Omega}\Big{]}\Big{)}\cdot\mathbf{1}_{ \psi_{ij}=0}\] \[\lesssim\sum_{\ell=0}^{s_{2}}\frac{t\mathbf{1}_{\psi_{ij}=0}}{N^{ 2+(k+\ell-3)\epsilon_{b}}}\mathbb{E}_{\Psi}\Big{[}|f^{(k+\ell)}_{(ij)}(d_{ij} )|\mathbf{1}_{\Omega}\Big{]}+N^{-3}\mathbf{1}_{\psi_{ij}=0}\] \[\lesssim\sum_{\ell=0}^{s_{2}}\frac{N^{2(k+\ell+1)\varepsilon} \mathbf{1}_{\psi_{ij}=0}}{N^{2+(k+\ell-3)\epsilon_{b}}t^{2(k+\ell)+1}}\sum_{ m=1}^{k+\ell+1}\mathbb{E}_{\Psi}\Big{[}|F^{(m)}\big{(}\mathrm{Im}\left[G^{ \gamma,d_{ij}}_{(ij)}\right]_{ab}\big{)}|\Big{]}+N^{-3}\mathbf{1}_{\psi_{ij}= 0}. \tag{5.16}\] Collecting the above estimates and choosing \(\varepsilon<\epsilon_{b}/100\), we have \[|(J_{1})_{ij,k}|\lesssim\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{2+\epsilon_{b}/2}} \sum_{m=1}^{k+s_{2}+1}\mathbb{E}_{\Psi}\Big{[}|F^{(m)}\big{(}\mathrm{Im}\left[ G^{\gamma,d_{ij}}_{(ij)}\right]_{ab}\big{)}|\Big{]}+\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{ 2+\epsilon_{b}/2}}+N^{K_{3}}Q_{0}\mathbf{1}_{\psi_{ij}=0}.\] **Case 2:**\(k=3\). By direct calculation, we have \[(J_{1})_{ij,3}\asymp\mathbb{E}_{\Psi}\Big{[}\Big{(}\mathsf{A}_{ij}^{4}+t \mathsf{A}_{ij}^{2}w_{ij}^{2}+t^{2}w_{ij}^{4}+\mathsf{A}_{ij}^{2}\mathsf{B}_{ ij}^{2}+tw_{ij}^{2}\mathsf{B}_{ij}^{2}\Big{)}f^{(3)}_{(ij)}(0)\Big{]} \mathbf{1}_{\psi_{ij}=0}.\] The term \(\mathsf{A}_{ij}^{2}\mathsf{B}_{ij}^{2}\) becomes null due to the definitions of \(\mathsf{A}_{ij}\) and \(\mathsf{B}_{ij}\). Concerning the remaining terms, we only show how to estimate the term involving \(tw_{ij}^{2}\mathsf{B}_{ij}^{2}\) while the others can be handled similarly. Applying Taylor expansion, and then estimating terms on \(\Omega^{c}\) and \(\Omega\) separately as in (5.14)-(5.16), we have for \(s_{3}>C_{0}/4+\epsilon_{b}/2+4\), \[\Big{|}\mathbb{E}_{\Psi}\Big{[}tw_{ij}^{2}\mathsf{B}_{ij}^{2}f^{(3)}_{(ij)}(0) \Big{]}\mathbf{1}_{\psi_{ij}=0}\Big{|}\lesssim\sum_{\ell=0}^{s_{3}}\frac{t \mathbf{1}_{\psi_{ij}=0}\mathbb{E}_{\Psi}[|f^{(3+\ell)}_{(ij)}(d_{ij})|\mathbf{ 1}_{\Omega}]}{N^{2+\epsilon\epsilon_{b}}}+N^{-3}\mathbf{1}_{\psi_{ij}=0}+N^{K _{5}}Q_{0}\mathbf{1}_{\psi_{ij}=0},\] for some large \(K_{5}>0\). Then it remains to estimate the first term of the RHS of the above inequality. For \(\ell\geq 1\), the estimate is similar to (5.16), we omit further details. Here we focus on the non-trivial term when \(\ell=0\). It is straightforward to compute that \(f^{(3)}_{(ij)}(d_{ij})\) is the products of \(F^{(\ell)}\big{(}\mathrm{Im}\left[G^{\gamma,d_{ij}}_{(ij)}\right]_{ab}\big{)},\ell \in[4]\), and the entries of \(G^{\gamma,d_{ij}}_{(ij)}\), \(G^{\gamma,d_{ij}}_{(ij)}Y^{\gamma,d_{ij}}_{(ij)}\), and \((Y^{\gamma,d_{ij}}_{(ij)})^{\top}G^{\gamma,d_{ij}}_{(ij)}Y^{\gamma,d_{ij}}_{(ij)}\), where the entries' indices can be \((i,i),(j,j),(i,j),(a,i),(a,j)\), \((i,b),(j,b)\). 
Therefore, \[\frac{t}{N^{2}}\mathbb{E}_{\Psi}\Big{[}|f^{(3)}_{(ij)}(d_{ij})| \cdot\mathbf{1}_{\Omega}\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0} \leq\frac{tN^{6\varepsilon}}{N^{2}}\sum_{\ell=1}^{4}\mathbb{E}_{ \Psi}\Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G^{\gamma,d_{ij}}_{(ij)}\right]_{ab }\big{)}|\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}\cdot\mathbf{1}_{i\in\mathcal{T}_ {r},j\in\mathcal{T}_{c}}\] \[\quad+\frac{N^{6\varepsilon}}{N^{2}t^{7}}\sum_{\ell=1}^{4} \mathbb{E}_{\Psi}\Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G^{\gamma,d_{ij}}_{( ij)}\right]_{ab}\big{)}|\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}\cdot(1- \mathbf{1}_{i\in\mathcal{T}_{r},j\in\mathcal{T}_{c}}).\] This eventually leads to \[|(J_{1})_{ij,3}| \lesssim\frac{tN^{6\varepsilon}\mathbf{1}_{\psi_{ij}=0}\mathbf{1}_{i \in\mathcal{T}_{r},j\in\mathcal{T}_{c}}}{N^{2}}\sum_{\ell=1}^{4}\mathbb{E}_{ \Psi}\Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,d_{ij}}\right]_{ ab}\big{)}|\Big{]}\] \[\quad+\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{2+\epsilon_{b}/2}}\sum_{ \ell=1}^{s_{3}+4}\mathbb{E}_{\Psi}\Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G_ {(ij)}^{\gamma,d_{ij}}\right]_{ab}\big{)}|\Big{]}+N^{-3}\cdot\mathbf{1}_{\psi_ {ij}=0}+N^{K_{5}}Q_{0}\mathbf{1}_{\psi_{ij}=0}.\] The second term in the above display can be estimated by the fact that \(|\mathcal{D}_{r}|\vee|\mathcal{D}_{c}|\leq N^{1-\epsilon_{d}}\) for some \(\epsilon_{d}>0\). By the fact that \(t\gg N^{-\epsilon_{d}/20}\lor N^{-\epsilon_{b}/20}\), we then have \[|(J_{1})_{ij,3}| \lesssim\frac{t^{1/2}}{N^{2}}\sum_{\ell=1}^{4}\mathbb{E}_{\Psi} \Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma,d_{ij}}\right]_{ab} \big{)}|\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}\cdot\mathbf{1}_{i\in\mathcal{T}_ {r},j\in\mathcal{T}_{c}}\] \[\quad+\frac{\mathbf{1}_{\psi_{ij}=0}\cdot(1-\mathbf{1}_{i\in \mathcal{T}_{r},j\in\mathcal{T}_{c}})}{N^{2-\epsilon_{d}/2}}\sum_{\ell=1}^{4} \mathbb{E}_{\Psi}\Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G_{(ij)}^{\gamma, d_{ij}}\right]_{ab}\big{)}|\Big{]}\] \[\quad+\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{2+\epsilon_{b}/2}}\sum_{ \ell=1}^{s_{3}+4}\mathbb{E}_{\Psi}\Big{[}|F^{(\ell)}\big{(}\mathrm{Im}\left[G _{(ij)}^{\gamma,d_{ij}}\right]_{ab}\big{)}|\Big{]}+N^{-3}\cdot\mathbf{1}_{ \psi_{ij}=0}+N^{K_{5}}Q_{0}\cdot\mathbf{1}_{\psi_{ij}=0}.\] **Case 3:**\(k=1\). In this case, using the fact that \(\mathbb{E}_{\Psi}[b_{ij}]=0\), we may compute \[(J_{1})_{ij,1}=\gamma\mathbb{E}_{\Psi}\Big{[}\Big{(}(1-\chi_{ij})^{2}a_{ij}^{ 2}-tw_{ij}^{2}\Big{)}f_{(ij)}^{(1)}(0)\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}.\] Recall that \(t=N\mathbb{E}(\mathsf{A}_{ij}^{2})=N\mathbb{E}\big{(}(1-\psi_{ij})^{2}(1-\chi_ {ij})^{2}a_{ij}^{2}\big{)}\). This gives \[\big{|}\mathbb{E}\big{(}(1-\chi_{ij})^{2}a_{ij}^{2}\big{)}-\mathbb{E}\big{(}tw _{ij}^{2}\big{)}\big{|}\lesssim tN^{-1-\alpha/2+\alpha\epsilon_{b}}. \tag{5.17}\] Therefore, following the same procedure as in (5.14)-(5.16), we can also obtain that for sufficiently large constant \(s_{4}\), \[|(J_{1})_{ij,1}|\lesssim\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{2+ \epsilon_{b}/2}}\sum_{\ell=1}^{s_{4}}\mathbb{E}_{\Psi}\Big{[}|F^{(\ell)}\big{(} \mathrm{Im}\left[G_{(ij)}^{\gamma,d_{ij}}\right]_{ab}\big{)}|\Big{]}+N^{-3} \cdot\mathbf{1}_{\psi_{ij}=0}+N^{K_{5}}Q_{0}\cdot\mathbf{1}_{\psi_{ij}=0}.\] By combining the estimates of \((J_{1})_{ij,k}\)'s with \((J_{2})_{ij}\)'s, we can conclude that (5.7) holds when we choose \(\varepsilon\leq\min\{\epsilon_{a},\epsilon_{b},\epsilon_{d}\}/(100)\). 
### Proof of Theorem 4.3 Since we need to perform the comparison at a random edge, we begin with some preliminary estimates for the derivatives w.r.t. the matrix entries of the random edge. **Lemma 5.3** ([23], Lemma 5).: _Denote \(a_{k,\pm}(t)=\Phi_{t}(\zeta_{k,\pm}(t))\), \(1\leq k\leq q\). Then \((a_{k,\pm}(t),\zeta_{k,\pm}(t))\) are real solutions of_ \[F_{t}(z,\zeta)=0,\quad\text{and}\quad\frac{\partial F_{t}}{ \partial\zeta}(z,\zeta)=0,\] _where_ \[F_{t}(z,\zeta)=1+\frac{t(1-c_{N})-\sqrt{t^{2}(1-c_{N})^{2}+4 \zeta z}}{2\zeta}-c_{N}tm_{X}(\zeta).\] Using the lemma above, we can derive bounds for the derivatives of the random edge \(\lambda_{-,t}\) w.r.t. the matrix entries \(b_{ij}\). **Lemma 5.4**.: _Suppose that \(\Psi\) is good. If we view \(\lambda_{-,t}\) as a function of \(\mathsf{B}_{ij},i\in[M],j\in[N]\). For any \(i\in[M]\) and \(j\in[N]\), write \(\lambda_{-,t}(x)=\lambda_{-,t}(\mathsf{B}_{ij}=x)\). Then for any integer \(k\geq 1\) and for any \(b\in[0,\mathsf{B}_{ij}]\), we have_ \[\Big{|}\frac{\partial^{k}\lambda_{-,t}}{\partial\mathsf{B}_{ij}^{k}}(b)\Big{|} \mathbf{1}_{\psi_{ij}=0}\prec\frac{1}{Nt^{2k+1}},\quad\quad\Big{|}\frac{ \partial^{k}\zeta_{-,t}}{\partial\mathsf{B}_{ij}^{k}}(b)\Big{|}\mathbf{1}_{ \psi_{ij}=0}\prec\frac{1}{Nt^{2k+1}}. \tag{5.18}\] _Further, there exists some constants \(C_{k}>0\) such that the following deterministic bounds hold,_ \[\Big{|}\frac{\partial\lambda_{-,t}}{\partial\mathsf{B}_{ij}}(b)\Big{|} \mathbf{1}_{\psi_{ij}=0}\leq N^{C_{1}},\quad\quad\Big{|}\frac{\partial^{k} \lambda_{-,t}}{\partial\mathsf{B}_{ij}^{k}}(b)\Big{|}\mathbf{1}_{\psi_{ij}=0} \cdot\Xi(\lambda_{-,t})\leq N^{C_{k}}, \tag{5.19}\] _where \(\Xi(x)\) is a smooth cut off function which equals \(0\) when \(x<\lambda_{-}^{\text{mp}}/100\) and \(1\) when \(x>\lambda_{-}^{\text{mp}}/2\) and \(|\Xi^{(n)}(x)|=\mathcal{O}(1)\) for all \(n\geq 1\)._ _Remark 7_.: Here we remark that in the second estimate of (5.19), we added a cutoff function, in order to get a deterministic bound for the \(\lambda_{-,t}\) derivatives, which is needed when we take expectation \(\mathbb{E}_{\Psi}\). Hence, actually, we should work with \(\mathbb{E}_{\Psi}\big{(}|N\eta\big{(}\mathrm{Im}\,m^{\gamma}(z_{t})-\mathrm{Im }\,\tilde{m}^{0}(z_{t})\big{)}\Xi(\lambda_{-,t})|^{2p})\) instead of \(\mathbb{E}_{\Psi}\big{(}|N\eta\big{(}\mathrm{Im}\,m^{\gamma}(z_{t})-\mathrm{ Im}\,\tilde{m}^{0}(z_{t})\big{)}|^{2p}\big{)}\) to make sure that all quantities in the expansions have bounded expectations. Adding such a cutoff factor will not complicates the expansions since again by the chain rule it boils down to the \(\lambda_{-,t}\) derivatives. Hence, additional technical inputs are not needed for the comparison of the modified quantity. However, in order to ease the presentation, we will state the reasoning for the original quantity and proceed as if all random factors in the expansion have deterministic upper bound. Proof.: Let \(\psi_{ij}=0\). To emphasis the dependence with \(X\), we first note that \(F_{t}(z,\zeta)\) can be rewritten as, \[F_{t}(z,\zeta,X)=1+\frac{t(1-c_{N})-\sqrt{t^{2}(1-c_{N})^{2}+4\zeta z}}{2\zeta }-\frac{c_{N}t}{M}\mathrm{Tr}G(X,\zeta).\] Using Lemma 5.3, we have \[F_{t}(\lambda_{-,t},\zeta_{-,t},X)=0,\quad\text{and}\quad\frac{\partial F_{t}} {\partial\zeta}(\lambda_{-,t},\zeta_{-,t},X)=0. 
\tag{5.20}\] Then taking derivative of (5.20) gives \[\frac{\partial\lambda_{-,t}}{\partial\mathsf{B}_{ij}}\frac{\partial F_{t}}{ \partial z}(\lambda_{-,t},\zeta_{-,t},X)+\frac{\partial F_{t}}{\partial x_{ij }}(\lambda_{-,t},\zeta_{-,t},X))=0.\] Therefore, we may solve the above equation to obtain that \[\frac{\partial\lambda_{-,t}}{\partial\mathsf{B}_{ij}}=\frac{2c_{N}t\sqrt{t^{2 }(1-c_{N})^{2}+4\lambda_{-,t}\zeta_{-,t}}}{M}\big{[}X^{\top}(G(X,\zeta_{-,t})) ^{2}\big{]}_{ji}. \tag{5.21}\] Notice that \[\big{|}\big{[}X^{\top}(G(X,\zeta_{-,t}))^{2}\big{]}_{ji}\big{|} \overset{\mathrm{(i)}}{\leq}\big{|}\big{[}X^{\top}(G(X,\zeta_{-,t} ))^{2}X\big{]}_{jj}\big{|}^{1/2}\cdot\big{|}\big{[}(G(X,\zeta_{-,t}))^{2}\big{]} _{ii}\big{|}^{1/2}\] \[=\big{|}\big{[}G(X^{\top},\zeta_{-,t})\big{]}_{jj}+\zeta_{-,t} \big{[}(G(X^{\top},\zeta_{-,t}))^{2}\big{]}_{jj}\big{|}^{1/2}\cdot\big{|}\big{[} (G(X,\zeta_{-,t}))^{2}\big{]}_{ii}\big{|}^{1/2}\] \[\lesssim\big{(}\|G(X^{\top},\zeta_{-,t})\|^{1/2}+|\zeta_{-,t}\| \|G(X^{\top},\zeta_{-,t})\|\big{)}\cdot\|G(X,\zeta_{-,t})\|\overset{\mathrm{(ii) }}{\ll}t^{-4}, \tag{5.22}\] where in \(\mathrm{(i)}\) we applied Cauchy-Schwarz inequality, and in \(\mathrm{(ii)}\) we used Lemma 3.2 (i). Therefore, we can obtain that \(\partial_{\mathsf{B}_{ij}}\lambda_{-,t}(b)\). Next we view \(\Phi_{t}(\zeta)\) as a function of \(X\), and write \(\Phi_{t}(\zeta,X)=\Phi_{t}(\zeta)\). By Lemma 3.1, we have \(\frac{\partial\Phi_{t}}{\partial\zeta}(\zeta_{-,t},X)=0\). Further taking derivative w.r.t \(\mathsf{B}_{ij}\) on this equation gives \[\frac{\partial^{2}\Phi_{t}}{\partial\zeta^{2}}(\zeta_{-,t},X)\frac{\partial \zeta_{-,t}}{\partial\mathsf{B}_{ij}}+\frac{\partial^{2}\Phi_{t}}{\partial \zeta\partial x_{ij}}(\zeta_{-,t},X)=0. \tag{5.23}\] By direct calculation, we have \[\frac{\partial^{2}\Phi_{t}}{\partial\zeta\partial x_{ij}}(\zeta_{-,t },X)=\frac{4c_{N}t}{M}[X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}-\frac{2c_{N}^{2}t^{ 2}m_{X}(\zeta_{-,t})}{M}[X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}\] \[\quad+\frac{8c_{N}t\zeta_{-,t}(1-c_{N}tm_{X}(\zeta_{-,t}))}{M}[X^ {\top}(G(X,\zeta_{-,t}))^{3}]_{ji}-\frac{4c_{N}^{2}t^{2}\zeta_{-,t}m_{X}^{\prime }(\zeta_{-,t})}{M}[X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}\] \[\quad+\frac{4c_{N}(1-c_{N})t^{2}}{M}[X^{\top}(G(X,\zeta_{-,t}))^{2 }]_{ji}.\] A similar argument as in (5.22) leads to \(\big{[}X^{\top}(G(X,\zeta_{-,t}))^{3}\big{]}_{ji}\prec t^{-6}\). This together with the fact that \(c_{N}tm_{X}(\zeta_{-,t})\prec t^{1/2}\) and \(m_{X}^{\prime}(\zeta_{-,t})\sim t^{-1}\) gives \(\frac{\partial^{2}\Phi_{t}}{\partial\zeta\partial x_{ij}}(\zeta_{-,t},X)\prec 1 /(Nt^{5}).\) We can also compute that \[\frac{\partial^{2}\Phi_{t}}{\partial\zeta^{2}}(\zeta_{-,t},X)=-2c _{N}tm_{X}^{\prime\prime}(\zeta_{-,t})\zeta_{-,t}(1-c_{N}tm_{X}(\zeta_{-,t})) -4c_{N}tm_{X}^{\prime}(\zeta_{-,t})(1-c_{N}tm_{X}(\zeta_{-,t}))\] \[\qquad\qquad\qquad\qquad+2\zeta_{-,t}(c_{N}tm_{X}^{\prime}(\zeta_{ -,t}))^{2}-c_{N}(1-c_{N})t^{2}m_{X}^{\prime\prime}(\zeta_{-,t}). \tag{5.24}\] Using Lemma 2.3 with the fact \(\zeta_{-,t}-\lambda_{M}(\mathcal{S}(X))\sim t^{2}\) w.h.p., we have w.h.p. 
that \(\frac{\partial^{2}\Phi_{t}}{\partial\zeta^{2}}(\zeta_{-,t},X)\sim t^{2}.\) Combining the above bounds gives \(\partial_{\mathsf{B}_{ij}}\zeta_{-,t}\prec 1/(Nt^{3}).\) It is worth noting that for any integer \(k\geq 2\), the \(\partial_{\mathsf{B}_{ij}}^{k}\lambda_{-,t}\) can be expressed as a function of \(\partial_{\mathsf{B}_{ij}}^{\ell}\lambda_{-,t}\) and \(\partial_{\mathsf{B}_{ij}}^{\ell}\zeta_{-,t}\), where \(\ell\) ranges from \(0\) to \(k-1.\) Similarly, \(\partial_{\mathsf{B}_{ij}}^{k}\zeta_{-,t}\) is solely dependent on \(\partial_{\mathsf{B}_{ij}}^{\ell}\zeta_{-,t}\), where \(\ell\) ranges from \(0\) to \(k-1.\) By employing the product rule and adopting a similar argument as used in (5.22) to bound the Green function entries, we can observe that the order of \(\partial_{\mathsf{B}_{ij}}^{\ell}\lambda_{-,t}\) is determined by the term that includes \(\partial_{\mathsf{B}_{ij}}^{k-1}[X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}\). Similarly, the order of \(\partial_{\mathsf{B}_{ij}}^{\ell}\zeta_{-,t}\) is determined by the term that includes \(\partial_{\mathsf{B}_{ij}}^{k-1}[X^{\top}(G(X,\zeta_{-,t}))^{3}]_{ji}\). This allows us to conclude that for any \(k\geq 1\) \[\frac{\partial^{k}\lambda_{-,t}}{\partial\mathsf{B}_{ij}^{k}}\prec\frac{1}{Nt ^{2k+1}},\qquad\frac{\partial^{k}\zeta_{-,t}}{\partial\mathsf{B}_{ij}^{k}} \prec\frac{1}{Nt^{2k+1}}.\] The claim now follows by noting that the above bounds still hold when we replace \(\mathsf{B}_{ij}\) in \(X\) with some other \(b\in[0,\mathsf{B}_{ij}]\). The reason behind this is that the replacement matrix still satisfies the \(\eta^{*}\)-regularity condition, ensuring that the corresponding \(\zeta_{-,t}\) and \(\lambda_{M}\) still satisfy Lemma 3.2 (i). Next, we prove a deterministic upper bound for \(\partial_{\mathsf{B}_{ij}}\lambda_{-,t}\). For notational simplicity, we will only work on the original matrix \(X\), and the argument holds for the replacement matrix \(X_{(ij)}(b)\). In view of (5.21), it suffices to obtain deterministic upper bounds for \(\lambda_{-,t}\), \(\zeta_{-,t}\), and \([X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}\). We may first apply Cauchy interlacing theorem to obtain an upper bound for \(\zeta_{-,t}\) as follows: \[\zeta_{-,t}\leq\lambda_{M}(\mathcal{S}(X))\leq\lambda_{M-|\mathcal{D}_{t}|}( \mathcal{S}(\mathsf{B}^{(\mathcal{D}_{r})}))\leq N^{2-2\epsilon_{b}}, \tag{5.25}\] where in the last step we used the fact that the entries of \(\mathcal{S}(\mathsf{B}^{(\mathcal{D}_{r})})\) are bounded by \(N^{-\epsilon_{b}}\). From (2.2), we have \(c_{N}tm_{X}(\zeta_{-,t})=c_{N}tm_{t}(\lambda_{-,t})/(1+c_{N}tm_{t}(\lambda_{-,t})),\) which gives the deterministic bound \(m_{X}(\zeta_{-,t})\leq(c_{N}t)^{-1}\). Using this deterministic bound, we have that there exists some constant \(C>0\) such that \[\frac{1}{M}\leq\frac{1}{M}\sum_{i=1}^{M}\frac{\lambda_{M}(\mathcal{S}(X))- \zeta_{-,t}}{\lambda_{i}(\mathcal{S}(X))-\zeta_{-,t}}\leq Ct^{-1}(\lambda_{M} (\mathcal{S}(X))-\zeta_{-,t}).\] This together with the fact that \(\lambda_{M}(\mathcal{S}(X))\geq\zeta_{-,t}\) (cf. Lemma 3.2 (i)) gives \(\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t}\geq C^{-1}t/M\). Therefore, we are able to obtain deterministic bounds for the high order derivatives \(m_{X}^{(k)}(\zeta_{-,t})\) as well as the spectral norm of \(G(X,\zeta_{-,t})\). 
We can also obtain that \[|\lambda_{-,t}|=\big{|}\big{[}1-c_{N}tm_{X}(\zeta_{-,t})\big{]}^{2}\zeta_{-,t }+(1-c_{N})t\big{[}1-c_{N}tm_{X}(\zeta_{-,t})\big{]}\big{|}\lesssim N^{2-2 \epsilon_{b}}.\] For the upper bound of \(|[X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}|\), we have \[|[X^{\top}(G(X,\zeta_{-,t}))^{2}]_{ji}| \leq\big{|}[X^{\top}(G(X,\zeta_{-,t}))^{2}X]_{jj}\big{|}^{1/2}\cdot \big{|}[(G(X,\zeta_{-,t}))^{2}]_{ii}\big{|}^{1/2}\] \[\leq\big{\|}(G(X,\zeta_{-,t}))^{2}XX^{\top}\|^{1/2}\cdot\|(G(X, \zeta_{-,t}))^{2}\|^{1/2}\] \[\leq\big{(}\|G(X,\zeta_{-,t})\|^{1/2}+|\zeta_{-,t}|\|G(X,\zeta_{-,t})\|\big{)}\cdot\|G(X,\zeta_{-,t})\|\lesssim N^{2-2\epsilon_{b}}M^{2}t^{-2}.\] Collecting the above bounds proves the first bound in (5.19). To prove the second bound in (5.19), it suffices to provide a lower bound for \(\frac{\partial^{2}\Phi_{\sharp}}{\partial X^{2}}(\zeta_{-,t},X)\) (cf. (5.23)). When \(\lambda_{-,t}\geq\lambda_{-}^{\text{mp}}/100\), we have \[|c_{N}tm_{X}(\zeta_{-,t})|=\Big{|}\frac{c_{N}tm_{t}(\lambda_{-,t})}{1+c_{N}tm_ {t}(\lambda_{-,t})}\Big{|}\leq|c_{N}tm_{t}(\lambda_{-,t})|\lesssim\frac{t^{1/2 }}{|\lambda_{-t}|}\lesssim t^{1/2}.\] Therefore, using Cauchy-Schwarz inequality, we have \[(c_{N}tm_{X}^{\prime}(\zeta_{-,t}))^{2}\leq\frac{c_{N}tm_{X}(\zeta_{-,t})\cdot c _{N}tm_{X}^{\prime\prime}(\zeta_{-,t})}{2}\ll c_{N}tm_{X}^{\prime\prime}(\zeta _{-,t}).\] This implies that \(-2c_{N}tm_{X}^{\prime\prime}(\zeta_{-,t})\zeta_{-,t}(1-c_{N}tm_{X}(\zeta_{-,t} ))+2\zeta_{-,t}(c_{N}tm_{X}^{\prime}(\zeta_{-,t}))^{2}<0.\) Then using (5.24), we may lower bound \(\frac{\partial^{2}\Phi_{\sharp}}{\partial\zeta^{2}}(\zeta_{-,t},X)\) as follows: \[\Big{|}\frac{\partial^{2}\Phi_{t}}{\partial\zeta^{2}}(\zeta_{-,t},X)\Big{|}>c _{N}(1-c_{N})t^{2}m_{X}^{\prime\prime}(\zeta_{-,t})\geq\frac{2c_{N}(1-c_{N})t ^{2}}{M(\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t})^{3}}\geq\frac{2c_{N}(1-c_{N}) t^{2}}{MN^{6-6\epsilon_{b}}},\] where in the last step we used (5.25). Next, we start the proof of Theorem 4.3. Proof of Theorem 4.3.: We begin by collecting some notation to simplify the presentation of the proof. Consider \(\tilde{w}_{ij}\) as the \((i,j)\)-entry of \(\tilde{W}\), and define \(\tilde{Y}^{\gamma}\) analogously to \(Y^{\gamma}\), with the substitution of \(W\) by \(\tilde{W}\). Recall (5.2) and we write \(d_{ij}=d_{ij}(\gamma,w_{ij})\), \(e_{ij}=e_{ij}(\gamma,w_{ij})\), \(\tilde{d}_{ij}=d_{ij}(0,\tilde{w}_{ij})\), and \(\tilde{e}_{ij}=\tilde{e}_{ij}(0,\tilde{w}_{ij})\) in the sequel. To emphasize that \(\lambda_{-,t}\) is a function of \(X\), we introduce the notation \(\lambda_{-,t}^{(ij)}(\beta)=\lambda_{-,t}(X_{(ij)}^{\beta})\). Consequently, we define \(z_{t}^{(ij)}(\beta)=\lambda_{-,t}^{(ij)}(\beta)+E+\mathrm{i}\eta\). For simplicity, we use the shorthand notation \(G_{(ij)}^{\gamma,\lambda,\beta}\) as \(G_{(ij)}^{\gamma,\lambda}\big{(}z_{t}^{(ij)}(\beta)\big{)}\), and we define \(\tilde{G}_{(ij)}^{\gamma,\lambda,\beta}\) analogously, replacing \(W\) with \(\tilde{W}\). We will focus on the estimation of \(\frac{\partial\mathbb{E}_{\Psi}(|N\eta(\mathrm{Im}\,m^{\gamma}(z_{t})- \mathrm{Im}\,\tilde{m}^{0}(z_{t}))|^{2p})}{\partial\gamma}\). 
To this end, let us define \(f_{\gamma,(ab),(ij)}(\lambda,\beta)=\mathrm{Im}\,[G_{(ij)}^{\gamma,\lambda, \beta}]_{ab},\tilde{f}_{\gamma,(ab),(ij)}(\lambda,\beta)=\mathrm{Im}\,[ \tilde{G}_{(ij)}^{\gamma,\lambda,\beta}]_{ab}\,g_{(ij)}(\lambda,\beta)=\eta \mathrm{Im}\,[(G_{(ij)}^{\gamma,\lambda,\beta})^{2}Y_{(ij)}^{\gamma,\lambda, \beta}]_{ij}\), and \(F_{p}(\lambda,\tilde{\lambda},\beta)=(\eta\sum_{a}f_{\gamma,(aa),(ij)}(\lambda,\beta)-\eta\sum_{a}\tilde{f}_{0,(aa),(ij)}(\tilde{\lambda},\beta))^{p}\). Some elementary calculation gives \[\frac{\partial\mathbb{E}_{\Psi}\big{(}\big{|}N\eta(\mathrm{Im}\,m^{\gamma}(z _{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\big{)}}{\partial \gamma}=-2p\sum_{i,j}\Big{(}(J_{1})_{ij}+(J_{2})_{ij}\Big{)},\] where \[(J_{1})_{ij} =\mathbb{E}_{\Psi}\Big{[}g_{(ij)}(d_{ij},\chi_{ij}b_{ij})\mathcal{ E}_{ij}F_{2p-1}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\Big{]}\cdot\mathbf{1}_{\psi_{ ij}=0},\] \[(J_{2})_{ij} =-\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}}\mathbb{E}_{\Psi} \Big{[}w_{ij}g_{(ij)}(e_{ij},c_{ij})F_{2p-1}(e_{ij},\tilde{e}_{ij},c_{ij}) \Big{]}\cdot\mathbf{1}_{\psi_{ij}=1},\] \[\mathcal{E}_{ij} =(1-\chi_{ij})a_{ij}-\gamma t^{1/2}(1-\gamma^{2})^{-1/2}w_{ij}.\] For \((J_{2})_{ij}\), we may apply Gaussian integration by parts to obtain that \[(J_{2})_{ij} =-\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}N}\Big{(}\mathbb{E}_{ \Psi}\Big{[}\partial_{w_{ij}}\big{\{}g_{(ij)}(e_{ij},c_{ij})\big{\}}F_{2p-1}(e _{ij},\tilde{e}_{ij},c_{ij})\Big{]}\] \[\quad+(2p-1)\mathbb{E}_{\Psi}\Big{[}g_{(ij)}(e_{ij},c_{ij}) \partial_{w_{ij}}\big{\{}\eta\sum_{a}f_{\gamma,(aa),(ij)}(e_{ij},c_{ij})\big{\}}F_{2p -2}(e_{ij},\tilde{e}_{ij},c_{ij})\Big{]}\Big{)}\cdot\mathbf{1}_{\psi_{ij}=1}.\] Note by directly calculation, we have \[\partial_{w_{ij}}\big{\{}g_{(ij)}(e_{ij},c_{ij})\big{\}} =\Big{(}\mathrm{Im}\left[\big{(}G^{\gamma,e_{ij},c_{ij}}_{(ij)} \big{)}^{2}\right]_{ii}-2\mathrm{Im}\left(\big{[}\big{(}G^{\gamma,e_{ij},c_{ij} }_{(ij)}\big{)}^{2}\big{]}_{ii}\big{[}(Y^{\gamma,e_{ij}}_{(ij)})^{\top}G^{ \gamma,e_{ij}}_{(ij)}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{jj}\right)\] \[\quad-2\mathrm{Im}\left(\big{[}\big{(}G^{\gamma,e_{ij},c_{ij}}_{( ij)}\big{)}^{2}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{ij}\big{[}G^{\gamma,e_{ij},c_{ij}}_{( ij)}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{ij}\right)\Big{)}(1-\gamma^{2})^{1/2}t^{1/2}\eta, \tag{5.26}\] and \(\partial_{w_{ij}}\big{\{}\eta\sum_{a}f_{\gamma,(aa),(ij)}(e_{ij},c_{ij}) \big{\}}=-2t^{1/2}(1-\gamma^{2})^{1/2}g_{(ij)}(e_{ij},c_{ij})\). 
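Two standard facts enter the estimates above and below, and we record them for completeness. The integration by parts used for \((J_{2})_{ij}\) is the Gaussian identity \(\mathbb{E}[wf(w)]=\sigma^{2}\mathbb{E}[f^{\prime}(w)]\), valid for a centered Gaussian \(w\) with variance \(\sigma^{2}\) and a differentiable \(f\) of moderate growth; applied with \(\sigma^{2}=\mathbb{E}(w_{ij}^{2})=1/N\), it produces the \(1/N\) prefactor. The second fact is the Ward identity: for a real symmetric matrix \(A\), \(G=(A-z)^{-1}\) and \(z=E+\mathrm{i}\eta\) with \(\eta>0\),
\[\sum_{a}|G_{ai}|^{2}=[G^{*}G]_{ii}=\frac{[G-G^{*}]_{ii}}{z-\bar{z}}=\frac{\mathrm{Im}\,G_{ii}}{\eta},\]
which will be used repeatedly in the next step.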
Using the Ward identity with the fact that \(i\in\mathcal{D}_{r}\) and \(j\in\mathcal{D}_{c}\) when \(\psi_{ij}=1\), we can obtain that
\[|\big{[}\big{(}G^{\gamma,e_{ij},c_{ij}}_{(ij)}\big{)}^{2}\big{]}_{ii}|\leq\sum_{a}|\big{[}G^{\gamma,e_{ij},c_{ij}}_{(ij)}\big{]}_{ai}|^{2}=\frac{\mathrm{Im}\left[G^{\gamma,e_{ij},c_{ij}}_{(ij)}\right]_{ii}}{\eta}\prec t^{-2}\eta^{-1}, \tag{5.27}\]
and
\[|\big{[}(G^{\gamma,e_{ij},c_{ij}}_{(ij)})^{2}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{ij}| \leq\sum_{a}|\big{[}G^{\gamma,e_{ij},c_{ij}}_{(ij)}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{aj}|^{2}+\sum_{a}|\big{[}G^{\gamma,e_{ij},c_{ij}}_{(ij)}\big{]}_{ia}|^{2}\]
\[\overset{\mathrm{(i)}}{=}\big{[}(Y^{\gamma,e_{ij}}_{(ij)})^{\top}|G^{\gamma,e_{ij},c_{ij}}_{(ij)}|^{2}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{jj}+\eta^{-1}\mathrm{Im}\left[G^{\gamma,e_{ij},c_{ij}}_{(ij)}\right]_{ii}\]
\[\overset{\mathrm{(ii)}}{=}\big{[}|\mathcal{G}^{\gamma,e_{ij},c_{ij}}_{(ij)}|^{2}(Y^{\gamma,e_{ij}}_{(ij)})^{\top}Y^{\gamma,e_{ij}}_{(ij)}\big{]}_{jj}+\eta^{-1}\mathrm{Im}\left[G^{\gamma,e_{ij},c_{ij}}_{(ij)}\right]_{ii}\]
\[=\big{[}\mathcal{G}^{\gamma,e_{ij},c_{ij}}_{(ij)}\big{]}_{jj}+\bar{z}\big{[}|\mathcal{G}^{\gamma,e_{ij},c_{ij}}_{(ij)}|^{2}\big{]}_{jj}+\eta^{-1}\mathrm{Im}\left[G^{\gamma,e_{ij},c_{ij}}_{(ij)}\right]_{ii}\overset{\mathrm{(iii)}}{\prec}t^{-2}\eta^{-1}, \tag{5.28}\]
where in \(\mathrm{(i)}\) we applied the Ward identity, in \(\mathrm{(ii)}\) we used the fact that for any \(A\in\mathbb{R}^{M\times N}\), \((AA^{\top}-z)^{-1}A=A(A^{\top}A-z)^{-1}\) when \(z\) is not in the spectrum of \(AA^{\top}\), and in \(\mathrm{(iii)}\) we estimated \(\big{[}|\mathcal{G}^{\gamma,e_{ij},c_{ij}}_{(ij)}|^{2}\big{]}_{jj}\) in a similar way as done in (5.27). Combining the above estimates with the fact that \(\sum_{i,j}\mathbf{1}_{\psi_{ij}=1}\leq N^{1-\epsilon_{\alpha}}\), we arrive at
\[|(J_{2})_{ij}|\lesssim\frac{\gamma}{(1-\gamma^{2})^{1/2}N^{1-\epsilon_{\alpha}}}\sum_{k=1}^{2}\mathbb{E}_{\Psi}\Big{[}|\mathcal{O}_{\prec}(N^{-\epsilon_{\alpha}}t^{-3})|\cdot|F_{2p-k}(e_{ij},\tilde{e}_{ij},c_{ij})|\Big{]}\cdot\mathbf{1}_{\psi_{ij}=1}.\]
We may then apply Young's inequality to the terms \(\mathbb{E}_{\Psi}\big{[}|\mathcal{O}_{\prec}(N^{-\epsilon_{\alpha}}t^{-3})|\cdot|F_{2p-k}(e_{ij},\tilde{e}_{ij},c_{ij})|\big{]}\), \(k=1,2\); the resulting bound is recorded in (5.29). Next, we turn to the estimation of \((J_{1})_{ij}\). Applying Taylor expansion on \(g_{(ij)}(d_{ij},\chi_{ij}b_{ij})\) on the first variable around \(0\), we have, for an \(s_{1}\) to be chosen later, that there exists \(\bar{d}_{ij}\in[0,d_{ij}]\) such that
\[(J_{1})_{ij} =\sum_{k=0}^{s_{1}}\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{k}g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})}{k!}\mathcal{E}_{ij}F_{2p-1}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}\]
\[+\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{s_{1}+1}g_{(ij)}^{(s_{1}+1,0)}(\bar{d}_{ij},\chi_{ij}b_{ij})}{(s_{1}+1)!}\mathcal{E}_{ij}F_{2p-1}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}=\sum_{k=0}^{s_{1}}\frac{1}{k!}(J_{1})_{ij,k}+\mathsf{Rem}_{1}.\]
By the entries bound in Proposition 4.1, (5.27), (5.28), and the perturbation argument in (5.10), we may crudely bound the above remainder term as follows: \[|\mathsf{Rem}_{1}|\lesssim\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{(s_{1}+2)\epsilon
_{b}}}\mathbb{E}_{\Psi}\Big{[}|g_{(ij)}^{(s_{1}+1,0)}(\bar{d}_{ij},\chi_{ij}b_ {ij})|\cdot|F_{2p-1}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})|\Big{]}\lesssim \frac{N^{\epsilon}N^{2p}}{N^{(s_{1}+2)\epsilon_{b}}t^{2s_{1}+4}}\lesssim N^{-p},\] where in the second inequality, we used the deterministic bound \(|F_{2p-1}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})|\leq N^{2p-1}\), and in the last step, we chose \(s_{1}>6p/\epsilon_{b}\) and used the fact that \(t\gg N^{-\epsilon_{b}/8}\). We may apply similar argument to expand the first two variables of \(F_{2p-1}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\) in \((J_{1})_{ij,k}\) to obtain that \[(J_{1})_{ij,k} =\sum_{\ell=0}^{s_{2}}\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[} \frac{d_{ij}^{k+m}\tilde{d}_{ij}^{\ell-m}g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})}{ m!(\ell-m)!}\mathcal{E}_{ij}F_{2p-1}^{(m,\ell-m,0)}(0,0,\chi_{ij}b_{ij}) \Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-p})\] \[=\sum_{\ell=0}^{s_{2}}(J_{1})_{ij,k\ell}+\mathcal{O}(N^{-p}).\] where \(s_{2}\) is a large integer satisfying \(s_{2}>6p/\epsilon_{b}\). To estimate \((J_{1})_{ij,k\ell}\), we start by introducing the notation \(t_{ij}=t^{2}\cdot(1-\mathbf{1}_{i\in\mathcal{T}_{j},j\in\mathcal{T}_{c}})+ \mathbf{1}_{i\in\mathcal{T}_{r},j\in\mathcal{T}_{c}}\) for presentation simplicity. Note by the chain rule, we have for any integer \(\ell\geq 0\) and \(m\leq\ell\), \[F_{2p-1}^{m,\ell-m,0}(0,0,\chi_{ij}b_{ij})=\sum_{k=1}^{\ell_{\Lambda}(2p-1)} \mathcal{C}_{k,m}^{\chi_{ij}b_{ij}}F_{2p-1-k}(0,0,\chi_{ij}b_{ij})+\mathcal{C }_{\ell+1,m}^{\chi_{ij}b_{ij}}\mathbf{1}_{\ell\geq(2p-1)}, \tag{5.31}\] where for all \(k\in[\ell+1]\), \(m\in[\ell]\), \(\mathcal{C}_{k,m}^{\chi_{ij}b_{ij}}\) are polynomials of the following terms \[\big{[}G_{(ij)}^{\gamma,0,\chi_{ij}b_{ij}}Y_{(ij)}^{\gamma,0}]_{ ij},\big{[}G_{(ij)}^{\gamma,0,\chi_{ij}b_{ij}}\big{]}_{ii},\big{[}(Y_{(ij)}^{ \gamma,0,\gamma}\Gamma_{(ij)}^{\gamma,0,\chi_{ij}b_{ij}}Y_{(ij)}^{\gamma,0}]_{ jj},\big{[}(G_{(ij)}^{\gamma,0,\chi_{ij}b_{ij}})^{2}Y_{(ij)}^{\gamma,0}\big{]}_{ ij},\big{[}(G_{(ij)}^{\gamma,0,\chi_{ij}b_{ij}})^{2}\big{]}_{ii},\] \[\big{[}\tilde{G}_{(ij)}^{0,0,\chi_{ij}b_{ij}}\tilde{Y}_{(ij)}^{0,0 }]_{ij},\big{[}\tilde{G}_{(ij)}^{0,0,\chi_{ij}b_{ij}}\big{]}_{ii},\big{[}( \tilde{Y}_{(ij)}^{0,0,\chi_{ij}b_{ij}})^{\top}\tilde{G}_{(ij)}^{0,0,\chi_{ij}b _{ij}}\tilde{Y}_{(ij)}^{0,0}\big{]}_{jj},\big{[}(\tilde{G}_{(ij)}^{0,0,\chi_{ij} b_{ij}})^{2}\big{]}_{ii}.\] After carrying out a similar derivation as shown in (5.26)-(5.28) and employing the perturbation argument described in (5.10), it can be easily verified that \(\mathcal{C}_{k,m}^{\chi_{ij}b_{ij}}\cdot\mathbf{1}_{\psi_{ij}=0}\prec t_{ij}^{-( \ell+1)}\). Plugging (5.31) into \((J_{1})_{ij,k\ell}\), we have \[(J_{1})_{ij,k\ell} =\sum_{n=1}^{\ell_{\Lambda}(2p-1)}\sum_{m=0}^{\ell}\mathbb{E}_{ \Psi}\Big{[}\frac{d_{ij}^{k+m}\tilde{d}_{ij}^{\ell-m}\mathcal{E}_{ij}}{m!(\ell- m)!}g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})\mathcal{C}_{n,m}^{\chi_{ij}b_{ij}}F_{2p-1-n}(0,0, \chi_{ij}b_{ij})\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[+\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{k+m} \tilde{d}_{ij}^{\ell-m}\mathcal{E}_{ij}}{m!(\ell-m)!}g_{(ij)}^{(k,0)}(0,\chi_{ ij}b_{ij})\mathcal{C}_{\ell+1,m}^{\chi_{ij}b_{ij}}\mathbf{1}_{\ell\geq(2p-1)} \Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}=(\mathsf{T}_{1})_{ij,k\ell}+(\mathsf{T}_{2})_ {ij,k\ell}.\] For \((\mathsf{T}_{2})_{ij,k\ell}\), we only need to consider the case when \(\ell\geq 2p-1\). 
Using \(g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})\prec t_{ij}^{-(k+1)}\) with the fact that \(t\gg N^{-\epsilon_{b}/8}\), we can conclude that \(|(\mathsf{T}_{2})_{ij,k\ell}|\lesssim N^{-\epsilon_{b}p}\). Next, we focus on the estimation of \((\mathsf{T}_{1})_{ij,k\ell}\). When \(k+\ell\) is even, we have by the law of total expectation that, \[(\mathsf{T}_{1})_{ij,k\ell}=\sum_{n=1}^{\ell\wedge(2p-1)}\sum_{m=0}^ {\ell}\mathbb{E}_{\Psi}\Big{[} \frac{(\gamma a_{ij}+(1-\gamma^{2})^{1/2}t^{1/2}w_{ij})^{k+m}(t^{1/ 2}\tilde{w}_{ij})^{\ell-m}}{m!(\ell-m)!}g_{(ij)}^{(k,0)}(0,0)\] \[\times\Big{(}a_{ij}-\frac{\gamma t^{1/2}w_{ij}}{(1-\gamma^{2})^{1/ 2}}\Big{)}\mathcal{C}_{n,m}^{0}F_{2p-1-n}(0,0,0)\Big{]}\cdot\mathbb{P}(\chi_{ ij}=0)\cdot\mathbf{1}_{\psi_{ij}=0}\] \[-\sum_{n=1}^{\ell\wedge(2p-1)}\sum_{m=0}^{\ell}\mathbb{E}_{\Psi} \Big{[} \frac{\gamma(b_{ij}+(1-\gamma^{2})^{1/2}t^{1/2}w_{ij})^{k+m}(b_{ ij}+t^{1/2}\tilde{w}_{ij})^{\ell-m}t^{1/2}w_{ij}}{(1-\gamma^{2})^{1/2}m!( \ell-m)!}\] \[\times g_{(ij)}^{(k,0)}(0,b_{ij})\mathcal{C}_{n,m}^{b_{ij}}F_{2p -1-n}(0,0,b_{ij})\Big{]}\cdot\mathbb{P}(\chi_{ij}=1)\cdot\mathbf{1}_{\psi_{ij} =0}. \tag{5.32}\] From the above equation, one can easily verify that \((\mathsf{T}_{1})_{ij,k\ell}=0\) when \(k+\ell\) is even. Therefore, in the rest of the estimation, we consider the case of \(k+\ell\) is odd. In this case, we need to further expand out \(\chi_{ij}b_{ij}\) in \(\mathcal{C}_{n,m}^{\chi_{ij}b_{ij}}\), \(g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})\) and \(F_{2p-1-n}(0,0,\chi_{ij}b_{ij})\). First note by Taylor expansion, for any \(s_{3}\geq 0\) there exists \(b_{ij}^{(1)}\in[0,\chi_{ij}b_{ij}]\) such that \[g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})=\sum_{q=0}^{s_{3}}\frac{(\chi_{ij}b_{ij}) ^{q}}{q!}g_{(ij)}^{(k,q)}(0,0)+\frac{(\chi_{ij}b_{ij})^{s_{3}+1}}{(s_{3}+1)!}g _{(ij)}^{(k,s_{3}+1)}(0,b_{ij}^{(1)}). \tag{5.33}\] By Faa di Bruno's formula, for \(q\geq 1\), \(g_{(ij)}^{(k,q)}(\lambda,\beta)\) can be expressed as \[g_{(ij)}^{(k,q)}(\lambda,\beta)=\sum_{(u_{1},\cdots,u_{q})}\frac{q!}{u_{1}!u_ {2}!\cdots u_{q}!}\partial_{z}^{u_{1}+\cdots+u_{q}}g_{(ij)}^{(k,0)}(\lambda, \beta)\cdot\prod_{v=1}^{q}\Big{(}\frac{\partial^{v}\lambda_{-,t}^{(ij)}}{ \partial\beta^{v}}(\beta)\Big{)}^{u_{v}}, \tag{5.34}\] where the sum \(\sum_{(u_{1},\cdots,u_{q})}\) is over all \(q\)-tuples of nonnegative integers \((u_{1},\cdots,u_{q})\) satisfying \(\sum_{i=1}^{q}iu_{i}=q\). We may then use (5.18) in Lemma 5.4 to bound the derivatives of \(\lambda_{-,t}^{(ij)}\) and a Cauchy integral argument to bound the derivatives of \(g_{(ij)}^{(k,0)}\) w.r.t \(z\), which gives \[g_{(ij)}^{(k,q)}(0,0)\cdot\mathbf{1}_{\psi_{ij}=0}\prec\sum_{(u_{1},\cdots,u_ {q})}\frac{1}{\eta^{u_{1}+\cdots+u_{q}}t_{ij}^{k+1}}\prod_{v=1}^{q}\frac{1}{N^ {u_{v}t(2v+1)u_{v}}}\prec\frac{1}{N\eta t^{3q}t_{ij}^{k+1}},\quad q\geq 1. \tag{5.35}\] and the same bound holds for \(g_{(ij)}^{(k,q)}(0,b_{ij}^{(1)})\). 
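Here the Cauchy integral argument is the standard estimate: if \(f\) is holomorphic on the disc \(\{w:|w-z|<r\}\) and bounded there by \(K\), then
\[|f^{(u)}(z)|=\Big{|}\frac{u!}{2\pi\mathrm{i}}\oint_{|w-z|=r/2}\frac{f(w)}{(w-z)^{u+1}}\,\mathrm{d}w\Big{|}\leq\frac{u!\,2^{u}K}{r^{u}}.\]
Applying it on a disc of radius comparable to \(\eta\), on which the relevant Green function quantities obey essentially the same bounds, each derivative in \(z\) costs a factor of order \(\eta^{-1}\); this accounts for the factor \(\eta^{-(u_{1}+\cdots+u_{q})}\) in (5.35).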
Therefore, by choosing \(s_{3}>6p/\epsilon_{b}\) together with the facts that \(\mathcal{C}_{n,m}^{\chi_{ij}b_{ij}}\prec t_{ij}^{-(k+1)}\), \(|F_{2p-1-n}(0,0,\chi_{ij}b_{ij})|\lesssim N^{2p-1-n}\), we can obtain that \[(\mathsf{T}_{1})_{ij,k\ell}=\sum_{n=1}^{\ell\wedge(2p-1)}\sum_{q= 0}^{s_{3}}\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{k+m}\tilde{q} _{ij}^{\ell-m}(\chi_{ij}b_{ij})^{q}\mathcal{E}_{ij}}{q!m!(\ell-m)!}g_{(ij)}^{( k,q)}(0,0)\mathcal{C}_{n,m}^{\chi_{ij}b_{ij}}\] \[\times F_{2p-1-n}(0,0,\chi_{ij}b_{ij})\Big{]}\cdot\mathbf{1}_{ \psi_{ij}=0}+\mathcal{O}(N^{-\epsilon_{b}p})=\sum_{n=1}^{\ell\wedge(2p-1)} \sum_{q=0}^{s_{3}}(\mathsf{T}_{1})_{ij,k\ell,nq}+\mathcal{O}(N^{-\epsilon_{b} p}).\] For \((\mathsf{T}_{1})_{ij,k\ell,nq}\), the term \(\mathcal{C}_{n,m}^{\chi_{ij}b_{ij}}\) can be expanded in a similar way as done for \(g_{(ij)}^{(k,0)}(0,\chi_{ij}b_{ij})\) in (5.33) and (5.34), we omit the details. This leads to \[(\mathsf{T}_{1})_{ij,k\ell,nq}= \sum_{r=0}^{s_{4}}\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[}\frac{d _{ij}^{k+m}\tilde{q}_{ij}^{\ell-m}(\chi_{ij}b_{ij})^{q+r}\mathcal{E}_{ij}}{r!q!m!(\ell-m)!}g_{(ij)}^{(k,q)}(0,0)\] \[\times\mathcal{C}_{n,m}^{(r),0}F_{2p-1-n}(0,0,\chi_{ij}b_{ij}) \Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-\epsilon_{b}p}),\] where \(s_{4}>6p/\epsilon_{b}\) and \[\mathcal{C}_{n,m}^{(r),0}=\frac{\partial^{r}\mathcal{C}_{n,m}^{\beta}}{\partial \beta^{r}}\Big{|}_{\beta=0},\quad\text{and}\quad\mathcal{C}_{n,m}^{(r),0} \prec\frac{1}{N\eta t^{3r}t_{ij}^{\ell+1}},\quad r\geq 1. \tag{5.36}\] Next, we deal with \(F_{2p-1-n}(0,0,\chi_{ij}b_{ij})\). For any \(s\geq 0\), we can compute that \[F_{2p-1-n}^{(0,0,s)}(0,0,0)=\sum_{(u_{1},\cdots,u_{s})}\frac{s!}{u_{1}!u_{2}!\cdots u _{s}!}\partial_{z}^{u_{1}+\cdots+u_{s}}F_{2p-1-n}(0,0,0)\cdot\prod_{\text{w}=1} ^{s}\Big{(}\frac{\partial^{\text{w}}\lambda_{-,t}^{(ij)}}{\partial\beta^{ \text{w}}}(0)\Big{)}^{u_{\text{w}}}, \tag{5.37}\] and for any integer \(\vartheta\geq 0\), \[\partial_{z}^{\vartheta}F_{2p-1-n}(0,0,0) =\sum_{\begin{subarray}{c}(v_{1},\cdots,v_{0})\\ v_{1}+\cdots+v_{\vartheta}\leq 2p-1-n\end{subarray}}\frac{\vartheta!}{u_{1}!v_{2}! \cdots v_{0}!}F_{2p-1-n-(v_{1}+\cdots+v_{\vartheta})}(0,0,0)\] \[\times\prod_{\text{w}=1}^{\vartheta}\Big{(}\eta\text{Im}\, \text{Tr}\big{(}G_{(ij)}^{\gamma,0,0}\big{)}^{\text{w}+1}-\eta\text{Im}\, \text{Tr}\big{(}\tilde{G}_{(ij)}^{0,0,0}\big{)}^{\text{w}+1}\Big{)}^{v_{\text {w}}}. \tag{5.38}\] Combining the above two expression, and using Lemma 5.4, we can estimate the remainder term as done for \(\text{Rem}_{3}\), which gives \[(\mathsf{T}_{1})_{ij,k\ell,nq} =\sum_{r=0}^{s_{4}}\sum_{s=0}^{s_{5}}\sum_{m=0}^{\ell}\mathbb{E}_ {\Psi}\Big{[}\frac{d_{ij}^{k+m}\tilde{d}_{ij}^{\ell-m}(\chi_{ij}b_{ij})^{q+r+s }\mathcal{E}_{ij}}{s!r!q!m!(\ell-m)!}\Big{]}\] \[\times\mathbb{E}_{\Psi}\Big{[}g_{(ij)}^{(k,q)}(0,0)C_{n,m}^{0,(r) }F_{2p-1-n}^{(0,0,s)}(0,0,0)\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}( N^{-\epsilon_{k}p}), \tag{5.39}\] for some large enough integer \(s_{5}\). Here we also used the independency between the random variables. Then it suffices to estimate \((\mathsf{T}_{1})_{ij,k\ell,nq}\) in two different cases, \(k+\ell=1\) and \(k+\ell\geq 3\) (recall that we only need to consider the case when \(k+\ell\) is odd, cf. (5.32)). **Case 1:**\(k+\ell\geq 3\). 
From (5.39), using the estimates (5.35) and (5.36), and the fact that \(\mathbb{E}(b_{ij}^{2})\lesssim N^{-1}\), \(\mathbb{E}((1-\chi_{ij})a_{ij}^{2})\asymp t\mathbb{E}(w_{ij}^{2})=t/N\), we have \[(\mathsf{T}_{1})_{ij,k\ell,nq} =\sum_{r=0}^{s_{4}}\sum_{s=0}^{s_{5}}\mathcal{O}\Big{(}\frac{t}{N ^{2+(k+\ell+q+r+s-3)\epsilon_{0}}}\Big{)}\] \[\times\mathbb{E}_{\Psi}\Big{[}\mathcal{O}_{\prec}\Big{(}\frac{1} {t^{3(q+r)}t_{ij}^{k+\ell+2}}\Big{)}F_{2p-1-n}^{(0,0,s)}(0,0,0)\Big{]}\cdot \mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-\epsilon_{k}p}). \tag{5.40}\] Note that we have already derived the expression of \(F_{2p-1-n}^{(0,0,s)}(0,0,0)\) in (5.37) and (5.38). Then using the following inequality: \[\big{|}\eta\text{Im}\,\text{Tr}\big{(}G_{(ij)}^{\gamma,0,0}\big{)} ^{\text{w}+1}-\eta\text{Im}\,\text{Tr}\big{(}\tilde{G}_{(ij)}^{0,0,0}\big{)}^{ \text{w}+1}\big{|}^{v_{\text{w}}}\lesssim\big{|}\eta\text{Im}\,\text{Tr}\big{(} \tilde{G}_{(ij)}^{\gamma,0,0}\big{)}^{\text{w}+1}\big{|}^{v_{\text{w}}}\] \[\leq\eta^{-w_{\text{w}_{\text{w}_{\text{w}}}}}\big{(}\big{|}\eta \text{Im}\,\text{Tr}G_{(ij)}^{\gamma,0,0}\big{|}^{v_{\text{w}}}+\big{|}\eta \text{Im}\,\text{Tr}\tilde{G}_{(ij)}^{0,0,0}\big{|}^{v_{\text{w}}}\big{)} \lesssim\eta^{-w_{\text{w}_{\text{w}}}}\big{(}|F_{v_{\text{w}}}(0,0,0)|+\big{|} \eta\text{Im}\,\text{Tr}\tilde{G}_{(ij)}^{0,0,0}\big{|}^{v_{\text{w}}}\big{)}, \tag{5.41}\] together with Lemma 5.4 and the fact that \(\eta\text{Im}\,\text{Tr}\tilde{G}_{(ij)}^{0,0,0}\prec N\eta\sqrt{|E|+\eta} \leq N^{1-\varepsilon_{1}/2}\eta\) (this can be done by bounding \(\big{(}\eta\text{Im}\,\text{Tr}\tilde{G}_{(ij)}^{0,0,0}-\eta\text{Im}\,\text{ Tr}\tilde{G}_{(ij)}^{0,d_{ij},\chi_{ij}b_{ij}}\big{)}\cdot\mathbf{1}_{\psi_{ij}=0}\) through Taylor expansion and then using local law for the Gaussian divisible model (cf. (1.18)) that \(\big{(}\eta\text{Im}\,\text{Tr}\tilde{G}_{(ij)}^{0,d_{ij},\chi_{ij}b_{ij}}-N\eta \text{Im}\,m_{t}(z_{t})\big{)}\cdot\mathbf{1}_{\psi_{ij}=0}\prec 1\) with \(\text{Im}\,m_{t}(z_{t})\prec\sqrt{|E|+\eta}\) (cf. (2.8))), we can obtain that \[|(\mathsf{T}_{1})_{ij,k\ell,nq}|\leq\frac{1}{N^{2}}\sum_{\text{a}=0}^{2p-1-n} \mathbb{E}_{\Psi}\Big{[}\mathcal{O}_{\prec}\Big{(}\frac{t}{N^{(k+\ell+4-3) \epsilon_{k}}t^{3\text{a}}t_{ij}^{(k+\ell+2)}}\Big{)}|F_{2p-1-n-\text{a}}(0,0, 0)|\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-\epsilon_{k}p}).\] Substituting this back into \((\mathsf{T}_{1})_{ij,k\ell}\) and considering that \(t\gg N^{-\epsilon_{k}/100}\lor N^{-\epsilon_{d}/20}\), a straightforward calculation yields that: if \(k+\ell\geq 5\), \[|(\mathsf{T}_{1})_{ij,k\ell}|\leq\frac{1}{N^{2}}\sum_{n=1}^{2p-1}\mathbb{E}_{ \Psi}\Big{[}\mathcal{O}_{\prec}\Big{(}\frac{1}{N^{(n+1)\epsilon_{k}/10}}\Big{)} |F_{2p-1-n}(0,0,0)|\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-\epsilon_ {k}p}), \tag{5.42}\] and if \(k+\ell\geq 3\), \[|(\mathsf{T}_{1})_{ij,k\ell}|\leq \sum_{n=1}^{\ell\wedge(2p-1)}\sum_{\mathbf{a}=0}^{2p-1-n}\Big{(} \frac{\mathbf{1}_{\psi_{ij}=0}(1-\mathbf{1}_{i\in\mathcal{T},j\in\mathcal{T}_{ c}})}{N^{2-\epsilon_{d}}}\mathbb{E}_{\Psi}\Big{[}\mathcal{O}_{\prec}\Big{(} \frac{1}{N^{\mathbf{a}\epsilon_{b}/10+\epsilon_{d}/2}}\Big{)}|F_{2p-1-n- \mathbf{a}}(0,0,0)|\Big{]}\] \[+\frac{\mathbf{1}_{\psi_{ij}=0}\mathbf{1}_{i\in\mathcal{T},j\in \mathcal{T}_{c}}}{N^{2}}\mathbb{E}_{\Psi}\Big{[}\mathcal{O}_{\prec}\Big{(} \frac{t}{N^{\mathbf{a}\epsilon_{b}/10}}\Big{)}|F_{2p-1-n-\mathbf{a}}(0,0,0)| \Big{]}\Big{)}+\mathcal{O}(N^{-\epsilon_{b}p}). 
\tag{5.43}\] Next, we shall replace \(F_{2p-1-n}(0,0,0)\cdot\mathbf{1}_{\psi_{ij}=0}\) back by \(F_{2p-1-n}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\cdot\mathbf{1}_{\psi_{ij}=0}\). Applying Taylor expansion on the third variable and then using (5.37)-(5.41), we can obtain that
\[|F_{2p-1-n}(0,0,0)|\leq\sum_{\mathbf{a}=0}^{2p-1-n}\mathcal{O}_{\prec}\big{(}N^{-\epsilon_{b}\mathbf{a}/10}\big{)}\cdot|F_{2p-1-n-\mathbf{a}}(0,0,\chi_{ij}b_{ij})|+\mathcal{O}_{\prec}(N^{-\epsilon_{b}p}).\]
Therefore, we have that (5.42) and (5.43) remain valid, with \((0,0,0)\) replaced by \((0,0,\chi_{ij}b_{ij})\). Using Taylor expansion again, for a large enough integer \(s_{7}\), there exist \(d_{1,ij}\in[0,d_{ij}]\) and \(d_{2,ij}\in[0,\tilde{d}_{ij}]\) such that
\[F_{2p-1-n}(0,0,\chi_{ij}b_{ij}) =\sum_{u=0}^{s_{7}}\sum_{v=0}^{u}\frac{(-d_{ij})^{v}(-\tilde{d}_{ij})^{u-v}}{v!(u-v)!}\cdot F_{2p-1-n}^{(v,u-v,0)}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\]
\[\quad+\sum_{v=0}^{s_{7}+1}\frac{(-d_{ij})^{v}(-\tilde{d}_{ij})^{s_{7}+1-v}}{v!(s_{7}+1-v)!}\cdot F_{2p-1-n}^{(v,s_{7}+1-v,0)}(d_{1,ij},d_{2,ij},\chi_{ij}b_{ij}).\]
Then we may use (5.31) (with the minor modification that \((0,0,\chi_{ij}b_{ij})\) is replaced by \((d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\)) to transform \(F_{2p-1-n}^{(v,u-v,0)}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\) to \(F_{2p-1-r}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\) for some \(r\geq n\). It can also be easily checked that the resulting coefficients of \(F_{2p-1-r}\) can be compensated by bounding \(|d_{ij}|,|\tilde{d}_{ij}|\) by \(N^{-\epsilon_{b}}\) (w.h.p.). This finally confirms that (5.42) and (5.43) still hold when \((0,0,0)\) is replaced by \((d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\). Therefore, using straightforward power counting and applying Young's inequality as shown in (5.29), we may conclude that when \(k+\ell\geq 3\), there exist constants \(K=K(p)>0\) and \(\delta=\delta(\epsilon_{a},\epsilon_{b},\epsilon_{d})>0\) such that
\[|(J_{1})_{ij,k\ell}|\lesssim\frac{\mathbf{1}_{\psi_{ij}=0}}{N^{2}} \Big{(}(\log N)^{-K}\mathbb{E}_{\Psi}\Big{[}F_{2p}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\Big{]}+N^{-\delta p}\Big{)}\]
\[\quad\quad\quad\quad\quad+\frac{\mathbf{1}_{\psi_{ij}=0}(1-\mathbf{1}_{i\in\mathcal{T}_{r},j\in\mathcal{T}_{c}})}{N^{2-\epsilon_{d}}}\Big{(}(\log N)^{-K}\mathbb{E}_{\Psi}\Big{[}F_{2p}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij})\Big{]}+N^{-\delta p}\Big{)}. \tag{5.44}\]
**Case 2:**\(k+\ell=1\). Recall from (5.39) that
\[(\mathsf{T}_{1})_{ij,k\ell,nq} =\sum_{r=0}^{s_{4}}\sum_{s=0}^{s_{5}}\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{k+m}\tilde{d}_{ij}^{\ell-m}(\chi_{ij}b_{ij})^{q+r+s}\mathcal{E}_{ij}}{s!r!q!m!(\ell-m)!}\Big{]}\]
\[\quad\quad\times\mathbb{E}_{\Psi}\Big{[}g_{(ij)}^{(k,q)}(0,0)\mathcal{C}_{n,m}^{0,(r)}F_{2p-1-n}^{(0,0,s)}(0,0,0)\Big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-\epsilon_{b}p}).\]
**Case 2.1:**\(q+r+s\) is odd. In this case, we can directly compute that
\[\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{k+m}\tilde{d}_{ij}^{\ell-m}(\chi_{ij}b_{ij})^{q+r+s}\mathcal{E}_{ij}}{s!r!q!m!(\ell-m)!}\Big{]}=\mathbb{E}_{\Psi}\Big{[}\frac{\gamma((1-\chi_{ij})a_{ij}^{2}-tw_{ij}^{2})(\chi_{ij}b_{ij})^{q+r+s}}{s!r!q!}\Big{]}=0.\]
Thus, we have \((\mathsf{T}_{1})_{ij,k\ell,nq}=\mathcal{O}(N^{-\epsilon_{b}p})\) in this case. **Case 2.2:**\(q+r+s\geq 0\) is even.
Using (5.17) and the simple facts that \(\chi_{ij}(1-\chi_{ij})=0\) and \(\mathbb{E}(b_{ij}^{2})\lesssim N^{-1}\), we have \[\sum_{m=0}^{\ell}\mathbb{E}_{\Psi}\Big{[}\frac{d_{ij}^{k+m}\tilde{d }_{ij}^{\ell-m}(\chi_{ij}b_{ij})^{q+r+s}\mathcal{E}_{ij}}{s!r!q!m!(\ell-m)!} \Big{]} =\frac{-\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}}\mathbb{E}_{\Psi}\Big{[} \frac{d_{ij}w_{ij}(\chi_{ij}b_{ij})^{q+r+s}}{s!r!q!}\Big{]}\] \[=\frac{t}{N^{2}}\Big{(}\mathcal{O}\Big{(}\frac{\mathbf{1}_{q+r+s \geq 2}}{N^{(q+r+s-2)\epsilon_{b}}}\Big{)}+\mathcal{O}\Big{(}\frac{\mathbf{1} _{q+r+s=0}}{N^{\epsilon_{b}}}\Big{)}\Big{)}.\] Further using (5.35) and (5.36), we can obtain that \[(\mathsf{T}_{1})_{ij,k\ell,nq} =\sum_{r=0}^{s_{4}}\sum_{s=0}^{s_{5}}\frac{t}{N^{2}}\Big{(}\mathcal{ O}\Big{(}\frac{\mathbf{1}_{q+r+s\geq 2}}{N^{(q+r+s-2)\epsilon_{b}}}\Big{)}+ \mathcal{O}\Big{(}\frac{\mathbf{1}_{q+r+s=0}}{N^{\epsilon_{b}}}\Big{)}\Big{)}\] \[\quad\times\mathbb{E}_{\Psi}\Big{[}\mathcal{O}_{\prec}\Big{(} \frac{1}{(N\eta\mathbf{1}_{q+r\geq 1}+\mathbf{1}_{q+r=0})t^{3(q+r)}t_{ij}^{3}} \Big{)}F_{2p-1-n}^{(0,0,s)}(0,0,0)\Big{]}\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}( N^{-\epsilon_{b}p}).\] Observing that the above equation has a similar form to (5.40), we may proceed in a similar manner as in Case 1 to estimate \((\mathsf{T}_{1})_{ij,k\ell,nq}\). We will omit the repetitive details for brevity. Consequently, we can conclude that, by possibly adjusting the constants, (5.44) also holds when \(k+\ell=1\). Combining Case 1, Case 2, and the estimates for \((J_{2})_{ij}\)'s, we arrive at \[\sum_{i,j}|(I)_{ij}| \lesssim\frac{\mathbf{1}_{\psi_{ij}=1}}{N^{1-\epsilon_{\alpha}}} \sum_{i,j}\Big{(}(\log N)^{\frac{2p}{1-2p}}\mathbb{E}_{\Psi}\Big{[}F_{2p}(e_{ ij},\tilde{e}_{ij},c_{ij})\Big{]}+N^{-\epsilon_{\alpha}p/2}\Big{)}\] \[\quad+\sum_{i,j}\frac{\mathbf{1}_{\psi_{ij}=0}(1-\mathbf{1}_{i \in\mathcal{T}_{i},j\in\mathcal{T}_{c}})}{N^{2-\epsilon_{d}}}\Big{(}(\log N)^ {-K}\mathbb{E}_{\Psi}\Big{[}F_{2p}(d_{ij},\tilde{d}_{ij},\chi_{ij}b_{ij}) \Big{]}+N^{-\delta p}\Big{)}\] \[\lesssim(\log N)^{-(K\wedge\frac{2p}{2p-1})}\mathbb{E}_{\Psi} \Big{[}\big{|}N\eta\big{(}\mathrm{Im}\,m^{\gamma}(z)-\mathrm{Im}\,\tilde{m}^{ 0}(z)\big{)}\big{|}^{2p}\Big{]}+N^{-\tilde{\delta}p},\] where \(\tilde{\delta}=\tilde{\delta}(\epsilon_{a},\epsilon_{b},\epsilon_{d})>0\). Therefore, for any \(0\leq\gamma\leq 1\), \[\mathbb{E}_{\Psi}\Big{(}\big{|}N\eta\big{(}\mathrm{Im}\,m^{ \gamma}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\Big{)}- \mathbb{E}_{\Psi}\Big{(}\big{|}N\eta\big{(}\mathrm{Im}\,m^{0}(z_{t})-\mathrm{ Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\Big{)}\] \[=\int_{0}^{\gamma}\frac{\partial\mathbb{E}\Big{(}\big{|}N\eta \big{(}\mathrm{Im}\,m^{\gamma^{\prime}}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t })\big{)}\big{|}^{2p}\Big{)}}{\partial\gamma^{\prime}}\mathrm{d}\gamma^{ \prime}. \tag{5.45}\] Taking supremum over \(\gamma\), and using the estimates above, we have \[\sup_{0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\Big{(}\big{|}N\eta \big{(}\mathrm{Im}\,m^{\gamma}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)} \big{|}^{2p}\Big{)}-\mathbb{E}_{\Psi}\Big{(}\big{|}N\eta\big{(}\mathrm{Im}\,m^ {0}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\Big{)}\] \[\lesssim(\log N)^{-(K\wedge\frac{2p}{2p-1})}\sup_{0\leq\gamma\leq 1 }\mathbb{E}_{\Psi}\Big{[}\big{|}N\eta\big{(}\mathrm{Im}\,m^{\gamma}(z_{t})- \mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\Big{]}+N^{-\tilde{\delta }p}. \tag{5.46}\] The claim now follows by rearranging the terms. 
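For clarity, this rearrangement can be spelled out as follows. Abbreviating \(S:=\sup_{0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\big{(}\big{|}N\eta\big{(}\mathrm{Im}\,m^{\gamma}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\big{)}\) and \(S_{0}:=\mathbb{E}_{\Psi}\big{(}\big{|}N\eta\big{(}\mathrm{Im}\,m^{0}(z_{t})-\mathrm{Im}\,\tilde{m}^{0}(z_{t})\big{)}\big{|}^{2p}\big{)}\), the estimate (5.46) reads
\[S-S_{0}\leq C(\log N)^{-(K\wedge\frac{2p}{2p-1})}S+CN^{-\tilde{\delta}p},\]
and since the prefactor of \(S\) on the right-hand side is \(o(1)\), it can be absorbed into the left-hand side, yielding \(S\lesssim S_{0}+N^{-\tilde{\delta}p}\).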
### Proof of Theorem 4.5 The proof of Theorem 4.5 is essentially the same as that of Theorem 4.3. We outline the proof here; the detailed proof can be found in Appendix B.3. We use the same notation as in the proof of Theorem 4.3 and further define \(h_{\gamma,(ij)}(\lambda,\beta):=\eta_{0}\sum_{a}f_{\gamma,(aa),(ij)}(\lambda,\beta)\) and \(\mathsf{H}_{(ij)}(\lambda,\beta):=F^{\prime}\big{(}h_{\gamma,(ij)}(\lambda,\beta)\big{)}g_{(ij)}(\lambda,\beta)\). Observe that
\[\frac{\partial\mathbb{E}_{\Psi}\big{(}F(N\eta_{0}\mathrm{Im}\,m^{\gamma}(z_{t}))\big{)}}{\partial\gamma}=-2\Big{(}\sum_{i,j}(I_{1})_{ij}-(I_{2})_{ij}\Big{)},\]
where \((I_{1})_{ij}=\mathbb{E}_{\Psi}\big{[}\mathsf{A}_{ij}\mathsf{H}_{(ij)}([Y^{\gamma}]_{ij},X_{ij})\big{]}\) and \((I_{2})_{ij}=\gamma(1-\gamma^{2})^{-1/2}t^{1/2}\mathbb{E}_{\Psi}\big{[}w_{ij}\mathsf{H}_{(ij)}([Y^{\gamma}]_{ij},X_{ij})\big{]}\). We estimate them by considering the cases \(\psi_{ij}=1\) and \(\psi_{ij}=0\) separately. For \((I_{2})_{ij}\), in both cases, we can estimate it by Gaussian integration by parts, which leads to
\[(I_{2})_{ij}=\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}N}\Big{(}\mathbb{E}_{\Psi}\big{[}\partial_{w_{ij}}\big{\{}\mathsf{H}_{(ij)}(d_{ij},\chi_{ij}b_{ij})\big{\}}\big{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathbb{E}_{\Psi}\big{[}\partial_{w_{ij}}\big{\{}\mathsf{H}_{(ij)}(e_{ij},c_{ij})\big{\}}\big{]}\cdot\mathbf{1}_{\psi_{ij}=1}\Big{)}.\]
The term involving \(\mathbf{1}_{\psi_{ij}=1}\) can be estimated directly by the fact that \(t^{1/2}N^{-1}\cdot\sum_{i,j}\mathbf{1}_{\psi_{ij}=1}\sim t^{1/2}N^{-1}\cdot N^{1-\epsilon_{\alpha}}=\mathfrak{o}(1)\). Therefore, by the definition of \(d_{ij}\), we have
\[(I_{2})_{ij}\approx\frac{\gamma t}{N}\mathbb{E}_{\Psi}\big{[}\partial_{d_{ij}}\big{\{}\mathsf{H}_{(ij)}(d_{ij},\chi_{ij}b_{ij})\big{\}}\big{]}\cdot\mathbf{1}_{\psi_{ij}=0}. \tag{5.47}\]
For \((I_{1})_{ij}\), we only need to consider the case \(\psi_{ij}=\chi_{ij}=0\) since \(\mathsf{A}_{ij}\mathbf{1}_{\psi_{ij}=1\text{ or }\chi_{ij}=1}=0\). Using Taylor expansion and the law of total expectation gives
\[(I_{1})_{ij}\approx\sum_{k}\frac{1}{k!}\mathbb{E}_{\Psi}[a_{ij}d_{ij}^{k}|\chi_{ij}=0]\cdot\mathbb{E}_{\Psi}\big{[}\partial_{d_{ij}}^{k}\big{\{}\mathsf{H}_{(ij)}(d_{ij},\chi_{ij}b_{ij})\big{\}}\,\big{|}\,\chi_{ij}=0\big{]}\cdot\mathbb{P}(\chi_{ij}=0)\cdot\mathbf{1}_{\psi_{ij}=0}.\]
For even values of \(k\), it holds that \(\mathbb{E}_{\Psi}[a_{ij}d_{ij}^{k}|\chi_{ij}=0]=0\). In the case where \(k\geq 3\), we have \(\mathbb{E}_{\Psi}[a_{ij}d_{ij}^{k}|\chi_{ij}=0]\sim N^{-2-\varepsilon}\) for some small \(\varepsilon>0\), effectively compensating for the size of the summation \(\sum_{i,j}\). Consequently, we arrive at
\[(I_{1})_{ij}\approx\mathbb{E}_{\Psi}[\gamma a_{ij}^{2}]\mathbb{P}(\chi_{ij}=0)\cdot\mathbb{E}_{\Psi}\big{[}\partial_{d_{ij}}\big{\{}\mathsf{H}_{(ij)}(d_{ij},\chi_{ij}b_{ij})\big{\}}\,\big{|}\,\chi_{ij}=0\big{]}\cdot\mathbf{1}_{\psi_{ij}=0}. \tag{5.48}\]
In view of (5.47) and (5.48), we can conclude the proof by leveraging the moment matching (5.17) and exploiting the smallness of \(|\mathbb{E}_{\Psi}\big{[}\partial_{d_{ij}}\big{\{}\mathsf{H}_{(ij)}(d_{ij},\chi_{ij}b_{ij})\big{\}}\big{]}-\mathbb{E}_{\Psi}\big{[}\partial_{d_{ij}}\big{\{}\mathsf{H}_{(ij)}(d_{ij},\chi_{ij}b_{ij})\big{\}}|\chi_{ij}=0\big{]}|\).
## Acknowledgments The authors would like to thank Fan Yang for helpful discussion.
## Appendix A Remaining proofs for the Gaussian divisible model ### Proof of Lemma 2.11 Consider \[z=(\lambda_{-}^{\mathsf{mp}}+E)+\mathrm{i}\eta,\quad|E|\leq N^{-\varepsilon_{ 1}},\quad N^{-2/3-\varepsilon_{2}}\leq\eta\leq\varepsilon_{3}.\] (A.1) Recall that \[V_{t}=\sqrt{t}W+X,\] where \(t=N\mathbb{E}|\mathsf{A}_{ij}|^{2}\). By the eigenvalue rigidity (the left edge analog of [23, Theorem 2.13]), \[|\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{-,t}|\prec N^{-2/3}.\] As an analog of Lemma 2.8, \[|\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{-}^{\mathsf{mp}}|\prec N^{-2 \epsilon_{b}}.\] Thus, \[|\lambda_{-}^{\mathsf{mp}}-\lambda_{-,t}|\prec N^{-2/3}+N^{-2\epsilon_{b}} \lesssim N^{-2\varepsilon_{1}}.\] We write \[z=\{\lambda_{-,t}+(\lambda_{-}^{\mathsf{mp}}-\lambda_{-,t})+E\}+\mathrm{i} \eta=:(\lambda_{-,t}+E^{\prime})+\mathrm{i}\eta,\] where \(E^{\prime}\coloneqq E+(\lambda_{-}^{\mathsf{mp}}-\lambda_{-,t})\). Then, with high probability, there exists \(\kappa\in\mathbb{R}\) such that \[z=(\lambda_{-,t}+\kappa)+\mathrm{i}\eta,\quad|\kappa|\leq 2N^{-\varepsilon_{1}}, \quad N^{-2/3-\varepsilon_{2}}\leq\eta\leq\varepsilon_{3}.\] (A.2) Then, the desired result directly follows from the lemma below. Define \(b_{t}\equiv b_{t}(z)\coloneqq 1+c_{N}tm_{t}(z)\). Then we have \(\zeta_{t}(z)\coloneqq zb_{t}^{2}-tb_{t}(1-c_{N})\). **Lemma A.1**.: _Let \(z\) as in (A.2). There exist constants \(c,C>0\) such that the following holds:_ 1. _For_ \(|\kappa|+\eta\leq ct^{2}(\log N)^{-2C}\)_,_ \[\lambda_{M}(XX^{T})-\operatorname{Re}\zeta_{t}(z)\geq ct^{2},\quad\operatorname{ Im}\zeta_{t}(z)\geq ctN^{-2/3-\varepsilon_{2}}.\] 2. _For_ \(|\kappa|+\eta\geq ct^{2}(\log N)^{-2C}\)_,_ \[\operatorname{Im}\zeta_{t}(z)\geq ct^{2}(\log N)^{-C}.\] Proof.: This lemma is essentially a byproduct of Theorem 2.9 through some elementary calculations. 
Comparing \(\zeta_{t}(\lambda_{-,t})\) and \(\zeta_{t}(z)\), it boils down to the size of \(m_{t}(\lambda_{-,t})-m_{t}(z).\) We shall rely on the square root behavior of \(\rho_{t}.\) Case (1) \(|\kappa|\leq 2\eta.\) Notice that \[|m_{t}(\lambda_{-,t})-m_{t}(z)|\leq\int_{\lambda_{-,t}}^{\lambda_{+,t}}\frac{3 \eta}{|\lambda-\lambda_{-,t}||\lambda-z|}\rho_{t}(\lambda)d\lambda.\] By the square-root behavior of \(\rho_{t}\) near the left edge, \[\int_{\lambda_{-,t}}^{\lambda_{-,t}+6\eta}\frac{\eta}{|\lambda-\lambda_{-,t}|| \lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\int_{\lambda_{-,t}}^{\lambda_{-,t }+6\eta}\frac{\eta}{\eta\sqrt{\lambda-\lambda_{-,t}}}d\lambda\lesssim\sqrt{ \eta}.\] If \(\lambda\geq\lambda_{-,t}+6\eta,\) we have \(\lambda-\lambda_{-,t}-3\eta\geq(\lambda-\lambda_{-,t})/2.\) Thus, \[\int_{\lambda_{-,t}+6\eta}^{\lambda_{+,t}}\frac{\eta}{|\lambda-\lambda_{-,t}|| \lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\int_{\lambda_{-,t}+6\eta}^{ \lambda_{+,t}}\frac{\eta}{(\lambda-\lambda_{-,t})^{3/2}}d\lambda\lesssim\sqrt {\eta}.\] Case (2) \(\kappa>2\eta.\) We need to estimate \[\int_{\lambda_{-,t}}^{\lambda_{+,t}}\frac{\kappa}{|\lambda-\lambda_{-,t}|| \lambda-z|}\rho_{t}(\lambda)d\lambda.\] Due to the square-root decay, \[\int_{\lambda_{-,t}}^{\lambda_{-,t}+\eta}\frac{\kappa}{|\lambda-\lambda_{-,t} ||\lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\int_{\lambda_{-,t}}^{\lambda_{-,t}+\eta}\frac{\kappa}{\kappa\sqrt{\lambda-\lambda_{-,t}}}d\lambda\lesssim \sqrt{\eta}.\] We also observe \[\int_{\lambda_{-,t}+\eta}^{\lambda_{-,t}+\kappa-\eta}\frac{\kappa}{|\lambda- \lambda_{-,t}||\lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\int_{\eta}^{\kappa -\eta}\frac{\kappa}{\sqrt{x}(\kappa-x)}dx\lesssim\sqrt{\kappa}\log(\kappa/ \eta).\] If \(\lambda\in[\lambda_{-,t}+\kappa-\eta,\lambda_{-,t}+2\kappa],\) we have \(\lambda-\lambda_{-,t}\sim\kappa,\) which implies \[\int_{\lambda_{-,t}+\kappa-\eta}^{\lambda_{-,t}+2\kappa}\frac{\kappa}{| \lambda-\lambda_{-,t}||\lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\int_{0}^{ \kappa}\frac{\sqrt{\kappa}}{\sqrt{x^{2}+\eta^{2}}}dx\lesssim\sqrt{\kappa}\log (\kappa/\eta).\] For \(\lambda\in[\lambda_{-,t}+2\kappa,\lambda_{+,t}],\) \[\int_{\lambda_{-,t}+2\kappa}^{\lambda_{+,t}}\frac{\kappa}{|\lambda-\lambda_{-,t}||\lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\sqrt{\kappa}\] Case (3) \(\kappa<-2\eta.\) By splitting \([\lambda_{-,t},\lambda_{+,t}]\) into \([\lambda_{-,t},\lambda_{-,t}+|\kappa|]\) and \([\lambda_{-,t}+|\kappa|,\lambda_{+,t}],\) we find that \[\int_{\lambda_{-,t}}^{\lambda_{+,t}}\frac{\kappa}{|\lambda-\lambda_{-,t}|| \lambda-z|}\rho_{t}(\lambda)d\lambda\lesssim\sqrt{|\kappa|}.\] Note \(|b_{t}(\lambda_{-,t})|=O(1)=|b_{t}(z)|\) due to the fact that \(|m_{t}(u)|\lesssim(t|u|)^{-1/2}\). Thus, for \(|\kappa|+\eta\leq(\log N)^{-C}t^{2},\) \[|\zeta_{t}(z)-\zeta_{t}(\lambda_{-,t})|\ll t^{2}.\] By Lemma 2.8 and Lemma 3.2, \[(1-t)\lambda_{-}^{\text{mp}}-\text{Re}\,\zeta_{t}(z)=\big{(}(1-t)\lambda_{-}^{ \text{mp}}-\lambda_{M}(\mathcal{S}(X))\big{)}+(\lambda_{M}(\mathcal{S}(X))- \zeta_{t}(\lambda_{-,t}))+\text{Re}\,[\zeta_{t}(\lambda_{-,t}))-\zeta_{t}(z)] \sim t^{2}.\] Next, we consider the imaginary part of \(\zeta_{t}(z).\) Setting \[\Phi(\kappa,\eta)=\begin{cases}\sqrt{\kappa+\eta},&\kappa\geq 0,\\ \frac{\eta}{\sqrt{|\kappa|+\eta}},&\kappa<0,\end{cases}\] we have \(\text{Im}\,\zeta_{t}(z)\sim\eta+t\Phi(\kappa,\eta),\) which gives the desired estimates on the imaginary part of \(\zeta_{t}(z).\) ### Proof of Proposition 2.12 We estimate the size of \(G_{ij}(X,\zeta)\) only. 
We can bound \(G_{ij}(X^{\top},\zeta)\) in a similar way. Define \(H\coloneqq X/\sqrt{1-t}\) and denote \(\omega\coloneqq\zeta/(1-t)\). It is enough to find a constant \(c=c(\epsilon_{a},\epsilon_{\alpha},\epsilon_{b})\) such that
\[|G_{ij}(H,\omega)-\delta_{ij}\mathsf{m_{mp}}(\omega)|\prec N^{-c}\mathbf{1}_{i,j\in\mathcal{T}_{r}}+t^{-2}(1-\mathbf{1}_{i,j\in\mathcal{T}_{r}}).\]
This can be proved by a minor modification of [56, Section 6]. In light of Lemma 2.8, the following two lemmas are straightforward; in particular, the rigidity estimate, Lemma 2.8, yields Lemma A.3 below. **Lemma A.2** (Crude bound using the imaginary part).: _Consider \(\omega=E+\mathrm{i}\eta\in\mathbb{C}_{+}\). If \(\eta>C\),_ \[|G_{ij}(H,\omega)|\leq C^{-1}.\] **Lemma A.3** (Crude bound on the domain \(\mathsf{D}_{\zeta}\)).: _Let \(\mathsf{D}_{\zeta}=\mathsf{D}_{\zeta}(c_{0},C_{0})\) be as in Eq. (2.9). Let \(\zeta\in\mathsf{D}_{\zeta}\). Denote \(\omega=\zeta/(1-t)\). Then with high probability,_ \[|G_{ij}(H,\omega)|\lesssim(\log N)^{C_{0}}t^{-2}.\] Let us write \(H=(h_{ij})\). By the Schur complement formula,
\[G_{ii}(H,\omega)=-\frac{1}{\omega+\frac{\omega}{N}\sum_{k=1}^{N}G_{kk}((H^{(i)})^{\top},\omega)+Z_{i}}\] (A.3)
where we denote by \(H^{(i)}\) the matrix obtained from \(H\) by removing the \(i\)-th row, and
\[Z_{i}\coloneqq\omega\sum_{1\leq k,l\leq N}h_{ik}h_{il}G_{kl}((H^{(i)})^{\top},\omega)-\frac{\omega}{N}\sum_{k=1}^{N}G_{kk}((H^{(i)})^{\top},\omega).\]
We define \(\Lambda_{d}(\omega)\), \(\Lambda_{o}(\omega)\) and \(\Lambda(\omega)\) by
\[\Lambda_{d}(\omega)=\max_{i\in\mathcal{T}_{r}}|G_{ii}(H,\omega)-\mathsf{m_{mp}}(\omega)|,\ \ \Lambda_{o}(\omega)=\max_{\begin{subarray}{c}i\neq j\\ i,j\in\mathcal{T}_{r}\end{subarray}}|G_{ij}(H,\omega)|,\ \ \Lambda(\omega)=|m_{H}(\omega)-\mathsf{m_{mp}}(\omega)|.\]
For \(\omega=E+\mathrm{i}\eta\), we define
\[\Phi\equiv\Phi(\omega)\coloneqq\sqrt{\frac{\operatorname{Im}\mathsf{m_{mp}}(\omega)+\Lambda(\omega)}{N\eta}}+t^{-2}N^{-\epsilon_{\alpha}/2}+t^{-2}N^{-\epsilon_{b}}.\]
Define the events \(\Omega(\omega,K)\), \(\mathbf{B}(\omega)\) and \(\Gamma(\omega,K)\) for \(K>0\) by
\[\Omega(\omega,K)\coloneqq\Big{\{}\max\Big{(}\Lambda_{o}(\omega),\max_{i\in\mathcal{T}_{r}}|G_{ii}(H,\omega)-m_{H}(\omega)|,\max_{i\in\mathcal{T}_{r}}|Z_{i}(\omega)|\Big{)}\geq K\Phi\Big{\}},\]
\[\mathbf{B}(\omega)\coloneqq\{\Lambda_{o}(\omega)+\Lambda_{d}(\omega)>(\log N)^{-1}\},\quad\Gamma(\omega,K)\coloneqq\Omega^{c}(\omega,K)\cup\mathbf{B}(\omega).\]
We also introduce the logarithmic factor \(\varphi\equiv\varphi_{N}\coloneqq(\log N)^{\log\log N}\). **Lemma A.4**.: _Suppose \(\Psi\) is good. Recall \(\omega\equiv\omega(\zeta)=\zeta/(1-t)\). There exists a constant \(C>0\) such that the event_ \[\bigcap_{\zeta\in\mathsf{D}_{\zeta}}\Gamma(\omega,\varphi^{C})\] _holds with high probability._ Proof.: By a standard lattice argument, it is enough to show that \(\Gamma(\omega,\varphi^{C})\) holds with high probability for any \(\omega=\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\). Fix \(\omega=\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\).
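Here the lattice argument is the usual one: since
\[|\partial_{\omega}G_{ij}(H,\omega)|=\big{|}\big{[}\big{(}G(H,\omega)\big{)}^{2}\big{]}_{ij}\big{|}\leq\|G(H,\omega)\|^{2}\leq\eta^{-2}\]
is at most polynomially large on \(\mathsf{D}_{\zeta}\), it suffices to establish the estimate, via a union bound, on a polynomially fine lattice of \(\mathsf{D}_{\zeta}\), and then to extend it to all of \(\mathsf{D}_{\zeta}\) by this Lipschitz continuity.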
We define
\[\Omega_{o}(\omega,K) \coloneqq\big{\{}\Lambda_{o}(\omega)\geq K\Phi(\omega)\big{\}},\]
\[\Omega_{d}(\omega,K) \coloneqq\Big{\{}\max_{i\in\mathcal{T}_{r}}|G_{ii}(H,\omega)-m_{H}(\omega)|\geq K\Phi(\omega)\Big{\}},\]
\[\Omega_{Z}(\omega,K) \coloneqq\Big{\{}\max_{i\in\mathcal{T}_{r}}|Z_{i}|\geq K\Phi(\omega)\Big{\}}.\]
Since \(\Omega=\Omega_{o}\cup\Omega_{d}\cup\Omega_{Z}\), it is sufficient to show that \(\Omega_{o}^{c}\cup\mathbf{B}\), \(\Omega_{d}^{c}\cup\mathbf{B}\) and \(\Omega_{Z}^{c}\cup\mathbf{B}\) each hold with high probability. (1) Consider the event \(\Omega_{o}^{c}\cup\mathbf{B}\). Fix \(i\neq j\) with \(i,j\in\mathcal{T}_{r}\). On the event \(\mathbf{B}^{c}\), we have \(|G_{ii}(H,\omega)|\sim 1\). Then, by the resolvent identity,
\[G_{jj}(H^{(i)},\omega)=G_{jj}(H,\omega)-\frac{G_{ji}(H,\omega)G_{ij}(H,\omega)}{G_{ii}(H,\omega)},\] (A.4)
it follows that \(|G_{jj}(H^{(i)},\omega)|\sim 1\) on \(\mathbf{B}^{c}\). Thus, we obtain
\[\Lambda_{o}(\omega)\lesssim\max_{\begin{subarray}{c}i\neq j\\ i,j\in\mathcal{T}_{r}\end{subarray}}\left|\sum_{1\leq k,l\leq N}h_{ik}h_{jl}G_{kl}((H^{(ij)})^{\top},\omega)\right|,\]
where we denote by \(H^{(ij)}\) the matrix obtained from \(H\) by removing the \(i\)-th and \(j\)-th rows. Since \(i,j\in\mathcal{T}_{r}\), applying the large deviation estimate [2, Corollary 25], the following estimate holds with high probability:
\[\bigg{|}\sum_{1\leq k,l\leq N}h_{ik}h_{jl}G_{kl}((H^{(ij)})^{\top},\omega)\bigg{|}\leq\varphi^{C}\left(N^{-\epsilon_{b}}\max_{k,l}|G_{kl}((H^{(ij)})^{\top},\omega)|+\frac{1}{N}\bigg{(}\sum_{k,l}|G_{kl}((H^{(ij)})^{\top},\omega)|^{2}\bigg{)}^{1/2}\right).\]
Note that
\[\sum_{k,l}|G_{kl}((H^{(ij)})^{\top},\omega)|^{2}=\frac{\sum_{k}\operatorname{Im}G_{kk}((H^{(ij)})^{\top},\omega)}{\eta},\] (A.5)
and
\[\sum_{k}G_{kk}((H^{(ij)})^{\top},\omega)-\sum_{\ell}G_{\ell\ell}(H^{(ij)},\omega)=\frac{O(N)}{\omega}.\] (A.6)
Using (A.4), (A.5) and (A.6), together with Lemma A.3, we conclude that on the event \(\mathbf{B}^{c}\), for some constant \(C>0\) large enough,
\[\Lambda_{o}(\omega)\leq\varphi^{C}\left(t^{-2}N^{-\epsilon_{b}}+\sqrt{\frac{\operatorname{Im}\mathsf{m}_{\mathsf{mp}}+\Lambda+\Lambda_{o}^{2}+t^{-4}N^{-\epsilon_{\alpha}}}{N\eta}+\frac{1}{N}}\right)\]
with high probability. Hence, \(\Omega_{o}^{c}\cup\mathbf{B}\) holds with high probability. (2) We claim that \(\Omega_{Z}^{c}\cup\mathbf{B}\) holds with high probability. In fact, the claim directly follows from the large deviation estimate [2, Corollary 25], repeating the same argument we used above; on the event \(\mathbf{B}^{c}\), for \(i\in\mathcal{T}_{r}\), we have \(|Z_{i}|\leq\varphi^{C}\Phi\) with high probability for some constant \(C>0\). (3) We shall prove \(\Omega_{d}^{c}\cup\mathbf{B}\) holds with high probability. For \(i\in\mathcal{T}_{r}\),
\[|G_{ii}(H,\omega)-m_{H}(\omega)|\leq\max_{j\in\mathcal{T}_{r}}|G_{ii}(H,\omega)-G_{jj}(H,\omega)|+\varphi^{C}t^{-2}N^{-\epsilon_{\alpha}},\]
where we use Lemma A.3 to bound \(G_{jj}\) with \(j\notin\mathcal{T}_{r}\).
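In passing, we record the elementary identity behind (A.6): for any \(A\in\mathbb{R}^{m\times n}\), the matrices \(AA^{\top}\) and \(A^{\top}A\) have the same nonzero eigenvalues, so that
\[\operatorname{Tr}\big{(}(A^{\top}A-\omega)^{-1}\big{)}-\operatorname{Tr}\big{(}(AA^{\top}-\omega)^{-1}\big{)}=-\frac{n-m}{\omega},\]
the difference coming entirely from the extra zero eigenvalues; (A.6) corresponds to \(A=H^{(ij)}\) with \(m=M-2\) and \(n=N\).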
For \(i,j\in\mathcal{T}_{r}\) with \(i\neq j\), on the event \(\mathbf{B}^{c}\), with high probability, we can find that \[|G_{ii}(H,\omega)-G_{jj}(H,\omega)| \leq\bigg{|}\frac{1}{\omega+\frac{\omega}{N}\sum_{k=1}^{N}G_{kk} ((H^{(i)})^{\top},\omega)+Z_{i}}-\frac{1}{\omega+\frac{\omega}{N}\sum_{k=1}^{N} G_{kk}((H^{(j)})^{\top},\omega)+Z_{j}}\bigg{|}\] \[\lesssim\max_{i\in\mathcal{T}_{r}}|Z_{i}|+\Lambda_{o}^{2}+t^{-4}N ^{-\epsilon_{\alpha}}\] where we use \[\sum_{k}G_{kk}((H^{(i)})^{\top},\omega)-\sum_{\ell}G_{\ell\ell}(H^{(i)},\omega) =\frac{M-N+1}{\omega}\] (A.7) and the estimates we have shown above. The desired result follows. **Corollary A.5**.: _Suppose \(\Psi\) is good. Let \(C^{\prime}>0\) be a constant. There exist a constant \(C>0\) such that the event \(\Omega^{c}(E+\mathrm{i}\eta,\varphi^{C})\) holds with high probability._ Proof.: Recall the argument we used in the proof of the previous lemma. Using the large deviation estimate [2, Corollary 25] with Lemma A.2, it is straightforward that \(\Omega_{o}^{c}\) and \(\Omega_{Z}^{c}\) hold with high probability. For \(\Omega_{d}^{c}\), the desired result follows from the consequence of Cauchy's interlacing theorem, that is, \[\frac{1}{N}\sum_{k=1}^{N}G_{kk}((H^{(i)})^{\top},\omega)-\frac{1}{N}\sum_{k=1}^ {N}G_{kk}((H^{(j)})^{\top},\omega)\lesssim\frac{1}{N\eta}.\] Let us introduce the deviance function \(D(u(\omega),\omega)\) by setting \[D(u(\omega),\omega)\coloneqq\left(\frac{1}{u(\omega)}+c_{N}\omega u(\omega) \right)-\left(\frac{1}{\mathsf{m}_{\mathsf{mp}}(\omega)}+c_{N}\omega\mathsf{ m}_{\mathsf{mp}}(\omega)\right).\] **Lemma A.6**.: _On the event \(\Gamma(\omega,\varphi^{C})\),_ \[|D(m_{H}(\omega),\omega)|\leq O(\varphi^{2C}\Phi^{2})+\infty\mathds{1}_{ \boldsymbol{B}(\omega)}.\] Proof.: Recall that \((\mathsf{m}_{\mathsf{mp}})^{-1}(\omega)=-\omega+(1-c_{N})-\omega c_{N} \mathsf{m}_{\mathsf{mp}}\). Using (A.3), (A.4) and (A.7), on the event \(\Omega^{c}\cap\boldsymbol{B}^{c}\), we have \[G_{ii}^{-1}(H,\omega)=(\mathsf{m}_{\mathsf{mp}})^{-1}(\omega)+\omega c_{N}( \mathsf{m}_{\mathsf{mp}}(\omega)-m_{H}(\omega))-Z_{i}+O(\varphi^{2C}\Phi^{2}+ t^{-4}N^{-\epsilon_{\alpha}}+N^{-1}),\] so it follows that \[m_{H}^{-1}(\omega)-G_{ii}^{-1}(H,\omega)=D(m_{H}(\omega),\omega)+Z_{i}+O( \varphi^{2C}\Phi^{2}+t^{-4}N^{-\epsilon_{\alpha}}+N^{-1}).\] Averaging over \(i\in\mathcal{T}_{r}\) yields \[\frac{1}{|\mathcal{T}_{r}|}\sum_{i\in\mathcal{T}_{r}}(m_{H}^{-1}(\omega)-G_{ ii}^{-1}(H,\omega))=D(m_{H}(\omega),\omega)+\frac{1}{|\mathcal{T}_{r}|}\sum_{i \in\mathcal{T}_{r}}Z_{i}+O(\varphi^{2C}\Phi^{2}+t^{-4}N^{-\epsilon_{\alpha}}+ N^{-1}).\] Since \(\sum_{i}G_{ii}(H,\omega)-m_{H}(\omega)=0\) and \[m_{H}^{-1}(\omega)-G_{ii}^{-1}(H,\omega)=\frac{G_{ii}(H,\omega)-m_{H}(\omega) }{m_{H}^{2}(\omega)}-\frac{\big{(}G_{ii}(H,\omega)-m_{H}(\omega)\big{)}^{2}}{ m_{H}^{3}(\omega)}+O\Big{(}\frac{\big{(}G_{ii}(H,\omega)-m_{H}(\omega)\big{)}^{3}}{ m_{H}^{4}(\omega)}\Big{)},\] we obtain that \(|D(m_{H}(\omega),\omega)|\leq O(\varphi^{2C}\Phi^{2})\) on the event \(\Omega^{c}\cap\boldsymbol{B}^{c}\). **Lemma A.7**.: _Recall \(\omega\equiv\omega(\zeta)=\zeta/(1-t)\) and write \(\omega=E+\mathrm{i}\eta\). Let \(C,C^{\prime}>0\) be constants. 
Consider an event \(A\) such that_ \[A\subset\bigcap_{\zeta\in\mathsf{D}_{\zeta}}\Gamma(\omega,\varphi^{C})\cap \bigcap_{\zeta\in\mathsf{D}_{\zeta},\eta=C^{\prime}}\boldsymbol{B}^{c}(\omega).\] _Suppose that in \(A\), for \(\omega=\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\),_ \[|D(m_{H}(\omega),\omega)|\leq\mathfrak{d}(\omega)+\infty\mathds{1}_{B(\omega)},\] _where \(\mathfrak{d}:\mathbb{C}\mapsto\mathbb{R}_{+}\) is a continuous function such that \(\mathfrak{d}(E+\mathrm{i}\eta)\) is decreasing in \(\eta\) and \(|\mathfrak{d}(z)|\leq(\log N)^{-8}\)._ _Then, for all \(\omega\equiv\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\), we have_ \[|m_{H}(\omega)-\mathsf{m}_{\mathsf{mp}}(\omega)|\lesssim\log N\frac{\mathfrak{d }(\zeta)}{\sqrt{|E-\lambda_{-}^{\mathsf{mp}}|+\eta+\mathfrak{d}(\zeta)}}\quad \text{ in }A,\] (A.8) _and_ \[A\subset\bigcap_{\zeta\in\mathsf{D}_{\zeta}}\boldsymbol{B}^{c}(\zeta).\] (A.9) Proof.: We follow the proof of [56, Lemma 6.12]. Denote \(\omega=\omega(\zeta)=E+\mathrm{i}\eta\) with \(\zeta\in\mathsf{D}_{\zeta}\). For each \(E\), we define \[I_{E}\coloneqq\{\eta:\Lambda_{o}(E+\mathrm{i}\eta^{\prime})+\Lambda_{d}(E+ \mathrm{i}\eta^{\prime})\leq(\log N)^{-1}\text{ for all }\eta^{\prime}\geq\eta\text{ such that }(1-t)\cdot(E+\mathrm{i}\eta^{\prime})\in \mathsf{D}_{\zeta}\}.\] Let \(m_{1}\) and \(m_{2}\) be two solutions of equation \(D(m(\omega),\omega)=\mathfrak{d}(\omega)\). On \(\mathbf{B}^{c}(\omega)\), by assumption, we have \[|D(m_{H}(\omega),\omega)|\leq\mathfrak{d}(\omega).\] Then, the estimate (A.8) immediately follows from the argument around [56, Eq. (6.45)-Eq. (6.46)]. Next, we will prove the second statement (A.9). Due to the case \(\eta=C^{\prime}\), we know \(I_{E}\neq\emptyset\) on \(A\). Let us argue by contradiction. Define \[\mathcal{D}_{E}=\{\eta:\omega=E+\mathrm{i}\eta,(1-t)\cdot\omega\in\mathsf{D} _{\zeta}\}.\] Assume \(I_{E}\neq\mathcal{D}_{E}\). Let \(\eta_{0}=\inf I_{E}\). For \(\omega_{0}=E+\mathrm{i}\eta_{0}\), we have \(\Lambda_{o}(\omega_{0})+\Lambda_{d}(\omega_{0})=(\log N)^{-1}\). It also follows \[\Lambda(\omega_{0})\leq\Big{|}\frac{1}{N}\sum_{i\in\mathcal{T}_{r }}\big{(}G_{ii}(H,\omega_{0})-\mathsf{m}_{\mathsf{mp}}(\omega_{0})\big{)} \Big{|}+\Big{|}\frac{1}{N}\sum_{i\notin\mathcal{T}_{r}}\big{(}G_{ii}(H,\omega_ {0})-\mathsf{m}_{\mathsf{mp}}(\omega_{0})\big{)}\Big{|}\\ \leq(\log N)^{-1}+\varphi^{C}t^{-2}N^{-\epsilon_{\alpha}}\lesssim (\log N)^{-1}.\] By the first statement we already proved, on the event \(A\), we obtain \[\Lambda(\omega_{0})\lesssim(\log N)^{-3}.\] Since \(\Lambda_{o}(\omega_{0})+\Lambda_{d}(\omega_{0})=(\log N)^{-1}\), we have \(A\subset\mathbf{B}^{c}(\omega_{0})\) and thus, by the assumption for \(A\), we conclude that \(\Lambda_{o}(\omega_{0})+\Lambda_{d}(\omega_{0})\ll(\log N)^{-1}\) on the event \(A\), which makes a contradiction. **Proposition A.8**.: _Recall \(\omega\equiv\omega(\zeta)=\zeta/(1-t)\) and write \(\omega=E+\mathrm{i}\eta\). 
There exist a constant \(C>0\) such that the following event holds with high probability:_ \[\bigcap_{\zeta\in\mathsf{D}_{\zeta}}\{\Lambda_{o}(\omega)+\Lambda_{d}(\omega) \leq\varphi^{C}(t^{-2}(N\eta)^{-1/2}+t^{-3}N^{-\epsilon_{\alpha}/2}+t^{-3}N^{- \epsilon_{b}})\}.\] Proof.: Consider the event \[A_{0}=\bigcap_{\zeta\in\mathsf{D}_{\zeta}}\Gamma(\omega,\varphi^{C}).\] Also we set (for some constant \(C^{\prime}>1\) and \(\omega=E+\mathrm{i}\eta\)) \[A=A_{0}\cap\bigcap_{\zeta\in\mathsf{D}_{\zeta},\eta=C^{\prime}}\mathbf{B}^{c}( \omega).\] By Lemma A.4 and Corollary A.5, the event \(A\) holds with high probability. Using Lemma A.3, we observe that for \(\omega=\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\), \[\Phi(\omega)\lesssim\varphi t^{-1}(N\eta)^{-1/2}+t^{-2}N^{-\epsilon_{\alpha}/2} +t^{-2}N^{-\epsilon_{b}}.\] Let us set \[\mathfrak{d}(\omega)=\varphi^{C}\big{(}t^{-1}(N\eta)^{-1/2}+t^{-2}N^{- \epsilon_{\alpha}/2}+t^{-2}N^{-\epsilon_{b}}\big{)}.\] On the event \(A\), for \(\omega=\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\), by Lemma A.6 and Lemma A.7, \[\Lambda(\omega)\lesssim\frac{\mathfrak{d}(\omega)}{\sqrt{|E-\lambda_{-}^{ \mathsf{mp}}|+\eta}}.\] Also, by Lemma A.7, \[A\subset\bigcap_{\zeta\in\mathsf{D}_{\zeta}}\mathbf{B}^{c}(\omega),\] which means the event \(A\) is contained in \(\Omega^{c}(\omega,\varphi^{C})\) for any \(\omega=\omega(\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\). The bound for \(\Lambda_{d}\) is given by \(\max_{k\in\mathcal{T}_{r}}|G_{kk}(H,\omega)-m_{H}|+\Lambda\). ### Proof of Theorem 2.10 Recall \(b_{t}=1+c_{N}tm_{t}\) and \(\zeta_{t}=\zeta_{t}(z)=zb_{t}^{2}-tb_{t}(1-c_{N})\). We also set \[\underline{m}_{t}=c_{N}m_{t}-\frac{1-c_{N}}{z},\qquad\underline{m}_{\mathsf{mp }}^{(t)}(\zeta)=c_{N}\mathsf{m}_{\mathsf{mp}}^{(t)}(\zeta)-\frac{1-c_{N}}{\zeta}.\] Let us state a left edge analog of [23, Theorem 2.7]. **Theorem A.9**.: _Suppose that the assumptions in Theorem 2.10 hold. Then,_ \[|G_{ij}(V_{t},z)-b_{t}G_{ij}(X,\zeta_{t}(z))|\prec t^{-3}\left(\sqrt{\frac{ \operatorname{\mathsf{Im}}m_{t}}{N\eta}}+\frac{1}{N\eta}\right)+\frac{t^{-7/ 2}}{N^{1/2}},\] _and_ \[|G_{ij}(V_{t}^{\top},z)-(1+t\underline{m}_{t})G_{ij}(X^{\top},\zeta_{t}(z))| \prec t^{-3}\left(\sqrt{\frac{\operatorname{\mathsf{Im}}m_{t}}{N\eta}}+ \frac{1}{N\eta}\right)+\frac{t^{-7/2}}{N^{1/2}},\] _uniformly in \(z\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\). In addition,_ \[|(G(V_{t},z)V_{t})_{ij}-(G(X,\zeta_{t}(z))X)_{ij}|\prec t^{-3}\left(\sqrt{ \frac{\operatorname{\mathsf{Im}}m_{t}}{N\eta}}+\frac{1}{N\eta}\right)+\frac{t ^{-7/2}}{N^{1/2}},\] _and_ \[|(V_{t}^{\top}G(V_{t},z))_{ij}-(X^{\top}G(X,\zeta_{t}(z)))_{ij}|\prec t^{-3} \left(\sqrt{\frac{\operatorname{\mathsf{Im}}m_{t}}{N\eta}}+\frac{1}{N\eta} \right)+\frac{t^{-7/2}}{N^{1/2}},\] _uniformly in \(z\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\)._ Proof.: Roughly speaking, the conclusion is a left edge analog of [23, Theorem 2.7]. The proof is nearly the same, and thus we only highlight some differences. We first record the notations from [23, Section B of Supplement]. Due to the rotationally invariant property of Gaussian matrix, we have \[V_{t}=X+\sqrt{t}W\stackrel{{ d}}{{=}}O_{1}\tilde{V}_{t}O_{2}^{ \top},\quad\tilde{V}_{t}:=\tilde{X}+\sqrt{t}W,\] (A.10) where \(\tilde{X}\) is a diagonal matrix with diagonal entries being \(\lambda_{i}(\mathcal{S}(X))^{1/2},i\in[M]\). Recall the notations in Lemma 5.2, and we briefly write \(\mathcal{R}(z)=\mathcal{R}(\tilde{V}_{t},z)\) in this proof. 
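Here \(O_{1}\in O(M)\) and \(O_{2}\in O(N)\) in (A.10) can be taken to be the orthogonal factors of a singular value decomposition \(X=O_{1}\tilde{X}O_{2}^{\top}\); since \(W\) is a Gaussian matrix with i.i.d. entries, \(O_{1}^{\top}WO_{2}\stackrel{{d}}{{=}}W\), and hence
\[V_{t}=O_{1}\big{(}\tilde{X}+\sqrt{t}\,O_{1}^{\top}WO_{2}\big{)}O_{2}^{\top}\stackrel{{d}}{{=}}O_{1}\big{(}\tilde{X}+\sqrt{t}\,W\big{)}O_{2}^{\top}=O_{1}\tilde{V}_{t}O_{2}^{\top}.\]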
By (A.10), to prove an entrywise local law for \(\mathcal{R}(V_{t},z)\), it suffices to prove an anisotropic local law for the resolvent \(\mathcal{R}(z)\). We further define the asymptotic limit of \(\mathcal{R}(z)\) as
\[\Pi^{x}(z):=\left[\begin{array}{cc}\frac{-(1+c_{N}tm_{t})}{z(1+c_{N}tm_{t})(1+t\underline{m}_{t})-XX^{\top}}&\frac{-z^{-1/2}}{z(1+c_{N}tm_{t})(1+t\underline{m}_{t})-XX^{\top}}\tilde{X}\\ \tilde{X}^{\top}\frac{-z^{-1/2}}{z(1+c_{N}tm_{t})(1+t\underline{m}_{t})-XX^{\top}}&\frac{-(1+t\underline{m}_{t})}{z(1+c_{N}tm_{t})(1+t\underline{m}_{t})-X^{\top}X}\end{array}\right].\]
We define the index sets
\[\mathcal{I}_{1}:=\{1,\cdots,M\},\quad\mathcal{I}_{2}:=\{M+1,\cdots,M+N\},\quad\mathcal{I}:=\mathcal{I}_{1}\cup\mathcal{I}_{2}.\]
In the sequel, we use Latin letters \(i,j\in\mathcal{I}_{1}\), Greek letters \(\mu,\nu\in\mathcal{I}_{2}\), and Fraktur letters \(\mathfrak{a},\mathfrak{b}\in\mathcal{I}\). For an \(\mathcal{I}\times\mathcal{I}\) matrix \(A\) and \(i,j\in\mathcal{I}_{1}\), we define the \(2\times 2\) minor as
\[A_{[ij]}:=\left(\begin{array}{cc}A_{ij}&A_{i\bar{j}}\\ A_{\bar{i}j}&A_{\bar{i}\bar{j}}\end{array}\right),\]
where \(\bar{i}:=i+M\in\mathcal{I}_{2}\). Moreover, for \(\mathfrak{a}\in\mathcal{I}\setminus\{i,\bar{i}\}\), we denote
\[A_{[i]\mathfrak{a}}=\left(\begin{array}{c}A_{i\mathfrak{a}}\\ A_{\bar{i}\mathfrak{a}}\end{array}\right),\quad A_{\mathfrak{a}[i]}=(A_{\mathfrak{a}i},A_{\mathfrak{a}\bar{i}}).\]
We define the error parameter \(\Psi(z)\) as
\[\Psi(z):=\sqrt{\frac{\operatorname{\mathsf{Im}}m_{t}}{N\eta}}+\frac{1}{N\eta}.\]
Instead of proving [23, Eq. (B.68) in Supplement], which aims at bounding \(u^{\top}(\Pi^{x}(z))^{-1}[R(z)-\Pi^{x}(z)](\Pi^{x}(z))^{-1}v\) for any deterministic unit vectors \(u,v\in\mathbb{R}^{M+N}\), we shall prove
\[|u^{\top}[\mathcal{R}(z)-\Pi^{x}(z)]v|\prec t^{-3}\Psi(z)+\frac{t^{-7/2}}{N^{1/2}}.\] (A.11)
We remark here that in [23], it is assumed that all \(\lambda_{i}(\mathcal{S}(X))\)'s are \(O(1)\). Under this assumption, adding \((\Pi^{x}(z))^{-1}\) is harmless. However, in our case, \(\lambda_{i}(\mathcal{S}(X))\) could diverge with \(N\), and the \((\Pi^{x}(z))^{-1}\) factor, which blows up together with large \(\lambda_{i}(\mathcal{S}(X))\), would complicate the proof of the anisotropic law. On the other hand, (A.11) is what we need anyway. Hence, we drop the \((\Pi^{x}(z))^{-1}\) factors and adapt the proof in [23] to our estimate (A.11). Without the \((\Pi^{x}(z))^{-1}\) factor, the \(\mathcal{R}(z)\) and \(\Pi^{x}(z)\) entries are well controlled, and the remaining proof is nearly the same as in [23]. We shall first prove an entrywise version of (A.11): for any \(\mathfrak{a},\mathfrak{b}\in\mathcal{I}\),
\[|[\mathcal{R}(z)-\Pi^{x}(z)]_{\mathfrak{a}\mathfrak{b}}|\prec t^{-3}\Psi(z)+\frac{t^{-7/2}}{N^{1/2}}.\] (A.12)
The derivation of (A.12) follows the same procedure as the proof of [23, Eq. (B.69) in Supplement]. This proof primarily relies on the Schur complement formula, large deviation estimates for quadratic forms of Gaussian vectors, and the fact that \(\min_{i}|\lambda_{i}(\mathcal{S}(X))-\zeta_{t}(z)|\gtrsim t^{2}\). Then, for general \(u,v\), analogous to [23, Eq.
(B.72) in Supplement], we have \[|u^{\top}[\mathcal{R}(z)-\Pi^{x}(z)]v| \prec t^{-3}\Psi(z)+\frac{t^{-7/2}}{N^{1/2}}+\Big{|}\sum_{i\neq j }u_{[i]}^{\top}\mathcal{R}_{[ij]}u_{[j]}\Big{|}\] \[\quad+\Big{|}\sum_{\mu\neq\nu\geq 2M+1}u_{\mu}^{\top} \mathcal{R}_{\mu\nu}u_{\nu}\Big{|}+2\Big{|}\sum_{i\in\mathcal{I}_{1},\mu\geq 2 M+1}u_{[i]}^{\top}\mathcal{R}_{[ij]}u_{\mu}\Big{|}.\] Therefore, it suffices to prove the following high moment bounds, for any \(a\in\mathbb{N}\), \[\mathbb{E}\Big{|}\sum_{i\neq j}u_{[i]}^{\top}\mathcal{R}_{[ij]}u _{[j]}\Big{|}^{2a}\prec\Big{(}t^{-3}\Psi(z)+\frac{t^{-7/2}}{N^{1/2}}\Big{)}^{2 a},\] \[\mathbb{E}\Big{|}\sum_{i\neq\nu\geq 2M+1}u_{\mu}^{\top} \mathcal{R}_{\mu\nu}u_{\nu}\Big{|}^{2a}\prec\Big{(}t^{-3}\Psi(z)+\frac{t^{-7/ 2}}{N^{1/2}}\Big{)}^{2a},\] \[\mathbb{E}\Big{|}\sum_{i\in\mathcal{I}_{1},\mu\geq 2M+1}u_{[i]}^{ \top}\mathcal{R}_{[i]\mu}u_{\mu}\Big{|}^{2a}\prec\Big{(}t^{-3}\Psi(z)+\frac{t^{- 7/2}}{N^{1/2}}\Big{)}^{2a}.\] The above estimates are proven using a polynomialization method outlined in [17, Section 5], with input from the entrywise estimates (A.12) and resolvent expansion (cf. [23, Lemma B.2 in Supplement]). We omit the details. _Remark 8_.: Actually, the estimates in Theorem A.9 hold uniformly in \(z\) such that \[\lambda_{-,t}-\vartheta^{-1}t^{2}\leq\operatorname{Re}z\leq\lambda_{-,t}+ \vartheta^{-1},\quad\operatorname{Im}z\cdot\Big{(}t+\big{(}|\operatorname{Re} z-\lambda_{-,t}|+\operatorname{Im}z\big{)}^{1/2}\Big{)}\geq N^{-1+\vartheta}, \quad\operatorname{Im}z\leq\vartheta^{-1},\] (A.13) for any \(\vartheta>0\). We can observe that every \(z\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\) satisfies (A.13) if \(\epsilon_{a},\varepsilon_{1},\varepsilon_{2}\) and \(\vartheta\) are sufficiently small. Also note that \(b_{t}=\mathcal{O}(1)\) and \(1+t\underline{m}_{t}=\mathcal{O}(1)\) in the domain \(\mathsf{D}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\). By Theorem A.9 and Lemma 2.11, it is enough to analyze \(G(X,\zeta)\) and \(G(X^{\top},\zeta)\) with \(\zeta\in\mathsf{D}_{\zeta}\) in order to get the desired result. This was be done in Proposition 2.12. Together with Proposition A.10 and Corollary A.11 below, we complete the proof of Theorem 2.10. **Proposition A.10**.: _Suppose that the assumptions in Proposition 2.12 hold. The following estimates hold with respect to the probability measure \(\mathbb{P}_{\Psi}\)._ 1. _If_ \(i\in\mathcal{T}_{r}\)_, we have_ \[|[G(X,\zeta)X]_{ij}|\prec N^{-\epsilon_{b}/2}.\] _ 2. _If_ \(j\in\mathcal{T}_{c}\)_, we have_ \[|[G(X,\zeta)X]_{ij}|\prec N^{-\epsilon_{b}/2}.\] 3. _Otherwise, we have the crude bound_ \[|[G(X,\zeta)X]_{ij}|\leq\|G(X,\zeta)X\|\lesssim t^{-2}.\] Proof.: Using Proposition 2.12, it follows from Proposition A.12 below. With the above bounds, we can further improve the bound of the off-diagonal Green function entries when \(i\) or \(j\) is typical index. **Corollary A.11**.: _Suppose that the assumptions in Proposition 2.12 hold. The following estimates hold with respect to the probability measure \(\mathbb{P}_{\Psi}\)._ 1. _If_ \(i\neq j\) _and_ \(i\in\mathcal{T}_{r}\) _(or_ \(j\in\mathcal{T}_{r}\)_), there exists a constant_ \(\delta=\delta(\epsilon_{a},\epsilon_{\alpha},\epsilon_{b})>0\) _such that_ \[|G_{ij}(X,\zeta)|\prec N^{-\delta}.\] 2. 
_If_ \(i\neq j\) _and_ \(i\in\mathcal{T}_{c}\) _(or_ \(j\in\mathcal{T}_{c}\)_), there exists a constant_ \(\delta=\delta(\epsilon_{a},\epsilon_{\alpha},\epsilon_{b})>0\) _such that_ \[|G_{ij}(X^{\top},\zeta)|\prec N^{-\delta}.\] Proof of Corollary A.11.: We shall give the proof only for the case \(i\neq j\) and \(i\in\mathcal{T}_{r}\). The other cases can be proved in the same way. Assume \(i\neq j\) and \(i\in\mathcal{T}_{r}\), and observe that \[|G_{ij}(X,\zeta)|=|G_{ii}(X,\zeta)|\cdot\left|\sum_{k,l}x_{ik}G_{kl}((X^{(i)})^{\top},\zeta)x_{jl}\right|,\] where we denote by \(X^{(i)}\) the matrix obtained from \(X\) by removing the \(i\)-th row. Note that \[\sum_{l}G_{kl}((X^{(i)})^{\top},\zeta)x_{jl}=[G((X^{(i)})^{\top},\zeta)(X^{(i)})^{\top}]_{kj}.\] Since \(i\in\mathcal{T}_{r}\), we apply the large deviation estimates in [2, Corollary 25] to bound \[\left|\sum_{k}x_{ik}[G((X^{(i)})^{\top},\zeta)(X^{(i)})^{\top}]_{kj}\right|,\] where we also use Proposition A.12 below to get a high probability bound for \(\|G((X^{(i)})^{\top},\zeta)(X^{(i)})^{\top}\|\). **Proposition A.12**.: _Let \(\zeta=E+\mathrm{i}\eta\in\mathbb{C}_{+}\)._ 1. _If_ \(i\in\mathcal{T}_{r}\)_, we have_ \[|[G(X,\zeta)X]_{ij}|\prec\left(N^{-\epsilon_{b}}\max_{k}|G_{kj}((X^{(i)})^{\top},\zeta)|+\left(\frac{\operatorname{Im}G_{jj}((X^{(i)})^{\top},\zeta)}{N\eta}\right)^{1/2}\right)\] \[\times\left(1+|\zeta|\cdot|G_{ii}(X,\zeta)|\cdot\left(N^{-\epsilon_{b}}\max_{k,l}|G_{kl}((X^{(i)})^{\top},\zeta)|+\left(\frac{\sum_{k}\operatorname{Im}G_{kk}((X^{(i)})^{\top},\zeta)}{N^{2}\eta}\right)^{1/2}\right)\right),\] _where we denote by_ \(X^{(i)}\) _the matrix obtained from_ \(X\) _by removing the_ \(i\)_-th row._ 2. _If_ \(j\in\mathcal{T}_{c}\)_, we have_ \[|[G(X,\zeta)X]_{ij}|\prec\left(N^{-\epsilon_{b}}\max_{k}|G_{ik}(X^{[j]},\zeta)|+\left(\frac{\operatorname{Im}G_{ii}(X^{[j]},\zeta)}{N\eta}\right)^{1/2}\right)\] \[\times\left(1+|\zeta|\cdot|G_{jj}(X^{\top},\zeta)|\cdot\left(N^{-\epsilon_{b}}\max_{k,l}|G_{kl}(X^{[j]},\zeta)|+\left(\frac{\sum_{k}\operatorname{Im}G_{kk}(X^{[j]},\zeta)}{N^{2}\eta}\right)^{1/2}\right)\right),\] _where we denote by_ \(X^{[j]}\) _the matrix obtained from_ \(X\) _by removing the_ \(j\)_-th column._ 3. _Let_ \(X=UDV\) _be a singular value decomposition of_ \(X\)_, where_ \[\text{diag}(D)=(d_{1},d_{2},\cdots,d_{M})\equiv\Big{(}\sqrt{\lambda_{1}(\mathcal{S}(X))},\sqrt{\lambda_{2}(\mathcal{S}(X))},\cdots,\sqrt{\lambda_{M}(\mathcal{S}(X))}\Big{)}.\] _(Here we also assume_ \(M<N\) _without loss of generality.) Then,_ \[\|G(X,\zeta)X\|\leq\max_{1\leq i\leq M}\left|\frac{d_{i}}{d_{i}^{2}-\zeta}\right|.\] Proof.: (i) Assume \(i\in\mathcal{T}_{r}\). Note that \(G(X,\zeta)X=XG(X^{\top},\zeta)\). Let \(x_{(i)}\) be the \(i\)-th row of \(X\). See that \[X^{\top}X-\zeta=(X^{(i)})^{\top}X^{(i)}-\zeta+x_{(i)}^{\top}x_{(i)}.\] By the Sherman-Morrison formula, \[G(X^{\top},\zeta)=G((X^{(i)})^{\top},\zeta)-\frac{G((X^{(i)})^{\top},\zeta)x_{(i)}^{\top}x_{(i)}G((X^{(i)})^{\top},\zeta)}{1+x_{(i)}G((X^{(i)})^{\top},\zeta)x_{(i)}^{\top}}.\] Since \(\big{(}G_{ii}(X,\zeta)\big{)}^{-1}=-\zeta\big{(}1+x_{(i)}G((X^{(i)})^{\top},\zeta)x_{(i)}^{\top}\big{)}\), \[G(X^{\top},\zeta)=G((X^{(i)})^{\top},\zeta)+(\zeta G_{ii}(X,\zeta))\cdot G((X^{(i)})^{\top},\zeta)x_{(i)}^{\top}x_{(i)}G((X^{(i)})^{\top},\zeta).\] We write \([XG(X^{\top},\zeta)]_{ij}=x_{(i)}G(X^{\top},\zeta)e_{j}\).
Then, \[x_{(i)}G(X^{\top},\zeta)e_{j}=x_{(i)}G((X^{(i)})^{\top},\zeta)e_{j}+(\zeta G_{ii}(X,\zeta))\cdot(x_{(i)}G((X^{(i)})^{\top},\zeta)x_{(i)}^{\top})\cdot(x_{(i)}G((X^{(i)})^{\top},\zeta)e_{j}).\] Since \(i\in\mathcal{T}_{r}\), by the large deviation estimate [2, Corollary 25], the desired result follows. (ii) Assume \(j\in\mathcal{T}_{c}\). Let \(x_{[j]}\) be the \(j\)-th column of \(X\). See that \[[G(X,\zeta)X]_{ij}=e_{i}^{\top}G(X,\zeta)x_{[j]}.\] By the Sherman-Morrison formula, \[G(X,\zeta)=G(X^{[j]},\zeta)+(\zeta G_{jj}(X^{\top},\zeta))\cdot G(X^{[j]},\zeta)x_{[j]}x_{[j]}^{\top}G(X^{[j]},\zeta),\] where we denote by \(X^{[j]}\) the matrix obtained from \(X\) by removing the \(j\)-th column. Then, \[e_{i}^{\top}G(X,\zeta)x_{[j]}=e_{i}^{\top}G(X^{[j]},\zeta)x_{[j]}+(\zeta G_{jj}(X^{\top},\zeta))\cdot(e_{i}^{\top}G(X^{[j]},\zeta)x_{[j]})\cdot(x_{[j]}^{\top}G(X^{[j]},\zeta)x_{[j]}).\] Since \(j\in\mathcal{T}_{c}\), we get the desired result by the large deviation estimate [2, Corollary 25]. (iii) This is elementary, and thus we omit the details. ### Remark on Theorem 2.13 Theorem 2.13 is a version of [24, Theorem V.3] with respect to the left edge. The required modification would be straightforward. Let us summarize the main idea of [24] as follows. Let \(\mathsf{B}_{i}\) \((i=1,\cdots,M)\) be independent standard Brownian motions. We fix two time scales: \[t_{0}=N^{-\frac{1}{3}+\phi_{0}},\quad t_{1}=N^{-\frac{1}{3}+\phi_{1}},\] (A.14) where \(\phi_{0}\in(\frac{1}{3}-\frac{\epsilon_{b}}{2},\frac{1}{3})\) and \(0<\phi_{1}<\frac{\phi_{0}}{100}\). For time \(t\geq 0\), we define the process \(\{\lambda_{i}(t):1\leq i\leq M\}\) as the unique strong solution to the following system of SDEs: \[\mathrm{d}\lambda_{i}=2\lambda_{i}^{1/2}\frac{\mathrm{d}\mathsf{B}_{i}}{\sqrt{N}}+\left(\frac{1}{N}\sum_{j\neq i}\frac{\lambda_{i}+\lambda_{j}}{\lambda_{i}-\lambda_{j}}\right)\mathrm{d}t,\quad 1\leq i\leq M,\] with initial data \(\lambda_{i}(0)=\lambda_{i}(\gamma_{w}\mathcal{S}(V_{t_{0}}))\), where \(\gamma_{w}\) is chosen to match the edge eigenvalue gaps of \(\mathcal{S}(V_{t_{0}})\) with those of Wigner matrices. Recall the convention: \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{M}\). Note that the process \(\{\lambda_{i}(t)\}\) has the same joint distribution as the eigenvalues of the matrix \[\gamma_{w}\mathcal{S}(V_{t_{0}+\frac{t}{\gamma_{w}}})=(\gamma_{w}^{1/2}X+(\gamma_{w}t_{0}+t)^{1/2}W)(\gamma_{w}^{1/2}X+(\gamma_{w}t_{0}+t)^{1/2}W)^{\top}.\] Denote by \(\rho_{\lambda,t}\) the asymptotic spectral distribution of \(\mathcal{S}(V_{t_{0}+\frac{t}{\gamma_{w}}})\) (given in terms of the rectangular free convolution). Let \(E_{\lambda}(t)\) be the left edge of \(\rho_{\lambda,t}\). Now we introduce a deformed Wishart matrix \(\mathcal{U}\mathcal{U}^{\top}\). Define \(\mathcal{U}\coloneqq\Sigma^{1/2}\mathcal{X}\), where \(\mathcal{X}\) is an \(M\times N\) real Gaussian matrix (mean zero and variance \(N^{-1}\)) and \(\Sigma=\text{diag}(\sigma_{1},\cdots,\sigma_{M})\) is a diagonal population matrix. Let \(\rho_{\mu,0}\) be the asymptotic spectral distribution of \(\mathcal{U}\mathcal{U}^{\top}\) (given by the multiplicative free convolution of the MP law and the ESD of \(\Sigma\)). We choose the diagonal population covariance matrix \(\Sigma\) such that \(\rho_{\mu,0}\) matches \(\rho_{\lambda,0}\) near the left edge \(E_{\lambda}(0)\) (square-root behavior). We write \(\mu_{i}(0)\coloneqq\mu_{i}(\mathcal{U}\mathcal{U}^{\top})\).
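As a purely numerical aside (not used anywhere in the argument), the eigenvalue SDE system displayed above can be discretized with a naive Euler-Maruyama scheme. The sketch below is illustrative only; the function name `simulate_rectangular_dbm`, the dimensions, the step size, and the initial data are toy assumptions, and eigenvalue collisions are not handled carefully.

```python
import numpy as np

def simulate_rectangular_dbm(lam0, N, t_final, n_steps, seed=0):
    """Naive Euler-Maruyama sketch of
    d(lam_i) = 2 sqrt(lam_i) dB_i / sqrt(N) + (1/N) sum_{j != i} (lam_i + lam_j) / (lam_i - lam_j) dt.
    Illustrative only: collisions (lam_i == lam_j) and negativity are not treated rigorously."""
    rng = np.random.default_rng(seed)
    lam = np.array(lam0, dtype=float)
    M = lam.size
    dt = t_final / n_steps
    for _ in range(n_steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)            # drop the j = i term from the sum
        drift = ((lam[:, None] + lam[None, :]) / diff).sum(axis=1) / N
        dB = rng.normal(size=M) * np.sqrt(dt)     # Brownian increments
        lam = lam + drift * dt + 2.0 * np.sqrt(np.clip(lam, 0.0, None)) * dB / np.sqrt(N)
    return np.sort(lam)[::-1]                     # keep the convention lam_1 >= ... >= lam_M

# Example with toy initial data (not the eigenvalues of gamma_w * S(V_{t_0})):
# simulate_rectangular_dbm(np.linspace(0.5, 2.0, 10), N=200, t_final=0.01, n_steps=2000)
```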
Next, define the process \(\{\mu_{i}(t):1\leq i\leq M\}\) through the rectangular DBM with initial data \(\{\mu_{i}(0)\}\). We can show that the edge eigenvalues of \(\{\mu_{i}(t)\}\) are governed by the Tracy-Widom law. We denote by \(\rho_{\mu,t}\) the rectangular free convolution of \(\rho_{\mu,0}\) with the Marchenko-Pastur (MP) law at time \(t\). Let \(E_{\mu}(t)\) be the left edge of \(\rho_{\mu,t}\). We remark that \(E_{\lambda}(0)=E_{\mu}(0)\). Then, in order to get Theorem 2.13, it is enough to show \[\big{|}\big{(}\lambda_{M}(t_{1})-E_{\lambda}(t_{1})\big{)}-\big{(}\mu_{M}(t_{1})-E_{\mu}(t_{1})\big{)}\big{|}\prec N^{-2/3-\delta},\] for \(\delta>0\) sufficiently small. The proof of the above estimate relies on the local equilibrium mechanism of the rectangular DBM, which makes no difference between the left edge and the right edge of the spectrum, given the \(\eta_{*}\)-regularity of the initial states. Hence, we omit the remaining argument and refer to [24] for details. ### Proof of Lemma 3.2 We shall prove Lemma 3.2 in this section. Proof of Lemma 3.2 (i).: The proof is similar to that in [23]; we provide a proof here for completeness. The statement \(\zeta_{-,t}-\lambda_{M}(\mathcal{S}(X))\leq 0\) follows directly from Lemma 3.1. For the other estimate, by Lemma 3.1, we know that \(\Phi_{t}(\zeta_{-,t})\) is the only local extremum of \(\Phi_{t}(\zeta)\) on the interval \((0,\lambda_{M}(\mathcal{S}(X)))\). Hence we have \(\Phi_{t}^{\prime}(\zeta_{-,t})=0\), which gives the equation \[(1-c_{N}tm_{X}(\zeta_{-,t}))^{2}-2c_{N}tm_{X}^{\prime}(\zeta_{-,t})\cdot\zeta_{-,t}\left(1-c_{N}tm_{X}(\zeta_{-,t})\right)-c_{N}(1-c_{N})t^{2}m_{X}^{\prime}(\zeta_{-,t})=0.\] Rearranging the terms, we can get \[c_{N}tm_{X}^{\prime}(\zeta_{-,t})=\frac{(1-c_{N}tm_{X}(\zeta_{-,t}))^{2}}{2\zeta_{-,t}\left(1-c_{N}tm_{X}(\zeta_{-,t})\right)+(1-c_{N})t}.\] (A.15) By Lemma 2.1 (iv) and Eq. (2.2), we have on \(\Omega_{\Psi}\) that \[c_{N}tm_{X}(\zeta_{-,t})=\mathcal{O}(t^{1/2}).\] (A.16) Plugging the above bound back into (A.15), we can get \(m_{X}^{\prime}(\zeta_{-,t})\sim t^{-1}\). This together with Lemma 2.3 gives \(\sqrt{\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t}}\sim t\). Proof of Lemma 3.2 (ii).: Since \(\mathcal{S}(X)\) is \(\eta_{*}\)-regular in the sense of Definition 2.2, the estimates for \(|m_{X}^{(k)}(\zeta)|\) on the event \(\Omega_{\Psi}\) are an immediate consequence of Lemma 2.3 and Lemma 3.2 (i). We prove the estimate for \(|m_{X}(z)-\mathsf{m}_{\mathsf{mp}}^{(t)}(z)|\) as follows. Recall that \(\beta=(\alpha-2)/24\). First, we establish the convergence of the Stieltjes transform of a truncated matrix model using the result in [38]. To this end, let us define \(\bar{X}=(\bar{x}_{ij}):=(x_{ij}\mathbbm{1}_{x_{ij}<N^{-\beta}})\) and \(\bar{t}:=1-N\mathbb{E}|\bar{x}_{ij}|^{2}\). It is easy to show that \(|\bar{t}-t|=\mathfrak{o}(N^{-1})\), and thus we have \(|\mathsf{m}_{\mathsf{mp}}^{(t)}(z_{1})-\mathsf{m}_{\mathsf{mp}}^{(\bar{t})}(z_{1})|\leq(N\eta_{1})^{-1}\). Then it follows from [38, Theorem 2.7] that for any \(z_{1}\) such that \(|z_{1}-\zeta_{-,t}|\leq\tau t^{2}\) and \(\eta_{1}\equiv\operatorname{Im}z_{1}>N^{-1+\delta}\), with \(1>\delta>0\) to be chosen later, \[m_{\bar{X}}(z_{1})-\mathsf{m}_{\mathsf{mp}}^{(t)}(z_{1})\prec\frac{1}{N^{\beta}}+\frac{1}{N\eta_{1}}.\] (A.17) We remark here that the local law proved in [38, Theorem 2.7] is for deterministic \(z\).
But it is easy to show that the local law holds uniformly in \(z\) in the mentioned domain in [38, Theorem 2.7], with high probability, by a simple continuity argument. Hence, as long as \(z_{1}\) fall in this domain with high probability, even though \(z_{1}\) might be random, we still have (A.17). Using the facts \(|\lambda_{M}(\mathcal{S}(X))-(1-t)\lambda_{-}^{\mathsf{mp}}|\lesssim N^{-\epsilon _{b}}\) and \(\lambda_{M}(\mathcal{S}(X))-\zeta_{-,t}\sim t^{2}\) with high probability (cf. Lemmas 2.8 and 3.2 (i)), we have for \(\tau\) small enough, \[|z_{1}-(1-t)\lambda_{-}^{\mathsf{mp}}|\geq|\zeta_{-,t}-\lambda_{M}(\mathcal{S}( X))|-|\lambda_{M}(\mathcal{S}(X))-(1-t)\lambda_{-}^{\mathsf{mp}}|-|z_{1}-\zeta_{-,t}| \gtrsim t^{2},\] which gives \(|(\mathsf{m}_{\mathsf{mpp}}^{(t)})^{\prime}(z_{1})|\lesssim t^{-4}\) with high probability. Also, we have \(|m_{\tilde{X}}^{\prime}(z_{1})|\lesssim t^{-4}\) with high probability, by the choice of \(z_{1}\), Eq. (2.6), and Lemma 3.2 (i). Therefore, for any \(z_{2}\) satisfying \(\operatorname{Re}z_{2}=\operatorname{Re}z_{1}\) and \(\eta_{2}=\operatorname{Im}z_{2}<N^{-1+\delta}\), we have \[|m_{\tilde{X}}(z_{2})-\mathsf{m}_{\mathsf{mpp}}^{(t)}(z_{2})|\lesssim|m_{ \tilde{X}}(z_{1})-\mathsf{m}_{\mathsf{mpp}}^{(t)}(z_{1})|+t^{-4}|z_{1}-z_{2}| \prec\frac{1}{N^{\beta}}+\frac{1}{N^{1/2}}+\frac{1}{t^{4}N^{1/2}}\lesssim \frac{1}{N^{\beta}},\] (A.18) where in the first step we used the fact \(|z_{i}-\zeta_{-,t}|\leq\tau t^{2},i=1,2\), and in the second step we chose \(\delta=1/2\). Next, we use the rank inequality to compare \(m_{\tilde{X}}(z)\) with \(m_{X}(z)\). Notice that \[m_{\tilde{X}}(z)-m_{X}(z)\leq\frac{2}{N}\mathrm{Rank}(\tilde{X}-X)\cdot(\|( \mathcal{S}(\tilde{X})-z)^{-1}\|+\|(\mathcal{S}(X)-z)^{-1}\|)\prec\frac{ \mathrm{Rank}(\bar{X}-X)}{Nt^{2}}.\] A similar argument as in the proof of Lemma 2.5 shows that, \[\mathrm{Rank}(\bar{X}-X)\prec N^{1-(\alpha-2-2\alpha\beta)/4}.\] Therefore, we can obtain \(m_{\tilde{X}}(z)-m_{X}(z)\prec N^{-(\alpha-2-2\alpha\beta)/4}t^{-2}\). Together with the estimate in (A.18), we have \[m_{X}(z)-\mathsf{m}_{\mathsf{mpp}}^{(t)}(z)\prec\frac{1}{N^{(\alpha-2-2\alpha \beta)/4}t^{2}}+\frac{1}{N^{\beta}}.\] The claim now follows by the fact \(t\gg N^{(2-\alpha)/16}\) in light of Eq. (1.6). Proof of Lemma 3.2 (iii).: Repeating the proof of [23, Lemma A.2], we can obtain \[|\bar{\zeta}_{-,t}-\zeta_{-,t}|\lesssim t^{3}|m_{X}^{\prime}(\zeta_{-,t})-( \mathsf{m}_{\mathsf{mp}}^{(t)})^{\prime}(\zeta_{-,t})|.\] By the Cauchy integral formula, we have \[|m_{X}^{\prime}(\zeta_{-,t})-(\mathsf{m}_{\mathsf{mp}}^{(t)})^{\prime}(\zeta_ {-,t})|\lesssim\oint_{\omega}\frac{|m_{X}(a)-\mathsf{m}_{\mathsf{mp}}^{(t)}(a )|}{|a-\zeta_{-,t}|^{2}}\mathrm{d}a,\] (A.19) where \(\omega\equiv\{a:|a-\zeta_{-,t}|=\tau t^{2}\}\) for some small \(\tau\). Therefore, we have by Lemma 3.2 (ii), \[|\bar{\zeta}_{-,t}-\zeta_{-,t}|\lesssim t\sup_{a\in\omega}|m_{X}(a)-\mathsf{ m}_{\mathsf{mp}}^{(t)}(a)|\prec tN^{-\beta},\] proving the claim. ### Proof of Proposition 3.10 In this section, we shall give the proof of Proposition 3.10. 
Proof of Proposition 3.10.: By a minor process argument, we have with probability at least \(1-N^{-D}\) for arbitrary large \(D\), there exists constant \(C_{k}>0\), such that \[|\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)}))-\hat{\zeta}_{\mathsf{ e}}|=\Big{|}(1-t)\lambda_{-}^{\mathsf{mp}}-\bar{\zeta}_{-,t}+\lambda_{M}( \mathcal{S}(\tilde{X}^{(k)}))-(1-t)\lambda_{-}^{\mathsf{mp}}+\mathrm{i}N^{-100 K}+\bar{\zeta}_{-,t}-\hat{\zeta}_{\mathsf{e}}\Big{|}\] \[\geq\sqrt{c_{N}}t^{2}-|\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)})) -(1-t)\lambda_{-}^{\mathsf{mp}}|-|\bar{\zeta}_{-,t}-\hat{\zeta}_{\mathsf{e}}| -N^{-100K}\geq C_{k}t^{2}.\] (A.20) Here in the last step, we used Eq. (3.7) and the fact that \(|\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)}))-(1-t)\lambda_{-}^{\mathsf{mp}}| \prec N^{-\epsilon_{b}}\). Therefore, for any \(k\in[N]\), we can define the event \(\Omega_{k}\equiv\{\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)}))-\bar{\zeta}_{-,t} \geq C_{k}t^{2}\}\) with \(\mathbb{P}(\Omega_{k})\geq 1-N^{-D}\) for arbitrary large \(D\). Choosing \(\tau\leq\min_{k}C_{k}/2\). For any \(\zeta\) satisfying \(|\zeta-\hat{\zeta}_{\mathsf{e}}|\leq\tau t^{2}\), we define \[F_{k}(\zeta):=\log|1+\tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)},\zeta))\tilde{x}_ {k}|^{2},\quad\tilde{F}_{k}(\zeta):=\log|1+\tilde{x}_{k}^{\top}(G(\tilde{X}^{( k)},\zeta))_{\mathrm{diag}}\tilde{x}_{k}|^{2}.\] Since \(|\lambda_{M}(\mathcal{S}(\tilde{X}^{(k)}))-\zeta|=|\lambda_{M}(\mathcal{S}( \tilde{X}^{(k)}))-\hat{\zeta}_{\mathsf{e}}|-|\zeta-\hat{\zeta}_{\mathsf{e}}| \geq C_{k}t^{2}/2>0\) on \(\Omega_{k}\), we can obtain that \(\operatorname{Re}\big{(}\tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)},\zeta))\tilde{x} _{k}\big{)}\vee\operatorname{Re}\big{(}\tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)}, \zeta))_{\mathrm{diag}}\tilde{x}_{k}\big{)}\geq 0\). Hence, the functions \(F_{k}(\zeta)\), \(\tilde{F}_{k}(\zeta)\) are well defined on the event \(\Omega_{k}\). For any \(\zeta\in\Xi(\tau)\), using Cauchy integral formula with a cutoff of the contour chosen carefully, we can express \(Y_{k}\equiv Y_{k}(\zeta)\) as \[Y_{k}=\frac{t}{2\pi\mathrm{i}N^{1-\alpha/4}}(\mathbb{E}_{k}-\mathbb{E}_{k-1}) \oint_{\omega\gamma}\frac{F_{k}(z)}{(z-\zeta)^{2}}\mathrm{d}z+\mathsf{err}_{k}( \zeta)=:I_{k}(\zeta)+\mathsf{err}_{k}(\zeta),\] with the contour \(\omega\equiv\{z\in\mathbb{C}:|z-\zeta|=\tau t^{2}/10\}\) and \(\gamma\equiv\{z\in\mathbb{C}:|\mathrm{Im}\,z|\geq N^{-100}\}\), and \(\mathsf{err}_{k}\) collects all the tiny error terms which will not affect our further analysis. Similarly, we can define \(\tilde{I}_{k}(\zeta)\) and \(\mathsf{eir}_{k}(\zeta)\) for \(\tilde{Y}_{k}\) in the same manner as shown above. Therefore, \[\mathbb{E}_{k-1}(Y_{k}Y_{k}^{\prime})-\mathbb{E}_{k-1}(\tilde{Y}_{k}\tilde{Y}_ {k}^{\prime})=\mathbb{E}_{k-1}((I_{k}(\zeta)I_{k}(\zeta^{\prime}))-\mathbb{E} _{k-1}((\tilde{I}_{k}(\zeta)\tilde{I}_{k}(\zeta^{\prime}))+\mathsf{HOT},\] where \(\mathsf{HOT}\) collects terms containing \(\mathsf{err}_{k}(\zeta)\) or \(\mathsf{eir}_{k}(\zeta)\), which are irrelevant in our analysis. 
For the leading term, since \(F_{k}(z)\), \(\tilde{F}_{k}(z)\),\(\tilde{F}_{k}(z)\),\(\tilde{F}_{k}(z^{\prime})\) are uniformly bounded on \(z\in\omega\cap\gamma\) and \(z^{\prime}\in\omega^{\prime}\cap\gamma\), we may commute the conditional expectation and the integral to obtain \[\mathbb{E}_{k-1}((I_{k}(\zeta)I_{k}(\zeta^{\prime}))-\mathbb{E}_{k-1}(( \tilde{I}_{k}(\zeta)\tilde{I}_{k}(\zeta^{\prime}))=-\frac{t^{2}}{4\pi^{2}N^{2 -\alpha/2}}\oint_{\omega\cap\gamma}\oint_{\omega^{\prime}\cap\gamma}\frac{ \varphi_{k}(z,z^{\prime})-\tilde{\varphi}_{k}(z,z^{\prime})}{(z-\zeta)^{2}(z^{ \prime}-\zeta^{\prime})^{2}}\mathrm{d}z^{\prime}\mathrm{d}z,\] (A.21) where \[\varphi_{k}(z,z^{\prime}) \coloneqq\mathbb{E}_{k-1}\big{(}(\mathbb{E}_{k}-\mathbb{E}_{k-1} )F_{k}(z)(\mathbb{E}_{k}-\mathbb{E}_{k-1})F_{k}(z^{\prime})\big{)}\] \[\tilde{\varphi}_{k}(z,z^{\prime}) \coloneqq\mathbb{E}_{k-1}\big{(}(\mathbb{E}_{k}-\mathbb{E}_{k-1} )\tilde{F}_{k}(z)(\mathbb{E}_{k}-\mathbb{E}_{k-1})\tilde{F}_{k}(z^{\prime}) \big{)},\] and \(\omega^{\prime}\coloneqq\{z\in\mathbb{C}:|z-\zeta^{\prime}|=at^{2}\}\) with a small constant \(a\). In view of (A.21), it suffices to prove that uniformly on \(z\in\omega\cap\gamma\) and \(z^{\prime}\in\omega^{\prime}\cap\gamma\), \(\varphi_{k}-\tilde{\varphi_{k}}\equiv\varphi_{k}(z,z^{\prime})-\tilde{\varphi }_{k}(z,z^{\prime})\ll t^{2}N^{1-\alpha/2}\). In the sequel, we write \(F_{k}=F_{k}(z)\), \(\tilde{F}_{k}=\tilde{F}_{k}(z)\), \(F_{k}^{\prime}=F_{k}(z^{\prime})\), and \(\tilde{F}_{k}^{\prime}=\tilde{F}_{k}(z^{\prime})\) for simplicity. Let \[\eta_{k}=\eta_{k}(z):=\tilde{x}_{k}^{\top}(G(X^{(k)},z))\tilde{x}_{k}-\tilde{ x}_{k}^{\top}(G(\tilde{X}^{(k)},z))_{\mathrm{diag}}\tilde{x}_{k}=\sum_{i\neq j}[G( \tilde{X}^{(k)},z)]_{ij}\tilde{x}_{ik}\tilde{x}_{jk},\] and \[\varepsilon_{k} =\varepsilon_{k}(z):=F_{k}-\tilde{F}_{k}=\log|1+\eta_{k}(1+ \tilde{x}_{k}^{\top}(G(\tilde{X}^{(k)},z))_{\mathrm{diag}}\tilde{x}_{k})^{-1} |^{2}.\] We also write \(\eta_{k}^{\prime}\equiv\eta_{k}(z^{\prime})\) and \(\varepsilon_{k}^{\prime}\equiv\varepsilon_{k}(z^{\prime})\). 
Using the following elementary identity, \[\mathbb{E}_{k-1}\big{(}(\mathbb{E}_{k}-\mathbb{E}_{k-1})(A)(\mathbb{E}_{k}- \mathbb{E}_{k-1})(B)\big{)}=\mathbb{E}_{k-1}\big{(}\mathbb{E}_{k}(A)\mathbb{ E}_{k}(B)\big{)}-\mathbb{E}_{k-1}(A)\mathbb{E}_{k-1}(B),\] we may rewrite \(\varphi_{k}\) and \(\tilde{\varphi}_{k}\) as \[\varphi_{k} =\mathbb{E}_{k-1}\big{(}\mathbb{E}_{k}(F_{k})\mathbb{E}_{k}(F_{k} ^{\prime})\big{)}-\mathbb{E}_{k-1}(F_{k})\mathbb{E}_{k-1}(F_{k}^{\prime}),\] \[\tilde{\varphi}_{k} =\mathbb{E}_{k-1}\big{(}\mathbb{E}_{k}(\tilde{F}_{k})\mathbb{E}_{ k}(\tilde{F}_{k}^{\prime})\big{)}-\mathbb{E}_{k-1}(\tilde{F}_{k})\mathbb{E}_{k-1}( \tilde{F}_{k}^{\prime}).\] Therefore, let \(E_{\tilde{x}_{k}}\) denote the expectation with respect to the randomness of \(k\)-th column of \(\tilde{X}\), we have by the definitions of \(\varepsilon_{k}\), \(\varepsilon_{k}^{\prime}\), \[\varphi_{k}-\tilde{\varphi}_{k} =\mathbb{E}_{\tilde{x}_{k}}\big{(}\mathbb{E}_{k}(\tilde{F}_{k}) \mathbb{E}_{k}(\varepsilon_{k}^{\prime})\big{)}+\mathbb{E}_{\tilde{x}_{k}}\big{(} \mathbb{E}_{k}(\tilde{F}_{k}^{\prime})\mathbb{E}_{k}(\varepsilon_{k})\big{)}+ \mathbb{E}_{\tilde{x}_{k}}\big{(}\mathbb{E}_{k}(\varepsilon_{k})\mathbb{E}_{k}( \varepsilon_{k}^{\prime})\big{)}\] \[\quad-\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}(\tilde{F}_{k}) \mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k}^{\prime})-\mathbb{E}_{k} \mathbb{E}_{\tilde{x}_{k}}(\tilde{F}_{k}^{\prime})\mathbb{E}_{k}\mathbb{E}_{x_{k}} (\varepsilon_{k})-\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k}^{ \prime})\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k})\] \[\equiv T_{1}+T_{2}+T_{3}+T_{4}+T_{5}+T_{6}.\] Before bounding \(T_{i}\)'s, \(1\leq i\leq 6\), we introduce some shorthand notation for simplicity. Let \[J_{k}=J_{k}(z):=\frac{1}{1+\tilde{x}_{k}^{\top}G(\tilde{X}^{(k)},z)\tilde{x}_{k} },\quad J_{k,\mathrm{diag}}=J_{k,\mathrm{diag}}(z):=\frac{1}{1+\tilde{x}_{k}^{ \top}(G(\tilde{X}^{(k)},z))_{\mathrm{diag}}\tilde{x}_{k}},\] and \(J_{k}^{\prime}=J_{k}(z^{\prime})\), \(J_{k,\mathrm{diag}}^{\prime}=J_{k,\mathrm{diag}}(z^{\prime})\). Further set \[J_{k,\mathrm{Tr}}\coloneqq\frac{1}{1+\frac{\sigma_{N}^{2}}{N} \mathrm{Tr}G(\tilde{X}^{(k)},z)},\qquad\mathcal{E}\coloneqq\tilde{x}_{k}^{ \top}(G(\tilde{X}^{(k)},z))_{\mathrm{diag}}\tilde{x}_{k}-\frac{\sigma_{N}^{2}}{N} \mathrm{Tr}G(\tilde{X}^{(k)},z).\] This gives \(J_{k,\text{diag}}=J_{k,\text{Tr}}-\mathcal{E}J_{k,\text{Tr}}J_{k,\text{diag}}\). We may now establish an upper bound for \(\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k})\) as follows: \[\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k}) =\mathbb{E}_{\tilde{x}_{k}}\log|1+\eta_{k}J_{k,\text{diag}}|^{2} \stackrel{{\text{(i)}}}{{\leq}}\log\mathbb{E}_{\tilde{x}_{k}}|1+ \eta_{k}J_{k,\text{diag}}|^{2}\] \[=\log\mathbb{E}_{\tilde{x}_{k}}(1+2\text{Re}\left(\eta_{k}J_{k, \text{Tr}}-\eta_{k}EJ_{k,\text{Tr}}J_{k,\text{diag}}\right)+|\eta_{k}J_{k, \text{diag}}|^{2})\] \[\stackrel{{\text{(ii)}}}{{\leq}}\log\big{(}1+ \mathcal{O}(\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}||\mathcal{E}|))+\mathcal{O}( \mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}|^{2}))\big{)},\] where, in \(\mathrm{(i)}\), Jensen's inequality is applied, and in \(\mathrm{(ii)}\), we used the fact that \(J_{k,\text{Tr}}\) and \(J_{k,\text{diag}}\) are uniformly bounded for \(\zeta\in\Xi\) on the event \(\Omega_{k}\). 
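For the reader's convenience, the elementary identity invoked at the beginning of this step can be checked directly (assuming only the tower property \(\mathbb{E}_{k-1}\mathbb{E}_{k}=\mathbb{E}_{k-1}\) and the fact that \(\mathbb{E}_{k-1}(A)\) and \(\mathbb{E}_{k-1}(B)\) can be pulled out of \(\mathbb{E}_{k-1}\)): expanding the product, \[\mathbb{E}_{k-1}\big{[}(\mathbb{E}_{k}-\mathbb{E}_{k-1})(A)\,(\mathbb{E}_{k}-\mathbb{E}_{k-1})(B)\big{]}=\mathbb{E}_{k-1}\big{[}\mathbb{E}_{k}(A)\mathbb{E}_{k}(B)\big{]}-2\,\mathbb{E}_{k-1}(A)\mathbb{E}_{k-1}(B)+\mathbb{E}_{k-1}(A)\mathbb{E}_{k-1}(B),\] since each of the two cross terms reduces to \(\mathbb{E}_{k-1}(A)\mathbb{E}_{k-1}(B)\); the right-hand side is exactly \(\mathbb{E}_{k-1}\big{(}\mathbb{E}_{k}(A)\mathbb{E}_{k}(B)\big{)}-\mathbb{E}_{k-1}(A)\mathbb{E}_{k-1}(B)\).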
Similarly, using the identity \(|1+\eta_{k}J_{k,\text{diag}}||1-\eta_{k}J_{k}|=1\), we have \[\mathbb{E}_{\tilde{x}_{k}}(-\varepsilon_{k}) =\mathbb{E}_{\tilde{x}_{k}}\log|1-\eta_{k}J_{k}|^{2}=\mathbb{E}_{ \tilde{x}_{k}}\log|1-\eta_{k}J_{k,\text{Tr}}-\eta_{k}(\eta_{k}+\mathcal{E})J_{ k,\text{diag}}J_{k}|^{2}\] \[\leq\log\big{(}1+\mathcal{O}(\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k }||\mathcal{E}|))+\mathcal{O}(\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}|^{2}))\big{)}.\] By the Cauchy-Schwarz inequality and Lemma A.13 and Lemma A.14, \[\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}||\mathcal{E}|)\leq\sqrt{\mathbb{E}_{ \tilde{x}_{k}}(|\eta_{k}|^{2})\cdot\mathbb{E}_{\tilde{x}_{k}}(|\mathcal{E}|^{2 })}\lesssim N^{-1/2}t^{-2}N^{\vartheta(2-\alpha/2)-1/2}\|G(\tilde{X}^{(k)},z)\|\] Since \(\vartheta=1/4+1/\alpha+\epsilon_{\vartheta}>1/4+1/\alpha\), and recall that \(\|G(\tilde{X}^{(k)},z)\|\leq|\lambda_{1}(\mathcal{S}(\tilde{X}^{(k)}))-z|^{-1 }\lesssim t^{-2}\) on \(\Omega_{k}\), the above bound can be further simplified as \[\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}||\mathcal{E}|)\lesssim N^{1-\alpha/2} \cdot N^{2/\alpha+3\alpha/8+\epsilon_{\vartheta}(4-\alpha)/2-2}t^{-4}.\] By the facts \(\epsilon_{\vartheta}<(3\alpha-5)/(4\alpha)\) and \(t\gg N^{(\alpha-4)/48}\), it can be verified that \(\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}||\mathcal{E}|)\ll t^{2}N^{1-\alpha/2}\). Therefore, we can conclude that \(|\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k})|\ll t^{2}N^{1-\alpha/2}\). This shows \(|T_{6}|\ll t^{2}N^{1-\alpha/2}\). Together with the crude bound \(\tilde{F}_{k}\leq\log|1+N^{2\vartheta}\|G(\tilde{X}^{(k)},z)\|^{2}\lesssim\log N\), we have \(|T_{4}|,|T_{5}|\ll t^{2}N^{1-\alpha/2}\). For \(|T_{3}|\), by Cauchy-Schwarz inequality, it suffices to give a bound on \(\mathbb{E}_{\tilde{x}_{k}}\big{(}|\mathbb{E}_{k}(\varepsilon_{k})|^{2}\big{)}\). By Jensen's inequality, \[\mathbb{E}_{\tilde{x}_{k}}\big{(}|\mathbb{E}_{k}(\varepsilon_{k})|^{2}\big{)} \leq\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}(|\varepsilon_{k}|^{2}).\] Using again the identity \(|1+\eta_{k}J_{k,\text{diag}}||1-\eta_{k}J_{k}|=1\), \[|\log|1+\eta_{k}J_{k,\text{diag}}|^{2}|=\mathbf{1}_{\{|1+\eta_{k}J_{k, \text{diag}}|\geq 1\}}\log|1+\eta_{k}J_{k,\text{diag}}|^{2}+\mathbf{1}_{\{|1-\eta_{k}J_{k} |>1\}}\log|1-\eta_{k}J_{k}|^{2}\] \[=\mathbf{1}_{\{|1+\eta_{k}J_{k,\text{diag}}|>1\}}\log(1+2\text{Re} \left(\eta_{k}J_{k,\text{diag}}\right)+|\eta_{k}J_{k,\text{diag}}|^{2})+ \mathbf{1}_{\{|1-\eta_{k}J_{k}|>1\}}\log(1-2\text{Re}\left(\eta_{k}J_{k} \right)+|\eta_{k}J_{k}|^{2})\] \[\leq\mathbf{1}_{\{|1+\eta_{k}J_{k,\text{diag}}|>1\}}\big{(}2\text{ Re}\left(\eta_{k}J_{k,\text{diag}}\right)+|\eta_{k}J_{k,\text{diag}}|^{2}\big{)}+ \mathbf{1}_{\{|1-\eta_{k}J_{k}|>1\}}\big{(}-2\text{Re}\left(\eta_{k}J_{k} \right)+|\eta_{k}J_{k}|^{2}\big{)}.\] Therefore, with the fact that \(|\eta_{k}J_{k,\text{diag}}|\leq N^{C}\) for some \(C>0\), \[\mathbb{E}_{\tilde{x}_{k}}|\log|1+\eta_{k}J_{k,\text{diag}}|^{2}|^{2}\lesssim\log N \cdot\mathbb{E}_{\tilde{x}_{k}}\log|1+\eta_{k}J_{k,\text{diag}}|^{2}\lesssim\log N \cdot\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}|^{2})\lesssim N^{-1}t^{-5},\] which gives \(|T_{3}|\ll t^{2}N^{1-\alpha/2}\) by the fact \(t\gg N^{-2/7+\alpha/14}\). 
To evaluate \(|T_{2}|\), we start by expressing it as follows: \[T_{2} =\mathbb{E}_{\tilde{x}_{k}}\big{(}\mathbb{E}_{k}(\varepsilon_{k})\mathbb{E}_{k}(\log|1+N^{-1}\sigma_{N}^{2}\text{Tr}G(\tilde{X}^{(k)},z^{\prime})+\mathcal{E}|^{2})\big{)}\] \[=\mathbb{E}_{\tilde{x}_{k}}\big{(}\mathbb{E}_{k}(\varepsilon_{k})\mathbb{E}_{k}(\log|1+N^{-1}\sigma_{N}^{2}\text{Tr}G(\tilde{X}^{(k)},z^{\prime})|^{2})\big{)}+\mathbb{E}_{\tilde{x}_{k}}\big{(}\mathbb{E}_{k}(\varepsilon_{k})\mathbb{E}_{k}(\log|1+\mathcal{E}J_{k,\text{Tr}}|^{2})\big{)}.\] First, we use the fact that \(\log|1+N^{-1}\sigma_{N}^{2}\text{Tr}G(\tilde{X}^{(k)},z)|^{2}\) is independent of \(\tilde{x}_{k}\) and that \(\mathbb{E}_{\tilde{x}_{k}}(\varepsilon_{k})=0\) to obtain the inequality \[T_{2}\lesssim\mathbb{E}_{\tilde{x}_{k}}\big{(}|\mathbb{E}_{k}(\varepsilon_{k})|\cdot\mathbb{E}_{k}|\mathcal{E}|\big{)}.\] Next, we apply the Cauchy-Schwarz inequality to obtain \[T_{2}\leq\sqrt{\mathbb{E}_{\tilde{x}_{k}}\big{(}|\mathbb{E}_{k}(\varepsilon_{k})|^{2}\big{)}\mathbb{E}_{\tilde{x}_{k}}\big{(}|\mathbb{E}_{k}(|\mathcal{E}|)|^{2}\big{)}}.\] Finally, by Jensen's inequality, we have \[T_{2}\leq\sqrt{\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}\big{(}|\varepsilon_{k}|^{2}\big{)}\mathbb{E}_{k}\mathbb{E}_{\tilde{x}_{k}}\big{(}|\mathcal{E}|^{2}\big{)}}\leq N^{1-\alpha/2}\cdot N^{2/\alpha+3\alpha/8+\epsilon_{\vartheta}(4-\alpha)/2-2}t^{-9/2}.\] The bound \(|T_{2}|\ll t^{2}N^{1-\alpha/2}\) follows by the facts \(\epsilon_{\vartheta}<(3\alpha-5)/(4\alpha)\) and \(t\gg N^{(\alpha-4)/56}\). The same bound holds for \(|T_{1}|\). Therefore, we can obtain that for any \(z\in\omega\cap\gamma\) and \(z^{\prime}\in\omega^{\prime}\cap\gamma\), \(|\varphi_{k}-\tilde{\varphi}_{k}|\ll t^{2}N^{1-\alpha/2}\), which concludes the proof. **Lemma A.13** ([16], Lemma 4.1).: _Let \(a\equiv(a_{1},\cdots,a_{N})^{\top}\) be a column vector whose entries are i.i.d. centered and satisfy \((ii)\) and \((iii)\) in Lemma 3.12. Then for a deterministic matrix \(G\), the random variables_ \[X\equiv\sum_{i\neq j}G_{ij}a_{i}a_{j},\quad E\equiv\sum_{i}G_{ii}a_{i}^{2}-\frac{1}{N}\mathrm{Tr}G\] _satisfy_ \[\mathbb{E}|X|^{2}\leq 2N^{-1}\|G\|^{2},\quad\mathbb{E}|E|^{2}\leq 10C(\|G\|^{2}+1)N^{\vartheta(4-\alpha)-1}.\] The following lemma is a direct consequence of Lemma A.13. **Lemma A.14**.: _Fix \(C>0\). For any \(\zeta\in\{\xi\in\mathbb{C}:|\xi-\bar{\zeta}_{-,t}|\leq Ct^{2}\}\), there exists a constant \(\tau=\tau(C)\) such that \(\mathbb{E}_{\tilde{x}_{k}}(|\eta_{k}|^{2})\leq\tau^{-2}N^{-1}t^{-4}\) on the event \(\Omega_{k}=\{\lambda_{1}(\mathcal{S}(\tilde{X}^{(k)}))-\bar{\zeta}_{-,t}\geq\tau t^{2}\}\)._ ## Appendix B Remaining proofs for the general model ### Proof of Lemma 5.1 We need the following lemma on the monotonicity of the Green function of the linearization of \(\mathcal{S}(Y^{\gamma})\). **Lemma B.1** ([11], Lemma 2.1).: _For a deterministic matrix \(A\in\mathbb{R}^{M\times N}\), let \(\mathcal{L}(A)\) be defined as in Eq. (5.6). Further define \(\Gamma(z)\coloneqq\max_{i,j\in[M+N]}[(\mathcal{L}(A)-z)^{-1}]_{ij}\lor 1\).
Then for any \(L>1\) and \(z=E+\mathrm{i}\eta\in\mathbb{C}^{+}\), we have \(\Gamma(E+\mathrm{i}\eta/L)\leq L\Gamma(E+\mathrm{i}\eta)\)._ Recall that for any \(\delta>0\) and \(z=E+\mathrm{i}\eta\in\mathbb{D}\), \[\mathfrak{P}_{0}(\delta,z,\Psi) =\mathbb{P}_{\Psi}\Big{(}\sup_{\begin{subarray}{c}a,b\in[M]\\ 0\leq\gamma\leq 1\end{subarray}}|z^{1/2}\mathfrak{X}_{ab}[G^{\gamma}(z)]_{ab}|>N^{\delta}\Big{)},\] \[\mathfrak{P}_{1}(\delta,z,\Psi) =\mathbb{P}_{\Psi}\Big{(}\sup_{\begin{subarray}{c}u,v\in[N]\\ 0\leq\gamma\leq 1\end{subarray}}|z^{1/2}\mathfrak{Y}_{uv}[G^{\gamma}(z)]_{uv}|>N^{\delta}\Big{)},\] \[\mathfrak{P}_{2}(\delta,z,\Psi) =\mathbb{P}_{\Psi}\Big{(}\sup_{\begin{subarray}{c}a\in[M],u\in[N]\\ 0\leq\gamma\leq 1\end{subarray}}|\mathfrak{Z}_{au}[G^{\gamma}(z)Y^{\gamma}]_{au}|>N^{\delta}\Big{)}.\] Now let us give the proof of Lemma 5.1. Proof of Lemma 5.1.: Let \(p\) be any sufficiently large (but fixed) integer, and \(F_{p}(x):=|x|^{2p}+1\). It can be easily verified that there exists a constant \(C_{p}\), depending only on \(p\), such that \(|F_{p}^{(a)}(x)|\leq C_{p}F_{p}(x)\) for all \(x\in\mathbb{R}\) and \(a\in\mathbb{Z}^{+}\). Recall Theorem 4.2; we will focus on the case when \((\#_{1},\#_{2},\#_{3})=(\mathfrak{X}_{ab}\mathrm{Im}\,[G^{\gamma}(z)]_{ab},\ \mathfrak{X}_{ab}\mathrm{Im}\,[G^{0}(z)]_{ab},\ \mathfrak{I}_{0,ab})\) therein. Applying Theorem 4.2 with \(F(x)=F_{p}(x)\), we have that for any \(a,b\in[M]\) there exists a constant \(C_{1}>0\) such that \[\mathbb{E}_{\Psi}\big{(}F_{p}(\mathfrak{X}_{ab}\mathrm{Im}\,[G^{\gamma}(z)]_{ab})\big{)}-\mathbb{E}_{\Psi}\big{(}F_{p}(\mathfrak{X}_{ab}\mathrm{Im}\,[G^{0}(z)]_{ab})\big{)}<C_{1}N^{-\omega}(\mathfrak{I}_{p,0}+1)+C_{1}Q_{0}N^{C_{1}},\] where \(\mathfrak{I}_{p,0}\equiv\sup_{i,j\in[M],0\leq\gamma\leq 1}\mathbb{E}_{\Psi}\big{(}|F_{p}(\mathfrak{X}_{ij}\mathrm{Im}\,[G^{\gamma}(z)]_{ij})|\big{)}\). Taking the supremum over \(a,b\in[M]\) and \(0\leq\gamma\leq 1\) yields \[(1-C_{1}N^{-\omega})\mathfrak{I}_{p,0}\leq\max_{i,j\in[M]}\mathbb{E}_{\Psi}\big{(}F_{p}(\mathfrak{X}_{ij}\mathrm{Im}\,[G^{0}(z)]_{ij})\big{)}+C_{1}N^{-\omega}+3C_{1}N^{C_{1}}\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon,z,\Psi).\] Applying Lemma B.1 to \(\mathcal{R}(Y^{\gamma},z)=z^{-1/2}(\mathcal{L}(Y^{\gamma})-z^{1/2})^{-1}\) with \(z^{1/2}=\tilde{E}+\mathrm{i}\tilde{\eta}\), we have \[\max_{i,j\in[M+N]}|z^{1/2}[\mathcal{R}(Y^{\gamma},z)]_{ij}|\lor 1\leq L\Big{(}\max_{i,j\in[M+N]}|(z^{\prime})^{1/2}[\mathcal{R}(Y^{\gamma},z^{\prime})]_{ij}|\lor 1\Big{)},\] for any \(L>1\) and \(z^{\prime}\in\mathbb{C}^{+}\) satisfying \((z^{\prime})^{1/2}=\tilde{E}+\mathrm{i}L\tilde{\eta}\).
Let \(L\equiv N^{\varepsilon/6}\) and thus \((z^{\prime})^{1/2}\equiv\tilde{E}+\mathrm{i}N^{\varepsilon/6}\tilde{\eta}\), to obtain \[\max_{i,j\in[M]}|z^{1/2}[G^{\gamma}(z)]_{ij}|\lor 1,\ \max_{i,j\in[N]}|z^{1/2}[G^{ \gamma}(z)]_{ij}|\lor 1,\ \max_{i\in[M],j\in[N]}|[G^{\gamma}(z)Y^{\gamma}]_{ij}|\leq \mathfrak{S},\] where \[\mathfrak{S}\equiv N^{\varepsilon/6}\Big{(}\max_{i,j\in[M]}|(z^{\prime})^{1/ 2}[G^{\gamma}(z^{\prime})]_{ij}|\vee\max_{i,j\in[N]}|(z^{\prime})^{1/2}[G^{ \gamma}(z^{\prime})]_{ij}|\vee\max_{i\in[M],j\in[N]}|[G^{\gamma}(z^{\prime})Y ^{\gamma}]_{ij}|\lor 1\Big{)}.\] This implies that \[\max_{k\in[0:2]}\mathfrak{P}_{k}(\varepsilon,z,\Psi)\leq\max_{k\in[0:2]} \mathfrak{P}_{k}(\varepsilon/2,z^{\prime},\Psi).\] (B.1) For any \(z_{0}=E_{0}+\mathrm{i}\eta_{0}\in\mathsf{D}(\varepsilon_{1},\varepsilon_{2}, \varepsilon_{3})\), we have \(\mathbb{E}_{\Psi}\big{(}F_{p}(\mathfrak{X}_{ij}\mathrm{Im}\,[G^{0}(z_{0})]_{ ij})\big{)}\lesssim N\) (cf. Theorem 2.10). Then there exists some large constant \(C_{2}>0\) such that \[\mathfrak{I}_{p,0}\leq C_{2}N+C_{2}N^{C_{2}}\max_{k\in[0:2]}\mathfrak{P}_{k}( \varepsilon,z_{0},\Psi).\] Using (B.1) by setting \(z\equiv z_{0}\), we have for \(z_{1}=E_{1}+\mathrm{i}\eta_{1}\) where \((E_{1},\eta_{1})\) are defined through (5.3), \[\mathfrak{I}_{p,0}\leq C_{2}N+C_{2}N^{C_{2}}\max_{k\in[0:2]}\mathfrak{P}_{k}( \varepsilon/2,z_{1},\Psi).\] For any \(a,b\in[M]\), and \(0\leq\gamma\leq 1\), applying Markov's inequality with the fact that \(p\delta>D+100\), we have that there exists some large constant \(C_{3}>0\) such that \[\mathbb{P}_{\Psi}\Big{(}|z_{0}^{1/2}\mathfrak{X}_{ab}[\mathrm{Im }\,G^{\gamma}(z_{0})]_{ab}|>N^{\delta}\Big{)} \leq\frac{|z_{0}|^{p/2}\mathbb{E}_{\Psi}\big{(}|F_{p}(\mathfrak{X }_{ab}\mathrm{Im}\,[G^{\gamma}(z_{0})]_{ab})|\big{)}}{N^{p\delta}}\leq\frac{|z_ {1}|^{p/2}\mathfrak{I}_{p,0}}{N^{p\delta}}\] \[\leq C_{3}N^{-D-90}+C_{3}N^{C_{2}}\max_{k\in[0:2]}\mathfrak{P}_{k} (\varepsilon/2,z_{1},\Psi),\] where in the last step we used the fact that \(|z_{0}|\) is bounded. Similar bound holds when \(\mathrm{Im}\,\) is replaced by \(\mathrm{Re}\,\), we omit the details. Now we may apply union bounds on \(i,j\in[M]\) and an \(\epsilon\)-net argument on \(\gamma\) with the following deterministic bounds \[\Big{|}\frac{\partial[G^{\gamma}(z)]_{ab}}{\partial\gamma}\Bigg{|}\lesssim \frac{\|A\|+\gamma\|t^{1/2}W\|}{\eta^{2}},\] \(\eta>N^{-1}\), \(\|A\|\leq N^{1/2}\) and \(\mathbb{P}(\|t^{1/2}W\|>2)<N^{-D}\), to obtain that \[\mathfrak{P}_{0}(\delta,z_{0},\Psi)= \mathbb{P}_{\Psi}\Big{(}\sup_{\begin{subarray}{c}a,b\in[M]\\ 0\leq\gamma\leq 1\end{subarray}}|z_{0}^{1/2}\mathfrak{X}_{ab}[G^{\gamma}(z_{0} )]_{ab}|>N^{\delta}\Big{)}\] \[\leq C_{4}N^{-D-50}+C_{4}N^{C_{4}}\max_{k\in[0:2]}\mathfrak{P}_{k }(\varepsilon/2,z_{1},\Psi),\] for some large constant \(C_{4}>0\). Repeating the above procedure for all \(\mathfrak{P}_{k}(\delta,\eta,\Psi),k=1,2\) proves the claim. ### Proof of Corollary 4.6 We prove this corollary using a similar argument as in [Section 4, [56]] or [Section 4, [36]]. The key inputs are the rigidity estimate in Theorem 4.4 and the Green function comparison in Theorem 4.5. Proof of Corollary 4.6.: Let us first define for any \(E\), \[\mathcal{N}(E):=\big{|}\{i:\lambda_{i}(\mathcal{S}(Y))\leq\lambda_{-,t}+E\} \big{|}.\] For any \(\epsilon>0\), we take \(\ell=N^{-2/3-\epsilon/3}\) and \(\eta=N^{-2/3-\epsilon}\). Recall from Theorem 4.4 that \(\lambda_{M}(\mathcal{S}(Y))\geq\lambda_{-,t}-N^{-2/3+\epsilon}\) holds with high probability. 
We further define \[\chi_{E}(x) :=\mathbf{1}_{[-N^{-2/3+\epsilon},E]}(x-\lambda_{-,t}),\] \[\theta_{\eta}(x) :=\frac{\eta}{\pi(x^{2}+\eta^{2})}=\frac{1}{\pi}\mathrm{Im}\, \frac{1}{x-\mathrm{i}\eta}.\] Then following the same arguments as in [Lemma 2.7, [42]], we can obtain that for \(|E|\leq N^{-2/3+\epsilon}\), the following holds with high probability: \[\operatorname{Tr}(\chi_{E-\ell}*\theta_{\eta})(\mathcal{S}(Y))-N^{-\epsilon/9} \leq\mathcal{N}(E)\leq\operatorname{Tr}(\chi_{E+\ell}*\theta_{\eta})( \mathcal{S}(Y))+N^{-\epsilon/9}.\] Let \(K(x):\mathbb{R}\to[0,1]\) be a smooth monotonic increasing function such that \[K(x)=1\quad\text{if}\quad x\geq 2/3,\quad K(x)=0\quad\text{if}\quad x\leq 1/3.\] Therefore, we have with high probability that \[K(\operatorname{Tr}(\chi_{E-\ell}*\theta_{\eta})(\mathcal{S}(Y) ))+\mathcal{O}(N^{-\epsilon/9}) \leq K(\mathcal{N}(E))=\mathbf{1}_{\mathcal{N}(E)\geq 1}\] \[\leq K(\operatorname{Tr}(\chi_{E+\ell}*\theta_{\eta})(\mathcal{S }(Y)))+\mathcal{O}(N^{-\epsilon/9}).\] Taking expectation on the above inequality, we have for \(|s|\leq N^{\epsilon}/2\) that \[\mathbb{E}\Bigg{[}K\bigg{(}\operatorname{Im}\bigg{[}\frac{N}{ \pi}\int_{-N^{-2/3+\epsilon}}^{sN^{-2/3}-\ell}m^{1}(\lambda_{-,t}+y+\mathrm{i }\eta)\bigg{]}\mathrm{d}y\bigg{)}\Bigg{]}+\mathcal{O}(N^{-\epsilon/9})\] \[\leq\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(Y))-\lambda _{-,t})\leq s\Big{)}=\mathbb{E}\Big{[}\mathbf{1}_{\mathcal{N}(sN^{-2/3}) \geq 1}\Big{]}\] \[\leq\mathbb{E}\Bigg{[}K\bigg{(}\operatorname{Im}\bigg{[}\frac{N} {\pi}\int_{-N^{-2/3+\epsilon}}^{sN^{-2/3}+\ell}m^{1}(\lambda_{-,t}+y+\mathrm{i }\eta)\bigg{]}\mathrm{d}y\bigg{)}\Bigg{]}+\mathcal{O}(N^{-\epsilon/9}).\] (B.2) Similarly, repeating the above arguments with \(\mathcal{S}(Y)\) replaced by \(\mathcal{S}(V_{t})\), we can also have \[\mathbb{E}\Bigg{[}K\bigg{(}\operatorname{Im}\bigg{[}\frac{N}{\pi }\int_{-N^{-2/3+\epsilon}}^{sN^{-2/3}-\ell}m^{0}(\lambda_{-,t}+y+\mathrm{i} \eta)\bigg{]}\mathrm{d}y\bigg{)}\Bigg{]}+\mathcal{O}(N^{-\epsilon/9})\] \[\leq\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(V_{t}))- \lambda_{-,t})\leq s\Big{)}\] \[\leq\mathbb{E}\Bigg{[}K\bigg{(}\operatorname{Im}\bigg{[}\frac{N} {\pi}\int_{-N^{-2/3+\epsilon}}^{sN^{-2/3}+\ell}m^{0}(\lambda_{-,t}+y+\mathrm{i }\eta)\bigg{]}\mathrm{d}y\bigg{)}\Bigg{]}+\mathcal{O}(N^{-\epsilon/9}).\] (B.3) Note that the conditional expectation \(\mathbb{E}_{\Psi}\) in (4.5) can be replaced by \(\mathbb{E}\) using the law of total expectation together with the fact that \(\Omega_{\Psi}\) holds with high probability. Therefore, we can combine (B.2) and (B.3) with (4.5) to obtain that \[\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(V_{t}))-\lambda_{ -,t})\leq s-2\ell N^{-2/3}\Big{)}+\mathcal{O}(N^{-\epsilon/9})\leq\mathbb{P} \Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(Y))-\lambda_{-,t})\leq s\Big{)}\] \[\leq\mathbb{P}\Big{(}N^{2/3}(\lambda_{M}(\mathcal{S}(V_{t}))- \lambda_{-,t})\leq s+2\ell N^{-2/3}\Big{)}+\mathcal{O}(N^{-\epsilon/9}).\] Now (4.6) follows by the fact that \(\ell N^{-2/3}\ll 1\). For (4.7), we first note by Theorem 2.14 that \[|\lambda_{-,t}-\lambda_{\text{shift}}|\leq N^{-2/3+\epsilon}\] (B.4) holds in probability. This together with Theorem 4.4 implies that \[|\lambda_{M}(\mathcal{S}(Y))-\lambda_{\text{shift}}|\leq N^{-2/3+\epsilon}\] also holds in probability. Then we may proceed similar to the proof of (4.6), but with all high probability estimates replaced by in probability estimates. 
It is worth noting that, during the derivation of (B.2) and (B.3), the error term \(\mathcal{O}(N^{-\epsilon/9})\) becomes \(\mathfrak{o}(1)\) because we lack a polynomial bound for the failure probability of (B.4). Finally, we can conclude the proof of (4.7) by using Theorem 4.5. ### Proof of Theorem 4.5 Proof.: To ease presentation, we show the proof of the following comparison instead: for any \(|E|\leq N^{-2/3+\epsilon}\), \[\Big{|}\mathbb{E}_{\Psi}\Big{(}F(N\eta_{0}\mathrm{Im}\,m^{1}(\lambda_{-,t}+E+\mathrm{i}\eta_{0}))\Big{)}-\mathbb{E}_{\Psi}\Big{(}F(N\eta_{0}\mathrm{Im}\,m^{0}(\lambda_{-,t}+E+\mathrm{i}\eta_{0}))\Big{)}\Big{|}\leq CN^{-\delta_{1}}.\] (B.5) The proof of (4.5) is similar, and thus we omit it. Using the same notation as in the proof of Theorem 4.3 and further defining \(h_{\gamma,(ij)}(\lambda,\beta)\equiv\eta_{0}\sum_{a}f_{\gamma,(aa),(ij)}(\lambda,\beta)\), we have \[\frac{\partial\mathbb{E}_{\Psi}\big{(}F(N\eta_{0}\mathrm{Im}\,m^{\gamma}(z_{t}))\big{)}}{\partial\gamma}=-2\Big{(}\sum_{i,j}(I_{1})_{ij}-(I_{2})_{ij}\Big{)},\] with \[(I_{1})_{ij} \equiv\mathbb{E}_{\Psi}\bigg{[}A_{ij}F^{\prime}\Big{(}h_{\gamma,(ij)}\big{(}[Y^{\gamma}]_{ij},X_{ij}\big{)}\Big{)}g_{(ij)}\big{(}[Y^{\gamma}]_{ij},X_{ij}\big{)}\bigg{]},\] \[(I_{2})_{ij} \equiv\frac{\gamma t^{1/2}}{(1-\gamma^{2})^{1/2}}\mathbb{E}_{\Psi}\bigg{[}w_{ij}F^{\prime}\Big{(}h_{\gamma,(ij)}\big{(}[Y^{\gamma}]_{ij},X_{ij}\big{)}\Big{)}g_{(ij)}\big{(}[Y^{\gamma}]_{ij},X_{ij}\big{)}\bigg{]}.\] We first consider the estimation for \((I_{1})_{ij}\). Notice that \((I_{1})_{ij}\) can be further decomposed as \[(I_{1})_{ij}=(I_{1})_{ij}\cdot\mathbf{1}_{\psi_{ij}=0}+(I_{1})_{ij}\cdot\mathbf{1}_{\psi_{ij}=1}=(I_{1})_{ij}\cdot\mathbf{1}_{\psi_{ij}=0},\] where in the last step we used the fact that \(A_{ij}\cdot\mathbf{1}_{\psi_{ij}=1}=0\).
Therefore, we only need to consider the case when \(\psi_{ij}=0\), and \((I_{1})_{ij}\) can be rewritten as \[(I_{1})_{ij}=\mathbb{E}_{\Psi}\bigg{[}(1-\chi_{ij})a_{ij}F^{\prime}\Big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\Big{)}g_{(ij)}(d_{ij},\chi_{ij}b_{ij})\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}.\] By Taylor expansion, for an \(s_{1}>0\) to be chosen later, there exists \(\tilde{d}_{ij}\in[0,d_{ij}]\) such that \[(I_{1})_{ij} =\sum_{k_{1}=0}^{s_{1}}\frac{1}{k_{1}!}\mathbb{E}_{\Psi}\bigg{[}(1-\chi_{ij})a_{ij}d_{ij}^{k_{1}}g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[\quad+\frac{1}{(s_{1}+1)!}\mathbb{E}_{\Psi}\bigg{[}(1-\chi_{ij})a_{ij}d_{ij}^{s_{1}+1}g_{(ij)}^{(s_{1}+1,0)}(\tilde{d}_{ij},\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[\equiv\sum_{k_{1}=0}^{s_{1}}(I_{1})_{ij,k_{1}}+\mathsf{Rem}_{1}.\] Using (5.26)-(5.28), the perturbation argument as in (5.10), and the fact that \(\mathrm{Im}\,m^{\gamma}(z_{t})\prec 1\), we have for any (small) \(\epsilon>0\) and (large) \(D>0\), \[\mathbb{P}_{\Psi}\bigg{(}\Omega_{\epsilon,1}:=\Big{\{}\big{|}g_{(ij)}^{(s_{1}+1,0)}(\tilde{d}_{ij},\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\big{|}\cdot\mathbf{1}_{\psi_{ij}=0}<t^{-s_{1}-2}N^{\epsilon}\Big{\}}\bigg{)}\geq 1-N^{-D}.\] Further, by the Gaussianity of \(w_{ij}\), we have \[\mathbb{P}_{\Psi}\bigg{(}\Omega_{\epsilon,2}:=\Big{\{}\max_{i\in[M],j\in[N]}|t^{1/2}w_{ij}|<N^{-1/2+\epsilon}\Big{\}}\bigg{)}\geq 1-N^{-D}.\] Let \(\Omega_{\epsilon}:=\Omega_{\epsilon,1}\cap\Omega_{\epsilon,2}\). Then \[|\mathsf{Rem}_{1}| \lesssim\mathbb{E}_{\Psi}\bigg{[}|(1-\chi_{ij})a_{ij}d_{ij}^{s_{1}+1}|\cdot\big{|}g_{(ij)}^{(s_{1}+1,0)}(\tilde{d}_{ij},\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\big{|}\cdot\mathbf{1}_{\Omega_{\epsilon}}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[\quad+\mathbb{E}_{\Psi}\bigg{[}|(1-\chi_{ij})a_{ij}d_{ij}^{s_{1}+1}|\cdot|g_{(ij)}^{(s_{1}+1,0)}(\tilde{d}_{ij},\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\big{|}\cdot\mathbf{1}_{\Omega_{\epsilon}^{c}}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[\overset{\rm(i)}{\lesssim}\mathbb{E}_{\Psi}\bigg{[}|(1-\chi_{ij})a_{ij}d_{ij}^{s_{1}+1}|\cdot\big{|}g_{(ij)}^{(s_{1}+1,0)}(\tilde{d}_{ij},\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\big{|}\cdot\mathbf{1}_{\Omega_{\epsilon}}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[\quad+N^{-D+C_{1}+2(s_{1}+3)}\] \[\overset{\rm(ii)}{\lesssim}\frac{N^{\epsilon}}{N^{1/2+\epsilon_{b}(s_{1}+1)}t^{s_{1}+2}},\] (B.6) where in \(\rm(i)\) we used the deterministic bound \(\big{|}g_{(ij)}^{(s_{1}+1,0)}(\tilde{d}_{ij},\chi_{ij}b_{ij})F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\big{|}\leq N^{C_{1}+2(s_{1}+3)}\) when \(\eta\geq N^{-2}\), and \(\rm(ii)\) is a consequence of the definition of \(\Omega_{\epsilon}\).
Choosing \(s_{1}\) sufficiently large, i.e., \(s_{1}>4/\epsilon_{b}\), and \(t\gg N^{-\epsilon_{b}/2}\), we can obtain \[|\mathsf{Rem}_{1}|\lesssim N^{-5/2}.\] For \((I_{1})_{ij,k_{1}}\), we need to further expand \(F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\) as follows: \[F^{\prime}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}=\sum_{k=0}^{s_{2}}\frac{d_{ij}^{k}}{k!}\frac{\partial^{k}F^{\prime}}{\partial d_{ij}^{k}}\big{(}h_{\gamma,(ij)}(0,\chi_{ij}b_{ij})\big{)}+\frac{d_{ij}^{s_{2}+1}}{(s_{2}+1)!}\frac{\partial^{s_{2}+1}F^{\prime}}{\partial d_{ij}^{s_{2}+1}}\big{(}h_{\gamma,(ij)}(\hat{d}_{ij},\chi_{ij}b_{ij})\big{)},\] where \(s_{2}\) is a positive integer to be chosen later, and \(\hat{d}_{ij}\in[0,d_{ij}]\). Then \((I_{1})_{ij,k_{1}}\) can be rewritten as \[(I_{1})_{ij,k_{1}}=\sum_{k_{2}=0}^{s_{2}}\frac{1}{k_{1}!k_{2}!}\mathbb{E}_{\Psi}\bigg{[}(1-\chi_{ij})a_{ij}d_{ij}^{k_{1}+k_{2}}g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,(ij)}(0,\chi_{ij}b_{ij})\big{)}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[+\frac{1}{k_{1}!(s_{2}+1)!}\mathbb{E}_{\Psi}\bigg{[}(1-\chi_{ij})a_{ij}d_{ij}^{k_{1}+s_{2}+1}g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})\frac{\partial^{s_{2}+1}F^{\prime}}{\partial d_{ij}^{s_{2}+1}}\big{(}h_{\gamma,(ij)}(\hat{d}_{ij},\chi_{ij}b_{ij})\big{)}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[\equiv\sum_{k_{2}=0}^{s_{2}}(I_{1})_{ij,k_{1}k_{2}}+\mathsf{Rem}_{2}.\] By Faà di Bruno's formula, we have for any integer \(n>0\), \[\frac{\partial^{n}F^{\prime}}{\partial d_{ij}^{n}}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}=\sum_{(m_{1},\cdots,m_{n})}\frac{n!}{m_{1}!m_{2}!\cdots m_{n}!}\cdot F^{(m_{1}+\cdots+m_{n}+1)}\big{(}h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\big{)}\] \[\times\prod_{\ell=1}^{n}\bigg{(}\frac{h_{\gamma,(ij)}^{(\ell)}(d_{ij},\chi_{ij}b_{ij})}{\ell!}\bigg{)}^{m_{\ell}}.\] (B.7) Considering (B.7), (5.26)-(5.28), and using the perturbation argument as described in (5.10), we arrive at the following result: \[\frac{\partial^{s_{2}+1}F^{\prime}}{\partial d_{ij}^{s_{2}+1}}\big{(}h_{\gamma,(ij)}(\hat{d}_{ij},\chi_{ij}b_{ij})\big{)}\sim\prod_{\ell=1}^{s_{2}+1}t^{-(\ell+1)m_{\ell}}\leq t^{-2(s_{2}+1)}.\] (B.8) Moreover, taking into account the fact that \(g_{(ij)}^{(k_{1})}(0)\prec t^{-(k_{1}+1)}\), we can deduce that \[|\mathsf{Rem}_{2}|\lesssim\frac{N^{\epsilon}}{N^{1/2+\epsilon_{b}(k_{1}+s_{2}+1)}t^{k_{1}+2(s_{2}+1)}}\lesssim N^{-5/2},\] where, for the final step, we have chosen \(s_{2}\geq 4/\epsilon_{b}\) and \(t\gg N^{-\epsilon_{b}/4}\). Next, we estimate \((I_{1})_{ij,k_{1}k_{2}}\) in different cases. **Case 1:** \(k_{1}+k_{2}\) is even. By the law of total expectation, \[(I_{1})_{ij,k_{1}k_{2}}\] \[=\frac{\mathbf{1}_{\psi_{ij}=0}}{k_{1}!k_{2}!}\sum_{n=0}^{1}\mathbb{E}_{\Psi}\bigg{[}(1-\chi_{ij})a_{ij}d_{ij}^{k_{1}+k_{2}}g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,(ij)}(0,\chi_{ij}b_{ij})\big{)}\bigg{|}\chi_{ij}=n\bigg{]}\mathbb{P}(\chi_{ij}=n)\] \[=\frac{\mathbf{1}_{\psi_{ij}=0}}{k_{1}!k_{2}!}\mathbb{E}_{\Psi}\bigg{[}a_{ij}d_{ij}^{k_{1}+k_{2}}\bigg{|}\chi_{ij}=0\bigg{]}\mathbb{E}_{\Psi}\bigg{[}g_{(ij)}^{(k_{1},0)}(0,0)\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,(ij)}(0,0)\big{)}\bigg{]}\mathbb{P}(\chi_{ij}=0)=0,\] (B.9) where the last step follows from the symmetry condition. **Case 2:** \(k_{1}+k_{2}\) is odd and \(k_{1}+k_{2}\geq 5\).
Similar to (B.9), we have \[|(I_{1})_{ij,k_{1}k_{2}}| \lesssim\bigg{|}\mathbb{E}_{\Psi}\bigg{[}a_{ij}d_{ij}^{k_{1}+k_{2 }}\bigg{|}\chi_{ij}=0\bigg{]}\bigg{|}\mathbb{E}_{\Psi}\bigg{[}|g_{(ij)}^{(k_{ 1},0)}(0,0)|\bigg{|}\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}} }\big{(}h_{\gamma,(ij)}(0,0)\big{)}\bigg{|}\bigg{|}\chi_{ij}=0\bigg{]}\mathbb{ P}(\chi_{ij}=0)\mathbf{1}_{\psi_{ij}=0}\] \[\lesssim\frac{1}{N^{2+2\epsilon_{a}+(k_{1}+k_{2}-3)\epsilon_{b}} }\mathbb{E}_{\Psi}\bigg{[}|g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})|\bigg{|} \frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,( ij)}(0,\chi_{ij}b_{ij})\big{)}\bigg{|}\bigg{]}\mathbf{1}_{\psi_{ij}=0,\chi_{ij}=0}.\] We may again obtain the bound \(|g_{(ij)}^{(k_{1})}(0,\chi_{ij}b_{ij})|\cdot\mathbf{1}_{\psi_{ij}=0,\chi_{ij} =0}\prec t^{-(k_{1}+1)}\) by (5.26)-(5.28), and the perturbation argument as described in (5.10). Using (i)equation (B.7) with \(d_{ij}\) replaced by \(0\), and (ii)the following rank inequality, \[|h_{\gamma,(ij)}(0,\chi_{ij}b_{ij})-h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})| \cdot\mathbf{1}_{\psi_{ij}=0,\chi_{ij}=0}\leq 2\eta_{0}\big{(}\|G_{(ij)}^{ \gamma,d_{ij}}(z_{t})\|+\|G_{(ij)}^{\gamma,0}(z_{t})\|\big{)}\mathbf{1}_{\psi_{ ij}=0,\chi_{ij}=0}\leq 2,\] (B.10) with the fact that \(h_{\gamma,(ij)}(d_{ij},\chi_{ij}b_{ij})\cdot\mathbf{1}_{\psi_{ij}=0,\chi_{ij} =0}\prec 1\), we can obtain that \[\bigg{|}\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{ \gamma,(ij)}(0,\chi_{ij}b_{ij})\big{)}\bigg{|}\cdot\mathbf{1}_{\psi_{ij}=0, \chi_{ij}=0}\prec t^{-2k_{2}}.\] (B.11) Combining the above estimates and choosing \(t\gg N^{-\epsilon_{b}/8}\), we arrive at \[|(I_{1})_{ij,k_{1}k_{2}}|\lesssim\frac{N^{\epsilon}}{N^{2+2\epsilon_{a}+(k_{1} +k_{2}-3)\epsilon_{b}}t^{k_{1}+1+2k_{2}}}\lesssim\frac{1}{N^{2+2\epsilon_{a}}}.\] **Case 3:**\(k_{1}+k_{2}=3\). The estimation in this case is similar to Case 2 above, but we need to use the bound \(g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})\prec 1\) when \(i\in\mathcal{T}_{r}\) and \(j\in\mathcal{T}_{c}\). Recall that \(|\mathcal{D}_{r}|\vee|\mathcal{D}_{c}|\leq N^{1-\epsilon_{d}}\). Then we have \[|(I_{1})_{ij,k_{1}k_{2}}| \lesssim\frac{N^{\epsilon}}{N^{2+2\epsilon_{a}}}\cdot\mathbf{1}_{ \psi_{ij}=0}\cdot\mathbf{1}_{i\in\mathcal{T}_{r},j\in\mathcal{T}_{c}}+\frac{1}{ N^{2-\epsilon_{d}}}\cdot\frac{N^{\epsilon}}{N^{2\epsilon_{a}+\epsilon_{d}}t^{k_{1}+1+2k_{2}}} \cdot\mathbf{1}_{\psi_{ij}=0}\cdot(1-\mathbf{1}_{i\in\mathcal{T}_{r},j\in \mathcal{T}_{c}})\] \[\lesssim\frac{1}{N^{2+\epsilon_{a}}}\cdot\mathbf{1}_{\psi_{ij}=0} \cdot\mathbf{1}_{i\in\mathcal{T}_{r},j\in\mathcal{T}_{c}}+\frac{1}{N^{2- \epsilon_{d}+\epsilon_{a}}}\cdot\mathbf{1}_{\psi_{ij}=0}\cdot(1-\mathbf{1}_{i \in\mathcal{T}_{r},j\in\mathcal{T}_{c}}),\] where in the last step, we used the fact \(t\gg N^{-\epsilon_{d}/8}\). **Case 4:**\(k_{1}+k_{2}=1\). In this case, using (B.9) we may compute that \[(I_{1})_{ij,k_{1}k_{2}}=\mathbb{E}_{\Psi}\big{[}\gamma a_{ij}^{2}\big{]}\cdot \mathbb{E}_{\Psi}\bigg{[}g_{(ij)}^{(k_{1},0)}(0,0)\frac{\partial^{k_{2}}F^{ \prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,(ij)}(0,0)\big{)}\bigg{|} \chi_{ij}=0\bigg{]}\cdot\mathbb{P}(\chi_{ij}=0)\cdot\mathbf{1}_{\psi_{ij}=0}.\] We note that there will be corresponding terms in \((I_{2})_{ij}\), and these terms will cancel out with the ones described above. 
Combining the estimates in the above cases, we can obtain that there exists some constant \(\delta_{1}=\delta_{1}(\epsilon_{a})\) such that \[\sum_{i,j}(I_{1})_{ij}=\sum_{i,j}\sum_{\begin{subarray}{c}k_{1},k_{2 }\geq 0,\\ k_{1}+k_{2}=1\end{subarray}}\mathbb{E}_{\Psi}\big{[}\gamma a_{ij}^{2}\big{]} \cdot\mathbb{E}_{\Psi}\bigg{[}g_{(ij)}^{(k_{1},0)}(0,0)\frac{\partial^{k_{2}}F ^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,(ij)}(0,0)\big{)}\bigg{]}\] \[\times\mathbb{P}(\chi_{ij}=0)\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{ O}(N^{-\delta_{1}}).\] (B.12) Next, we consider the estimation for \((I_{2})_{ij}\). When \(\psi_{ij}=1\), we can apply Gaussian integration by parts to obtain that \[|(I_{2})_{ij}\cdot\mathbf{1}_{\psi_{ij}=1}|\lesssim\frac{t^{1/2}}{N}\mathbb{E }_{\Psi}\bigg{[}\Big{|}\partial_{w_{ij}}\big{\{}g_{(ij)}(e_{ij},c_{ij})F^{ \prime}\big{(}h_{\gamma,(ij)}(e_{ij},c_{ij})\big{)}\big{\}}\Big{]}\bigg{]} \cdot\mathbf{1}_{\psi_{ij}=1}\lesssim\frac{N^{\epsilon}}{Nt}\cdot\mathbf{1}_ {\psi_{ij}=1},\] where the last step follows from (5.26)-(5.28). The estimation for \((I_{2})_{ij}\cdot\mathbf{1}_{\psi_{ij}=0}\) is similar to those of \((I_{1})_{ij}\), we omit repetitive details. In summary, with the independence between \(z_{t}\) and \(w_{ij}\), we have by possibly adjusting \(\delta_{1}\), \[\sum_{i,j}(I_{2})_{ij}=\sum_{i,j}(I_{2})_{ij}\cdot\mathbf{1}_{ \psi_{ij}=0}+\sum_{i,j}(I_{2})_{ij}\cdot\mathbf{1}_{\psi_{ij}=1}\] \[=\sum_{i,j}\sum_{\begin{subarray}{c}k_{1},k_{2}\geq 0,\\ k_{1}+k_{2}=1\end{subarray}}\mathbb{E}_{\Psi}\big{[}\gamma tw_{ij}^{2}\big{]} \mathbb{E}_{\Psi}\bigg{[}g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b_{ij})\frac{ \partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,(ij)}(0, \chi_{ij}b_{ij})\big{)}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{- \delta_{1}}).\] (B.13) Note by (5.17) and the choices of \(\epsilon_{a}\) and \(\epsilon_{b}\), we have \[\mathbb{E}_{\Psi}\big{[}\gamma a_{ij}^{2}\big{]}\mathbb{P}(\chi_{ij}=0)- \mathbb{E}_{\Psi}\big{[}\gamma tw_{ij}^{2}\big{]}=\mathcal{O}\bigg{(}\frac{t} {N^{2+2\epsilon_{b}}}\bigg{)}.\] This together with the \(t\) dependent bounds for \(g_{(ij)}^{(k_{1},0)}\) and \(\partial^{k_{2}}F^{\prime}/(\partial d_{ij}^{k_{2}})\) implies that it suffices to bound the following quantity: \[\mathsf{G}:=\bigg{(}\mathbb{E}_{\Psi}\bigg{[}g_{(ij)}^{(k_{1},0)}(0,\chi_{ij}b _{ij})\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{ \gamma,(ij)}(0,\chi_{ij}b_{ij})\big{)}\bigg{]}-\mathbb{E}_{\Psi}\bigg{[}g_{( ij)}^{(k_{1},0)}(0,0)\frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}} \big{(}h_{\gamma,(ij)}(0,0)\big{)}\bigg{]}\bigg{)}\cdot\mathbf{1}_{\psi_{ij}=0}\] To provide a more precise distinction between (B.12) and (B.13), we let \[\mathsf{F}_{k_{1},k_{2}}(z_{t}^{(ij)}(\beta)):=g_{(ij)}^{(k_{1})}(0,\beta) \frac{\partial^{k_{2}}F^{\prime}}{\partial d_{ij}^{k_{2}}}\big{(}h_{\gamma,( ij)}(0,\beta)\big{)}.\] Therefore, \[\mathsf{G}=\bigg{(}\mathbb{E}_{\Psi}\bigg{[}\mathsf{F}_{k_{1},k_{2}}\big{(}z_{ t}(\chi_{ij}b_{ij})\big{)}\bigg{]}-\mathbb{E}_{\Psi}\bigg{[}\mathsf{F}_{k_{1},k_{2}} \big{(}z_{t}(0)\big{)}\bigg{]}\bigg{)}\cdot\mathbf{1}_{\psi_{ij}=0}.\] We may apply Taylor expansion to obtain that \[\bigg{(}\mathbb{E}_{\Psi}\bigg{[}\mathsf{F}_{k_{1},k_{2}}\big{(}z _{t}(\chi_{ij}b_{ij})\big{)}\bigg{]}-\mathbb{E}_{\Psi}\bigg{[}\mathsf{F}_{k_{1},k_{2}}\big{(}z_{t}(0)\big{)}\bigg{]}\bigg{)}\cdot\mathbf{1}_{\psi_{ij}=0}\] 
\[=\mathbb{E}_{\Psi}\bigg{[}\chi_{ij}^{2}b_{ij}^{2}\mathsf{F}_{k_{1},k_{2}}^{\prime}\big{(}z_{t}(b)\big{)}\cdot\frac{\partial^{2}\lambda_{-,t}}{\partial B_{ij}^{2}}(b)\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}+\mathbb{E}_{\Psi}\bigg{[}\chi_{ij}^{2}b_{ij}^{2}\mathsf{F}_{k_{1},k_{2}}^{\prime\prime}\big{(}z_{t}(b)\big{)}\cdot\bigg{(}\frac{\partial\lambda_{-,t}}{\partial B_{ij}}(b)\bigg{)}^{2}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0},\] (B.14) with \(b\in[0,B_{ij}]\). Here the first-order term vanishes due to symmetry. To bound the above terms, we need to first verify that \(z_{t}(b)\) still lies inside \(\mathsf{D}\) (w.h.p.). This can be done by noting that the replacement matrix \(X_{(ij)}(b)\), obtained by replacing \(B_{ij}\) with \(b\) in \(X\), still satisfies the \(\eta_{*}\)-regularity. Therefore, by Weyl's inequality, \[|\lambda_{-,t}(\chi_{ij}b_{ij})-\lambda_{-,t}(b)| \prec|\lambda_{-,t}(\chi_{ij}b_{ij})-\lambda_{M}(\mathcal{S}(X))|+|\lambda_{M}(\mathcal{S}(X))-\lambda_{M}(\mathcal{S}(X_{(ij)}(b)))|\] \[+|\lambda_{M}(\mathcal{S}(X_{(ij)}(b)))-\lambda_{-,t}(b)|\prec N^{-2/3}+N^{-\epsilon_{b}}+N^{-2/3}\prec N^{-\epsilon_{b}}.\] (B.15) Applying the perturbation argument as in (5.10) to relate \(g_{(ij)}^{(k_{1})}(0,b)\) back to \(g_{(ij)}^{(k_{1})}(d_{ij},b)\), and then using (B.15) to verify that \(z_{t}^{(ij)}(b)\in\mathsf{D}\), we can see that the bound \(g_{(ij)}^{(k_{1})}(0,b)\prec t^{-(k_{1}+1)}\) still holds. Similarly, we can also obtain \(h_{\gamma,(ij)}^{(k_{2})}(0,b)\prec t^{-k_{2}}\) for \(k_{2}\geq 1\). For the case when \(k_{2}=0\), we may use (B.10) and the fact that \(N\eta_{0}\text{Im}\,m^{\gamma}(z_{t}^{(ij)}(b))\prec 1\) to conclude that \(h_{\gamma,(ij)}(0,b)\prec 1\). Combining the above bounds with a Cauchy integral argument, we have \[\mathsf{F}_{k_{1},k_{2}}^{\prime}\big{(}z_{t}(b)\big{)}\prec\frac{1}{\eta_{0}t^{2}},\quad\mathsf{F}_{k_{1},k_{2}}^{\prime\prime}\big{(}z_{t}(b)\big{)}\prec\frac{1}{\eta_{0}^{2}t^{2}}.\] Further using Lemma 5.4, we have the following for arbitrary (small) \(\epsilon>0\) and (large) \(D>0\). Since \[\chi_{ij}^{2}b_{ij}^{2}\Big{(}\mathsf{F}_{k_{1},k_{2}}^{\prime}\big{(}z_{t}(b)\big{)}\cdot\frac{\partial^{2}\lambda_{-,t}}{\partial B_{ij}^{2}}(b)+\mathsf{F}_{k_{1},k_{2}}^{\prime\prime}\big{(}z_{t}(b)\big{)}\cdot\bigg{(}\frac{\partial\lambda_{-,t}}{\partial B_{ij}}(b)\bigg{)}^{2}\Big{)}\cdot\mathbf{1}_{\psi_{ij}=0}\] \[=\Big{(}\mathsf{F}_{k_{1},k_{2}}\big{(}z_{t}(\chi_{ij}b_{ij})\big{)}-\mathsf{F}_{k_{1},k_{2}}\big{(}z_{t}(0)\big{)}\Big{)}\cdot\mathbf{1}_{\psi_{ij}=0}-\Big{(}\chi_{ij}b_{ij}\mathsf{F}_{k_{1},k_{2}}^{\prime}(z_{t}(0))\cdot\frac{\partial\lambda_{-,t}}{\partial B_{ij}}(0)\Big{)}\cdot\mathbf{1}_{\psi_{ij}=0},\] the deterministic upper bound for the left-hand side of the above equation follows from (5.19) in Lemma 5.4 and the fact that \(\operatorname{Im}z_{t}\geq N^{-1}\).
Then we may follow the steps as in (B.6) to obtain that \[\mathbb{E}_{\Psi}\bigg{[}\chi_{ij}^{2}b_{ij}^{2}\Big{(}\mathsf{F}_{k_{1},k_{2}}^{\prime}\big{(}z_{t}(b)\big{)}\cdot\frac{\partial^{2}\lambda_{-,t}}{\partial B_{ij}^{2}}(b)+\mathsf{F}_{k_{1},k_{2}}^{\prime\prime}\big{(}z_{t}(b)\big{)}\cdot\bigg{(}\frac{\partial\lambda_{-,t}}{\partial B_{ij}}(b)\bigg{)}^{2}\Big{)}\bigg{]}\cdot\mathbf{1}_{\psi_{ij}=0}\lesssim\frac{N^{\epsilon}}{N^{2}\eta_{0}t^{7}}.\] Therefore, with the fact that \(\mathbb{E}_{\Psi}[\gamma a_{ij}^{2}]\mathbb{P}(\chi_{ij}=0)\sim t\mathbb{E}_{\Psi}[\gamma w_{ij}^{2}]=\gamma t/N\), we have, by possibly adjusting \(\delta_{1}\), \[\Big{|}\sum_{i,j}(I_{1})_{ij}-(I_{2})_{ij}\Big{|}\leq\sum_{i,j}\frac{\gamma t}{N}\sum_{\begin{subarray}{c}k_{1},k_{2}\geq 0,\\ k_{1}+k_{2}=1\end{subarray}}\Big{|}\mathbb{E}_{\Psi}\Big{[}\mathsf{F}_{k_{1},k_{2}}\big{(}z_{t}(\chi_{ij}b_{ij})\big{)}\Big{]}-\mathbb{E}_{\Psi}\Big{[}\mathsf{F}_{k_{1},k_{2}}\big{(}z_{t}(0)\big{)}\Big{]}\Big{|}\mathbf{1}_{\psi_{ij}=0}+\mathcal{O}(N^{-\delta_{1}})=\mathcal{O}(N^{-\delta_{1}}).\] This together with the arguments as in (5.45)-(5.46) completes the proof of (B.5). The proof for the case \(\alpha=8/3\) closely parallels the argument above, and is in fact simpler, primarily due to the absence of randomness in \(\lambda_{\text{shift}}\). Thus we omit the details. This concludes the proof.
2302.03791
How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control
Score-based generative modeling, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks. While they provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of these sampling procedures for their responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-box predictor. In this work, we focus on image-to-image regression tasks and we present a generalization of the Risk-Controlling Prediction Sets (RCPS) procedure, that we term $K$-RCPS, which allows to $(i)$ provide entrywise calibrated intervals for future samples of any diffusion model, and $(ii)$ control a certain notion of risk with respect to a ground truth image with minimal mean interval length. Differently from existing conformal risk control procedures, ours relies on a novel convex optimization approach that allows for multidimensional risk control while provably minimizing the mean interval length. We illustrate our approach on two real-world image denoising problems: on natural images of faces as well as on computed tomography (CT) scans of the abdomen, demonstrating state of the art performance.
Jacopo Teneggi, Matthew Tivnan, J. Webster Stayman, Jeremias Sulam
2023-02-07T23:01:16Z
http://arxiv.org/abs/2302.03791v3
# How To Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control ###### Abstract Score-based generative modeling, informally referred to as _diffusion models_, continues to grow in popularity across several important domains and tasks. While these models provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of these sampling procedures for their responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-box predictor. In this work, we focus on image-to-image regression tasks and we present a generalization of the Risk-Controlling Prediction Sets (RCPS) procedure, that we term \(K\)-RCPS, which allows one to _(i)_ provide entrywise calibrated intervals for future samples of any diffusion model, and _(ii)_ control a certain notion of risk with respect to a ground truth image with minimal mean interval length. Differently from existing conformal risk control procedures, ours relies on a novel convex optimization approach that allows for multidimensional risk control while provably minimizing the mean interval length. We illustrate our approach on two real-world image denoising problems: on natural images of faces as well as on computed tomography (CT) scans of the abdomen, demonstrating state-of-the-art performance. ## 1 Introduction Generative modeling is one of the longest-standing tasks of classical and modern machine learning [12]. Recently, the foundational works by Song and Ermon [50], Song et al. [52], Pang et al. [43] on sampling via _score-matching_[26] and by Ho et al. [22] on _denoising diffusion models_[49] paved the way for a new class of _score-based_ generative models, which solve a reverse-time stochastic differential equation (SDE) [53, 2]. These models have proven remarkably effective on both unconditional (i.e., starting from random noise) and conditional (e.g., inpainting, denoising, super-resolution, or class-conditional) sample generation across a variety of fields [61, 18]. For example, score-based generative models have been applied to inverse problems in general computer vision and medical imaging [28, 31, 30, 59, 55, 54], 3D shape generation [62, 60, 42], and even in protein design [24, 17, 58, 27]. These strong empirical results highlight the potential of score-based generative models. However, they currently lack precise statistical guarantees on the distribution of the generated samples, which hinders their safe deployment in high-stakes scenarios [25]. For example, consider a radiologist who is shown a computed tomography (CT) scan of the abdomen of a patient reconstructed via a score-based generative model. How confident should they be of the fine-grained details of the presented image? Should they trust that the model has not _hallucinated_ some of the features (e.g., calcifications, blood vessels, or nodules) involved in the diagnostic process? Put differently, how different will future samples be from the presented image, and how far can we expect them to be from the ground truth image? In this work we focus on image-to-image regression problems, where we are interested in recovering a high-quality ground truth image given a low-quality observation. While our approach is general, we focus on the problem of image denoising as a running example.
We address the questions posed above on the reliability of score-based generative models (and, more generally, of any sampling procedure) through the lens of conformal prediction [44, 57, 38, 48, 3] and conformal risk control [10, 4, 5], which provide any black-box predictor with _distribution-free_, _finite-sample_ uncertainty guarantees. In particular, the contribution of this paper is three-fold: 1. Given a fixed score network, a low-quality observation, and any sampling procedure, we show how to construct valid entrywise calibrated intervals that provide _coverage_ of future samples, i.e. future samples (on the same observation) will fall within the intervals with high probability; 2. We introduce a novel high-dimensional conformal risk control procedure that minimizes the mean interval length directly, while guaranteeing that the number of pixels in the ground truth image that fall outside of these intervals is below a user-specified level on future, unseen low-quality observations; 3. We showcase our approach for denoising of natural face images as well as for computed tomography of the abdomen, achieving state-of-the-art results in mean interval length. Lastly, even though the contribution of this paper is presented and discussed in the context of score-based generative modeling for image-to-image regression problems, our results are broadly applicable to any sampling procedure, and we will comment on potential direct extensions where appropriate. All code and data necessary to reproduce the results presented in this work will be made publicly available at [https://github.com/Sulam-Group/k-rcps](https://github.com/Sulam-Group/k-rcps). ### Related work **Image-to-Image Conformal Risk Control.** Previous works have explored conformal risk control procedures for image-to-image regression tasks. In particular, Angelopoulos et al. [6] show how to construct set predictors from heuristic notions of uncertainty (e.g., quantile regression [34, 45]) for any image regressor, and how to calibrate the resulting intervals according to the original RCPS procedure of Bates et al. [10]. Kutiel et al. [35] move beyond set predictors and propose a mask-based conformal risk control procedure that allows for notions of distance between the ground truth and predicted images other than interval-based ones. Finally, and most closely related to this paper, Horwitz and Hoshen [25] sketch ideas of conformal risk control for diffusion models with the intention to integrate quantile regression and produce heuristic sampling bounds without the need to sample several times. Horwitz and Hoshen [25] also use the original RCPS procedure to guarantee risk control. Although similar in spirit, the contribution of this paper focuses on a high-dimensional generalization of the original RCPS procedure that formally minimizes the mean interval length. Our proposed procedure is agnostic of the notion of uncertainty chosen to construct the necessary set predictors. **High-dimensional Conformal Risk Control.** Existing works have considered multi-dimensional conformal risk control procedures for _multiple risks_[4, 36]. Angelopoulos et al. [4] introduce a general multiple hypothesis testing framework that can allow for high-dimensional risk control. Laufer-Goldshtein et al. [36] deploy these ideas in the context of multi-objective optimization and Pareto frontier exploration [32, 15].
Differently from existing literature, the focus of this paper is to study and characterize high-dimensional risk control for _a single risk_ such that it minimizes the mean interval length. ## 2 Background First, we briefly introduce the necessary notation and general background information. Herein, we will refer to images as vectors in \(\mathbb{R}^{d}\), such that \(\mathcal{X}\subset\mathbb{R}^{d}\) and \(\mathcal{Y}\subset\mathbb{R}^{d}\) indicate the spaces of high-quality ground truth images and low-quality observations, respectively. We assume both \(\mathcal{X}\) and \(\mathcal{Y}\) to be bounded. For a general image-to-image regression problem, given a pair \((x,y)\) drawn from an unknown distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\), the task is to retrieve \(x\in\mathcal{X}\) given \(y\in\mathcal{Y}\). This is usually carried out by means of a predictor \(f:\;\mathcal{Y}\to\mathcal{X}\) that minimizes some notion of distance (e.g., MSE loss) between the ground truth images and reconstructed estimates on a set \(\{(x_{i},y_{i})\}_{i=1}^{n}\sim\mathcal{D}^{n}\) of \(n\) pairs of high- and low-quality images. For example, in the classical denoising problem, one has \(y=x+v_{0}\) where \(v_{0}\sim\mathcal{N}(0,\sigma_{0}^{2})\) is random Gaussian noise with variance \(\sigma_{0}^{2}\), and one wishes to learn a denoiser \(f\) such that \(f(y)\approx x\). ### Score-based Conditional Sampling Most image-to-image regression problems are ill-posed: there exist several ground truth images that could have generated the same low-quality observation. This is easy to see for the classical denoising problem described above. Instead of a point predictor \(f\)--which could approximate a maximum-a-posteriori (MAP) estimate--one is often interested in devising a sampling procedure \(F:\;\mathcal{Y}\to\mathcal{X}\) for the posterior \(p(x|y)\), which precisely describes the distribution of possible ground truth images that generated the observation \(y\). In real-world scenarios, however, the full joint \((x,y)\) is unknown, and one must resort to approximating \(p(x|y)\) from finite data. It is known that for a general Itô process \(\mathrm{d}x=h(x,t)\,\mathrm{d}t+g(t)\,\mathrm{d}w\) that perturbs an input \(x\) into random noise [29], it suffices to know the _Stein score_\(\nabla_{x}\log p_{t}(x)\)[2, 39] to sample from \(p(x)\) via the reverse-time process \[\mathrm{d}x=[h(x,t)-g(t)^{2}\nabla_{x}\log p_{t}(x)]\,\mathrm{d}t+g(t)\,\mathrm{d}\bar{w}, \tag{1}\] where \(h(x,t)\) and \(g(t)\) are a _drift_ and _diffusion_ term, respectively, and \(\mathrm{d}w\) and \(\mathrm{d}\bar{w}\) are forward- and reverse-time standard Brownian motion.1 Furthermore, if the likelihood \(p(y|x)\) is known--which is usually the case for image-to-image regression problems--it is possible to condition the sampling procedure on an observation \(y\). Specifically, by Bayes' rule, it follows that \(\nabla_{x}\log p_{t}(x|y)=\nabla_{x}\log p_{t}(y|x)+\nabla_{x}\log p_{t}(x)\), which can be plugged into the reverse-time SDE in Equation (1) to sample from \(p(x|y)\). Footnote 1: We will assume time \(t\) continuous in \([0,1]\). Recent advances in generative modeling by Song and Ermon [50], Song et al. [53] showed that one can efficiently train a _time-conditional_ score network \(s(\tilde{x},t)\) to approximate the score \(\nabla_{x}\log p_{t}(\tilde{x})\) via _denoising score-matching_[26].
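To make the conditional sampling step concrete, the following is a minimal sketch of an Euler-Maruyama discretization of the conditional reverse-time SDE in Equation (1), with the conditional score assembled via Bayes' rule as described above. The drift `h`, diffusion `g`, learned score `score_net`, and likelihood score `likelihood_score` are passed in as placeholder callables (assumptions for illustration); this is a sketch, not the sampler used in the paper.

```python
import numpy as np

def conditional_reverse_sde_sample(y, score_net, likelihood_score, h, g,
                                   n_steps=1000, rng=None):
    """Euler-Maruyama sketch of the conditional reverse-time SDE of Eq. (1).

    score_net(x, t)           ~ grad_x log p_t(x)        (learned score network)
    likelihood_score(x, y, t) ~ grad_x log p_t(y | x)    (known likelihood term)
    h(x, t), g(t)             : drift and diffusion of the forward SDE.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = y.size
    dt = 1.0 / n_steps
    x = rng.standard_normal(d)                # initialize from noise at t = 1
    for i in range(n_steps):
        t = 1.0 - i * dt                      # integrate from t = 1 down to 0
        score = score_net(x, t) + likelihood_score(x, y, t)   # Bayes' rule
        drift = h(x, t) - g(t) ** 2 * score
        x = x - drift * dt + g(t) * np.sqrt(dt) * rng.standard_normal(d)
    return x
```

Repeated calls to such a sampler on the same observation \(y\) yield the independent draws from \(F(y)\) that are used for calibration later on.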
In this way, given a forward-time SDE that models the observation process, a score network \(s(\tilde{x},t)\approx\nabla_{x}\log p_{t}(\tilde{x})\), and the likelihood term \(p(y|\tilde{x})\), one can sample from \(p(x|y)\) by solving the conditional reverse-time SDE with any discretization (e.g., Euler-Maruyama) or predictor-corrector scheme [53]. While these models perform remarkably well in practice, limited guarantees exist on the distributions that they sample from [37]. Instead, we will provide guarantees for diffusion models by leveraging ideas of conformal prediction and conformal risk control, which we now introduce. ### Conformal Prediction Conformal prediction has a rich history in mathematical statistics [57, 44, 56, 8, 9, 21].2 It comprises various methodologies to construct finite-sample, statistically valid uncertainty guarantees for general predictors without making any assumption on the distribution of the response (i.e., they are distribution-free). In particular, these methods construct valid prediction sets that provide coverage, which we now define. **Definition 2.1** (Coverage [48]).: Let \(z_{1},\ldots,z_{m},z_{m+1}\) be \(m+1\) exchangeable random variables drawn from the same unknown distribution \(\mathcal{Q}\) over \(\mathcal{Z}\). For a desired miscoverage level \(\alpha\in[0,1]\), a set \(\mathcal{C}\subseteq 2^{\mathcal{Z}}\) that only depends on \(z_{1},\ldots,z_{m}\) provides coverage if \[\mathbb{P}[z_{m+1}\in\mathcal{C}]\geq 1-\alpha. \tag{2}\] We remark that the notion of coverage defined above was introduced in the context of classification problems, where one is interested in guaranteeing that the true, unseen label of a future sample will be in the prediction set \(\mathcal{C}\) with high probability. It is immediate to see how conformal prediction conveys a very precise notion of uncertainty--the larger \(\mathcal{C}\) has to be in order to guarantee coverage, the more _uncertain_ the underlying predictor. We refer the interested reader to [48, 3] for classical examples of conformal prediction. In many scenarios (e.g., regression), the natural notion of uncertainty may be different from miscoverage as described above (e.g., \(\ell_{2}\) norm). We now move on to presenting conformal risk control, which extends coverage to a broader family of notions of risk. ### Conformal Risk Control Let \(\mathcal{I}:\ \mathcal{Y}\to\mathcal{X}^{\prime}\) be a general _set-valued_ predictor from \(\mathcal{Y}\) into \(\mathcal{X}^{\prime}\subseteq 2^{\mathcal{X}}\). Consider a nonnegative loss \(\ell:\ \mathcal{X}\times\mathcal{X}^{\prime}\to\mathbb{R}\) measuring the discrepancy between a ground truth \(x\) and the predicted intervals \(\mathcal{I}(y)\). We might be interested in guaranteeing that this loss will be below a certain tolerance \(\epsilon\geq 0\) on future, unseen samples \(y\)--for which we do not know the ground truth \(x\)--with high probability. Conformal risk control [10, 4, 5] extends ideas of conformal prediction in order to select a specific predictor \(\mathcal{I}\) that controls the risk, \(\mathbb{E}[\ell(x,\mathcal{I}(y))]\), in the following sense. **Definition 2.2** (Risk Controlling Prediction Sets).: Let \(\mathcal{S}_{\mathrm{cal}}=\{(x_{i},y_{i})\}_{i=1}^{n}\sim\mathcal{D}^{n}\) be a calibration set of \(n\) i.i.d. samples from an unknown distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\).
For a desired risk level \(\epsilon\geq 0\) and a failure probability \(\delta\in[0,1]\), a random set-valued predictor \(\mathcal{I}:\ \mathcal{Y}\to\mathcal{X}^{\prime}\subseteq 2^{\mathcal{X}}\) is an \((\epsilon,\delta)\)-RCPS w.r.t. a loss function \(\ell:\ \mathcal{X}\times\mathcal{X}^{\prime}\to\mathbb{R}\) if \[\mathbb{P}_{\mathcal{S}_{\mathrm{cal}}}[\mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(x,\mathcal{I}(y))]\leq\epsilon]\geq 1-\delta. \tag{3}\] Bates et al. [10] introduced the first conformal risk control procedure for _monotonically nonincreasing_ loss functions, those that satisfy, for a fixed \(x\), \[\mathcal{I}(y)\subset\mathcal{I}^{\prime}(y)\implies\ell(x,\mathcal{I}^{\prime}(y))\leq\ell(x,\mathcal{I}(y)). \tag{4}\] In this way, increasing the size of the sets cannot increase the value of the loss. Furthermore, assume that for a fixed input \(y\) the family of set predictors \(\{\mathcal{I}_{\lambda}(y)\}_{\lambda\in\Lambda}\), indexed by \(\lambda\in\Lambda\), \(\Lambda\subset\overline{\mathbb{R}}\coloneqq\mathbb{R}\cup\{\pm\infty\}\), satisfies the following nesting property [21] \[\lambda_{1}<\lambda_{2}\implies\mathcal{I}_{\lambda_{1}}(y)\subset\mathcal{I}_{\lambda_{2}}(y). \tag{5}\] Denote by \(R(\lambda)=\mathbb{E}[\ell(x,\mathcal{I}_{\lambda}(y))]\) the risk of \(\mathcal{I}_{\lambda}(y)\) and by \(\hat{R}(\lambda)\) its empirical estimate over a calibration set \(\mathcal{S}_{\mathrm{cal}}=\{(x_{i},y_{i})\}_{i=1}^{n}\). Finally, let \(\hat{R}^{+}(\lambda)\) be a _pointwise upper confidence bound_ (UCB) that covers the risk, that is \[\mathbb{P}[R(\lambda)\leq\hat{R}^{+}(\lambda)]\geq 1-\delta \tag{6}\] for _each_ fixed value of \(\lambda\)--such a bound can be derived by means of concentration inequalities (e.g., Hoeffding's inequality [23], Bentkus' inequality [11], or their hybridization [10]).3 With these elements, Bates et al. [10] show that choosing Footnote 3: We stress that Equation (6) does _not_ imply _uniform coverage_ \(\forall\lambda\in\Lambda\). \[\hat{\lambda}=\inf\{\lambda\in\Lambda:\ \hat{R}^{+}(\lambda^{\prime})<\epsilon,\ \forall\lambda^{\prime}\geq\lambda\} \tag{7}\] guarantees that \(\mathcal{I}_{\hat{\lambda}}(y)\) is an \((\epsilon,\delta)\)-RCPS according to Definition 2.2. In other words, choosing \(\hat{\lambda}\) as the smallest \(\lambda\) such that the UCB is below the desired level \(\epsilon\) for all values of \(\lambda\geq\hat{\lambda}\) controls the risk at level \(\epsilon\) with probability at least \(1-\delta\). For the sake of completeness, we include the original conformal risk control procedure in Algorithm 2 in Appendix B. Equipped with these general concepts, we now move on to presenting the contributions of this work. ## 3 How to Trust your Diffusion Model We now go back to the main focus of this paper: solving image-to-image regression problems with diffusion models. Rather than a point-predictor \(f:\;\mathcal{Y}\to\mathcal{X}\), we assume access to a stochastic sampling procedure \(F:\;\mathcal{Y}\to\mathcal{X}\) such that \(F(y)\) is a random variable with unknown distribution \(\mathcal{Q}_{y}\)--that hopefully approximates the posterior distribution of \(x\) given \(y\), i.e. \(\mathcal{Q}_{y}\approx p(x|y)\). However, we make no assumptions on the quality of this approximation for our results to hold. As described in Section 2.1, \(F\) can be obtained by means of a time-conditional score network \(s(\tilde{x},t)\) and a reverse-time SDE.
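Before specializing to diffusion models, it may help to see the baseline selection rule of Equation (7) in Section 2.3 spelled out. The sketch below assumes a loss bounded in \([0,1]\), a simple Hoeffding-style UCB, and a user-supplied function returning the empirical risk for a scalar \(\lambda\); the grid search and names are illustrative and do not reproduce Algorithm 2.

```python
import numpy as np

def hoeffding_ucb(r_hat, n, delta):
    # Pointwise (1 - delta) upper confidence bound for a loss bounded in [0, 1].
    return r_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def rcps_lambda_hat(empirical_risk, lambdas, n, delta, eps):
    """Scan lambda from largest to smallest and keep the smallest lambda such
    that the UCB stays below eps for all larger lambdas (Equation (7))."""
    lam_hat = None
    for lam in sorted(lambdas, reverse=True):
        if hoeffding_ucb(empirical_risk(lam), n, delta) < eps:
            lam_hat = lam          # still safe: keep shrinking the intervals
        else:
            break                  # UCB exceeded eps; stop shrinking
    return lam_hat
```

Scanning from large to small \(\lambda\) and stopping at the first violation realizes the infimum in Equation (7) over the chosen grid.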
While our results are applicable to _any_ sampling procedure, we present them in the context of diffusion models because of their remarkable empirical results and increasing use in critical applications [61, 18]. One can identify three separate sources of randomness in a general stochastic image-to-image regression problem: _(i)_ the unknown prior \(p(x)\) over the space of ground-truth images, as \(x\sim p(x)\), _(ii)_ the randomness in the observation process of \(y\) (which can be modeled by a forward-time SDE over \(x\)), and finally _(iii)_ the stochasticity in the sampling procedure \(F(y)\). We will first provide conformal prediction guarantees for a fixed observation \(y\), and then move onto conformal risk control for the ground truth image \(x\). ### Calibrated Quantiles for Future Samples Given the same low-quality (e.g., noisy) observation \(y\), where will future unseen samples from \(F(y)\sim\mathcal{Q}_{y}\) fall? How concentrated will they be? Denote \(\mathcal{I}:\;\mathcal{Y}\to\mathcal{X}^{\prime}\) a (random) set-valued predictor from \(\mathcal{Y}\subset\mathbb{R}^{d}\) into a space of sets \(\mathcal{X}^{\prime}\subseteq 2^{\mathcal{X}}\) over \(\mathcal{X}\subset\mathbb{R}^{d}\) (e.g., \(\mathcal{X}=[0,1]^{d}\), \(\mathcal{X}^{\prime}\subseteq 2^{[0,1]^{d}}\)). We extend the notion of coverage in Definition 2.1 to _entrywise coverage_, which we now make precise. **Definition 3.1** (Entrywise coverage).: Let \(z_{1},\ldots,z_{m},z_{m+1}\) be \(m+1\) exchangeable random vectors drawn from the same unknown distribution \(\mathcal{Q}\) over \(\mathcal{X}\subset\mathbb{R}^{d}\). For a desired miscoverage level \(\alpha\in[0,1]\), a set \(\mathcal{I}\subseteq 2^{\mathcal{X}}\) that only depends on \(z_{1},\ldots,z_{m}\) provides entrywise coverage if \[\mathbb{P}[(z_{m+1})_{j}\in\mathcal{I}_{j}]\geq 1-\alpha \tag{8}\] for each \(j\in[d]\coloneqq\{1,\ldots,d\}\). We stress that the definition above is different from notions of _vector quantiles_[13, 16] in the sense that coverage is not guaranteed over the entire new random vector \(z_{m+1}\) but rather along each dimension independently. Ideas of vector quantile regression (VQR) are complementary to the contribution of the current work and subject of ongoing research [20, 14, 47]. For a fixed observation \(y\), we use conformal prediction to construct a set predictor that provides entrywise coverage. **Lemma 3.2** (Calibrated quantiles guarantee entrywise coverage).: _Let \(F:\;\mathcal{Y}\to\mathcal{X}\) be a stochastic sampling procedure from \(\mathcal{Y}\subset\mathbb{R}^{d}\) into \(\mathcal{X}\subset\mathbb{R}^{d}\). Given \(y\in\mathcal{Y}\), let \(F_{1},\ldots,F_{m},F_{m+1}\) be \(m+1\) i.i.d. samples from \(F(y)\). For a desired miscoverage level \(\alpha\in[0,1]\) and for each \(j\in[d]\), let \(\hat{l}_{j,\alpha},\hat{u}_{j,\alpha}\) be the \(\lfloor(m+1)\alpha/2\rfloor/m\) and \(\lceil(m+1)(1-\alpha/2)\rceil/m\) entrywise empirical quantiles of \(F_{1},\ldots,F_{m}\). Then,_ \[\mathcal{I}^{\alpha}(y)_{j}=[\hat{l}_{j,\alpha},\hat{u}_{j,\alpha}] \tag{9}\] _provides entrywise coverage._ The simple proof of this result is included in Appendix A.1. We remark that, analogously to previous works [6, 25], the intervals in \(\mathcal{I}^{\alpha}(y)\) are _feature-dependent_ and they capture regions of the image where the sampling process \(F(y)\) may have larger uncertainty. The intervals in \(\mathcal{I}^{\alpha}(y)\) are statistically valid for any number of samples \(m\) and any distribution \(\mathcal{Q}_{y}\), i.e. 
they are not a heuristic notion of uncertainty. If the sampling procedure \(F\) is a diffusion model, constructing \(\mathcal{I}^{\alpha}(y)\) is agnostic of the discretization scheme used to solve the reverse-time SDE [53] and it does not require retraining the underlying score network, which can be a time-consuming and delicate process, especially when the size of the images is considerable. On the other hand, constructing the intervals \(\mathcal{I}^{\alpha}(y)\) requires sampling a large enough number of times from \(F(y)\), which may seem cumbersome [25]. This is by construction and intention: diffusion models are indeed very useful in providing good (and varied) samples from the approximate posterior. Indeed, practitioners typically sample several realizations to get an empirical study of this distribution. In these settings, constructing the intervals \(\mathcal{I}^{\alpha}(y)\) does not involve any additional computational costs. Furthermore, note that sampling is completely parallelizable, and so no extra complexity is incurred if a larger number of computing nodes is available. ### A Provable Approach to Optimal Risk Control In this section, we will revisit the main ideas around conformal risk control introduced in Section 2.3 and generalize them into our proposed approach, \(K\)-RCPS. Naturally, one would like a good conformal risk control procedure to yield the shortest possible interval lengths. Assume pixel intensities are normalized to lie in \([0,1]\) and consider the loss function \[\ell^{01}(x,\mathcal{I}(y))=\frac{1}{d}\sum_{j\in[d]}\mathbbm{1}[x_{j}\notin\mathcal{I}(y)_{j}], \tag{10}\] which counts the (average) number of ground truth pixels that fall outside of their respective intervals in \(\mathcal{I}(y)\). The constant set-valued predictor \(\mathcal{U}(y)=[0,1]^{d}\) would trivially control the risk, i.e. \(R^{01}(\lambda)=\mathbb{E}[\ell^{01}(x,\mathcal{U}(y))]=0\). Alas, such a predictor would be completely uninformative. Instead, let \(\{\mathcal{I}_{\lambda}(y)\}_{\lambda\in\Lambda}\), \(\Lambda\subset\overline{\mathbb{R}}\) be a family of predictors that satisfy the nesting property in Equation (5). In particular, we propose the following additive parametrization in \(\lambda\) \[\mathcal{I}_{\lambda}(y)_{j}=[\hat{l}_{j}-\lambda,\hat{u}_{j}+\lambda] \tag{11}\] for some lower and upper endpoints \(\hat{l}_{j}<\hat{u}_{j}\) that may depend on \(y\). For this particularly chosen family of nested predictors, it follows that the mean interval length is \[\bar{I}(\lambda)=\frac{1}{d}\sum_{j\in[d]}(\hat{u}_{j}-\hat{l}_{j})+2\lambda, \tag{12}\] a linear function of \(\lambda\). Moreover, we can instantiate \(\hat{l}_{j}\) and \(\hat{u}_{j}\) to be the calibrated quantiles with entrywise coverage, i.e. \(\mathcal{I}^{\alpha}_{\lambda}(y)=[\hat{l}_{j,\alpha}-\lambda,\hat{u}_{j,\alpha}+\lambda]\). For such a class of predictors--since \(\ell^{01}\) is monotonically nonincreasing--the original RCPS procedure (see Equation (7)) is equivalent to the following constrained optimization problem \[\hat{\lambda}=\operatorname*{arg\,min}_{\lambda\in\Lambda}\;\bar{I}(\lambda)\qquad\text{s.t.}\quad\hat{R}^{01+}(\lambda^{\prime})<\epsilon,\;\forall\lambda^{\prime}\geq\lambda\] (P1) which naturally minimizes \(\lambda\). However, optimizing the mean interval length over a single scalar parameter \(\lambda\) is suboptimal in general, as shown in Figure 1.
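As a concrete illustration of the quantities defined so far, the sketch below computes the entrywise calibrated endpoints of Equation (9) from an \((m,d)\) array of samples of \(F(y)\) and evaluates the empirical \(\ell^{01}\) loss of Equation (10) for the additive intervals of Equation (11); array shapes and names are assumptions for illustration.

```python
import numpy as np

def entrywise_calibrated_quantiles(samples, alpha):
    """samples: (m, d) array of i.i.d. draws from F(y). Returns (l_hat, u_hat)."""
    m = samples.shape[0]
    q_lo = np.floor((m + 1) * alpha / 2) / m
    q_hi = min(np.ceil((m + 1) * (1 - alpha / 2)) / m, 1.0)
    l_hat = np.quantile(samples, q_lo, axis=0)   # entrywise lower endpoints
    u_hat = np.quantile(samples, q_hi, axis=0)   # entrywise upper endpoints
    return l_hat, u_hat

def l01_loss(x, l_hat, u_hat, lam):
    """ell^01 loss of Eq. (10) for the intervals [l_hat - lam, u_hat + lam]."""
    outside = (x < l_hat - lam) | (x > u_hat + lam)
    return outside.mean()
```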
With abuse of notation--we do not generally refer to vectors with boldface--let \(\{\mathcal{I}_{\lambda}(y)\}_{\lambda\in\Lambda^{d}}\) be a family of predictors indexed by a \(d\)-dimensional _vector_\(\mathbf{\lambda}=(\lambda_{1},\ldots,\lambda_{d})\) that satisfies the nesting property in Equation (5) in an entrywise fashion. A natural extension of Equation (11) is then \[\mathcal{I}_{\mathbf{\lambda}}(y)_{j}=[\hat{l}_{j}-\lambda_{j},\hat{u}_{j}+\lambda_ {j}], \tag{13}\] from which one can define an equivalent function \(\bar{I}(\mathbf{\lambda})\). In particular, using the calibrated intervals as before, define \[\mathcal{I}_{\mathbf{\lambda}}^{\alpha}(y)_{j}=[\hat{l}_{j,\alpha}-\lambda_{j}, \hat{u}_{j,\alpha}+\lambda_{j}]. \tag{14}\] Now note that \(\ell^{01}(x,\mathcal{I}_{\mathbf{\lambda}}(y))\) is entrywise monotonically nonincreasing. Hence, for each, fixed vector \(\mathbf{\eta}\in\mathbb{R}^{d}\) in the positive orthant (i.e., \(\mathbf{\eta}\geq 0\)), the \(d\)-dimensional extension of (P1) becomes \[\hat{\mathbf{\lambda}}=\underset{\mathbf{\lambda}\in\Lambda^{d}}{\text{arg min}}\ \sum_{j\in[d]}\lambda_{j}\qquad\text{s.t.}\quad\hat{R}^{01+}(\mathbf{\lambda}+\beta \mathbf{\eta})<\epsilon,\ \forall\beta\geq 0\] (P \[d\] ) Intuitively, \(\hat{\mathbf{\lambda}}\) minimizes the sum of its entries such that the UCB is smaller than \(\epsilon\) for all points _to its right_ along the direction of \(\mathbf{\eta}\) parametrized by \(\beta\). We now show a general high-dimensional risk control result that holds for any entrywise monotonically nonincreasing loss function \(\ell\) (and not just \(\ell^{01}\) as presented in (P\(d\))) with risk \(R(\mathbf{\lambda})\), empirical estimate \(\hat{R}(\mathbf{\lambda})\) and respective UCB \(\hat{R}^{+}(\mathbf{\lambda})\). **Theorem 3.3** (High-dimensional conformal risk control).: _Let \(\ell:\ \mathcal{X}\times\mathcal{X}^{\prime}\to\mathbb{R}\), \(\mathcal{X}^{\prime}\subseteq 2^{\mathcal{X}}\), \(\mathcal{X}\subset\mathbb{R}^{d}\) be an entrywise monotonically nonincreasing function and let \(\{\mathcal{I}_{\mathbf{\lambda}}(y)=[\hat{l}_{j}-\lambda_{j},\hat{u}_{j}+\lambda_ {j}]\}_{\mathbf{\lambda}\in\Lambda^{d}}\) be a family of set-valued predictors \(\mathcal{I}:\ \mathcal{Y}\to\mathcal{X}^{\prime}\), \(\mathcal{Y}\subset\mathbb{R}^{d}\) indexed by \(\mathbf{\lambda}\in\Lambda^{d}\), \(\Lambda\subset\overline{\mathbb{R}}\) for some lower and upper endpoints \(\hat{l}_{j}<\hat{u}_{j}\) that may depend on \(y\). For a fixed vector \(\mathbf{\eta}\in\mathbb{R}^{d}\), \(\mathbf{\eta}\geq 0\), if_ \[\hat{\mathbf{\lambda}}=\underset{\mathbf{\lambda}\in\Lambda^{d}}{\text{arg min}}\ \sum_{j\in[d]}\lambda_{j}\qquad\text{s.t.}\quad\hat{R}^{+}(\mathbf{\lambda}+\beta \mathbf{\eta})<\epsilon,\ \forall\beta\geq 0, \tag{15}\] _then \(\mathcal{I}_{\hat{\mathbf{\lambda}}}(y)\) is an \((\epsilon,\delta)\)-RCPS._ The proof is included in Appendix A.2. Since \(\ell^{01}\) is entrywise monotonically nonincreasing, it follows that the solution to (P\(d\)) controls risk. The attentive reader will have noticed (as shown in Figure 1) that the constraint set \(\hat{R}^{01+}(\mathbf{\lambda})\leq\epsilon\) need not be convex. Furthermore, and as shown in Figure 1(b), \(\ell^{01}\) is not convex in \(\mathbf{\lambda}\). Hence, it is not possible to optimally solve (P_d_) directly. 
Instead, we relax it to a convex optimization problem by means of a convex upper bound \[\ell^{\gamma}(x,\mathcal{I}_{\mathbf{\lambda}}(y))=\frac{1}{d}\sum_{j\in[d]}\left[\frac{2(1+q)}{I(\mathbf{\lambda})_{j}}|x_{j}-c_{j}|-q\right]_{+}, \tag{16}\] where \(q=\gamma/(1-\gamma)\), \(\gamma\in[0,1)\), \(I(\mathbf{\lambda})_{j}=\hat{u}_{j}-\hat{l}_{j}+2\lambda_{j}\), \(c_{j}=(\hat{u}_{j}+\hat{l}_{j})/2\), and \([u]_{+}=\max(0,u)\). We show that \(\ell^{\gamma}(x,\mathcal{I}_{\mathbf{\lambda}}(y))\) is convex in \(\mathbf{\lambda}\) for \(\mathbf{\lambda}\geq 0\) in Appendix A.3. As shown in Figure 1(a), the hyperparameter \(\gamma\) controls the degree of relaxation by means of changing the portion of the intervals \([\hat{l}_{j},\hat{u}_{j}]\) where the loss is \(0\). This way, \(\gamma=0\) retrieves the \(\ell_{1}\) loss centered at \(c_{j}\), and \(\lim_{\gamma\to 1}\ell^{\gamma}=\infty\) if \(\exists j\in[d]:\ x_{j}\notin[\hat{l}_{j},\hat{u}_{j}]\) and \(0\) otherwise. While one can readily write a convex alternative to (P_d_) by means of this new loss, we instead propose a generalization of this idea in our final problem formulation \[\widetilde{\mathbf{\lambda}}_{K}=\underset{\mathbf{\lambda}\in\Lambda^{K}}{\text{arg min}}\ \sum_{k\in[K]}n_{k}\lambda_{k}\qquad\text{s.t.}\quad\hat{R}^{\gamma}(M\mathbf{\lambda})\leq\epsilon,\ \mathbf{\lambda}\geq 0\] ( \[\text{P}K\] ) for any user-defined \(K\)-partition of the \([d]\) features--which can be identified by a membership matrix \(M\in\{0,1\}^{d\times K}\) where each feature belongs to one and only one of \(K\) classes, and \(n_{k}\coloneqq|\{j\in[d]:\ M_{jk}=1\}|\), \(\sum_{k\in[K]}n_{k}=d\). As we will shortly see, it will be useful to choose \(M\) so that it assigns each feature to its respective \(k^{\text{th}}\) empirical quantile of \(\hat{R}^{\gamma}(\mathbf{0})\). We remark that the constraint set in (P_K_) is defined on the empirical estimate of the risk of \(\mathcal{I}_{\mathbf{\lambda}}(y)\) and it does not involve the computation of the UCB. Then, (P_K_) can be solved with any standard off-the-shelf convex optimization software (e.g., CVXPY [19, 1], MOSEK [7]). Our novel conformal risk control procedure, \(K\)-RCPS, finds a vector \(\hat{\mathbf{\lambda}}_{K}\) that approximates a solution to the nonconvex optimization problem (P_d_) via a two-step procedure: 1. First obtaining the solution \(\widetilde{\mathbf{\lambda}}_{K}\) to a user-defined (P_K_) problem, and then 2. Choosing \(\hat{\beta}\in\Lambda\) such that \[\hat{\beta}=\inf\{\beta\in\Lambda:\ \hat{R}^{01+}(M\widetilde{\mathbf{\lambda}}_{K}+\beta^{\prime}\mathbf{1})<\epsilon,\ \forall\beta^{\prime}\geq\beta\}\] and \(\hat{\mathbf{\lambda}}_{K}=M\widetilde{\mathbf{\lambda}}_{K}+\hat{\beta}\mathbf{1}\). Intuitively, the \(K\)-RCPS algorithm is equivalent to performing the original RCPS procedure along the line \(M\widetilde{\mathbf{\lambda}}_{K}+\beta\mathbf{1}\) parametrized by \(\beta\). We remark that--as noted in Theorem 3.3--any choice of \(\mathbf{\eta}\geq 0\) provides a valid direction along which to perform the RCPS procedure. Here, we choose \(\mathbf{1}\) because it is precisely the gradient of the objective function. Future work entails devising more sophisticated algorithms to approximate the solution of (P_d_).
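The first step of this procedure amounts to a small convex program. Below is a minimal sketch of (P\(K\)) using CVXPY, with the convex loss of Equation (16) written via the `inv_pos` and `pos` atoms. The array names, the dense loop over calibration samples, and the omission of the pixel subsampling and of the second step (the one-dimensional \(\hat{\beta}\) search) are simplifications for illustration, not the released implementation.

```python
import numpy as np
import cvxpy as cp

def solve_pk_sketch(X_opt, l_hat, u_hat, M, eps, gamma=0.5):
    """Sketch of step 1 of K-RCPS: solve the convex relaxation (PK).

    X_opt        : (n, d) ground truth images from the optimization split.
    l_hat, u_hat : (d,) calibrated lower/upper endpoints.
    M            : (d, K) 0/1 membership matrix.
    Returns the K-dimensional lambda minimizing sum_k n_k * lambda_k.
    """
    n, d = X_opt.shape
    q = gamma / (1.0 - gamma)
    c = (u_hat + l_hat) / 2.0                        # interval centers
    A = 2.0 * (1.0 + q) * np.abs(X_opt - c)          # nonnegative numerators
    n_k = M.sum(axis=0)                              # pixels per group
    lam = cp.Variable(M.shape[1], nonneg=True)
    I = (u_hat - l_hat) + 2.0 * (M @ lam)            # interval lengths, affine
    # empirical ell^gamma risk of Eq. (16), averaged over samples and pixels
    risk = sum(cp.sum(cp.pos(cp.multiply(A[i], cp.inv_pos(I)) - q))
               for i in range(n)) / (n * d)
    prob = cp.Problem(cp.Minimize(n_k @ lam), [risk <= eps])
    prob.solve()
    return lam.value
```

The returned \(\widetilde{\boldsymbol{\lambda}}_{K}\) would then be expanded as \(M\widetilde{\boldsymbol{\lambda}}_{K}\) and shifted by the calibrated \(\hat{\beta}\) exactly as in the two-step description above.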
Algorithm 1 implements the \(K\)-RCPS procedure for any calibration set \(\mathcal{S}_{\mathrm{cal}}=\{(x_{i},y_{i})\}_{i=1}^{n}\), any general family of set-valued predictors of the form \(\{\mathcal{I}_{\mathbf{\lambda}}=[\hat{l}_{j}-\lambda_{j},\hat{u}_{j}+\lambda_{j}]\}_{\mathbf{\lambda}\in\Lambda^{d}}\), any membership function \(\mathcal{M}:\ \{\mathcal{X}\times\mathcal{Y}\}^{n}\to\mathbb{R}^{d\times K}\), and a general \(\mathrm{UCB}(n,\delta,\hat{R}(\mathbf{\lambda}))\) that accepts a number of samples \(n\), a failure probability \(\delta\), and an empirical risk \(\hat{R}(\mathbf{\lambda})\) and that returns a pointwise upper confidence bound \(\hat{R}^{+}(\mathbf{\lambda})\) that satisfies Equation (6). We remark that, following the _split fixed sequence testing_ framework introduced in Angelopoulos et al. [4] and applied in previous work [36], the membership matrix and its optimization problem (P\(K\)) are computed on a subset \(\mathcal{S}_{\mathrm{opt}}\) of the calibration set \(\mathcal{S}_{\mathrm{cal}}\), such that the direction \(M\widetilde{\mathbf{\lambda}}_{K}+\beta\mathbf{1}\) along which to perform the RCPS procedure is chosen _before_ looking at the data \(\mathcal{S}_{\mathrm{RCPS}}=\mathcal{S}_{\mathrm{cal}}\setminus\mathcal{S}_{\mathrm{opt}}\). We note that \(K\)-RCPS allows for some of the entries in \(\hat{\mathbf{\lambda}}\) to be set to \(0\), which preserves the original intervals such that--if they are obtained as described in Section 3.1--they still provide entrywise coverage of future samples at the desired level \(\alpha\). We now move on to showcasing the advantage of \(K\)-RCPS in terms of mean interval length on two real-world high-dimensional denoising problems: one on natural images of faces and one on CT scans of the abdomen. ## 4 Experiments Since our methodological contribution is two-fold (i.e., the calibrated intervals \(\mathcal{I}^{\alpha}(y)\) for diffusion models, as well as our \(K\)-RCPS framework), we compare our procedure with the original RCPS algorithm, using both its original implementation based on quantile regression and our proposed intervals \(\mathcal{I}^{\alpha}(y)\). We focus on denoising problems where \(y=x+v_{0}\), \(v_{0}\sim\mathcal{N}(0,\sigma_{0}^{2})\) on two imaging datasets: the CelebA dataset [40] and the AbdomenCT-1K dataset [41]. In particular, we train _(i)_ a time-conditional score network \(s(\tilde{x},t)\approx\nabla_{x}\log p_{t}(\tilde{x})\) following Song et al. [53] to sample from the posterior distribution \(p(x|y)\) as described in Section 2.1, and _(ii)_ a time-conditional image regressor \(f:\ \mathcal{Y}\times\mathbb{R}\to\mathcal{X}^{3}\) following Angelopoulos et al. [6] such that \(f(y,t)=(\hat{q}_{\alpha/2},\hat{x},\hat{q}_{1-\alpha/2})\), where \(\hat{x}\approx\mathbb{E}[x\mid y]\) minimizes the MSE loss between the noisy observation \(y\) and the ground truth \(x\), and \(\hat{q}_{\alpha/2},\hat{q}_{1-\alpha/2}\) are the \(\alpha/2\) and \(1-\alpha/2\) quantile regressors of \(x\), respectively [33, 45, 6]. Both models are composed of the same NCSN++ [53] backbone for a fair comparison. We include further details on the datasets, the models, and the training and sampling procedures in Appendix C.
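One simple way to instantiate the membership function \(\mathcal{M}\) is to bin pixels by quantiles of a per-pixel empirical risk computed on \(\mathcal{S}_{\mathrm{opt}}\), as done in the experiments described next. The sketch below is illustrative (hypothetical names) and assumes the per-pixel risks have already been computed.

```python
import numpy as np

def quantile_membership_matrix(pixel_risk, K):
    """Assign each pixel to one of K groups by quantile of its empirical risk.

    pixel_risk : (d,) per-pixel empirical risk (e.g., miscoverage frequency on S_opt).
    Returns a (d, K) 0/1 membership matrix M with exactly one 1 per row.
    """
    d = pixel_risk.shape[0]
    edges = np.quantile(pixel_risk, np.linspace(0.0, 1.0, K + 1))
    groups = np.clip(np.searchsorted(edges, pixel_risk, side="right") - 1, 0, K - 1)
    M = np.zeros((d, K), dtype=int)
    M[np.arange(d), groups] = 1
    return M
```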
For the diffusion model, we compute its entrywise calibrated quantiles \(\{\mathcal{I}^{\alpha}_{\mathbf{\lambda}}(y)_{j}=[\hat{l}_{j,\alpha}-\lambda_{j},\hat{u}_{j,\alpha}+\lambda_{j}]\}_{\mathbf{\lambda}\in\Lambda^{d}}\) as described in Equation (9) on 128 samples from the same noisy input image (see Figure 3 for some examples). For the image regressor, instead, we follow Angelopoulos et al. [6] and construct the family of nested set predictors \(\{\mathcal{T}_{\lambda}(y)\}_{\lambda\in\Lambda}\) such that \[\mathcal{T}_{\lambda}(y)_{j}=[\hat{x}_{j}-\lambda(\hat{q}_{\alpha/2})_{j},\hat{x}_{j}+\lambda(\hat{q}_{1-\alpha/2})_{j}]. \tag{17}\] For a fair comparison, we compare all models and calibration procedures on 20 random draws of calibration and validation sets \(\mathcal{S}_{\text{cal}},\mathcal{S}_{\text{val}}\) of length \(n_{\text{cal}}\) and \(n_{\text{val}}\), respectively. We remark that for the \(K\)-RCPS procedure, \(n_{\text{opt}}\) samples from \(\mathcal{S}_{\text{cal}}\) will be used to solve the optimization problem (P\(K\)). It follows that for a fixed \(n_{\text{cal}}\), the concentration inequality used in the \(K\)-RCPS procedure will be looser compared to the one in the RCPS algorithm. We will show that there remains a clear benefit of using the \(K\)-RCPS algorithm in terms of mean interval length given the same amount of calibration data available (i.e., even while the concentration bound becomes looser). In these experiments, we construct the membership matrix \(M\) by assigning each feature \(j\in[d]\) to the respective \(k^{\text{th}}\), \(k=1,\ldots,K\) quantile of the empirical estimate of the risk computed on \(\mathcal{S}_{\text{opt}}\). Furthermore, even though (P\(K\)) is low-dimensional (i.e., \(K\ll d\)), the number of constraints grows as \(dn_{\text{opt}}\), which quickly makes the computation of \(\mathbf{\tilde{\lambda}}_{K}\) inefficient and time-consuming (e.g., for the AbdomenCT-1K dataset, \(dn_{\text{opt}}\approx 10\times 10^{7}\) when \(n_{\text{opt}}=128\), a modest number of samples to optimize over). In practice, we randomly subsample a small number of features \(d_{\text{opt}}\ll d\) stratified by membership, which drastically speeds up computation. Finally, we pick \(\gamma\) such that it minimizes the objective function over 16 values equally spaced in \([0.3,0.7]\). The choice of these heuristics makes the runtime of \(K\)-RCPS comparable to that of RCPS, with a small overhead to solve the reduced (\(\mathsf{P}K\)) problem (potentially more than once to optimize \(\gamma\)). Figure 4: Empirical estimates of the risk over 20 random draws of \(\mathcal{S}_{\text{val}}\), \(n_{\text{val}}=128\). All procedures control risk at level \(\epsilon=0.10,0.05\) for each dataset, respectively, with probability at least 90%. Figure 4 shows that all combinations of models and calibration procedures control the risk, as guaranteed. In particular, we set \(\delta=0.10\) for both datasets, and \(\epsilon=0.10,0.05\) for the CelebA and AbdomenCT-1K dataset, respectively. We repeat all calibration procedures over 20 random samples of \(\mathcal{S}_{\text{cal}},\mathcal{S}_{\text{val}}\), with \(n_{\text{val}}=128\), and \(n_{\text{cal}}=640\) or \(n_{\text{cal}}=512\) for the CelebA or AbdomenCT-1K dataset, respectively. Figure 5 showcases some example \(\hat{\mathbf{\lambda}}_{K}\)'s obtained by running the \(K\)-RCPS procedure with \(K=4,8\) and \(32\) quantiles alongside their respective conformalized uncertainty maps from \(\mathcal{S}_{\text{val}}\).
We can appreciate how, for both datasets, \(\hat{\mathbf{\lambda}}_{K}\) captures information about the structure of the data distribution (e.g., eyes and lips for the CelebA dataset, and the position of lungs and the heart for the AbdomenCT-1K dataset). Finally, we compare all models and calibration procedures by means of mean interval length. We perform a grid search over \(n_{\text{opt}}\in\{128,256\}\), \(d_{\text{opt}}\in\{50,100\}\), and \(K\in\{4,8,32\}\), and we report the optimal results in Table 1 (see Appendix D, Table 2 for all results). On both datasets, entrywise calibrated quantiles with \(K\)-RCPS achieve the shortest mean interval length. Importantly, no matter the choice of hyperparameters, \(K\)-RCPS always outperforms the original RCPS algorithm applied to the calibrated intervals. \begin{table} \begin{tabular}{c c c} \hline \hline Dataset & Calibration & Mean Interval \\ \hline \multirow{3}{*}{CelebA} & \(\mathcal{T}\), RCPS & \(0.4834\pm 0.0121\) \\ & \(\mathcal{I}^{\alpha}\), RCPS & \(0.2762\pm 0.0059\) \\ & \(\mathbf{\mathcal{I}^{\alpha}}\), \(\mathbf{K}\)**-RCPS** & \(\mathbf{0.2644\pm 0.0067}\) \\ \hline \multirow{3}{*}{AbdomenCT-1K} & \(\mathcal{T}\), RCPS & \(0.3522\pm 0.0085\) \\ & \(\mathcal{I}^{\alpha}\), RCPS & \(0.1614\pm 0.0020\) \\ \cline{1-1} & \(\mathbf{\mathcal{I}^{\alpha}}\), \(\mathbf{K}\)**-RCPS** & \(\mathbf{0.1391\pm 0.0025}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Calibrated quantiles with \(K\)-RCPS yield the best performance in terms of mean interval length on both datasets. ## 5 Conclusions Diffusion models hold huge potential for sampling in inverse problems, yet how to devise precise uncertainty guarantees for them has remained an open problem. We have _(i)_ provided calibrated intervals that guarantee coverage of future samples generated by diffusion models, and _(ii)_ shown how to extend RCPS to \(K\)-RCPS, allowing for greater flexibility by conformalizing in higher dimensions by means of a convex surrogate problem. Yet, our results are general and hold for any data distribution and any sampling procedure--diffusion models or otherwise. When combined, these two contributions provide state-of-the-art uncertainty quantification by controlling risk with minimal mean interval length. Our contributions open the door to a variety of new problems. While we have focused on denoising problems, the application of these tools to other, more challenging restoration tasks is almost direct since no distributional assumptions are employed. The variety of diffusion models for other conditional-sampling problems can readily be applied here too [61, 18]. Lastly--and differently from other works that explore controlling multiple risks [36]--ours is the first approach to provide multi-dimensional control of one risk for conformal prediction, and further improvements to our optimization scheme are likely possible. More generally, we envision our tools to contribute to the responsible use of machine learning in modern settings.
2307.06037
Resetting Metadynamics
Metadynamics is a powerful method to accelerate molecular dynamics simulations, but its efficiency critically depends on the identification of collective variables that capture the slow modes of the process. Unfortunately, collective variables are usually not known a priori, and finding them can be very challenging. We recently presented a collective variables-free approach to enhanced sampling using stochastic resetting. Here, we combine the two methods for the first time, showing that it can lead to greater acceleration than either of them separately. We also demonstrate that resetting Metadynamics simulations performed with suboptimal collective variables can lead to speedups comparable with those obtained with optimal collective variables. Therefore, the application of stochastic resetting can be an alternative to the challenging task of improving suboptimal collective variables, at almost no additional computational cost. Finally, we propose a method to extract unbiased mean first-passage times from Metadynamics simulations with resetting, resulting in an improved tradeoff between speedup and accuracy. This work opens the way for combining stochastic resetting with other enhanced sampling methods to accelerate a broad range of molecular simulations.
Ofir Blumer, Shlomi Reuveni, Barak Hirshberg
2023-07-12T09:32:36Z
http://arxiv.org/abs/2307.06037v1
# Resetting Metadynamics ###### Abstract Metadynamics is a powerful method to accelerate molecular dynamics simulations, but its efficiency critically depends on the identification of collective variables that capture the slow modes of the process. Unfortunately, collective variables are usually not known a priori and finding them can be very challenging. We recently presented a collective variables-free approach to enhanced sampling using stochastic resetting. Here, we combine the two methods for the first time, showing that it can lead to greater acceleration than either of them separately. We also demonstrate that resetting Metadynamics simulations performed with _suboptimal_ collective variables can lead to speedups comparable with those obtained with _optimal_ collective variables. Therefore, the application of stochastic resetting can be an alternative to the challenging task of improving suboptimal collective variables, at almost no additional computational cost. Finally, we propose a method to extract unbiased mean first-passage times from Metadynamics simulations with resetting, resulting in improved tradeoff between speedup and accuracy. This work opens the way for combining stochastic resetting with other enhanced sampling methods to accelerate a broad range of molecular simulations. ## Introduction Molecular dynamics (MD) simulations provide valuable insights into the dynamics of complex chemical and physical systems. They are a powerful tool but, due to their atomic spatial and temporal resolution, they cannot be applied to processes that are longer than a few microseconds, such as protein folding and crystal nucleation [1, 2]. Different methods have been developed in order to overcome this timescale problem, such as umbrella sampling [3, 4], replica-exchange [5], free energy dynamics [6, 7], Metadynamics (MetaD) [8, 9, 10, 11, 12], On-the-fly probability enhanced sampling (OPES) [13, 14, 15, 16], and many others. In this paper, we will focus on MetaD, which relies on identifying efficient collective variables (CVs), capturing the slow modes of the process, and introducing an external bias potential to enhance the sampling of phase space along them. The ability of Metadynamics to accelerate simulations crucially depends on the quality of the CVs [17, 12]. An optimal CV is capable of distinguishing between metastable states of interest as well as describing their interconversion dynamics [18, 19]. A sub-optimal CV can lead to hysteresis, and poor inference of the unbiased free energy surface or kinetics [17, 18, 20, 21, 8]. Very recently, we developed a CV-free approach for enhanced sampling, based on stochastic resetting (SR) [22]. Resetting is the procedure of stopping stochastic processes, at random or fixed time intervals, and restarting them using independent and identically distributed initial conditions. It has received much attention recently [23, 24], since it is able to expedite various processes ranging from randomized computer algorithms [25, 26, 27] and service in queuing systems [28], to first-passage and search processes [29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. We demonstrated the power of SR in enhanced sampling of MD simulations, showing it can lead to speedups of up to an order of magnitude in simple model systems and a molecular system [22]. Moreover, we developed a method to infer the unbiased kinetics, in the absence of SR, from simulations with SR. 
Resetting is an appealing method due to its extreme simplicity: it can be trivially implemented in MD codes, and no CVs are required. Since finding good CVs in complex condensed-phase systems is a difficult challenge, this is a potentially significant advantage. Furthermore, unlike other methods, which continuously add energy to the system, SR does not change the dynamics between resetting events. However, acceleration by SR is not guaranteed. A sufficient condition is that the distribution of transition times (also called first-passage times, FPTs) of the corresponding process is wide enough, i.e., it has a standard deviation that is greater than its mean [39]. Many systems, including the models and molecular example discussed in our previous work [22], fulfill this criterion but there are also counterexamples. The complementary advantages and limitations of MetaD and SR raise an important question: can we combine them to obtain the best of both worlds? Since SR can be applied to any random process, we can restart MetaD simulations as previously done for unbiased simulations. This observation opens the door for using SR as a complementary tool to existing enhanced sampling procedures. In this paper, we combine SR with MetaD for the first time. In one model system, we show that this approach leads to greater acceleration than either of them independently, even in comparison to using the optimal CV in MetaD simulations. In another model system, we restart MetaD simulations, performed with suboptimal CVs, and obtain accelerations similar to those achieved using the optimal CV. This result suggests that a straightforward application of SR can be an alternative to the challenging task of improving a suboptimal CV. We then demonstrate this for transitions between meta-stable states of alanine tetrapeptide. Lastly, we develop a procedure to infer the unbiased kinetics from simulations combining SR and MetaD, showing an improved tradeoff between speedup and accuracy in comparison to MetaD simulations alone. ## 2 Methods The simulations of the model systems were performed using the Large-scale Atomic/Molecular Massively Parallel Simulator [40] (LAMMPS), with MetaD introduced using PLUMED 2.7.1 [41, 42, 43]. Alanine tetrapeptide was simulated using GROMACS 2019.6 [44] and the same PLUMED version. All simulations were carried out in the NVT ensemble at a temperature of \(300\,K\), and \(10^{4}\) trajectories were sampled for each case. Full simulation details are given in the SI. Initial velocities were sampled from the Maxwell-Boltzmann distribution, while initial positions were fixed (equivalent to sampling from a delta-function distribution of positions). We defined the FPT as the earliest instance in which a certain criterion was met, as specified below for each system. The COMMITTOR command in PLUMED was used for testing this criterion and stopping the simulations when it was fulfilled. For most simulations with SR, we sampled the time intervals between resetting events from an exponential distribution with a fixed resetting rate (Poisson resetting) using Python. Simulations designated for kinetics inference used constant time intervals between resetting events (sharp resetting). If a first-passage event did not occur prior to the next resetting time, the simulation was restarted. We stress that we continued tallying the overall time until a first passage occurred, regardless of the number of resetting events in between.
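The resetting bookkeeping described above is simple to express in a driver script. The sketch below assumes a hypothetical `run_segment(t_max)` callable that launches a freshly initialized trajectory (new Maxwell-Boltzmann velocities, the fixed initial positions, and, for MetaD, a zeroed bias) for at most `t_max` and reports whether the first-passage criterion was met; it stands in for the actual LAMMPS/GROMACS + PLUMED workflow and is not the scripts used in this work.

```python
import numpy as np

def first_passage_time_with_resetting(run_segment, rate=None, period=None,
                                      rng=None, max_restarts=10**6):
    """Tally the total first-passage time of one trajectory under resetting.

    run_segment(t_max) -> (crossed, t_elapsed): propagate a freshly initialized
    trajectory for at most t_max and report whether the first-passage criterion
    was met and how much time elapsed.
    Poisson resetting: pass `rate`; sharp resetting: pass `period`.
    """
    rng = np.random.default_rng() if rng is None else rng
    total_time = 0.0
    for _ in range(max_restarts):
        t_max = rng.exponential(1.0 / rate) if rate is not None else period
        crossed, t_elapsed = run_segment(t_max)
        total_time += t_elapsed            # keep tallying across restarts
        if crossed:
            return total_time
    raise RuntimeError("no first passage within the allowed number of restarts")
```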
Finally, for simulations combining SR with MetaD, we emphasize that the MetaD bias potential was zeroed after each resetting event. ## Results and discussion ### A two wells model We begin by combining SR with MetaD simulations for the first time. A simple model system is considered first, where the optimal CV is well defined. For this model, combining SR with MetaD leads to greater speedups than MetaD independently, even when biasing with the optimal CV. The model is shown in Figure 1(a). It is a two dimensional harmonic trap, divided into two states centered at \((x=\pm 3,y=0)\,\AA\) by a barrier of \(5\,k_{B}T\) for all y values. The harmonic trap is soft, such that a particle in one of the wells can easily travel about \(50\,\AA\) away from the central barrier. The exact parameters of the potential are given in the SI. They were chosen such that the unbiased mean FPT (MFPT) between the wells is long (\(7.5\,ns\)), but can still be sampled in unbiased simulations. The optimal CV is the x-coordinate. We follow the trajectories of a particle that was initialized at the right minimum and define the FPT criterion as arriving to the left minimum (\(x\leq-3\,\AA\)). For comparison, we first performed SR on standard MD simulations using different resetting rates. The obtained speedups are shown in Figure 1(b). The speedup increases with the resetting rate, reaches a maximum value of \(\sim 4\), which is obtained at a rate of \(50\,ns^{-1}\), and decreases for higher rates. This non-monotonic trend can be understood since in the limit of high resetting rates all trajectories are restarted before a transition can occur, and the speedup drops to zero. The resetting rate which leads to maximum speedup will be referred to as the optimal resetting rate in this paper. Next, we performed MetaD simulations without SR using the x-coordinate as a CV and varying the bias deposition rates. Other bias parameters are given in the SI. The results are shown as green squares in Figure 1(c). The speedup increases with the bias rate, with a value of \(\sim 20\) attained for a bias rate of \(10^{4}\,ns^{-1}\) (every 100 simulation steps). It is evident that MetaD leads to larger acceleration than SR for this system, giving speedups that are greater by a factor of \(\sim 5\). We then combined SR with MetaD and found that even greater speedups are obtained (Figure 1(c), orange triangles). How the combination of resetting and MetaD is done in practice is shown in Figure 1(d), using the highest bias deposition rate (\(10^{4}\,ns^{-1}\)) as an example. The green square in Figure 1(d) shows the speedup obtained with MetaD and no resetting. Then, for that given bias deposition rate, we add SR at increasing rates and evaluate the resulting speedup. We again stress that the MetaD bias is zeroed at every resetting event. We observe the same qualitative behavior seen in Figure 1(b), with the speedup increasing until some optimal resetting rate, highlighted with an orange triangle. We repeat this procedure for all bias rates, and present the optimal speedup by orange triangles in Figure 1(c). Combining SR with MetaD gave additional acceleration for all bias deposition rates, with a maximal speedup of \(\sim 130\) at a bias rate of \(10^{4}\,ns^{-1}\). The corresponding optimal resetting rate was found to be \(125\,ns^{-1}\), which is significantly slower than the bias deposition. The fact that SR can further accelerate MetaD simulations, even when performed with optimal CVs, is the first key result of this paper. 
MetaD practitioners might wonder: 1) How to tell whether SR will accelerate my simulations?, and 2) How to identify the optimal resetting rate and what would be the resulting speedup? Next, we show that both questions can be answered at almost no additional cost, assuming some MetaD trajectories are already available. To answer the first question, we showed in a recent paper [22] that a sufficient condition for acceleration of MD simulations by SR is that the ratio of the standard deviation to the MFPT (the coefficient of variation, COV) is greater than one [39]. Introducing a small resetting rate is then guaranteed to lead to speedup. We stress that this condition holds also for resetting MetaD simulations, with the added benefit that enhanced sampling generates more transitions, and thus gives a much more reliable estimation of the COV compared to unbiased MD simulations. Moreover, if SR does not accelerate the unbiased simulations significantly, it does not mean that it will not do so for MetaD simulations, since biasing alters the FPT distribution significantly and, consequently, may also change the COV. Figure 2(a) shows the COV of MetaD simulations (without resetting) as a function of the bias deposition rate, for the two wells model (Figure 1(a)). The COV shows non-monotonic behavior with the bias deposition rate. It starts from a value of 1.05 without resetting, drops to 0.80 at an intermediate bias deposition rate and increases up to a value of 1.44 at the highest biasing rate. This shows that MetaD can increase the COV significantly (leading to much higher speedups). As for the second question, estimating the optimal resetting rate and the resulting speedup is also straightforward for MetaD simulations. The MFPT under a resetting rate \(r\) can be estimated using a simple algebraic equation [45], \[\left<\tau\right>_{r}=\frac{1-\tilde{f}(r)}{r\tilde{f}(r)}. \tag{1}\] Figure 1: (a) The two wells model. The star marks the initial position. (b) Speedups obtained for SR simulations at different resetting rates. (c) Speedups obtained for MetaD simulations at different bias deposition rates (green squares), and for combined MetaD + SR simulations at an optimal resetting rate (orange triangles). (d) Speedups obtained for MetaD + SR simulations with a bias deposition rate of \(10^{4}\,ns^{-1}\) and different resetting rates. In Eq. 1, \(\tilde{f}(r)\) is the Laplace transform of the FPT distribution for the MetaD simulations, and \(\langle\tau\rangle_{r}\) is the MFPT for MetaD simulations with SR at resetting rate \(r\). The Laplace transform is evaluated as \[\tilde{f}(r)=\langle e^{-r\tau}\rangle\simeq\frac{1}{N}\sum_{j=1}^{N}e^{-r\tau_ {j}}, \tag{2}\] where \(N\) is the number of MetaD trajectories, and \(\tau_{j}\) is the FPT obtained from trajectory \(j\). Figure 2(b) shows the additional speedups, over MetaD without resetting, estimated using Eq. 1 (dotted lines). They are plotted as a function of the resetting rate for the two bias deposition rates highlighted with colored circles in Figure 2(a). It is evident that the estimations match results obtained from simulations (full circles). While the full FPT distribution is required for an exact description of the behavior under SR, we previously showed[22] that as few as a hundred samples are sufficient for estimating the optimal resetting rate. Figure 2: (a) COV of MetaD simulations with no SR. 
(b) Additional speedups against resetting rate for MetaD simulations with bias deposition rate of \(10^{4}\,ns^{-1}\) (blue) or \(20\,ns^{-1}\) (green), for the two wells model. Full circles present results obtained from simulations, while dotted lines present evaluations based on the FPT distribution with MetaD and no SR and using Eq. 1. The dashed gray line indicates no additional speedup. Surprisingly, for intermediate bias deposition rates, some additional speedup is gained even though the COV without resetting is smaller than 1. This can be seen in Figure 2(b), which shows an optimal speedup of \(\sim 2\) for a bias deposition rate of \(20\,ns^{-1}\), even though its COV without SR is only 0.80. How can that be? A COV \(<1\) only indicates that introducing a small resetting rate will increase the MFPT. In other words, the COV only determines the initial slope of the speedup curve as a function of the resetting rate. Interestingly, non-trivial cases where small resetting rates decelerate the process but larger ones accelerate it are also possible, as can be seen for the green curve in the inset of Figure 2(b). For comparison, we also give the results for a bias deposition rate of \(10^{4}\,ns^{-1}\), for which the COV without SR is \(>1\), and the initial slope of the speedup with respect to the resetting rate is positive. The results show that, whether the value of the COV is greater or smaller than one, it is worthwhile to estimate the speedup using Eq. 1. To conclude this example, we combined SR with MetaD for the first time, leading to greater acceleration than either approach provides separately. Since MetaD simulations already enhance the sampling of the underlying process, it is significantly easier to evaluate their COV than for unbiased simulations. If the COV is larger than 1, then MetaD simulations are guaranteed to be further accelerated by SR, and Eq. 1 can be used to easily estimate by how much. If not, they may still be accelerated, which can be easily checked using Eq. 1. 
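This recipe is easy to script. Below is a minimal sketch (the helper names and the synthetic, log-normal toy data are ours, purely for illustration) that takes a set of first-passage times harvested from MetaD trajectories without resetting, computes their COV, and evaluates Eq. 1 with the empirical Laplace transform of Eq. 2 to predict the speedup at any resetting rate.

```python
import numpy as np

def mfpt_under_resetting(fpts, r):
    """Estimate <tau>_r from Eq. 1, using the empirical Laplace transform (Eq. 2)
    of first-passage times sampled without resetting."""
    fpts = np.asarray(fpts, dtype=float)
    if r == 0.0:
        return fpts.mean()
    f_tilde = np.mean(np.exp(-r * fpts))
    return (1.0 - f_tilde) / (r * f_tilde)

def resetting_analysis(fpts, rates):
    """Return the COV of the samples and the predicted speedup at each resetting rate."""
    fpts = np.asarray(fpts, dtype=float)
    cov = fpts.std(ddof=1) / fpts.mean()
    speedups = fpts.mean() / np.array([mfpt_under_resetting(fpts, r) for r in rates])
    return cov, speedups

# Toy usage: heavy-tailed synthetic FPTs (COV > 1) standing in for ~100
# first-passage times from MetaD trajectories without resetting.
rng = np.random.default_rng(0)
fpts = rng.lognormal(mean=0.0, sigma=1.2, size=100)
rates = np.logspace(-2, 2, 50)
cov, speedups = resetting_analysis(fpts, rates)
best = np.argmax(speedups)
print(f"COV = {cov:.2f}; optimal rate ~ {rates[best]:.2f}, predicted speedup {speedups[best]:.2f}")
```

Scanning a grid of rates and taking the maximum of the predicted speedup yields an estimate of the optimal resetting rate before running any simulations with SR.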
### The modified Faradjian-Elber potential As a second example, we consider a modified version of the two dimensional potential introduced by Faradjian and Elber when developing the milestoning enhanced sampling method [46]. The potential is shown in Figure 3(a), and full details are given in the SI. It is also composed of two symmetric wells, with minima at \((x=\pm 3,y=0)\,\AA\) that are separated by a Gaussian barrier at \(x=0\,\AA\). The barrier is higher than in the first example, \(12\,k_{B}T\) for most \(y\) values, but has a narrow saddle, only \(3\,k_{B}T\) high, around \(y=0\,\AA\). Figure 3(b) shows slices along the x-axis at \(y=0\) and \(25\,\AA\), as well as the effective potential integrated over the entire y-axis. We follow the trajectories of a particle that was initialized at the right minimum and define the FPT criterion as crossing the barrier and reaching \(x<-1\,\AA\). For this model, we find the same MFPT as in the two wells model. Employing SR on unbiased simulations gave an optimal speedup of \(\sim 15\) at a resetting rate of \(200\,ns^{-1}\). As in the two wells model, MetaD simulations gave higher speedups than SR, with a speedup of \(\sim 212\) when using the optimal CV, the x-coordinate, at a bias deposition rate of \(10^{4}\,ns^{-1}\). Using this optimal CV and rate, combining SR with MetaD did not lead to further acceleration of the simulations. However, in most real systems, the optimal CV is not known, and suboptimal CVs are almost always used [17]. To test the efficiency of SR in such cases, we gradually reduce the quality of the CV by rotating it. The green squares in Figure 3(c) show the speedup obtained as a function of the sine of the angle \(\theta\) between the CV and the x-axis, which serves as a measure of the deviation from the optimal CV. The degradation in the quality of the CV leads to a decrease in the MetaD speedup, with almost no acceleration at an angle of \(24^{\circ}\). However, combining SR with MetaD recovers almost all of the speedup of the optimal CV, despite the use of suboptimal CVs. This is shown by the orange triangles in Figure 3(c). Optimizing CVs for condensed phase systems remains a difficult challenge [9, 47]. Our results suggest that SR may serve as an alternative, or complementary method, to improving CVs. Instead of using sophisticated algorithms to find better CVs [47, 48, 49, 50, 51, 52], one can use SR to obtain a similar speedup at a much lower cost. This is the second key result of this paper. Figure 3: (a) The modified Faradjian-Elber potential. The star marks the initial position. (b) The integrated projection of the potential on the x-axis (blue), and the potential cross section at \(y=0\) and \(25\,\AA\) (dotted and dashed black lines, respectively). (c) Speedups obtained for a bias deposition rate of \(10^{4}\,ns^{-1}\) using suboptimal, rotated CVs, in the MetaD simulations with no SR (green squares) and with optimal SR (orange triangles). The angle between the CV and the x-axis is denoted by \(\theta\). ### Alanine tetrapeptide As a final example, we demonstrate the capabilities of combining MetaD with SR on a molecular system, alanine tetrapeptide. We focus on two of its conformers: a "folded" and an "unfolded" state, shown in Figure 4(a). Six dihedral angles serve as important degrees of freedom, with \(\phi_{3}\) being the slowest one [13]. Figure 4(b) shows the free-energy surface along \(\phi_{3}\), which has two minima separated by an energy barrier of \(\sim 15\,k_{B}T\). Transitions in unbiased simulations from the unfolded state (upper configuration in panel (a), left basin in panel (b)) to the folded one (lower configuration in panel (a), right basin in panel (b)) have an estimated MFPT of \(\sim 5.6\,\mu s\) (see the SI for more details). To improve the sampling, we performed MetaD simulations using three different CVs: the angle \(\phi_{3}\) serves as the optimal CV, and two adjacent angles, \(\phi_{2}\) and \(\psi_{3}\), serve as suboptimal ones. The two dimensional free energy surfaces as a function of all CVs are presented in panels (c) and (d) of Figure 4. They show that \(\psi_{3}\) has some overlap and does not separate the two states as well as \(\phi_{3}\), while there is almost no separation of the states in \(\phi_{2}\). The simulations were initialized from a fixed, unfolded configuration, marked with stars in panels (b) to (d). The first-passage criterion (\(0.5<\phi_{3}<1.5\,rad\)) is also marked in these panels, with vertical dashed lines. Figure 4(e) shows the speedup of MetaD simulations using different protocols, without SR (green) and with optimal SR (orange). COV values for simulations with no SR are given in panel (f). As expected, using \(\phi_{3}\) as a CV gives the greatest speedup, and a COV of \(\sim 0.3\), for which the optimal resetting rate is \(r^{*}=0\). Thus, there is no benefit from SR in this case. 
Suboptimal CVs show similar behavior to that observed for the Faradjian-Elber potential, but for a realistic system: the speedups obtained for MetaD without SR decrease when using bad CVs, and the COV values increase above one. Namely, while MetaD simulations using \(\phi_{3}\) as the CV give more than four orders of magnitude speedup, simulations using \(\psi_{3}\) and \(\phi_{2}\) lead to accelerations by factors of only \(\sim 580\) and \(\sim 4\), respectively. Concurrently, the COV of \(\psi_{3}\) is \(\sim 1.24\) while for the worst CV, \(\phi_{2}\), it is \(\sim 3\). As a result, SR becomes more effective the poorer the CV is, giving an additional speedup of \(\sim 133\) over MetaD when using \(\phi_{2}\) as a CV. Figure 4: (a) Two conformers of alanine tetrapeptide. The white, gray, blue, and red balls represent hydrogen, carbon, nitrogen, and oxygen atoms, respectively. Free-energy surfaces of alanine tetrapeptide along (b) \(\phi_{3}\), (c) \(\phi_{3},\phi_{2}\) and (d) \(\phi_{3},\psi_{3}\). (e) Speedup of MetaD simulations, without SR (green) and with optimal SR (orange). (f) COV of MetaD simulations without SR for different CVs. ### Kinetics inference To conclude the paper, we demonstrate that SR can improve the inference of unbiased kinetics from MetaD simulations, using the case of alanine tetrapeptide as an example. The unbiased FPT distribution can be extracted from MetaD simulations with no SR through a procedure known as infrequent MetaD (iMetaD) [2]. In this method, the MFPT is obtained by rescaling the FPT of each MetaD trajectory by an acceleration factor that depends exponentially on the external bias (see Eq. S5 in the SI). iMetaD assumes that no bias is deposited near the transition state, and that none of the basins are over-filled. When this assumption is valid, the distribution of the rescaled FPTs matches the unbiased distribution. However, the assumption does not hold for suboptimal CVs or high bias deposition rates [1], which result in over-deposition. Due to the exponential dependence of the acceleration factor on the bias, trajectories exhibiting over-deposition result in extremely large acceleration factors. They contribute unrealistically long FPTs to the rescaled distribution, shifting the obtained MFPT from the true value by orders of magnitude. The inference can be improved, even with suboptimal CVs, by decreasing the bias deposition rate, resulting in a tradeoff between speedup and accuracy. This tradeoff is demonstrated in Figure 5. It shows (green squares) the error in the estimation of the unbiased MFPT as a function of speedup, for iMetaD simulations of alanine tetrapeptide biasing the \(\psi_{3}\) angle, which is a suboptimal CV. The prediction error is defined as \(\left|\left\langle\tau\right\rangle_{true}-\left\langle\tau\right\rangle_{est}\right|/\left\langle\tau\right\rangle_{true}\), where \(\left\langle\tau\right\rangle_{true}\) is the true unbiased MFPT and \(\left\langle\tau\right\rangle_{est}\) is the estimated MFPT. Next, we demonstrate that resetting can give a better tradeoff, reducing the error for all speedups, as shown by the orange triangles in Figure 5. These results were obtained by adding SR at different resetting rates to the iMetaD simulations at the highest bias deposition rate (highlighted by a circle). Full details explaining how to infer the unbiased MFPT from combined SR and iMetaD simulations are given in the SI. Here, we briefly provide only the key ingredients and underlying intuition. 
We note that between resetting events, the trajectories are standard iMetaD simulations. Moreover, due to SR, the short trajectories between restarts can be treated as independent from one another (recall that restart also zeros previous bias). As a result, we can use the standard iMetaD rescaling procedure on each short trajectory, and then evaluate the unbiased survival probability at short times. For short enough times, even with suboptimal CVs, we will avoid over-deposition and get a good estimate of the unbiased survival. Finally, we assume that the survival probability decays exponentially, as commonly done in iMetaD [1], and obtain an estimate of the unbiased MFPT from its slope. The quality of the linear fit can be used to assess the reliability of the predicted MFPT, similar to the Kolmogorov-Smirnov test used by Salvalaglio et al. in standard iMetaD simulations [53, 54, 1]. Our results show that, for alanine tetrapeptide, applying SR to iMetaD simulations and using the proposed inference procedure gives a better tradeoff than decreasing the bias deposition rate. In choosing between the two, practitioners of iMetaD might consider a simple question: given a fixed simulation time, would the error obtained be lower in a single long iMetaD trajectory or in a series of shorter iMetaD trajectories with SR? For suboptimal CVs, the acceleration factor becomes increasingly unreliable with time, due to over-deposition of the external bias. In that case, short trajectories with minimal bias are preferred, and SR may be more favorable. Figure 5: The error in the prediction of the unbiased MFPT as a function of speedup for iMetaD simulations at different bias deposition rates (green squares) and iMetaD simulations with SR at different resetting rates (orange triangles). ## Conclusions We combine SR with MetaD simulations for the first time. We show that resetting can further accelerate MetaD simulations, even when the optimal CV is used. In practice, the optimal CV is almost never known and suboptimal CVs are employed. We provide examples in which adding SR to MetaD simulations performed with poor CVs leads to speedups comparable to using the optimal CV. This suggests that resetting may serve as an alternative to the challenging task of improving suboptimal CVs using sophisticated algorithms. Testing whether SR can accelerate simulations is very easy. Given a small number (\(\sim 100\)) of short MetaD trajectories, showing one transition each, we can estimate the COV and find whether SR would further accelerate the simulations, and by how much, using Eq. 1. Resetting can be of benefit even when it does not provide additional acceleration on top of the one attained by MetaD. We demonstrate that SR can improve the inference of the unbiased kinetics from iMetaD simulations performed with suboptimal CVs, giving a better tradeoff between speedup and accuracy for alanine tetrapeptide. Finally, we conjecture that the benefits of combining MetaD and SR are not limited to the examples presented herein, and are much more general. The reason is that MetaD, and similar methods, flatten the free energy surface. Previous work has shown that SR is particularly efficient for flat landscapes [33, 55, 56], with the extreme case being free diffusion [57]. Thus, starting from an arbitrary free energy surface, MetaD changes it to one that is more amenable to acceleration by SR. Future method development would likely harness the power of this important observation. 
Barak Hirshberg acknowledges support by the Israel Science Foundation (grants No. 1037/22 and 1312/22) and the Pazy Foundation of the IAEC-UPBC (grant No. 415-2023). Shlomi Reuveni acknowledges support from the Israel Science Foundation (grant No. 394/19). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 947731). See the SI for full details of the computational setup and of the procedure to infer the unbiased kinetics from Metadynamics simulations with SR. Example input files are available on the GitHub repository [58]. ## References * Salvalaglio et al. 2014 Salvalaglio, M.; Tiwary, P.; Parrinello, M. Assessing the Reliability of the Dynamics Reconstructed from Metadynamics. _Journal of Chemical Theory and Computation_**2014**, _10_, 1420-1425. * Tiwary and Parrinello 2013 Tiwary, P.; Parrinello, M. From Metadynamics to Dynamics. _Physical Review Letters_**2013**, _111_, 230602. * Torrie and Valleau 1977 Torrie, G.; Valleau, J. Nonphysical Sampling Distributions in Monte Carlo Free-Energy Estimation: Umbrella Sampling. _J. Comput. Phys._**1977**, _23_, 187-199. * Kastner 2011 Kastner, J. Umbrella Sampling. _Wiley Interdiscip. Rev. Comput. Mol. Sci._**2011**, \(1\), 932-942. * Sugita and Okamoto 1999 Sugita, Y.; Okamoto, Y. Replica-exchange molecular dynamics method for protein folding. _Chemical Physics Letters_**1999**, _314_, 141-151. * Rosso and Tuckerman 2002 Rosso, L.; Tuckerman, M. E. An Adiabatic Molecular Dynamics Method for the Calculation of Free Energy Profiles. _Mol. Simul._**2002**, _28_, 91-112. * Rosso et al. 2002 Rosso, L.; Minary, P.; Zhu, Z.; Tuckerman, M. E. On the Use of the Adiabatic Molecular Dynamics Technique in the Calculation of Free Energy Profiles. _J. Chem. Phys._**2002**, _116_, 4389-4402. * Barducci et al. 2011 Barducci, A.; Bonomi, M.; Parrinello, M. Metadynamics. _Wiley Interdiscip. Rev. Comput. Mol. Sci._**2011**, \(1\), 826-843. * Valsson et al. 2016 Valsson, O.; Tiwary, P.; Parrinello, M. Enhancing Important Fluctuations: Rare Events and Metadynamics from a Conceptual Viewpoint. _Annual Review of Physical Chemistry_**2016**, _67_, 159-184. * Sutto et al. 2012 Sutto, L.; Marsili, S.; Gervasio, F. L. New Advances in Metadynamics. _Wiley Interdiscip. Rev. Comput. Mol. Sci._**2012**, \(2\), 771-779. * Barducci et al. 2008 Barducci, A.; Bussi, G.; Parrinello, M. Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method. _Physical Review Letters_**2008**, _100_, 020603. * Bussi and Laio 2020 Bussi, G.; Laio, A. Using metadynamics to explore complex free-energy landscapes. _Nature Reviews Physics_**2020**, \(2\), 200-212. * Invernizzi and Parrinello 2020 Invernizzi, M.; Parrinello, M. Rethinking Metadynamics: From Bias Potentials to Probability Distributions. _J. Phys. Chem. Lett._**2020**, _11_, 2731-2736. * Invernizzi et al. 2020 Invernizzi, M.; Piaggi, P. M.; Parrinello, M. Unified Approach to Enhanced Sampling. _Phys. Rev. X_**2020**, _10_, 041034. * Invernizzi 2021 Invernizzi, M. OPES: On-the-fly Probability Enhanced Sampling method. _Nuovo Cimento C_**2021**, _44_, 1-4. * Invernizzi and Parrinello 2022 Invernizzi, M.; Parrinello, M. Exploration vs Convergence Speed in Adaptive-Bias Enhanced Sampling. _Journal of Chemical Theory and Computation_**2022**, _18_, 3988-3996. * Invernizzi and Parrinello 2019 Invernizzi, M.; Parrinello, M. 
Making the Best of a Bad Situation: A Multiscale Approach to Free Energy Calculation. _J. Chem. Theory. Comput._**2019**, _15_, 2187-2194. * Demuynck et al. 2018 Demuynck, R.; Wieme, J.; Rogge, S. M. J.; Dedecker, K. D.; Vanduyfhuys, L.; Waroquier, M.; Van Speybroeck, V. Protocol for Identifying Accurate Collective Variables in Enhanced Molecular Dynamics Simulations for the Description of Structural Transformations in Flexible Metal-Organic Frameworks. _Journal of Chemical Theory and Computation_**2018**, _14_, 5511-5526. * Peters 2017 Peters, B. In _Reaction Rate Theory and Rare Events Simulations_; Peters, B., Ed.; Elsevier: Amsterdam, 2017; pp 539-571. * Besker and Gervasio 2012 Besker, N.; Gervasio, F. L. In _Computational Drug Discovery and Design_; Baron, R., Ed.; Methods in Molecular Biology; Springer: New York, NY, 2012; pp 501-513. * Ray et al. 2022 Ray, D.; Ansari, N.; Rizzi, V.; Invernizzi, M.; Parrinello, M. Rare Event Kinetics from Adaptive Bias Enhanced Sampling. _Journal of Chemical Theory and Computation_**2022**, _18_, 6500-6509. * Blumer et al. 2022 Blumer, O.; Reuveni, S.; Hirshberg, B. Stochastic Resetting for Enhanced Sampling. _The Journal of Physical Chemistry Letters_**2022**, _13_, 11230-11236. * Evans et al. 2020 Evans, M. R.; Majumdar, S. N.; Schehr, G. Stochastic Resetting and Applications. _J. Phys. A : Math. Theor._**2020**, _53_, 193001. * Kundu and Reuveni 2022 Kundu, A.; Reuveni, S. Stochastic Resetting: Theory and Applications [special issue]. _J. Phys. A : Math. Theor._**2022**, * Luby et al. 1993 Luby, M.; Sinclair, A.; Zuckerman, D. Optimal Speedup of Las Vegas Algorithms. _Inf. Process. Lett._**1993**, _47_, 173-180. * Gomes et al. 1998 Gomes, C.; Selman, B.; Kautz, H. _15th AAAI_; The AAAI Press: Madison, WI, 1998; p 431-437. * Montanari and Zecchina 2002 Montanari, A.; Zecchina, R. Optimizing Searches via Rare Events. _Phys. Rev. Lett._**2002**, _88_, 178701. * Bonomo et al. 2022 Bonomo, O. L.; Pal, A.; Reuveni, S. Mitigating Long Queues and Waiting Times with Service Resetting. _PNAS Nexus_**2022**, \(1\), pgac070. * Bressloff 2020 Bressloff, P. C. Queueing Theory of Search Processes with Stochastic Resetting. _Phys. Rev. E_**2020**, _102_, 032109. * Kusmierz and Gudowska-Nowak 2015 Kusmierz, L.; Gudowska-Nowak, E. Optimal First-Arrival Times in Levy Flights with Resetting. _Phys. Rev. E_**2015**, _92_, 052127. * Bhat et al. 2016 Bhat, U.; Bacco, C. D.; Redner, S. Stochastic Search with Poisson and Deterministic Resetting. _J. Stat. Mech._**2016**, \(8\), 083401. * Chechkin and Sokolov 2018 Chechkin, A.; Sokolov, I. Random Search with Resetting: A Unified Renewal Approach. _Phys. Rev. Lett._**2018**, _121_, 050601. * Ray et al. 2019 Ray, S.; Mondal, D.; Reuveni, S. Peclet Number Governs Transition to Acceleratory Restart in Drift-Diffusion. 2019; [http://arxiv.org/abs/1811.08239](http://arxiv.org/abs/1811.08239), Accessed: November 10, 2022. * Robin et al. 2019 Robin, T.; Hadany, L.; Urbakh, M. Random Search with Resetting as a Strategy for Optimal Pollination. _Phys. Rev. E_**2019**, _99_, 052119. * Evans and Majumdar 2018 Evans, M. R.; Majumdar, S. N. Run and Tumble Particle under Resetting: a Renewal Approach. _J. Phys. A : Math. Theor._**2018**, _51_, 475003. * Pal et al. 2020 Pal, A.; Kusmierz, L.; Reuveni, S. Search with Home Returns Provides Advantage under High Uncertainty. _Phys. Rev. Res._**2020**, \(2\), 043174. * Bodrova and Sokolov 2020 Bodrova, A. S.; Sokolov, I. M. Resetting Processes with Noninstantaneous Return. _Phys. Rev. 
E_**2020**, _101_, 052130. * Luo et al. 2022 Luo, Y.; Zeng, C.; Huang, T.; Ai, B.-Q. Anomalous Transport Tuned through Stochastic Resetting in the Rugged Energy Landscape of a Chaotic System with Roughness. _Phys. Rev. E_**2022**, _106_, 034208. * Pal et al. 2022 Pal, A.; Kostinski, S.; Reuveni, S. The Inspection Paradox in Stochastic Resetting. _J. Phys. A : Math. Theor._**2022**, _55_, 021001. * A Flexible Simulation Tool for Particle-Based Materials Modeling at the Atomic, Meso, and Continuum Scales. _Comp. Phys. Comm._**2022**, _271_, 108171. * Bonomi et al. 2009 Bonomi, M.; Branduardi, D.; Bussi, G.; Camilloni, C.; Provasi, D.; Raiteri, P.; Donadio, D.; Marinelli, F.; Pietrucci, F.; Broglia, R. A.; Parrinello, M. PLUMED: A portable plugin for free-energy calculations with molecular dynamics. _Computer Physics Communications_**2009**, _180_, 1961-1972. * Bonomi et al. 2019 Bonomi, M. et al. Promoting transparency and reproducibility in enhanced molecular simulations. _Nature Methods_**2019**, _16_, 670-673. * Tribello et al. 2014 Tribello, G. A.; Bonomi, M.; Branduardi, D.; Camilloni, C.; Bussi, G. PLUMED 2: New feathers for an old bird. _Computer Physics Communications_**2014**, _185_, 604-613. * Abraham et al. 2015 Abraham, M. J.; Murtola, T.; Schulz, R.; Pall, S.; Smith, J. C.; Hess, B.; Lindahl, E. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. _SoftwareX_**2015**, _1-2_, 19-25. * Reuveni 2016 Reuveni, S. Optimal Stochastic Restart Renders Fluctuations in First Passage Times Universal. _Physical Review Letters_**2016**, _116_, 170601. * Faradjian and Elber 2004 Faradjian, A. K.; Elber, R. Computing time scales from reaction coordinates by milestoning. _The Journal of Chemical Physics_**2004**, _120_, 10880-10889. * Bonati et al. 2021 Bonati, L.; Piccini, G.; Parrinello, M. Deep learning the slow modes for rare events sampling. _Proceedings of the National Academy of Sciences_**2021**, _118_, e2113533118. * Mendels et al. 2018 Mendels, D.; Piccini, G.; Parrinello, M. Collective Variables from Local Fluctuations. _The Journal of Physical Chemistry Letters_**2018**, \(9\), 2776-2781. * Piccini et al. 2018 Piccini, G.; Mendels, D.; Parrinello, M. Metadynamics with Discriminants: A Tool for Understanding Chemistry. _Journal of Chemical Theory and Computation_**2018**, _14_, 5040-5044. * Sultan and Pande 2018 Sultan, M. M.; Pande, V. S. Automated design of collective variables using supervised machine learning. _The Journal of Chemical Physics_**2018**, _149_, 094106. * Sidky et al. 2020 Sidky, H.; Chen, W.; Ferguson, A. L. Machine learning for collective variable discovery and enhanced sampling in biomolecular simulation. _Molecular Physics_**2020**, _118_, e1737742. * Karmakar et al. 2021 Karmakar, T.; Invernizzi, M.; Rizzi, V.; Parrinello, M. Collective variables for the study of crystallisation. _Molecular Physics_**2021**, _119_, e1893848. * Massey 1951 Massey, F. J. The Kolmogorov-Smirnov Test for Goodness of Fit. _Journal of the American Statistical Association_**1951**, _46_, 68-78. * Miller 1956 Miller, L.H. 1956, Table of Percentage Points of Kolmogorov Statistics. Journal of the American Statistical Association. Vol. 51, 111-121. [http://www.sciepub.com/reference/142693](http://www.sciepub.com/reference/142693). * Ray and Reuveni 2020 Ray, S.; Reuveni, S. Diffusion with resetting in a logarithmic potential. _The Journal of Chemical Physics_**2020**, _152_, 234110. * Ray and Reuveni 2021 Ray, S.; Reuveni, S. 
Resetting transition is governed by an interplay between thermal and potential energy. _The Journal of Chemical Physics_**2021**, _154_, 171103. * Evans and Majumdar 2011 Evans, M. R.; Majumdar, S. N. Diffusion with Stochastic Resetting. _Phys. Rev. Lett._**2011**, _106_, 160601. * Blumer et al. 2015 Blumer, O.; Reuveni, S.; Hirshberg, B. Input files for'resettingMetadynamics'. [https://github.com/OfirBlumer/resettingMetadynamics](https://github.com/OfirBlumer/resettingMetadynamics). **Supporting Information:** **Resetting Metadynamics** Ofir Blumer,\({}^{\dagger}\) Shlomi Reuveni,\({}^{\dagger,\ddagger,\lx@paragraphsign}\) and Barak Hirshberg\({}^{*,\dagger,\ddagger}\) \(\dagger\)_School of Chemistry, Tel Aviv University, Tel Aviv 6997801, Israel._ \(\ddagger\)_The Center for Computational Molecular and Materials Science, Tel Aviv University, Tel Aviv 6997801, Israel._ \(\lx@paragraphsign\)_The Center for Physics and Chemistry of Living Systems, Tel Aviv University, Tel Aviv 6997801, Israel._ E-mail: [email protected] ## 1 General simulation details ### Model systems Simulations of model potentials were performed in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS).[1] All of them were performed in the canonical (NVT) ensemble at a temperature \(T=300\,K\), using a Langevin thermostat with a friction coefficient \(\gamma=0.01\,fs^{-1}\). The integration time step was \(1\,fs\). We followed the trajectories of a single particle with mass \(m=40\,g\,mol^{-1}\), representing an argon atom. Metadynamics (MetaD) was implemented using PLUMED 2.7.1.[2, 3, 4] We used a bias factor of 10, bias height of \(0.5k_{B}T\) and grid spacing of \(0.01\AA\). The Gaussians width was \(\sigma=1.3,0.15\AA\) for the two wells model and the modified Faradjian-Elber potential respectively. ### Alanine tetrapeptide For the simulations of alanine tetrapeptide, we used input files by Invernizzi and Parrinello [55], given in PLUMED-NEST, the public repository of the PLUMED consortium [86], as plumID:22.003. 
The simulations were performed in GROMACS 2019.6 [57] patched with PLUMED 2.7.1.
## The modified Faradjian-Elber potential The modifications were made to: 1) stretch the y-axis, and 2) ensure there is a barrier also at \(y=0\) by setting the value of \(B\neq 1\). ## Estimation of mean first-passage time of transitions between conformers of alanine tetrapeptide To estimate the mean first-passage time (MFPT) between two conformers of alanine tetrapeptide, we first performed \(10^{4}\) unbiased simulations, which were stopped when a transition was observed. However, \(\sim 17.5\%\) of them did not show a transition even after \(\tau_{max}=10\,\mu s\). Therefore, unbiased simulations only provided the MFPT for transition times \(\tau<\tau_{max}\), and the probability to observe a transition prior to \(\tau_{max}\). We denote these quantities \(\langle\tau|\tau<\tau_{max}\rangle\) and \(P(\tau<\tau_{max})\), respectively. The true, unknown MFPT, \(\langle\tau\rangle\), may be written as: \[\langle\tau\rangle=P(\tau<\tau_{max})\langle\tau|\tau<\tau_{max}\rangle+\left(1-P(\tau<\tau_{max})\right)\langle\tau|\tau\geq\tau_{max}\rangle. \tag{3}\] Thus, it remains to evaluate the MFPT of trajectories with \(\tau\geq\tau_{max}\), denoted \(\langle\tau|\tau\geq\tau_{max}\rangle\), to obtain an estimation for \(\langle\tau\rangle\). To evaluate \(\langle\tau|\tau\geq\tau_{max}\rangle\), we assume that the probability density function of the process decays exponentially for times longer than some characteristic time \(\tau_{0}\). If \(\tau_{0}<\tau_{max}\), we can sample the rate of the decay from the tail of \(P(\tau<\tau_{max})\). Practically, we plot the survival probability at times \(\tau<\tau_{max}\) on a logarithmic scale and fit a linear function close to \(\tau=\tau_{max}\). The slope of the fit is taken as the exponential rate \(\mu\), and is used to evaluate: \[\langle\tau|\tau\geq\tau_{max}\rangle=\tau_{max}+\mu^{-1}. \tag{4}\] Substituting in Eq. 3 yields the estimated MFPT. About \(5\%\) of MetaD simulations using the \(\phi_{2}\) angle as CV did not show a transition. The MFPT for this case was estimated as explained above for the unbiased case. For this angle, we also estimated the coefficient of variation (COV), which required the standard deviation. To obtain it, for each trajectory \(j\) that did not show a transition, we sampled a value \(\eta_{j}\) from an exponential distribution with rate \(\mu\) and acquired a transition time \(\tau_{j}=\tau_{max}+\eta_{j}\). 
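A compact sketch of this censored-data estimator (Eqs. 3 and 4) is given below. The function name, the tail-fit window, and the toy data are ours and are only meant to illustrate the procedure described above.

```python
import numpy as np

def censored_mfpt(fpts, observed, tau_max, fit_window=0.2):
    """Estimate the MFPT from right-censored first-passage data (Eqs. 3-4).

    fpts       : first-passage times of trajectories that showed a transition
    observed   : P(tau < tau_max), fraction of trajectories that crossed in time
    tau_max    : maximal simulated time
    fit_window : fraction of tau_max (closest to tau_max) used for the tail fit
    """
    fpts = np.sort(np.asarray(fpts, dtype=float))
    # Empirical survival probability, rescaled so that S(tau_max) = 1 - observed.
    survival = 1.0 - observed * np.arange(1, fpts.size + 1) / fpts.size
    # Linear fit of log-survival close to tau_max gives the exponential rate mu.
    mask = fpts > (1.0 - fit_window) * tau_max
    slope, _ = np.polyfit(fpts[mask], np.log(survival[mask]), 1)
    mu = -slope
    mean_observed = fpts.mean()              # <tau | tau < tau_max>
    mean_censored = tau_max + 1.0 / mu       # Eq. 4
    return observed * mean_observed + (1.0 - observed) * mean_censored  # Eq. 3

# Toy check: exponential FPTs with unit mean, censored at tau_max = 2.
rng = np.random.default_rng(0)
tau = rng.exponential(1.0, size=10_000)
obs = tau[tau < 2.0]
print(censored_mfpt(obs, observed=obs.size / tau.size, tau_max=2.0))  # ~1.0
```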
## Kinetics inference for iMetaD simulations with SR The kinetics inference procedure for iMetaD simulations with SR is composed of the following steps. First, we obtain a set of \(N\) trajectories ending in a first-passage from iMetaD simulations with SR. We divide the full trajectories into the shorter trajectories between resetting events. These strips form a set of \(M_{tot}\geq N\) shorter iMetaD trajectories, \(N\) of which successfully end with a first-passage. The length of each strip, \(t_{j}=n_{j}\Delta t\), is scaled using the standard iMetaD acceleration factor \(\alpha_{j}\) [8, 10] to \(\tilde{t}_{j}=\alpha_{j}t_{j}\), where \[\alpha_{j}=\frac{1}{n_{j}}\sum_{i=1}^{n_{j}}e^{\beta V_{j}(s(t_{i}),t_{i})}. \tag{5}\] Here, \(n_{j}\) is the total number of time steps in the strip, \(\Delta t\) is the time step size, \(V_{j}\) is the external bias potential, \(s\) is the CV, \(t_{i}\) is the \(i\)-th time step, and \(\beta\) is the inverse temperature. Next, we evaluate the survival function, defined as \(S\left(\tilde{t}\right)=M_{\tilde{t}_{j}>\tilde{t}}/M_{tot}\), where \(M_{\tilde{t}_{j}>\tilde{t}}\) is the number of strips with a rescaled length larger than \(\tilde{t}\). Finally, we assume that the underlying true FPT distribution is exponential. For an exponential FPT distribution, \(\log\left(S\left(\tilde{t}\right)\right)\) decays linearly with a slope of \(-\langle\tau\rangle^{-1}\), where \(\langle\tau\rangle\) is the MFPT. We perform a linear fit to the obtained survival function, and use its slope to estimate the unbiased MFPT. To include all trajectories in the analysis, we took \(\tilde{t}\) to be smaller than the \(\tilde{t}_{j}\) of the shortest trajectory that did not show a transition. This procedure is demonstrated for alanine tetrapeptide. Figure S1(a) shows \(\log\left(S\left(\tilde{t}\right)\right)\) for iMetaD simulations with no SR, using the suboptimal CV \(\psi_{3}\) (in green). The true survival function, as obtained from unbiased simulations, is given in blue. Due to bias over-deposition, the resulting survival function decays much more slowly than the true one. However, at short times, where the bias is minimal, even a suboptimal CV gives a survival function that tracks the unbiased one. The benefit of resetting is in providing extensive sampling of the short-time region, leading to a more reliable estimation of the exponential decay of the survival. The survival function estimated from iMetaD with SR is given in panel (b), showing an improved agreement with the unbiased results. The quality of the linear fit (black line) provides an assessment of the reliability of the results. Specifically, we use the Pearson correlation coefficient \(R\) between the samples and the linear fit. Figure S2 shows \(R^{2}\) next to the associated prediction error, as a function of resetting rate. It reaches \(R^{2}\to 1\) for high resetting rates, leading to minimal error.
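A minimal sketch of this inference pipeline is given below. It only follows the steps listed above; the function names are ours and are not taken from the example input files of the GitHub repository [58].

```python
import numpy as np

def acceleration_factor(bias_along_strip, beta):
    """Standard iMetaD acceleration factor of one strip (Eq. 5):
    time average of exp(beta * V) over the strip."""
    return np.mean(np.exp(beta * np.asarray(bias_along_strip)))

def infer_unbiased_mfpt(strip_times, strip_alphas, crossed):
    """Estimate the unbiased MFPT from iMetaD + SR strips.

    strip_times  : physical length of every strip between resetting events
    strip_alphas : corresponding acceleration factors (Eq. 5)
    crossed      : boolean flags, True for strips ending in a first-passage
    Returns (mfpt_estimate, r_squared of the log-survival linear fit).
    """
    rescaled = np.asarray(strip_times) * np.asarray(strip_alphas)
    crossed = np.asarray(crossed, dtype=bool)
    m_tot = rescaled.size
    # Evaluate only below the shortest non-crossing strip, so every strip
    # still contributes to the survival estimate (no censoring bias).
    t_cut = rescaled[~crossed].min() if (~crossed).any() else rescaled.max()
    t_grid = np.sort(rescaled[crossed & (rescaled < t_cut)])
    survival = 1.0 - np.arange(1, t_grid.size + 1) / m_tot
    log_s = np.log(survival)
    slope, intercept = np.polyfit(t_grid, log_s, 1)
    predicted = slope * t_grid + intercept
    ss_res = np.sum((log_s - predicted) ** 2)
    ss_tot = np.sum((log_s - log_s.mean()) ** 2)
    return -1.0 / slope, 1.0 - ss_res / ss_tot
```

The returned coefficient of determination plays the role of the \(R^{2}\) reliability indicator discussed above: values close to one signal that the exponential assumption and the short-time rescaling are consistent with the data.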
2308.08847
META-SELD: Meta-Learning for Fast Adaptation to the new environment in Sound Event Localization and Detection
For learning-based sound event localization and detection (SELD) methods, different acoustic environments in the training and test sets may result in large performance differences in the validation and evaluation stages. Different environments, such as different sizes of rooms, different reverberation times, and different background noise, may be reasons for a learning-based system to fail. On the other hand, acquiring annotated spatial sound event samples, which include onset and offset time stamps, class types of sound events, and direction-of-arrival (DOA) of sound sources is very expensive. In addition, deploying a SELD system in a new environment often poses challenges due to time-consuming training and fine-tuning processes. To address these issues, we propose Meta-SELD, which applies meta-learning methods to achieve fast adaptation to new environments. More specifically, based on Model Agnostic Meta-Learning (MAML), the proposed Meta-SELD aims to find good meta-initialized parameters to adapt to new environments with only a small number of samples and parameter updating iterations. We can then quickly adapt the meta-trained SELD model to unseen environments. Our experiments compare fine-tuning methods from pre-trained SELD models with our Meta-SELD on the Sony-TAU Realistic Spatial Soundscapes 2023 (STARSSS23) dataset. The evaluation results demonstrate the effectiveness of Meta-SELD when adapting to new environments.
Jinbo Hu, Yin Cao, Ming Wu, Feiran Yang, Ziying Yu, Wenwu Wang, Mark D. Plumbley, Jun Yang
2023-08-17T08:10:56Z
http://arxiv.org/abs/2308.08847v1
Meta-SELD: Meta-Learning for Fast Adaptation to the New Environment in Sound Event Localization and Detection ###### Abstract For learning-based sound event localization and detection (SELD) methods, different acoustic environments in the training and test sets may result in large performance differences in the validation and evaluation stages. Different environments, such as different sizes of rooms, different reverberation times, and different background noise, may be reasons for a learning-based system to fail. On the other hand, acquiring annotated spatial sound event samples, which include onset and offset time stamps, class types of sound events, and direction-of-arrival (DOA) of sound sources is very expensive. In addition, deploying a SELD system in a new environment often poses challenges due to time-consuming training and fine-tuning processes. To address these issues, we propose Meta-SELD, which applies meta-learning methods to achieve fast adaptation to new environments. More specifically, based on Model Agnostic Meta-Learning (MAML), the proposed Meta-SELD aims to find good meta-initialized parameters to adapt to new environments with only a small number of samples and parameter updating iterations. We can then quickly adapt the meta-trained SELD model to unseen environments. Our experiments compare fine-tuning methods from pre-trained SELD models with our Meta-SELD on the Sony-TAU Realistic Spatial Soundscapes 2023 (STARSSS23) dataset. The evaluation results demonstrate the effectiveness of Meta-SELD when adapting to new environments. Jinbo Hu\({}^{1,2}\), Yin Cao\({}^{3}\), Ming Wu\({}^{1}\), Feiran Yang\({}^{1}\), Ziying Yu\({}^{1}\), Wenwu Wang\({}^{4}\), Mark D. Plumbley\({}^{4}\), Jun Yang\({}^{1,2}\)\({}^{1}\)Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China, {hujinbo, mingwu, feiran, yuziying, jyang}@mail.ioa.ac.cn \({}^{2}\)University of Chinese Academy of Sciences, Beijing, China \({}^{3}\)Department of Intelligent Science, Xi'an Jiaotong Liverpool University, China, [email protected] \({}^{4}\)Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK, {w.wang, m.plumbley}@surrey.ac.uk SELD, MAML, unseen environments, fast adaptation, meta-learning, few-shot ## 1 Introduction Sound event localization and detection (SELD) refers to detecting categories, presence, and spatial locations of different sound sources. SELD characterizes sound sources in a spatial-temporal manner. SELD was first introduced in Task 3 of the Detection and Classification of Acoustics Scenes and Events (DCASE) 2019 Challenge [1]. After three iterations of Task 3 of the DCASE Challenge, types of data transform from computationally generated spatial recordings to real-scene recordings [2]. SELD can be regarded as a Multi-Task Learning problem. Adavanne et al. [3] proposed SELDnet for a joint task of sound event detection (SED) and regression-based direction-of-arrival (DOA) estimation. SELDnet is unable to detect homogeneous overlap, which refers to overlapping sound events of the same type but with different locations. The Event-Independent Network V2 (EINV2), with a track-wise output format and permutation invariant training, was proposed to tackle the homogeneous overlap detection problem [4, 5, 6]. Different from two outputs of SED and DOA in SELDnet and EINV2, the Activity-coupled Cartesian DOA (ACCDOA) approach merges two subtasks into a single task [7, 8]. 
The Cartesian DOA vectors contain the activity information of sound events in the ACCDOA method. In practical SELD system deployment, unseen complex environments may lead to performance degradation. In the STARSS22 dataset [2], there are no duplicated recording environments in the training and validation sets. Our previous system submitted to Task 3 of the DCASE 2022 Challenge obtained the second rank in the team ranking [9]. However, we found unsatisfactory generalization performance for fold4_room2 recordings in the _dev-test-tau_ set of STARSS22 [9]. Experimental results show that the class-dependent localization error \(\mathrm{LE_{CD}}\) is high and the location-dependent F-score \(\mathrm{F}_{\leq 20^{\circ}}\) is low, but the class-dependent localization recall \(\mathrm{LR_{CD}}\) is high. This suggests that the localization performance of our system in fold4_room2 may be weak. In addition, manually annotated spatial sound event recordings are very expensive. Taking the STARSS22 dataset as an example [2], each scene was captured with a 32-channel spherical microphone array, a \(360^{\circ}\) camera, a motion capture (mocap) system, and wireless microphones. Onset, offset, and class information of sound events were manually detected and classified by annotators through listening to wireless microphone recordings and watching video recordings, while positional annotations were extracted for each event by masking the tracker data with the temporal activity window of the event. In the end, \(360^{\circ}\) video recordings are utilized to validate those annotations. This type of complex recording and annotation process means that building large datasets of annotated spatial recordings is expensive. Few-shot learning can act as a test bed for learning like humans, allowing a system to learn from small samples and reducing data gathering effort and computation [10]. Meta-learning, which facilitates few-shot learning, learns a general-purpose learning algorithm that generalizes across tasks and ideally enables each new task to be learned well from the task-distribution view [11]. Meta-learning has advanced few-shot learning significantly in computer vision [12, 13]. One of the most successful meta-learning algorithms is model-agnostic meta-learning (MAML) [14]. MAML tries to learn general initial parameters that can be rapidly adapted to another task. The method is model-agnostic and compatible with any model trained with gradient descent. It is applicable to a variety of learning problems, including classification, regression, and reinforcement learning. In audio signal processing, meta-learning methods have recently attracted attention as a way to solve few-shot learning problems. Meta-TTS [15] was proposed to build personalized speech synthesis systems with few enrolled recordings of unseen users' voices using MAML. In [16], MAML is utilized to allow sound source localization models to adapt to different environments and conditions. In this paper, we propose Meta-SELD, applying meta-learning to SELD models with the activity-coupled Cartesian DOA (ACCDOA) representation [7] to improve performance, especially in localization. We use MAML to find general initial parameters that minimize the loss across several tasks in Meta-SELD so that it can quickly adapt to an unseen environment. We take recordings in different environments as different tasks and aim to improve the performance on a specific unseen environment with a few samples recorded in the same environment. 
The experimental results demonstrate that Meta-SELD outperforms the fine-tuning method from the pre-trained SELD model in the STARSS23 dataset. ## 2 Related Work Activity-coupled Cartesian DOA (ACCDOA) representation [7] assigns a sound event activity to the length of a corresponding Cartesian DOA. When inferring, the threshold is set for the length of class-wise Cartesian DOA vectors to determine whether an event class is active. In contrast to EINV2, the ACCDOA representation merges SED and DOA branches into a single branch, decreasing the model parameters and avoiding the necessity of balancing the loss measuring on the SED task and the DOA task. The ACCDOA representation can not detect homogenous overlaps. Therefore, multi-ACCDOA which still contains a single branch and combines class-wise output format and track-wise output format, is proposed to overcome the problem [8]. While each track in the track-wise output format of EINV2 only detects one event class and a corresponding location, each track in the multi-ACCDOA predicts activities and corresponding locations of all target classes. Auxiliary duplicating permutation invariant training (ADPIT) is also proposed to train each track of the multi-ACCDOA with original targets and duplicated targets, enabling each track to regard the same target as the single one. The multi-ACCDOA representation is shown in Fig. 1. Its outputs are track-wise and class-wise Cartesian DOA vectors. Each vector length indicates the activity of the event. Besides the activity threshold, multi-ACCDOA employs angle thresholds to determine whether the predicted objects are the same or different. ## 3 Meta-SELD ### The SELD model Without loss of generality, in this study, we adopt a simple Convolutional Recurrent Neural Network (CRNN) as our network, which is similar to the baseline of Task 3 of DCASE 2022 Challenge [2] but with ACCDOA format. The network has three convolution blocks followed by a one-layer bidirectional gated recurrent unit (BiGRU). The network takes the concatenation of log-mel spectrograms and intensity vectors as input and predicts active sound events with corresponding Cartesian DOA vectors for each time step. The network architecture of CRNN is shown in Table 1. ### Meta-SELD training Given a model represented by a parameterized function \(f_{\Theta}\) with parameters \(\Theta\), MAML [14] learns the initial parameters \(\Theta_{0}\) from general tasks \(\mathcal{T}_{i}\) sampled from the training set \(\mathcal{D}_{\texttt{train}}\) and is expected to perform well on unseen tasks from the test set \(\mathcal{D}_{\texttt{test}}\) after a few iterations of parameters update with a small number of samples from the corresponding task. These initial parameters are very sensitive to being further optimized on a specific task. Each task \(\mathcal{T}_{i}\) consists of a labeled support set \(\mathcal{S}_{i}\) of \(K\) samples and a labeled query set \(\mathcal{Q}_{i}\) of \(Q\) samples. A new task is expected to be quickly adapted with \(K\) samples, which is known as \(K\)-shot learning. The loss function of MAML is defined as \[\mathcal{L}=\sum_{\mathcal{T}_{i}\sim p(\mathcal{T})}\mathcal{L}_{\mathcal{T} _{i}}(f_{\Theta}) \tag{1}\] where \(p(\mathcal{T})\), which is sampled from \(\mathcal{D}_{\texttt{train}}\), is a distribution over tasks that we want our model to be able to adapt to. 
In contrast to supervised deep learning methods, the objective of which is to find optimal parameters to minimize the loss function across all training samples, MAML tries to find generalized initial parameters for different tasks. MAML will then update the initial parameters after several iterations of training on data of new tasks. There are two groups of parameters in the MAML algorithm, meta-parameters and adapt-parameters. In the meta-training phase, MAML starts with randomly initialized meta-parameters \(\Theta\) and then adapts to a new specific task \(\mathcal{T}_{i}\) with several update iterations using \(\mathcal{S}_{i}\). The meta-parameters \(\Theta\) become adapt-parameters \(\Theta^{\prime}_{i}\): \[\Theta^{\prime}_{i}=\Theta-\alpha\nabla_{\Theta}\mathcal{L}_{\mathcal{T}_{i} }\left(f_{\Theta},\mathcal{S}_{i}\right) \tag{2}\] where \(\alpha\) is the adaptation learning rate for adapt-parameters updates. After updates across a batch of tasks, the meta-parameters are updated as: \[\Theta=\Theta-\beta\nabla_{\Theta}\sum_{\mathcal{T}_{i}}\mathcal{L}_{\mathcal{ T}_{i}}\left(f_{\Theta^{\prime}_{i}},\mathcal{Q}_{i}\right) \tag{3}\] \begin{table} \begin{tabular}{c} \hline Log-mel spectrogram \& Intensity vectors \\ \hline (Conv2d \(3\times 3\) @ \(32\), BatchNorm2d, ReLU) \(\times\) 2, Avg Pooling \(2\times 2\) \\ \hline (Conv2d \(3\times 3\) @ \(64\), BatchNorm2d, ReLU) \(\times\) 2, Avg Pooling \(2\times 2\) \\ \hline (Conv2d \(3\times 3\) @ \(128\), BatchNorm2d, ReLU) \(\times\) 2, Avg Pooling \(2\times 2\) \\ \hline (Conv2d \(3\times 3\) @ \(256\), BatchNorm2d, ReLU) \(\times\) 2, Avg Pooling \(1\times 2\) \\ \hline Global average pooling @ frequency \\ \hline 1-layer BiGRU of \(128\) hidden size, \(256\times 39\) linear layer, Tanh \\ \hline Mean Square Error \\ \hline \end{tabular} \end{table} Table 1: The network architecture of CRNN Figure 1: The multi-ACCDOA representation of the SELD model. There is no track dimension in the ACCDOA representation. where \(\beta\) is the meta step size. The loss \(\mathcal{L}_{\mathcal{T}_{i}}\) is calculated by the parameterized function \(f_{\Theta^{\prime}}\) on the query set \(\mathcal{Q}_{i}\). After updating \(\Theta\) on the query set, \(\Theta\) will be used as initial parameters for the following meta-training steps. We aim to adapt to an unseen environment with \(K\) samples (\(K\)-shot). The objective of MAML is to find optimal initial parameters across several tasks, so we need to construct a set of tasks from the training set \(\mathcal{D}_{\text{train}}\). \(\mathcal{D}_{\text{train}}\) is split according to the different recording rooms. Audio clips recorded in different rooms belong to different tasks. We first sample a batch of tasks from all tasks and then sample \(K+Q\) samples in each task, where \(K\) samples for a support set \(\mathcal{S}_{i}\) and \(Q\) samples for a query set \(\mathcal{Q}_{i}\). The overall training procedure of MAML is summarized in Algorithm 1. Step 8 in Algorithm 1 is an inner-loop update for adapt-parameters, while Step 12 is outer-loop updates for meta-parameters. ### Meta-SELD test In the meta-testing phase, a specific unseen task \(\mathcal{T}_{j}^{\text{test}}\) created using \(\mathcal{D}_{\text{test}}\) is used. \(\mathcal{T}_{j}^{\text{test}}\) consists of a labeled support set \(\mathcal{S}_{j}^{\text{test}}\) of \(K\) samples, and an unlabeled query set \(\mathcal{Q}_{j}^{\text{test}}\) of \(Q\) samples. 
After training the model using well-trained parameter \(\Theta\) from the meta-training phase as the initial parameters on \(\mathcal{S}_{j}^{\text{test}}\), we get updated parameters \(\Theta_{j}\). We then use \(f_{\Theta^{\prime}_{j}}\) to evaluate on \(\mathcal{Q}_{j}^{\text{test}}\). The meta processes for testing and training are slightly different. Similar to the training, the test set \(\mathcal{D}_{\text{test}}\) is split according to the recording room of each audio clip. For clips of each room, we also chose \(K\) samples for meta-test support set \(\mathcal{S}_{j}^{\text{test}}\) and all remaining samples for meta-test query set \(\mathcal{Q}_{j}^{\text{test}}\). After \(N\) iterations of parameters update on \(\mathcal{S}_{j}^{\text{test}}\), the meta-parameters \(\Theta\) are updated to \(\Theta_{j,N}\). The final performance is evaluated on \(\mathcal{Q}_{j}^{\text{test}}\) with \(f_{\Theta_{j,N}}\). ## 4 Experiments ### Dataset There are 16 different recording rooms in total in the development set of the STARSS23 dataset, including nine recording rooms in _dev-train-set_ and seven recordings rooms in _dev-test-set_. The development set of STARSS23, which contains roughly 7.5 hours of recordings, has less data than the development set in DCASE 2021, which contains roughly 13 hours of synthetic recordings [17]. Considering the complexity of the real-scene environment, we use additional datasets to improve the performance. We generated simulated data using the generator code provided by DCASE1. We synthesize multi-channel spatial recordings by convolving monophonic sound event examples with multi-channel Spatial Room Impulse Responses (SRIRs). Samples of sound events are selected from AudioSet [18] and FSD50K [19], based on the affinity of the labels in those datasets to target classes in STARSS23. PANNs [20] are then employed to clean the selection of the clips. We use pre-trained PANNs to infer these clips and select high-quality clips based on output probability above 0.8. We extracted SRIRs from the TAU Spatial Room Impulse Response Database (TAU-SRIR DB)2, which contains SRIRs captured in 9 rooms at Tampere University. It was used for official synthetic datasets in DCASE 2019-2021 [1, 17, 21]. Footnote 1: [https://github.com/danielkransuse/DCASE2022-data-generator](https://github.com/danielkransuse/DCASE2022-data-generator) Footnote 2: [https://zenodo.org/record/6408611](https://zenodo.org/record/6408611) The 2700 1-minute audio clips that we synthesized using the abovementioned SRIRs from 9 rooms are used for \(\mathcal{D}_{\text{train}}\), and all of _dev-set_ of STARSS23, recorded in 16 rooms, are used for \(\mathcal{D}_{\text{test}}\). ### Experimental setup The sampling rate of the dataset is 24 kHz. We extracted 64-dimensional log mel spectrograms from four-channel first-order ambisonics (FOA) signals with a Hanning window of 1024 points, and a hop size of 320. Each audio clip is segmented to a fixed length of five seconds with no overlap for training and inference. In the meta-training phase, the training set and test set are divided into 9 tasks and 16 tasks, respectively, corresponding to 9 rooms and 16 rooms. We first sample a batch of rooms randomly and then sample a batch of examples from each of the rooms. The batch of samples of each room constructs a task, and a part of the samples are support samples while the remaining samples are query samples. The batch size of rooms and samples is 4 and 64, respectively. 
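As a concrete reference for the meta-update in Eqs. (2) and (3), the following is a minimal PyTorch (2.0+) sketch of one outer MAML step over a batch of room tasks. It is not the authors' implementation: the function and variable names are ours, the loss is a plain MSE rather than the masked multi-ACCDOA/ADPIT loss, and BatchNorm buffers are left to the module for simplicity.

```python
import torch
from torch import nn

def maml_meta_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=5):
    """One outer MAML update (Eqs. 2-3) over a batch of tasks.

    `tasks` is an iterable of (support_x, support_y, query_x, query_y) tensors;
    in Meta-SELD a task would hold clips from one room with ACCDOA-style targets.
    """
    loss_fn = nn.MSELoss()
    meta_params = dict(model.named_parameters())
    meta_opt.zero_grad()
    outer_loss = 0.0
    for sx, sy, qx, qy in tasks:
        # Inner loop: adapt a copy of the meta-parameters on the support set (Eq. 2).
        adapted = {k: v for k, v in meta_params.items()}
        for _ in range(inner_steps):
            support_loss = loss_fn(torch.func.functional_call(model, adapted, (sx,)), sy)
            grads = torch.autograd.grad(support_loss, list(adapted.values()), create_graph=True)
            adapted = {k: v - inner_lr * g for (k, v), g in zip(adapted.items(), grads)}
        # Outer objective: loss of the adapted parameters on the query set (Eq. 3).
        outer_loss = outer_loss + loss_fn(torch.func.functional_call(model, adapted, (qx,)), qy)
    outer_loss.backward()   # gradients flow back to the meta-parameters
    meta_opt.step()
    return float(outer_loss)
```

Here `model` would be the CRNN of Table 1, `meta_opt` an AdamW optimizer over its parameters, and each element of `tasks` one room-specific episode built as described above; a first-order approximation (dropping `create_graph=True`) is a common memory-saving variant at some cost in fidelity to Eq. (3).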
A batch of samples contains 30 support samples and 34 query samples. In the meta-test phase, we sort the audio clips according to the filename, and select the first 30 samples of the recordings of each room as samples of the support set \(\mathcal{S}_{j}^{\text{test}}\). The remaining samples of each room serve as samples of the query set \(\mathcal{Q}_{j}^{\text{test}}\). The AdamW optimizer is used for updates of the meta-parameters of MAML, while the SGD optimizer is used to update the adapt-parameters. The meta step size \(\beta\) begins with 0.001 in the first 100 epochs out of 150 epochs in total and is then decreased by 10% every 20 epochs. The adaptation step size and the number of update iterations are always kept at 0.01 and 5, respectively. To demonstrate the effectiveness of Meta-SELD, we compare Meta-SELD with the fine-tuning method from the pre-trained SELD model. Firstly, we train a SELD model with the AdamW optimizer on \(\mathcal{D}_{\text{train}}\) from scratch. The learning rate is 0.0003 for the first 70 epochs and then decreases to 0.00003 for the following 20 epochs. Secondly, we initialize the parameters from the previously trained SELD model and then use \(\mathcal{S}_{i}^{\text{test}}\) and \(\mathcal{Q}_{i}^{\text{test}}\) as the training set and the test set of the \(i\)-th room to fine-tune. Similar to the process of the adapt-parameters updates in MAML, the SGD optimizer with a step size of 0.01 and 5 update iterations is used for fine-tuning. A joint metric of localization and detection [22, 23] is used: location-dependent F-score (\(\mathrm{F}_{\leq 20^{\circ}}\)) and error rate (\(\mathrm{ER}_{\leq 20^{\circ}}\)), and class-dependent localization recall (\(\mathrm{LR_{CD}}\)) and localization error (\(\mathrm{LE_{CD}}\)). \(\mathrm{F_{\leq 20^{\circ}}}\) and \(\mathrm{ER_{\leq 20^{\circ}}}\) consider true positives predicted under a spatial threshold of \(20^{\circ}\) from the ground truth. \(\mathrm{LE_{CD}}\) and \(\mathrm{LR_{CD}}\) are computed for localization predictions in the case that the types of sound events are predicted correctly. A macro-average of \(\mathrm{F_{\leq 20^{\circ}}}\), \(\mathrm{LR_{CD}}\) and \(\mathrm{LE_{CD}}\) is used. We use an aggregated SELD metric, computed as \[\mathcal{E}_{\mathtt{SELD}}=\frac{1}{4}\left[\mathrm{ER_{\leq 20^{\circ}}}+\left(1-\mathrm{F_{\leq 20^{\circ}}}\right)+\frac{\mathrm{LE_{CD}}}{180^{\circ}}+\left(1-\mathrm{LR_{CD}}\right)\right]. \tag{4}\]
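For reference, the aggregation in Eq. (4) is a simple average of four error-like terms; a minimal helper (our own naming, with purely illustrative numbers rather than values taken from Table 2) is:

```python
def aggregated_seld_error(er, f_score, le_deg, lr):
    """Aggregate the four SELD metrics into E_SELD (Eq. 4).
    f_score and lr are fractions in [0, 1]; le_deg is in degrees."""
    return 0.25 * (er + (1.0 - f_score) + le_deg / 180.0 + (1.0 - lr))

print(aggregated_seld_error(er=0.62, f_score=0.40, le_deg=18.0, lr=0.63))  # ~0.42
```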
Specifically, \(\mathrm{ER_{\leq 20^{\circ}}}\), \(\mathrm{F_{\leq 20^{\circ}}}\), and \(\mathrm{LR_{CD}}\) of fold3_room22 and fold4_room23 outperform those of the other methods. Meta-SELD mainly improves the performance of SED in fold3_room22 and fold4_room23. All metrics of fold4_room2 are improved in Meta-SELD compared with the fine-tuning method, especially in DOA estimation. In fold4_room2, all of the pre-trained model, the fine-tuning method, and Meta-SELD achieve \(\mathrm{LR_{CD}}\) of over 70%, but the \(\mathrm{LE_{CD}}\) of the three methods is always high compared with the \(\mathrm{LE_{CD}}\) of other rooms. Meta-SELD decreases \(\mathrm{LE_{CD}}\) by \(14.8^{\circ}\) and \(8.3^{\circ}\) compared with the pre-trained model and the fine-tuning method, respectively, in fold4_room2, directly leading to an increase of \(\mathrm{F_{\leq 20^{\circ}}}\) and a decrease of \(\mathrm{ER_{\leq 20^{\circ}}}\). However, performance degradation happens in fold3_room4, fold3_room7, fold3_room14, and fold4_room16, where Meta-SELD has the worst metric scores. There is no significant change in \(\mathrm{LE_{CD}}\), and the decline in SED performance is the main factor. A possible reason for this observation is that there are conflicts in optimizing Meta-SELD across a batch of rooms. Experimental results demonstrate that Meta-SELD can find better initial parameters across a batch of tasks than the fine-tuning method, especially in rooms where the pre-trained model and the fine-tuning method perform worse. Meta-SELD reduces the risk of overfitting when using a small number of samples, which usually happens in the fine-tuning method. ## 5 Conclusion In this paper, we presented Meta-SELD, which applies Model-Agnostic Meta-Learning (MAML) to the sound event localization and detection task to achieve fast adaptation to unseen environments. The method utilizes only a small number of samples and a few update iterations of training. We use the STARSS23 dataset and synthesized 2700 one-minute samples by convolving monophonic sound event clips with multi-channel spatial room impulse responses. The sound event clips are extracted from FSD50K and AudioSet and are further filtered by the PANNs model through a probability threshold. The SRIRs used are from TAU-SRIR DB. Our methods are trained on the synthetic dataset and evaluated on the entire development set of the STARSS23 dataset. Audio clips recorded in the same room or synthesized using SRIRs collected from the same room are regarded as the same task for MAML. The experimental results show that the Meta-SELD method improves \(\mathcal{E}_{\mathtt{SELD}}\) significantly in those rooms where both the pre-trained model and the fine-tuning method perform unsatisfactorily. The overall score demonstrates that the Meta-SELD method outperforms the fine-tuning method on average. ## 6 Acknowledgement This work was supported in part by Grant "XJTLU RDF-22-01-084", UK Engineering and Physical Sciences Research Council (EPSRC) Grant EP/T019751/1 "AI for Sound (AI4S)". For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
\begin{table} Table 2: Per-room \(\mathrm{ER_{\leq 20^{\circ}}}\), \(\mathrm{F_{\leq 20^{\circ}}}\), \(\mathrm{LE_{CD}}\), \(\mathrm{LR_{CD}}\), and \(\mathcal{E}_{\mathtt{SELD}}\) of the pre-trained SELD model, the fine-tuning method, and Meta-SELD on the development set of STARSS23, with the best value for each metric and room in bold; the final _overall_ row reports the micro average across all rooms. \end{table}
2305.11332
Equivariant cohomology of even-dimensional complex quadrics from a combinatorial point of view
The purpose of this paper is to determine the ring structure of the graph equivariant cohomology of the GKM graph induced from the even-dimensional complex quadrics. We show that the graph equivariant cohomology is generated by two types of subgraphs in the GKM graph, which are subject to four different types of relations. By utilizing this ring structure, we establish the multiplicative relation for the generators of degree 2n and provide an alternative computation of the ordinary cohomology ring of 4n-dimensional complex quadrics, as previously computed by H. Lai. Additionally, we provide a combinatorial explanation for why the square of the 2n degree generator x vanishes when n is odd and is non-vanishing when n is even.
Shintaro Kuroki
2023-05-18T22:42:33Z
http://arxiv.org/abs/2305.11332v3
# Equivariant cohomology of even-dimensional complex quadrics from a combinatorial point of view ###### Abstract. The purpose of this paper is to determine the ring structure of the torus equivariant cohomology of the complex quadrics \(Q_{2n}\) by computing the graph equivariant cohomology of their corresponding GKM graphs. We show that the graph equivariant cohomology of \(Q_{2n}\) is generated by two types of subgraphs in the GKM graph, namely \(M_{v}\) and \(\Delta_{K}\), which are subject to four different types of relations. By utilizing this ring structure, we establish the multiplicative relation for the generators \(\Delta_{K}\) of degree \(2n\) and provide an alternative computation of the ordinary cohomology ring of \(Q_{2n}\), as previously computed by H. Lai. ## 1. Introduction In the paper [1], Goresky, Kottwitz, and MacPherson established a framework for studying the class of manifolds with a torus action, known as _equivariantly formal_, by using their fixed points and one-dimensional orbits. These manifolds are now commonly referred to as _GKM manifolds_. Expanding on their work, Guillemin and Zara introduced the notion of an abstract _GKM graph_ in [1] as a combinatorial counterpart of GKM manifolds, thus initiating the study of spaces with torus actions using the combinatorial structure of GKM graphs. Since then, the study of GKM manifolds and GKM graphs, commonly known as GKM theory, has been the subject of extensive research (e.g., [1, 1, 2, 3]). One can view GKM theory as a methodology for computing equivariant cohomology based on the combinatorial structure of a graph. For an equivariantly formal GKM manifold, its equivariant cohomology is isomorphic to the _graph equivariant cohomology_ of its corresponding GKM graph, see (3.1). On the other hand, for abstract GKM graphs, the graph equivariant cohomology can be defined independently of geometry, leading to its study in various articles (e.g., [1, 1, 2, 1, 1, 2, 3, 4, 5]). In particular, in [10], Maeda-Masuda-Panov introduced the combinatorial counterpart of a torus manifold, where a _torus manifold_ is defined as a \(2n\)-dimensional \(T^{n}\)-manifold with fixed points. This combinatorial object is called a _torus graph_, and its properties have been extensively studied. Notably, they established that the graph equivariant cohomology of a torus graph is isomorphic to its _face ring_, which is defined using the simplicial poset induced from the subgraphs of a torus graph, relying solely on algebraic and combinatorial arguments. The advantage of establishing such a result for abstract GKM graphs, without relying on geometry, is that it can be applied to a wider class of equivariantly formal GKM manifolds (or spaces) that share the same GKM graph. Hence, the result in [10] can be regarded as a generalization of the computation of the equivariant cohomology ring of torus manifolds presented in [10]. In our paper, we focus on the study of GKM graphs corresponding to _even-dimensional complex quadrics_. An even-dimensional complex quadric \(Q_{2n}\) is defined by \[Q_{2n}:=\{[z_{1}:\cdots:z_{2n+2}]\in\mathbb{C}P^{2n+1}\ |\ \sum_{i=1}^{n+1}z_{i}z_{2n+3-i}=0\},\] having the natural \(T^{n+1}\)-action \[[z_{1}:\cdots:z_{2n+2}]\mapsto[z_{1}t_{1}:z_{2}t_{2}:\cdots:z_{n+1}t_{n+1}:t_{n+1}^{-1}z_{n+2}:t_{n}^{-1}z_{n+3}:\cdots:t_{1}^{-1}z_{2n+2}], \tag{1.1}\] where \((t_{1},\ldots,t_{n+1})\in T^{n+1}\).
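As a quick sanity check of (1.1) (a minimal sketch, not part of the paper; the names `n`, `z`, `t`, and `weight` are illustrative), one can verify symbolically that the action preserves the defining equation of \(Q_{2n}\): each monomial \(z_{i}z_{2n+3-i}\) is scaled by \(t_{i}t_{i}^{-1}=1\).

```python
# Symbolic check (for n = 2) that the torus action (1.1) preserves the quadric;
# the variable names below are illustrative and not taken from the paper.
import sympy as sp

n = 2
z = sp.symbols(f"z1:{2*n + 3}")   # homogeneous coordinates z_1, ..., z_{2n+2}
t = sp.symbols(f"t1:{n + 2}")     # torus parameters t_1, ..., t_{n+1}

def weight(i):
    """Character by which the i-th homogeneous coordinate is scaled in (1.1)."""
    return t[i - 1] if i <= n + 1 else 1 / t[(2*n + 3 - i) - 1]

quadric = sum(z[i - 1] * z[(2*n + 3 - i) - 1] for i in range(1, n + 2))
acted = sum(weight(i) * z[i - 1] * weight(2*n + 3 - i) * z[(2*n + 3 - i) - 1]
            for i in range(1, n + 2))

assert sp.simplify(acted - quadric) == 0   # the action maps Q_{2n} to itself
```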
Since \(Q_{2n}\simeq SO(2n+2)/SO(2n)\times SO(2)\), this action is equivalent to the restriction of the transitive \(SO(2n+2)\)-action to its maximal torus \(T^{n+1}\). As \(T^{n+1}\) is also a maximal torus of \(SO(2n)\times SO(2)\) (i.e., \(SO(2n)\times SO(2)\) is a maximal rank subgroup of \(SO(2n+2)\)), it follows from [1] that the fixed points and one-dimensional orbits of the \(T^{n+1}\)-action have the structure of a graph. Therefore, the GKM graph of \(Q_{2n}\) with the \(T^{n+1}\)-action (1.1) can be constructed by labeling the edges with tangential representations. Although the action (1.1) has a finite kernel \(\mathbb{Z}_{2}=\{\pm 1\}\subset T^{n+1}\), we can obtain an effective \(T^{n+1}\)-action on \(Q_{2n}\) by considering the quotient \(T^{n+1}/\mathbb{Z}_{2}\). In this paper, we denote by \(\mathcal{Q}_{2n}\) the GKM graph obtained from this effective \(T^{n+1}\)-action, see Section 2.2. On the other hand, the ordinary cohomology ring \(H^{*}(Q_{2n})\) of \(Q_{2n}\) over the integer coefficient was computed by H. Lai in [10, 11] (also see [1, Exercise 68.3] for \(H^{*}(Q_{m})\) as the Chow ring1). In particular, we have the following isomorphisms: Footnote 1: Since \(Q_{m}\) can also be regarded as the homogeneous space of the affine algebraic group \(SO(m+2,\mathbb{C})\), it follows from [1, Appendix C.3.4] that its Chow ring is isomorphic to its cohomology ring, i.e., \(A^{*}(Q_{m})\simeq H^{2*}(Q_{m};\mathbb{Z})\). We also note that the rational cohomology ring of \(Q_{2n-1}\) is isomorphic to that of \(\mathbb{C}P^{2n-1}\) (e.g. see [12]); however, these two cohomologies are not isomorphic over integer coefficients (e.g. see [11, 13]). \[H^{*}(Q_{m})\simeq\left\{\begin{array}{ll}\mathbb{Z}[c,x]/\langle c^{2n+1}-2cx,x^{2}-c^{2n}x\rangle&\text{if }m=4n,&\text{where }\deg c=2,\ \deg x=4n\\ \mathbb{Z}[c,x]/\langle c^{2n+2}-2cx,x^{2}\rangle&\text{if }m=4n+2,&\text{where }\deg c=2,\ \deg x=4n+2\end{array}\right. \tag{1.2}\] Using this formula, one can conclude that \(H^{odd}(Q_{2n})=0\), which means that \(Q_{2n}\) is an _equivariantly formal GKM manifold_. Therefore, the equivariant cohomology \(H^{*}_{T^{n+1}}(Q_{2n})\) of the effective \(T^{n+1}\)-action on \(Q_{2n}\) can be computed by using the graph equivariant cohomology of its GKM graph, denoted by \(\mathcal{Q}_{2n}\). The main goal of this paper is to determine the equivariant cohomology ring of the effective \(T^{n+1}\)-action on \(Q_{2n}\) by explicitly describing its generators and relations in terms of the GKM graphs. The main theorem of this paper, which is stated precisely in Section 5, is as follows: **Theorem 1.1**.: _There exists the following isomorphism as a ring:_ \[H^{*}_{T^{n+1}}(Q_{2n})\simeq\mathbb{Z}[\mathcal{Q}_{2n}].\] Since \(Q_{2n}\) is equivariantly formal, the Serre spectral sequence of the fiber bundle \(Q_{2n}\to ET\times_{T}Q_{2n}\to BT\) collapses at the \(E_{2}\)-term. This implies that the ordinary cohomology \(H^{*}(Q_{2n})\) can be obtained as the quotient of the equivariant cohomology \(H^{*}_{T^{n+1}}(Q_{2n})\) by \(H^{>0}(BT)\). It is worth noting that the ring structure of \(H^{*}(Q_{2n})\), as shown in (1.2), depends on whether \(n\) is even or odd. We provide a combinatorial explanation for the difference between \(H^{*}(Q_{4n})\) and \(H^{*}(Q_{4n+2})\) using Theorem 1.1 (see Lemma 7.2 and Corollary 7.3 for the precise statements). The paper is organized as follows, consisting of Sections 2 through 7.
In Section 2, we compute the GKM graph \(\mathcal{Q}_{2n}\) of the effective \(T^{n+1}\)-action on \(Q_{2n}\). In Section 3, we introduce the graph equivariant cohomology \(H^{*}(\mathcal{Q}_{2n})\) and define the generators \(M_{v}\) and \(\Delta_{K}\), studying their properties. In Section 4, we present the four relations among \(M_{v}\) and \(\Delta_{K}\). The main theorem (Theorem 5.1) is proved in Section 5. Section 6 and Section 7 serve as additional sections with applications of Theorem 1.1. In Section 6, we establish multiplicative relations among \(\Delta_{K}\)'s of degree \(2n\). In Section 7, the ordinary cohomology ring of \(Q_{2n}\) is studied from a GKM theoretical perspective. ## 2. GKM graphs of even-dimensional complex quadrics \(Q_{2n}\) In this section, we compute the GKM graph of the effective \(T^{n+1}\)-action on \(Q_{2n}\) (see [1, 10] for the basic facts about GKM graphs). In this paper, we identify the cohomology ring \(H^{*}(BT^{n+1})\) with the following polynomial ring generated by degree \(2\) generators \(x_{1},\ldots,x_{n+1}\): \[H^{*}(BT^{n+1})\simeq\mathbb{Z}[x_{1},\ldots,x_{n+1}]. \tag{2.1}\] It is worth noting that the generator \(x_{i}\), for \(i=1,\ldots,n+1\), is the equivariant first Chern class of the \(T^{n+1}\)-equivariant complex line bundle over a point, where the action on the unique fiber is defined by the \(i\)th coordinate projection \(p_{i}:T^{n+1}\to S^{1}\in\operatorname{Hom}(T^{n+1},S^{1})\). This gives the following identifications: \[H^{2}(BT^{n+1})\simeq\operatorname{Hom}(T^{n+1},S^{1})\simeq(\mathfrak{t}_{ \mathbb{Z}}^{n+1})^{*}\simeq\mathbb{Z}^{n+1},\] where \(\mathfrak{t}_{\mathbb{Z}}^{n+1}\) is the lattice of the Lie algebra of \(T^{n+1}\). In this paper, we often use this identification. ### The GKM graph of the natural \(T^{n+1}\)-action on \(Q_{2n}\) Suppose that the \(T^{n+1}\)-action on \(Q_{2n}\) is defined by (1.1). We first compute the GKM graph of this non-effective \(T^{n+1}\)-action. By definition, the GKM graph consists of the fixed points (vertices) and the invariant 2-spheres (edges), and the labels on edges (the _axial function_ of the GKM graph) which are defined by the tangential representations on fixed points. It is easy to check from the definition (1.1) that the fixed points of \(Q_{2n}\) are \[Q_{2n}^{T}=\{[e_{i}]\ |\ i=1,\dots,2n+2\},\] where \([e_{i}]=[0:\dots:0:1:0:\dots:0]\in\mathbb{C}P^{2n+1}\) (only the \(i\)th coordinate is 1). Then, we may denote the invariant 2-spheres by the following symbol: \[[z_{i}:z_{j}]:=[0:\dots:0:z_{i}:0:\dots:0:z_{j}:0:\dots:0]\in Q_{2n} \tag{2.2}\] where \(i+j\neq 2n+3\). Therefore, we obtain the following graph from the \(T^{n+1}\)-action on \(Q_{2n}\): * the set of vertices \(V_{2n}=[2n+2]:=\{1,2,\dots,2n+2\}\); * the set of edges \(E_{2n}=\{ij\ |\ i,j\in[2n+2]\text{ such that }i\neq j,\ i+j\neq 2n+3\}\). We denote this graph as \(\Gamma_{2n}:=(V_{2n},E_{2n})\), see Figure 1. _Remark 2.1_.: For convenience, we often denote the vertex \(j\in V_{2n}\) such that \(i+j=2n+3\) by \(\overline{i}\). Namely, the set of vertices can be written as \[V_{2n}=[2n+2]=\{1,2,\dots,n+1,\overline{n+1},\overline{n},\dots,\overline{1}\}.\] Moreover, by using this notation, the set of edges can be written as \[E_{2n}=\{ij\ |\ i,j\in V_{2n}\text{ such that }j\neq i,\overline{i}\}\] We next compute the tangential representations around the fixed points and put labels on the edges, denoted by \(\widetilde{\alpha}:E_{2n}\to H^{2}(BT^{n+1})\) and called an _axial function_ on edges.
Figure 1. The left graph is \(\Gamma_{4}\) (\(n=2\)) induced from the \(T^{3}\)-action on \(Q_{4}\), and the right graph is \(\Gamma_{6}\) (\(n=3\)) induced from the \(T^{4}\)-action on \(Q_{6}\). Recall that the tangential representations around the fixed points decompose into the complex 1-dimensional irreducible representations. One can also regard each complex 1-dimensional irreducible representation as the tangential representation on the fixed point of the invariant 2-sphere. This implies that, to compute the tangential representations around fixed points, it is enough to compute the tangential representation on each invariant 2-sphere \([z_{i}:z_{j}]\in Q_{2n}\), see (2.2). By the definition of the \(T^{n+1}\)-action on \([z_{i}:z_{j}]\), we may write the action of \(t=(t_{1},\dots,t_{n+1})\in T^{n+1}\) on \([z_{i}:z_{j}]\) as \[[z_{i}:z_{j}]\mapsto[p_{i}(t)z_{i}:p_{j}(t)z_{j}],\] giving the \(T^{n+1}\)-actions on the two fixed points of the 2-sphere \([z_{i}:z_{j}]\) by \[[1:z_{j}]\mapsto[1:p_{i}(t)^{-1}p_{j}(t)z_{j}],\quad[z_{i}:1]\mapsto[p_{i}(t)p_{j}(t)^{-1}z_{i}:1],\] where \(p_{i}:T^{n+1}\to S^{1}\) is the surjective homomorphism defined as follows: \[p_{i}(t)=\left\{\begin{array}{ll}t_{i}&\text{if }i\in[n+1]\\ t_{\overline{i}}^{-1}&\text{if }i\in\{n+2,\ldots,2n+2\}.\end{array}\right.\] Therefore, the axial function \(\widetilde{\alpha}:E_{2n}\to H^{2}(BT^{n+1})\) is defined by the following equation (see Figure 2): \[\widetilde{\alpha}(ij)=x_{j}-x_{i}, \tag{2.3}\] where \(x_{i}\in H^{2}(BT^{n+1})\) is the element such that * for \(i\in[n+1]\), \(x_{i}\) is the generator of \(H^{2}(BT^{n+1})\) corresponding to the \(i\)th coordinate projection \(p_{i}\), also see (2.1); * for \(i\in\{n+2,\ldots,2n+2\}\), \(x_{i}:=-x_{\overline{i}}\). ### The GKM graph of the effective \(T^{n+1}\)-action on \(Q_{2n}\) Since the \(T^{n+1}\)-action (1.1) on \(Q_{2n}\) is not effective, the axial function \(\widetilde{\alpha}\) defined by (2.3) does not satisfy the effectiveness conditions (see [14, Section 2.1]). For example, around the vertex \(1\in V_{2n}\), the axial functions are \[x_{2}-x_{1},\ \ldots,\ x_{n+1}-x_{1},\ -x_{n+1}-x_{1},\ -x_{n}-x_{1},\ \ldots,\ -x_{2}-x_{1}\in(\mathfrak{t}_{\mathbb{Z}}^{n+1})^{*}, \tag{2.4}\] and it is easy to check that these vectors span the lattice \(\langle x_{2}-x_{1},\ldots,x_{n+1}-x_{1},-x_{n+1}-x_{1}\rangle_{\mathbb{Z}}\), which is a proper sublattice of \((\mathfrak{t}_{\mathbb{Z}}^{n+1})^{*}\). The situation is similar for the axial functions at the other vertices. To apply the GKM theory, we will identify \(\langle x_{2}-x_{1},\ldots,x_{n+1}-x_{1},-x_{n+1}-x_{1}\rangle_{\mathbb{Z}}\) with \((\mathfrak{t}_{\mathbb{Z}}^{n+1})^{*}\). In this paper, we make the following replacements: * \(x_{i}-x_{1}\) is replaced by \(x_{i-1}\) for \(i=2,\ldots,n+1\); * \(-x_{n+1}-x_{1}\) is replaced by \(x_{n+1}\). For the other vectors in (2.4), we have the following equalities: \[-x_{i}-x_{1}=-(x_{i}-x_{1})+(x_{n+1}-x_{1})+(-x_{n+1}-x_{1})\] for \(i=2,\ldots,n\). Therefore, we may replace the vectors in (2.4) with the following vectors (respectively) \[x_{1},\ \ldots,\ x_{n},\ x_{n+1},\ -x_{n-1}+x_{n}+x_{n+1},\ \ldots,\ -x_{1}+x_{n}+x_{n+1}. \tag{2.5}\] Notice that the vectors in (2.5) are primitive generators of \((\mathfrak{t}_{\mathbb{Z}}^{n+1})^{*}\). This gives the axial function induced by the effective \(T^{n+1}(\simeq T^{n+1}/\mathbb{Z}_{2})\)-action on \(Q_{2n}\), where \(\mathbb{Z}_{2}=\{\pm 1\}\) is the kernel of the \(T^{n+1}\)-action in (1.1).
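As a small consistency check of this replacement (a sketch, not from the paper; all variable names are illustrative), one can verify for \(n=3\) that rewriting the weights (2.4) in the basis \(x_{2}-x_{1},\ldots,x_{n+1}-x_{1},-x_{n+1}-x_{1}\) and renaming that basis to \(x_{1},\ldots,x_{n+1}\) reproduces exactly the list (2.5).

```python
# Sketch: express the weights (2.4) at the vertex 1 in the sublattice basis and
# rename the basis elements; names (x, y, basis, B) are illustrative only.
import sympy as sp

n = 3
x = sp.symbols(f"x1:{n + 2}")   # original generators x_1, ..., x_{n+1}
y = sp.symbols(f"y1:{n + 2}")   # renamed basis (written x_1, ..., x_{n+1} in the text)

basis = [x[i] - x[0] for i in range(1, n + 1)] + [-x[n] - x[0]]
B = sp.Matrix([[sp.expand(b).coeff(xi) for b in basis] for xi in x])

weights_24 = basis + [-x[i] - x[0] for i in range(n - 1, 0, -1)]
expected_25 = list(y) + [-y[i - 1] + y[n - 1] + y[n] for i in range(n - 1, 0, -1)]

for w, e in zip(weights_24, expected_25):
    v = sp.Matrix([sp.expand(w).coeff(xi) for xi in x])
    c = B.solve(v)                          # coordinates in the sublattice basis
    assert all(ci.is_integer for ci in c)   # integral: w lies in the sublattice
    assert sp.expand(sum(ci * yi for ci, yi in zip(c, y)) - e) == 0
print("weights (2.4), rewritten in the new basis, agree with (2.5)")
```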
Figure 2. The axial function \(\widetilde{\alpha}\) around the vertex \(1\) in \(\Gamma_{4}\). This corresponds to the GKM graph induced from the \(T^{3}\)-action on \(Q_{4}\) defined by (1.1). Note that \(\overline{6}=1\), \(\overline{5}=2\), \(\overline{4}=3\). Applying the same procedure to the axial functions around each of the other vertices (see (2.3)), we can define the axial function of the effective \(T^{n+1}\)-action as follows (see Figure 3): **Definition 2.2**.: Set \(f:V_{2n}\to H^{2}(BT^{n+1})\) as \[f(j)=\left\{\begin{array}{ll}x_{j-1}-x_{n+1}&j=1,\ldots,n+2\\ x_{n}-x_{2n+2-j}&j=n+3,\ldots,2n+2\end{array}\right.\] where \(x_{0}=0\) and \(\langle x_{1},\ldots,x_{n+1}\rangle=H^{2}(BT^{n+1})\). Then we define the axial function \(\alpha:E_{2n}\to H^{2}(BT^{n+1})\) as \[\alpha(ij):=f(j)-f(i)\] for \(j\neq i,\overline{i}\). In this paper, the symbol \(\mathcal{Q}_{2n}\) represents the GKM graph \((\Gamma_{2n},\alpha)\) (or equivalently \((\Gamma_{2n},f)\), called a _\(0\)-cochain presentation_) for \(\Gamma_{2n}=(V_{2n},E_{2n})\), with \(\alpha\) and \(f\) as in Definition 2.2. We exhibit some useful properties of the GKM graph \(\mathcal{Q}_{2n}\). **Lemma 2.3**.: _For every vertex \(i\in V_{2n}\) in the GKM graph \(\mathcal{Q}_{2n}\), the following equation holds:_ \[f(i)+f(\overline{i})=x_{n}-x_{n+1}.\] Proof.: Since \(i+\overline{i}=2n+3\), we have that \(x_{i-1}=x_{2n+2-\overline{i}}\). The equality then follows immediately from Definition 2.2. **Lemma 2.4**.: _For every edge \(ij\in E_{2n}\) in the GKM graph \(\mathcal{Q}_{2n}\), the following equation holds:_ \[\alpha(ij)=-\alpha(\overline{i}\overline{j}).\] Proof.: By Lemma 2.3, we have \[\alpha(ij) =f(j)-f(i)=(x_{n}-x_{n+1}-f(\overline{j}))-(x_{n}-x_{n+1}-f(\overline{i}))\] \[=f(\overline{i})-f(\overline{j})=\alpha(\overline{j}\overline{i})=-\alpha(\overline{i}\overline{j}).\] **Lemma 2.5**.: _For every \(j\in V_{2n}\setminus\{i,\overline{i}\}\), the following equation holds:_ \[\alpha(ij)+\alpha(i\overline{j})=x_{n}-x_{n+1}-2f(i)\] Proof.: By definition of the axial function \(\alpha:E_{2n}\to H^{2}(BT^{n+1})\) and Lemma 2.3, we have that \[\alpha(ij)+\alpha(i\overline{j}) =(f(j)-f(i))+(f(\overline{j})-f(i))=f(j)+f(\overline{j})-2f(i)\] \[=x_{n}-x_{n+1}-2f(i).\] **Lemma 2.6**.: _The GKM graph \(\mathcal{Q}_{2n}\) is three-independent, i.e., for every vertex \(i\in V_{2n}\) and every distinct three vertices \(j_{1},j_{2},j_{3}\in V_{2n}\setminus\{i,\overline{i}\}\), the axial functions \(\alpha(ij_{1}),\alpha(ij_{2}),\alpha(ij_{3})\) are linearly independent._ Proof.: This is straightforward from Definition 2.2. Figure 3. The GKM graph \(\mathcal{Q}_{2n}\) when \(n=2\) (also see the left graph in Figure 1). The right figure shows the axial function \(\alpha:E_{4}\to H^{2}(BT^{3})\) of \(\mathcal{Q}_{4}\) around the vertex \(1\). The left figure shows its \(0\)-cochain presentation \(f:V_{4}\to H^{2}(BT^{3})\). ## 3. Two generators The _graph equivariant cohomology_ of the GKM graph \(\mathcal{Q}_{2n}\) is defined by \[H^{*}(\mathcal{Q}_{2n}):=\{h:V_{2n}\to H^{*}(BT^{n+1})\ |\ h(i)-h(j)\equiv 0\mod \alpha(ij)\ \text{for}\ ij\in E_{2n}\}. \tag{3.1}\] The equation \(h(i)-h(j)\equiv 0\mod\alpha(ij)\) in (3.1) is also called a _congruence relation_. Note that \(H^{*}(\mathcal{Q}_{2n})\) has the graded \(H^{*}(BT^{n+1})\)-algebra structure induced by the graded algebra structure of \(\bigoplus_{t\geq 0}R_{t}\), where \(R_{t}\) is the degree \(t\) part defined by \(R_{t}:=H^{t}(BT^{n+1})\).
In particular, there is the injective homomorphism \[\iota:H^{*}(BT^{n+1})\to H^{*}(\mathcal{Q}_{2n}) \tag{3.2}\] such that the image of \(x\in H^{*}(BT^{n+1})\), say \(\iota(x):V_{2n}\to H^{*}(BT^{n+1})\), is defined by the function \[\iota(x)(v)=x\] for all \(v\in V_{2n}\). This induces the \(H^{*}(BT^{n+1})\)-action on \(H^{*}(\mathcal{Q}_{2n})\). The following lemma holds: **Lemma 3.1**.: _For the effective \(T^{n+1}\)-action on \(Q_{2n}\), the following graded \(H^{*}(BT^{n+1})\)-algebra isomorphism holds:_ \[H^{*}_{T^{n+1}}(Q_{2n})\simeq H^{*}(\mathcal{Q}_{2n}).\] Proof.: The effective \(T^{n+1}\)-action on \(Q_{2n}\) is obtained by taking the quotient of the torus by the finite kernel \(\mathbb{Z}_{2}\) of the action (1.1). This implies that all isotropy subgroups of the effective \(T^{n+1}\)-action are connected. Therefore, by using \(H^{odd}(Q_{2n})=0\) and [10] (also see [11, Theorem 2.12]), we have the statement. Lemma 3.1 means that computing the equivariant cohomology \(H^{*}_{T^{n+1}}(Q_{2n})\) is equivalent to computing the graph equivariant cohomology \(H^{*}(\mathcal{Q}_{2n})\). The goal of this paper is to describe its generators and relations by the combinatorial data of the GKM graph \(\mathcal{Q}_{2n}\); this will be proved in Theorem 5.1. The injective homomorphism (3.2) for \(H^{*}(\mathcal{Q}_{2n})\) is also given in Proposition 3.6; together with Theorem 5.1, this establishes the \(H^{*}(BT^{n+1})\)-algebra structure on \(H^{*}(\mathcal{Q}_{2n})\). To prove it, in this section, we introduce two types of elements in \(H^{*}(\mathcal{Q}_{2n})\) which will be the ring generators of \(H^{*}(\mathcal{Q}_{2n})\). ### Degree \(2\) generators We first define the degree two element, denoted by \(M_{v}\), in \(H^{2}(\mathcal{Q}_{2n})\) for every \(v\in V_{2n}\). **Definition 3.2** (degree \(2\) generators).: Take a vertex \(v\in V_{2n}=[2n+2]\). We define the function \(M_{v}:V_{2n}\to H^{2}(BT^{n+1})\) by \[M_{v}(j)=\left\{\begin{array}{ll}0&j=v\\ \alpha(jv)=f(v)-f(j)&j\neq v,\overline{v}\\ \alpha(\overline{v}k)+\alpha(\overline{v}\overline{k})=x_{n}-x_{n+1}-2f(\overline{v})&j=\overline{v}\end{array}\right.\] The equality for \(M_{v}(\overline{v})\) is obtained by Lemma 2.5; this means that \(M_{v}(\overline{v})\) does not depend on the choice of \(k\in V_{2n}\setminus\{v,\overline{v}\}\). The following proposition holds. **Proposition 3.3**.: _For every \(v\in V_{2n}\), \(M_{v}\in H^{2}(\mathcal{Q}_{2n})\)._ Proof.: We claim that \(M_{v}(j)-M_{v}(k)\equiv 0\mod\alpha(jk)\) for every \(jk\in E_{2n}\) by case-by-case checking. **The case when \(j=v\):** For every \(k\in V_{2n}\setminus\{j,\overline{j}\}=V_{2n}\setminus\{v,\overline{v}\}\), by Definition 3.2 we have \[M_{v}(j)-M_{v}(k)=0-(f(v)-f(k))=f(k)-f(v)=\alpha(vk)\equiv 0\mod\alpha(vk)=\alpha(jk).\] **The case when \(j=\overline{v}\):** For every \(k\in V_{2n}\setminus\{j,\overline{j}\}=V_{2n}\setminus\{v,\overline{v}\}\), we have that \[M_{v}(j)-M_{v}(k) =\alpha(\overline{v}k)+\alpha(\overline{v}\overline{k})-\alpha(kv)\] \[=\alpha(\overline{v}k)+\alpha(\overline{v}\overline{k})-\alpha(\overline{v}\overline{k})\quad\text{(by Lemma 2.4 and Definition 2.2)}\] \[=\alpha(\overline{v}k)\equiv 0\mod\alpha(\overline{v}k)=\alpha(jk).\] **The case when \(j\neq v,\overline{v}\):** With a method similar to that demonstrated above for the two cases (\(k\neq\overline{v}\) and \(k=\overline{v}\)), we can easily check that \(M_{v}(j)-M_{v}(k)\equiv 0\mod\alpha(jk)\).
Therefore, we have that \(M_{v}\in H^{2}(\mathcal{Q}_{2n})\). **Example 3.4**.: For \(n=2\), Figure 4 represents the class \(M_{6}\in H^{2}(\mathcal{Q}_{4})\). Figure 4. The element \(M_{6}\in H^{2}(\mathcal{Q}_{4})\). ### Some properties for degree \(2\) generators \(M_{v}\) Before we define the higher degree generators, we introduce three properties for \(M_{v}\)'s. For the vertices \(W\subset V_{2n}\), we denote the _full-subgraph_ with vertices \(W\) by \(\Gamma_{W}\), i.e., \(\Gamma_{W}\) consists of the following data: * the vertices \(W\); * the edges \(E_{W}:=\{ij\in E_{2n}\ |\ i,j\in W\}\). Note that, by Definition 3.2, the value of \(M_{v}(j)\in H^{2}(BT^{n+1})\) for \(j\neq\overline{v}\) coincides with the normal axial function \(\alpha(jv)\) on the vertex \(j\) of the full-subgraph \(\Gamma_{I}\), where \(I=V_{2n}\setminus\{v\}\). The following proposition shows that an element in \(H^{2}(\mathcal{Q}_{2n})\) with such a property is uniquely determined. **Proposition 3.5**.: _If an element \(A\in H^{2}(\mathcal{Q}_{2n})\) satisfies \(A(j)=M_{v}(j)\) for every \(j\in V_{2n}\setminus\{v,\overline{v}\}\), then \(A=M_{v}\)._ Proof.: We first claim that \(A(v)=0=M_{v}(v)\). By using the congruence relations on the edges \(jv\) for all \(j\in V_{2n}\setminus\{v,\overline{v}\}\), we have \[A(j)-A(v)=M_{v}(j)-A(v)=\alpha(jv)-A(v)\equiv-A(v)\equiv 0\mod\alpha(jv).\] This shows that for every \(j\in V_{2n}\setminus\{v,\overline{v}\}\) there exists an integer \(k_{j}\) such that \[A(v)=-k_{j}\alpha(jv)=k_{j}\alpha(vj).\] In particular, for every \(j_{1},\ j_{2}\in V_{2n}\setminus\{v,\overline{v}\}\), \[A(v)=k_{j_{1}}\alpha(vj_{1})=k_{j_{2}}\alpha(vj_{2}).\] By Lemma 2.6, this gives that \(k_{j}=0\), thus establishing \(A(v)=0\). We next claim that \(A(\overline{v})=x_{n}-x_{n+1}-2f(\overline{v})=M_{v}(\overline{v})\). By using the congruence relations on the edges \(j\overline{v}\) for all \(j\in V_{2n}\setminus\{v,\overline{v}\}\), we have \[A(j)-A(\overline{v})=M_{v}(j)-A(\overline{v})=\alpha(jv)-A(\overline{v})\equiv 0\mod\alpha(j\overline{v}).\] This shows that for every \(j\in V_{2n}\setminus\{v,\overline{v}\}\) there exists an integer \(k_{j}\) which satisfies the following equation: \[A(\overline{v}) =\alpha(jv)+k_{j}\alpha(j\overline{v})\] \[=\alpha(\overline{v}\overline{j})+k_{j}\alpha(v\overline{j})\quad\text{(by Lemma 2.4 and Definition 2.2)}\] \[=x_{n}-x_{n+1}-2f(\overline{v})-\alpha(\overline{v}j)+k_{j}\alpha(v\overline{j})\quad\text{(by Lemma 2.5)}\] \[=x_{n}-x_{n+1}-2f(\overline{v})-(1+k_{j})\alpha(\overline{v}j)\quad\text{(by Lemma 2.4)}\] In particular, this equation holds for every \(j_{1},\ j_{2}\in V_{2n}\setminus\{v,\overline{v}\}\). Therefore, by using a method similar to the proof of \(A(v)=0\), we obtain \(1+k_{j}=0\), thus \(k_{j}=-1\). Therefore, by Lemma 2.4 and Lemma 2.5, \[A(\overline{v})=\alpha(jv)-\alpha(j\overline{v})=x_{n}-x_{n+1}-2f(\overline{v}).\] This establishes \(A=M_{v}\). Recall \(\iota:H^{*}(BT^{n+1})\to H^{*}(\mathcal{Q}_{2n})\) in (3.2). By abuse of notation, we also denote \(\iota(x):V_{2n}\to H^{*}(BT^{n+1})\) as \(x:V_{2n}\to H^{*}(BT^{n+1})\) for an element \(x\in H^{*}(BT^{n+1})\). The following proposition shows that \(x\) can also be presented by \(M_{v}\)'s.
**Proposition 3.6**.: _The generator \(x_{i}\in H^{*}(BT^{n+1})\) for \(i=1,\dots,n+1\) is obtained by the following equality:_ \[x_{i}=M_{i+1}-M_{1}.\] Proof.: Because \(i\in\{1,\dots,n+1\}\), for all \(j\in V_{2n}\setminus\{\overline{1},\overline{i+1}\}\), we have that \[M_{i+1}(j)-M_{1}(j)=f(i+1)-f(j)-(f(1)-f(j))=f(i+1)-f(1)=x_{i}-x_{n+1}-(x_{0}-x_{n+1})=x_{i}.\] For \(j=\overline{1}=2n+2\), we have \[M_{i+1}(2n+2)-M_{1}(2n+2) =f(i+1)-f(2n+2)-(x_{n}-x_{n+1}-2f(2n+2))\] \[=f(i+1)+f(2n+2)-(f(i+1)+f(\overline{i+1}))\quad\text{(by Lemma 2.3)}\] \[=f(2n+2)-f(\overline{i+1})=x_{i}\quad\text{(by Definition 2.2).}\] For \(j=\overline{i+1}=2n+2-i\), we have \[M_{i+1}(2n+2-i)-M_{1}(2n+2-i) =(x_{n}-x_{n+1}-2f(2n+2-i))-(f(1)-f(2n+2-i))\] \[=x_{n}-f(2n+2-i)\quad\text{(by Definition 2.2).}\] In this case, by using Definition 2.2 again, \[x_{n}-f(2n+2-i)=\left\{\begin{array}{ll}x_{n}-(x_{n}-x_{i})&i=1,\dots,n-1\\ x_{n}-(x_{2n+1-i}-x_{n+1})&i=n,n+1\end{array}\right.\] Therefore, \(M_{i+1}(2n+2-i)-M_{1}(2n+2-i)=x_{i}\). These equations show that \(M_{i+1}(v)-M_{1}(v)=x_{i}\) for all \(v\in V_{2n}\). This establishes the statement. We also have the following proposition for the \(0\)-cochain presentation \(f:V_{2n}\to H^{2}(BT^{n+1})\) defined in Definition 2.2. **Proposition 3.7**.: _The \(0\)-cochain presentation \(f:V_{2n}\to H^{2}(BT^{n+1})\) satisfies that \(f=-M_{n+2}\)._ Proof.: By the definitions of \(f\) and \(M_{n+2}\), we can easily check the statement. **Example 3.8**.: The left figure of Figure 3 in Section 2.2 also represents that \(f=-M_{4}\). ### Higher degree generators We next define the degree \(2l\) element \(\Delta_{K}\) in \(H^{2l}(\mathcal{Q}_{2n})\) for some \(K\subset V_{2n}\) such that \(|K|=2n+1-l\), where \(|K|\) is the cardinality of \(K\). For a non-empty subset \(K\subset V_{2n}\), by definition of \(\Gamma_{2n}\), the following two properties are equivalent: * the full-subgraph \(\Gamma_{K}\) is a complete subgraph of \(\Gamma_{2n}\); * if \(i\in K\), then \(\overline{i}\not\in K\) (or equivalently \(\{i,\overline{i}\}\not\subset K\) for all \(i\in V_{2n}\)). We call one of these properties _the property_\((*)\). Note that if \(K\) satisfies the property \((*)\), then its cardinality satisfies \(1\leq|K|\leq n+1\). **Definition 3.9** (degree \((\geq)2n\) generators).: Let \(K\subset V_{2n}=[2n+2]\) be a non-empty subset that satisfies the property \((*)\). We define the function \(\Delta_{K}:V_{2n}\to H^{4n-2|K|+2}(BT^{n+1})\) by \[\Delta_{K}(j)=\left\{\begin{array}{ll}\prod_{k\not\in K\cup\{\overline{j}\}}\alpha(jk)=\prod_{k\not\in K\cup\{\overline{j}\}}(f(k)-f(j))&j\in K\\ 0&j\not\in K\end{array}\right. \tag{3.3}\] Note that \(\Delta_{K}\) is nothing but the _Thom class_ of the GKM subgraph \(\Gamma_{K}\) (see [14, Section 4]). Therefore, by arguments similar to the proof of [14, Lemma 4.1], we have the following lemma: **Lemma 3.10**.: _If \(K\subset V_{2n}\) satisfies the property \((*)\), then \(\Delta_{K}\in H^{4n-2|K|+2}(\mathcal{Q}_{2n})\)._ _Remark 3.11_.: Geometrically, \(\Delta_{K}\) is the equivariant Thom class of the invariant submanifold in \(Q_{2n}\) (see [10]) which is diffeomorphic to a projective space whose fixed point set is \(\{[e_{i}]\ |\ i\in K\}\).
For example, there exists the following subspace which is diffeomorphic to \(\mathbb{C}P^{l-1}\) in \(Q_{2n}\) for every \(1\leq l\leq n+1\): \[\{[z_{1}:z_{2}:\cdots:z_{l}:0:\cdots:0]\in Q_{2n}\ |\ z_{i}\in\mathbb{C}\} \simeq\mathbb{C}P^{l-1}.\] In this case, \(K=[l]\subset[2n+2]\) and the class \(\Delta_{K}\in H^{4n-2l+2}(\mathcal{Q}_{2n})\) corresponds to the equivariant Thom class of \(\mathbb{C}P^{l-1}\subset Q_{2n}\) in the equivariant cohomology \(H_{T}^{4n-2l+2}(Q_{2n})\). **Example 3.12**.: For the GKM graph \(\mathcal{Q}_{4}\), the set of vertices \(K=\{1,2,3\}\) satisfies the property \((*)\). Figure 5 represents the class \(\Delta_{K}\in H^{4}(\mathcal{Q}_{4})\). **Example 3.13**.: For the GKM graph \(\mathcal{Q}_{4}\), the set of vertices \(L=\{1,2\}\) also satisfies the property \((*)\). Figure 6 represents the class \(\Delta_{L}\in H^{6}(\mathcal{Q}_{4})\). ## 4. Four relations In this section, we introduce the four types of relations among \(M_{v}\)'s and \(\Delta_{K}\)'s. **Relation 1**.: We use the following notation for \(J\subset V_{2n}\). \[G_{J}:=\left\{\begin{array}{ll}M_{v}&\text{if $J=V_{2n}\setminus\{v\}$ for a vertex $v\in V_{2n}$}\\ \Delta_{J}&\text{if $J$ satisfies that the property $(*)$, i.e., $\{i,\vec{i}\}\not\subset J$ for every $i\in V_{2n}$}\end{array}\right. \tag{4.1}\] By Definition 3.2 and Definition 3.9, we have the following relation in \(H^{*}(\mathcal{Q}_{2n})\) (see Figure 7): **Lemma 4.1**.: _There is the following relation:_ \[\prod_{\cap J=\emptyset}G_{J}=0. \tag{4.2}\] **Relation 2**.: We define the element \(X\in H^{2}(\mathcal{Q}_{2n})\) by the map \(X:V_{2n}\to H^{2}(BT^{n+1})\) which satisfies \[X(k):=x_{n}-x_{n+1}-2f(k)\] for all \(k\in V_{2n}\). Then, by Lemma 2.5, for every \(j\in V_{2n}\) such that \(j,\overline{j}\neq k\), there exists the following equation: \[X(k)=\alpha(kj)+\alpha(k\overline{j}).\] By Lemma 2.3 and Definition 3.2, we have the following relation (see Figure 8): **Lemma 4.2**.: _For every \(v\in V_{2n}\), there is the following relation:_ \[M_{v}+M_{\overline{v}}=X. \tag{4.3}\] **Relation 3**.: Assume that the subset \(I\subset V_{2n}\) satisfies that \(|I|=n\) and the property \((*)\). Then, because \(|V_{2n}|=2n+2\), there exists the unique pair \(\{a,\overline{a}\}\subset I^{c}=V_{2n}\setminus I\) such that \[\Delta_{K},\quad\Delta_{L}\in H^{2n}(\mathcal{Q}_{2n})\] for \(K=(I\cup\{a\})^{c}=I^{c}\setminus\{a\}\) and \(L=(I\cup\{\overline{a}\})^{c}=I^{c}\setminus\{\overline{a}\}\). Then, the following relation holds (see Figure 9 for \(n=2\) and Figure 10 for \(n=3\)): **Lemma 4.3**.: _For every \(I\subset V_{2n}\) as above, there is the following relation:_ \[\prod_{i\in I}M_{i}=\Delta_{(I\cup\{a\})^{c}}+\Delta_{(I\cup\{\overline{a}\}) ^{c}}. \tag{4.4}\] Proof.: If \(v\in I\), then \(\prod_{i\in I}M_{i}(v)=0=\Delta_{(I\cup\{a\})^{c}}(v)+\Delta_{(I\cup\{\overline{a} \})^{c}}(v)\). If \(v\in I^{c}\setminus\{a,\overline{a}\}\), then \(\overline{v}\in I\). 
Thus, we have \[\prod_{i\in I}M_{i}(v)=M_{\overline{v}}(v)\prod_{i\in I\setminus \{\overline{v}\}}\alpha(vi) =(\alpha(va)+\alpha(v\overline{a}))\prod_{i\in I\setminus\{ \overline{v}\}}\alpha(vi)\] \[=\Delta_{(I\cup\{a\})^{c}}(v)+\Delta_{(I\cup\{\overline{a}\})^{ c}}(v).\] For the vertex \(a(\not\in I)\), we have \[\Delta_{(I\cup\{a\})^{c}}(a)+\Delta_{(I\cup\{\overline{a}\})^{c}}(a)=0+\prod_{ i\in I}\alpha(ai)=\prod_{i\in I}M_{i}(a).\] Similarly, we have \(\prod_{i\in I}M_{i}(\overline{a})=\Delta_{(I\cup\{a\})^{c}}(\overline{a})+ \Delta_{(I\cup\{\overline{a}\})^{c}}(\overline{a})\). This establishes the statement. **Relation 4**.: For two generators \(\Delta_{K}\) and \(M_{i}\), we have the following relation (see Figure 11): **Lemma 4.4**.: _Fix \(i\in V_{2n}\). If a subset \(K\subset V_{2n}\) satisfies \(\{i\}\subsetneq K\) and the property \((*)\), then there is the following relation:_ \[\Delta_{K}\cdot M_{i}=\Delta_{K\setminus\{i\}}. \tag{4.5}\] Proof.: The multiplication of \(\Delta_{K}\) and \(M_{i}\) is not zero only on \(K\cap(V_{2n}\setminus\{i\})=K\setminus\{i\}\). Therefore, we have \[\Delta_{K}\cdot M_{i}(v)=\left\{\begin{array}{ll}\prod_{j\not\in(K\setminus \{i\})\cup\{\overline{v}\}}\alpha(vj)&\text{if $v\in K\setminus\{i\}$}\\ 0&\text{if $v\not\in K\setminus\{i\}$}\end{array}\right.\] This shows the equation \(\Delta_{K}\cdot M_{i}=\Delta_{K\setminus\{i\}}\). Figure 10. For the GKM graph \(\mathcal{Q}_{6}\) (where the vertices are defined in Figure 1), this represents the following equation (Relation 3): \[M_{1}\cdot M_{2}\cdot M_{3}=\Delta_{\{5,6,7,8\}}+\Delta_{\{4,6,7,8\}},\] where \(I=\{1,2,3\}\subset V_{6}\) (for \(n=3\)). Figure 9. For the GKM graph \(\mathcal{Q}_{4}\) (where the vertices are defined in Figure 1), this represents the following equation (Relation 3): ## 5. Main theorem and its proof In this section, we prove the main theorem (Theorem 5.1). To state the main theorem precisely, we first prepare some notations. We denote the set of elements defined in Section 3 as follows: **Generator 1:**: \(\mathcal{M}:=\{M_{v}\ |\ v\in V_{2n}\}\); **Generator 2:**: \(\mathcal{D}:=\{\Delta_{K}\ |\ K\subset V_{2n}\ \text{with the property }(*)\}\). Let \(\mathbb{Z}[\mathcal{M},\mathcal{D}]\) be the polynomial ring which generated by all elements in \(\mathcal{M}\) and \(\mathcal{D}\). We define the degree of elements by * \(\deg M_{v}=2\) for every \(M_{v}\in\mathcal{M}\); * \(\deg\Delta_{K}=2(2n-(|K|-1))=4n-2|K|+2\) for every \(\Delta_{K}\in\mathcal{D}\). Let \(\mathcal{I}\) be the ideal in \(\mathbb{Z}[\mathcal{M},\mathcal{D}]\) generated by the four relations defined in Section 4. Namely, the ideal \(\mathcal{I}\) in \(\mathbb{Z}[\mathcal{M},\mathcal{D}]\) is generated by the following four types of elements: **Relation 1:**: \(\prod_{\cap J=\emptyset}G_{J}\) for \(G_{J}\in\mathcal{M}\sqcup\mathcal{D}\); **Relation 2:**: \((M_{i}+M_{\overline{i}})-(M_{j}+M_{\overline{j}})\) for every distinct \(i,j\in V_{2n}\); **Relation 3:**: \(\prod_{i\in I}M_{i}-(\Delta_{(I\cup\{a\})^{c}}+\Delta_{(I\cup\{\overline{a}\}) ^{c}})\) for every subset \(I\subset V_{2n}\) which satisfies the property \((*)\) and \(|I|=n\), where \(\{a,\overline{a}\}\) is the unique pair in \(V_{2n}\setminus I\); **Relation 4:**: \(\Delta_{K}\cdot M_{i}-\Delta_{K\setminus\{i\}}\) for \(\{i\}\subsetneq K\). We use the following notation: \[\mathbb{Z}[\mathcal{Q}_{2n}]:=\mathbb{Z}[\mathcal{M},\mathcal{D}]/\mathcal{I}. 
\tag{5.1}\] Because of Section 3 and Section 4, there exists the well-defined homomorphism \[\psi:\mathbb{Z}[\mathcal{Q}_{2n}]\to H^{*}(\mathcal{Q}_{2n})\] by the induced homomorphism from \[\widetilde{\psi}:\mathbb{Z}[\mathcal{M},\mathcal{D}]\to H^{*}(\mathcal{Q}_{2n }).\] Namely, \(\psi\) is induced from the following commutative diagram: (5.2) where the vertical map is the natural projection. The following theorem is the main theorem of this paper. **Theorem 5.1**.: _The homomorphism \(\psi\) is the isomorphism, i.e.,_ \[\mathbb{Z}[\mathcal{Q}_{2n}]\simeq H^{*}(\mathcal{Q}_{2n}).\] Together with Lemma 3.1, we obtain Theorem 1.1. In the proofs below, we often regard \(M_{i}\) and \(\Delta_{K}\) as the elements \(\widetilde{\psi}(M_{i}),\widetilde{\psi}(\Delta_{K})\in H^{*}(\mathcal{Q}_{2 n})\). ### Surjectivity of \(\psi:\mathbb{Z}[\mathcal{Q}_{2n}]\to H^{*}(\mathcal{Q}_{2n})\) We first prove the surjectivity of \(\psi\). To prove it, we use the inductive argument for vertices which is often used in GKM theory (see e.g. [10, Lemma 4.4] or [13, Lemma 5.6]). **Lemma 5.2**.: _The homomorphism \(\psi:\mathbb{Z}[\mathcal{Q}_{2n}]\to H^{*}(\mathcal{Q}_{2n})\) is surjective._ Proof.: By the commutative diagram (5.2), it is enough to prove that \(\widetilde{\psi}\) is surjective. Take an element \(f\in H^{*}(\mathcal{Q}_{2n})\). By definition, for the vertex \(1\in V_{2n}\), the polynomial \(f(1)\in H^{*}(BT^{n+1})\) can be written by \[f(1)=\sum_{\mathbf{a}}k_{\mathbf{a}}x^{\mathbf{a}}=g_{1}\] where \(k_{\mathbf{a}}\in\mathbb{Z}\) and \(x_{\mathbf{a}}:=x_{1}^{a_{1}}\cdots x_{n+1}^{a_{n+1}}\) for \(\mathbf{a}=(a_{1},\cdots,a_{n+1})\in(\mathbb{N}\cup\{0\})^{n+1}\). By definition of \(M_{2},\ldots,M_{n+2}\) in Definition 3.2, we have \(M_{2}(1)=x_{1},\ldots,M_{n+2}(1)=x_{n+1}\); therefore, \[x^{\mathbf{a}}=x_{1}^{a_{1}}\cdots x_{n+1}^{a_{n+1}}=M_{2}^{a_{1}}\cdots M_{n +1}^{a_{n}}M_{n+2}^{a_{n+1}}(1).\] This means that we may take an element from \(\mathbb{Z}[M_{2},\ldots,M_{n+2}]\subset\mathbb{Z}[\mathcal{M},\mathcal{D}]\) whose image of \(\widetilde{\psi}\) coincides with \(f(1)\) on the vertex \(1\in V\). We next put \[f_{2}=f-g_{1}.\] Then, \(f_{2}(1)=0\). So, by the congruence relations on the edge \(21\in E_{2n}\), we have \[f_{2}(2)-f_{2}(1)\equiv 0\mod\alpha(21)=M_{1}(2).\] Therefore, we have that \(f_{2}(2)=g_{2}M_{1}(2)\) for some \(g_{2}\in H^{*}(BT^{n+1})\). By Proposition 3.6, we have that \[x_{1}=M_{2}-M_{1},\ \ldots,\ x_{n+1}=M_{n+2}-M_{1}.\] This implies that \(g_{2}\in\mathbb{Z}[M_{2}-M_{1},M_{3}-M_{1},\ldots,M_{n+2}-M_{1}]\subset\mathbb{ Z}[\mathcal{M},\mathcal{D}]\). Note that we may also regard as \(g_{2}\in\mathbb{Z}[M_{1},M_{2},M_{3},\ldots,M_{n+2}]\). This shows that \(g_{2}M_{1}\) is in the image of \(\widetilde{\psi}\). Put \[f_{3}=f_{2}-g_{2}M_{1}(=f-g_{1}-g_{2}M_{1}).\] Then, by \(f_{2}(1)=M_{1}(1)=0\) and \(f_{2}(2)=g_{2}M_{1}(2)\), we have \[f_{3}(1)=f_{3}(2)=0.\] By the similar argument, we may write \(f_{3}(3)=g_{3}M_{1}M_{2}(3)\) for some \(g_{3}\in\mathbb{Z}[M_{1},M_{2},M_{3},\ldots,M_{n+2}]\). Similarly, we can also check that \(f_{4}:=f_{3}-g_{3}M_{1}M_{2}\) satisfies \(f_{4}(1)=f_{4}(2)=f_{4}(3)=0\). 
Iterating similar arguments \(n+2\) times (note that \(\overline{n+1}=n+2\)), we obtain an element \[f_{n+2}:=f_{n+1}-g_{n+1}M_{1}\cdots M_{n}\in H^{*}(\mathcal{Q}_{2n})\] such that \(g_{n+1}\in\mathbb{Z}[M_{1},M_{2},\ldots,M_{n+2}]\) and \(f_{n+1}\in H^{*}(\mathcal{Q}_{2n})\) satisfies that \(f_{n+1}(1)=\cdots=f_{n+1}(n)=0\) and \(f_{n+1}(n+1)=g_{n+1}M_{1}\cdots M_{n}(n+1)\). Consequently, we have that \[f_{n+2} =f_{n+1}-g_{n+1}M_{1}\cdots M_{n}\] \[=f_{n}-(g_{n}M_{1}\cdots M_{n-1}+g_{n+1}M_{1}\cdots M_{n})\] \[\quad\vdots\] \[=f-(g_{1}+g_{2}M_{1}+\cdots+g_{n}M_{1}\cdots M_{n-1}+g_{n+1}M_{1}\cdots M_{n}). \tag{5.3}\] Note that \(f_{n+2}\) satisfies that \(f_{n+2}(1)=\cdots=f_{n+2}(n+1)=0\). Therefore, by the definition of \(\Delta_{\{n+2,\ldots,2n+2\}}\) and the congruence relation (see (3.1)), there exists an element \(g_{n+2}\in\mathbb{Z}[M_{1},M_{2},\ldots,M_{n+2}]\) such that \[f_{n+2}(n+2)=g_{n+2}\Delta_{\{n+2,\ldots,2n+2\}}(n+2).\] Since \(\Delta_{\{n+2,\ldots,2n+2\}}(1)=\cdots=\Delta_{\{n+2,\ldots,2n+2\}}(n+1)=0\), if we put \(f_{n+3}:=f_{n+2}-g_{n+2}\Delta_{\{n+2,\ldots,2n+2\}}\), then \[f_{n+3}(1)=\cdots=f_{n+3}(n+2)=0.\] Similarly, for \(k\geq 2\), there exists \(g_{n+k}\in\mathbb{Z}[M_{1},M_{2},\ldots,M_{n+2}]\) such that \[f_{n+k+1}:=f_{n+k}-g_{n+k}\Delta_{\{n+k,\ldots,2n+2\}} \tag{5.4}\] and \(f_{n+k+1}(1)=\cdots=f_{n+k+1}(n+k)=0\). Then, in the case when \(k=n+2\), there exists \(g_{2n+2}\in\mathbb{Z}[M_{1},M_{2},\ldots,M_{n+2}]\) such that \[f_{2n+3}:=f_{2n+2}-g_{2n+2}\Delta_{\{2n+2\}}\] and \(f_{2n+3}(v)=0\) for all \(v\in V_{2n}\). Therefore, \(f_{2n+2}=g_{2n+2}\Delta_{\{2n+2\}}\). Substituting this equation into (5.4) for \(k=n+1\), we get \(f_{2n+1}\); and iterating this argument from \(k=n\) to \(k=2\), we have \[f_{2n+1} =g_{2n+2}\Delta_{\{2n+2\}}+g_{2n+1}\Delta_{\{2n+1,2n+2\}};\] \[f_{2n} =g_{2n+2}\Delta_{\{2n+2\}}+g_{2n+1}\Delta_{\{2n+1,2n+2\}}+g_{2n}\Delta_{\{2n,2n+1,2n+2\}};\] \[\vdots\] \[f_{n+2} =g_{2n+2}\Delta_{\{2n+2\}}+g_{2n+1}\Delta_{\{2n+1,2n+2\}}+\cdots+g_{n+2}\Delta_{\{n+2,\ldots,2n+2\}}.\] Together with (5.3), every element \(f\in H^{*}(\mathcal{Q}_{2n})\) can be written in terms of the elements in \(\mathbb{Z}[\mathcal{M},\mathcal{D}]\) as follows: \[f=g_{1}+g_{2}M_{1}+\cdots+g_{n+1}M_{1}\cdots M_{n}\\ +g_{n+2}\Delta_{\{n+2,\ldots,2n+2\}}+g_{n+3}\Delta_{\{n+3,\ldots,2n+2\}}+\cdots+g_{2n+2}\Delta_{\{2n+2\}} \tag{5.5}\] for some \(g_{1}\in\mathbb{Z}[M_{2},\ldots,M_{n+2}]\) and \(g_{2},\ldots,g_{2n+2}\in\mathbb{Z}[x_{1},\ldots,x_{n+1}]\simeq\mathbb{Z}[M_{2}-M_{1},\ldots,M_{n+2}-M_{1}]\subset\mathbb{Z}[M_{1},\ldots,M_{n+2}]\subset\mathbb{Z}[\mathcal{M},\mathcal{D}]\). This shows the surjectivity of \(\widetilde{\psi}:\mathbb{Z}[\mathcal{M},\mathcal{D}]\to H^{*}(\mathcal{Q}_{2n})\). ### Injectivity of \(\psi:\mathbb{Z}[\mathcal{Q}_{2n}]\to H^{*}(\mathcal{Q}_{2n})\) We next prove the injectivity of \(\psi\). **Lemma 5.3**.: _The homomorphism \(\psi:\mathbb{Z}[\mathcal{Q}_{2n}]\to H^{*}(\mathcal{Q}_{2n})\) is injective._ To show this lemma, we will use the combinatorial counterpart of the localization theorem which will be stated in Corollary 5.5. To state that, we prepare the following notation. For \(v\in V_{2n}=[2n+2]\), the subset \(I_{v}\subset[n+2]\subset V_{2n}\) is defined by * \(I_{v}=[n+2]\setminus\{v\}\) for \(1\leq v\leq n+1\); * \(I_{v}=[n+2]\setminus\{\overline{v}\}\) for \(n+2\leq v\leq 2n+2\).
**Lemma 5.4**.: _The following isomorphism holds for every \(v\in V_{2n}\):_ \[\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ v\not\in J\rangle\simeq \mathbb{Z}[M_{i}\ |\ i\in I_{v}]\simeq H^{*}(BT^{n+1}),\] _where \(\langle G_{J}\ |\ v\not\in J\rangle\) is the ideal in \(\mathbb{Z}[\mathcal{Q}_{2n}]\) generated by \(G_{J}\) with \(v\not\in J\) (see Relation 1 in Section 4)._ Proof.: We will prove the statement only for the vertex \(v=1\in V_{2n}\) because the proofs for the other vertices in \(V_{2n}=[2n+2]=\{1,\ldots,2n+2\}\) are similar. Suppose that \(v=1\in V_{2n}\). We shall prove that \[\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle\simeq \mathbb{Z}[M_{2},\ldots,M_{n+1},M_{n+2}]\simeq H^{*}(BT^{n+1}).\] We first claim that every element in \(\mathcal{D}\) can be written by the elements in \(\mathcal{M}\) in \(\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle\). Assume that \(K\subset V_{2n}\) satisfies \(\{i,\overline{i}\}\not\subset K\) for every \(i=1,\ldots,n+1\). If \(1\not\in K\), then \(\Delta_{K}=0\) in \(\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle\). If \(1\in K\) and \(|K|<n+1\), then by Relation 4, we have that \[\Delta_{K}=\Delta_{K\cup\{j\}}\cdot M_{j}\ \text{for}\ j,\overline{j}\not\in K.\] This implies that in \(\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle\) the generators in \(\mathcal{D}\) can be written by \(\Delta_{K}\)'s such that \(1\in K\) and \(|K|=n+1\). We next assume that \(I\subset V_{2n}\) satisfies \(|I|=n\) such that \(1\not\in I\) and there is the unique pair \(\{a,\overline{a}\}\subset V_{2n}\setminus I\). Put \(I=\{j_{1},\ldots,j_{n}\}\) and \(I^{c}=V_{2n}\setminus I=\{1,i_{1},\ldots,i_{n-1},a,\overline{a}\}\). Then, by Relation 3, we have \[\Delta_{\{1,i_{1},\ldots,i_{n-1},\overline{a}\}}=M_{j_{1}}\cdots M_{j_{n}}- \Delta_{\{1,i_{1},\ldots,i_{n-1},a\}}\in\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G _{J}\ |\ 1\not\in J\rangle.\] This shows that for the generator \(\Delta_{K}\in\mathcal{D}\) such that \(1\in K\) and \(|K|=n+1\), if there is the vertex \(\overline{a}\in K\) for \(a=2,\ldots n+1\), then we may replace \(\Delta_{K}\) into \(\Delta_{(K\setminus\{\overline{a}\})\cup\{a\}}\) by using elements in \(\mathcal{M}\). Therefore, we may reduce the generators in \(\mathcal{D}\) into the only one generator \(\Delta_{\{1,\ldots,n+1\}}\). Moreover, by Relation 3 and \(1\not\in\{2,\ldots,n+1,2n+2\}\), we have \[M_{n+2}\cdots M_{2n+1}=\Delta_{\{1,2,\ldots,n+1\}}+\Delta_{\{2,\ldots,n+1,2n+2 \}}=\Delta_{\{1,2,\ldots,n+1\}}\in\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle.\] This shows that every element in \(\mathcal{D}\) can be written by the elements in \(\mathcal{M}\). Next, by the definition of \(M_{1}\), we have \(M_{1}=G_{V_{2n}\setminus\{1\}}=0\) in \(\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle\). Therefore, together with Relation 2, we have that \[M_{\overline{1}}=M_{2}+M_{\overline{2}}=\cdots=M_{n+1}+M_{\overline{n+1}}\in \mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ 1\not\in J\rangle.\] Therefore, \(M_{\overline{1}}=M_{n+1}+M_{n+2}\) and \(M_{\overline{k}}=M_{n+1}+M_{n+2}-M_{k}\) for \(k=2,\ldots,n\). 
This implies that the generators in \(\mathcal{M}\) can be reduced into \[M_{2},\ldots,M_{n+1},M_{n+2}.\] This shows that there is the surjective homomorphism \[p:\mathbb{Z}[M_{2},\ldots,M_{n+2}]\to\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_ {I}\ |\ 1\not\in I\rangle.\] We finally consider the following composition homomorphism: \[\mathbb{Z}[M_{2},\ldots,M_{n+2}]\xrightarrow{p}\mathbb{Z}[\mathcal{Q}_{2n}]/ \langle G_{I}\ |\ 1\not\in I\rangle\xrightarrow{\iota_{1}}H^{*}(BT^{n+1}),\] where \(\iota_{1}\) is the induced homomorphism from \(\mathbb{Z}[\mathcal{Q}_{2n}]\xrightarrow{\psi}H^{*}(\mathcal{Q}_{2n})\to H^{ *}(BT^{n+1})\) such that \(f\mapsto\psi(f)(1)\) for \(f\in\mathbb{Z}[\mathcal{Q}_{2n}]\). By the definition of \(M_{i}\), we have \(\iota_{1}\circ p(M_{i})=x_{i-1}\) for \(i=2,\ldots,n+2\). Therefore, the composition map \(\iota_{1}\circ p\) is an isomorphism. This shows that \(p\) is injective. Consequently, \(p\) is an isomorphism. This establishes that \(\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{I}\ |\ 1\not\in I\rangle\simeq \mathbb{Z}[M_{2},\ldots,M_{n+2}]\simeq H^{*}(BT^{n+1})\). Therefore, by the definition of the graph equivariant cohomology and Lemma 5.4, we have **Corollary 5.5**.: _There is the following injective homomorphism:_ \[H^{*}(\mathcal{Q}_{2n})\hookrightarrow\bigoplus_{v\in V_{2n}}H^{*}(BT^{n+1}) \simeq\bigoplus_{v\in V_{2n}}\mathbb{Z}[\mathcal{Q}_{2n}]/\langle G_{J}\ |\ v\not\in J\rangle\simeq\bigoplus_{v\in V_{2n}}\mathbb{Z}[M_{i}\ |\ i\in I_{v}].\] Notice that Corollary 5.5 may be regarded as the counterpart of the localization theorem for the usual equivariant cohomology. Now we may prove Lemma 5.3. Proof of Lemma 5.3.: It is enough to prove that the following composition map \(\varphi\) is injective: \[\varphi:\mathbb{Z}[\mathcal{Q}_{2n}]\xrightarrow{\psi}H^{*}(\mathcal{Q}_{2n} )\hookrightarrow\bigoplus_{v\in V_{2n}}H^{*}(BT^{n+1})\simeq\bigoplus_{v\in V _{2n}}\mathbb{Z}[M_{i}\ |\ i\in I_{v}].\] Assume that \(\varphi(f)=0\) for an element \(f\in\mathbb{Z}[\mathcal{Q}_{2n}]\). We will prove that \(f=0\). In the proof, we use the following restriction map for \(w\in V_{2n}\): \[\rho_{w}:\bigoplus_{v\in V_{2n}}\mathbb{Z}[M_{i}\ |\ i\in I_{v}]\to\mathbb{Z}[M_{i} \ |\ i\in I_{w}]\] and the image of \(f\in\mathbb{Z}[\mathcal{Q}_{2n}]\) by the composition map \(\rho_{w}\circ\varphi\) by \(f(w)(:=\rho_{w}\circ\varphi(f))\). The assumption \(\varphi(f)=0\) is equivalent to that \(\rho_{v}\circ\varphi(f)=f(v)=0\in\mathbb{Z}[M_{i}\ |\ i\in I_{v}]\) for all \(v\in V_{2n}\). In the proof of surjective (see (5.5)), we also show the following fact: for any element \(f\in\mathbb{Z}[\mathcal{Q}_{2n}]\), there exists \(g_{i},g_{n}^{\prime}\in\mathbb{Z}[M_{2}-M_{1},\ldots,M_{n+2}-M_{1}]\subset \mathbb{Z}[M_{1},\ldots,M_{n+2}]\) for \(i=1,\ldots,2n\) and \(g_{0}\in\mathbb{Z}[M_{2},\ldots,M_{n+2}]\) such that \[f =g_{0}+g_{1}M_{1}+\cdots+g_{n}M_{1}\cdots M_{n}+g_{n}^{\prime} \Delta_{\{n+2,\ldots,2n+2\}}+g_{n+1}\Delta_{\{n+3,\ldots,2n+2\}}+\cdots+g_{2n} \Delta_{\{2n+2\}}\] \[=g_{0}+\sum_{i=1}^{n}g_{i}M_{1}\cdots M_{i}+X(\Delta), \tag{5.6}\] where \(X(\Delta)\) is the \(\Delta_{K}\) terms. Note that \(\psi(g_{j}),\psi(g^{\prime}_{n})\in\mathbb{Z}[x_{1},\ldots,x_{n+1}]\) (see (3.2)) for all \(j=0,\ldots,2n\). This implies that if there is a vertex \(v\in V_{2n}\) such that \(g_{j}(v)=0\) (resp. \(g^{\prime}_{n}(v)=0\)), then \(g_{j}=0\) (resp. \(g^{\prime}_{n}=0\)). We first claim that \(f\) can be written by \(\Delta_{K}\) terms only. 
Since \(M_{1}(1)=0\) and \(X(\Delta)(1)=0\), by (5.6), we have that \[g_{0}(1)=f(1)-\left(\sum_{i=1}^{n}g_{i}M_{1}\cdots M_{i}+X(\Delta)\right)(1)=f (1)=0.\] Therefore, we have \(g_{0}=0\). Similarly, by using \(g_{0}=0\) and (5.6), we have that \[g_{1}(2)M_{1}(2)=f(2)-\left(\sum_{i=2}^{n}g_{i}M_{1}\cdots M_{i}+X(\Delta) \right)(2)=0.\] Now \(g_{1}(2),M_{1}(2)\in\mathbb{Z}[M_{i}\ |\ i\in I_{2}=\{1,3,\ldots,n+2\}]\) and \(M_{1}(2)\neq 0\). Since the polynomial ring \(\mathbb{Z}[M_{i}\ |\ i\in I_{2}]\) is the integral domain, we see that \(g_{1}(2)=0\); therefore, \(g_{1}=0\). Iterating the similar arguments for \(i=3,\ldots,n-2\), we also have that \(g_{2}=\cdots=g_{n-1}=0\), i.e., \[f =g_{n}M_{1}\cdots M_{n}+X(\Delta)\] \[=g_{n}(\Delta_{\{n+2,\ldots,2n+2\}}+\Delta_{\{n+1,n+3,\ldots,2n+2 \}})+X(\Delta)\quad\text{(by Relation 3)}.\] Therefore, if \(f(v)=0\) for every \(v\in V_{2n}\), then \(f\) can be written by the \(\Delta_{K}\) terms only; more precisely, \[f=g_{n}\Delta_{\{n+1,n+3,\ldots,2n+2\}}+(g_{n}+g^{\prime}_{n})\Delta_{\{n+2, \ldots,2n+2\}}+g_{n+1}\Delta_{\{n+3,\ldots,2n+2\}}+\cdots+g_{2n}\Delta_{\{2n +2\}}. \tag{5.7}\] We next claim that \(f=0\) if \(f(v)=0\) for every \(v\in V_{2n}\). The equality (5.7) implies that for the vertex \(n+1\in V_{2n}\), \[g_{n}(n+1)\Delta_{\{n+1,n+3,\ldots,2n+2\}}(n+1)\] \[=f(n+1)-\left((g_{n}+g^{\prime}_{n})\Delta_{\{n+2,\ldots,2n+2\}}+ g_{n+1}\Delta_{\{n+3,\ldots,2n+2\}}+\cdots+g_{2n}\Delta_{\{2n+2\}}\right)(n+1)=0.\] Since \(\Delta_{\{n+1,n+3,\ldots,2n+2\}}(n+1)\neq 0\), by the similar reason as above, we have \(g_{n}=0\). Iterating the similar arguments for \(i=n+2,\ldots,2n+2\), we have that \(g^{\prime}_{n}=g_{n+1}=\cdots=g_{2n}=0\). This establishes that \(f=0\). Consequently, \(\varphi\) is injective. ## 6. Multiplicative formula of \(\Delta_{K},\Delta_{H}\) with \(|K|=|H|=n+1\) In this section, we show some multiplicative formula in \(H^{*}(\mathcal{Q}_{2n})\) which gives a typical difference between \(H^{*}(\mathcal{Q}_{2n})\) and the graph equivariant cohomology ring of a torus graph, i.e., the face ring proved in [10]. Let \(K,H\subset V_{2n}\) be the subsets with the property \((*)\) and \(|K|=|H|=n+1\), i.e., there is the class \(\Delta_{K},\Delta_{H}\in H^{2n}(\mathcal{Q}_{2n})\). Note that if \(K\cap H\neq\emptyset\), then we can also define \(\Delta_{K\cap H}\in H^{4n-2k}(\mathcal{Q}_{2n})\) for \(k=|K\cap H|-1\). If \(K\cap H=\emptyset\), then we put \(\Delta_{\emptyset}=0\). Recall that the elementary symmetric polynomial with degree \(j\) is defined by \[\mathfrak{S}_{j}(r_{i}\ |\ i=1,\ldots,n):=\sum_{\begin{subarray}{c}a_{1}+\cdots+a_{ n}=j,\\ 0\leq a_{i}\leq 1\end{subarray}}r_{1}^{a_{1}}\cdots r_{n}^{a_{n}}.\] Moreover, because of Relation 2, for every \(v=1,\ldots,n+1\), we may put \[X:=M_{v}+M_{\overline{v}}\in H^{2}(\mathcal{Q}_{2n}).\] There is the following multiplicative formula in \(H^{*}(\mathcal{Q}_{2n})\simeq\mathbb{Z}[\mathcal{Q}_{2n}]\) (see Figure 12 and Figure 13). **Theorem 6.1**.: _The following formula holds:_ \[\Delta_{K}\cdot\Delta_{H}=\Delta_{K\cap H}\cdot\left(\sum_{i=0}^{k}(-1)^{i}X^ {i}\cdot\mathfrak{S}_{k-i}(M_{v}\ |\ v\not\in K\cup H)\right)\in H^{4n}(\mathcal{Q}_{2n}), \tag{6.1}\] _where \(k=|K\cap H|-1\)._ Proof.: If \(K\cap H=\emptyset\), then the statement follows from Relation 1 and \(\Delta_{\emptyset}=0\). So we may assume \(K\cap H\neq\emptyset\). 
Because \(\Delta_{K},\Delta_{H}\in H^{2n}(\mathcal{Q}_{2n})\), their multiplication satisfies \(\Delta_{K}\cdot\Delta_{H}\in H^{4n}(\mathcal{Q}_{2n})\). Moreover, the degree of each term on the right-hand side in (6.1) satisfies that \[\deg\Delta_{K\cap H}+\deg X^{i}+\deg\mathfrak{S}_{k-i}(M_{v}\ |\ v\not\in K\cup H)=(4n-2k)+2i+2(k-i)=2n.\] For every \(p\not\in K\cap H\), because \(\Delta_{K}\cdot\Delta_{H}(p)=\Delta_{K\cap H}(p)=0\), the relation (6.1) holds. For \(p\in K\cap H\), by the definitions of \(\Delta_{K}\)'s and \(M_{v}\)'s, it is easy to check that \[\Delta_{K}\cdot\Delta_{H}(p)=\Delta_{K\cap H}(p)\cdot\prod_{v\not\in K\cup H \cup\{\overline{p}\}}M_{v}(p).\] Because \(|K\cap H|=k+1\) for \(0\leq k\leq n\), we may put \[K\cap H=\{a_{0},a_{1},\ldots,a_{k}\}\subset V_{2n},\] where we assume \(p=a_{0}\). Because \(|K|=|H|=n+1=\frac{|V_{2n}|}{2}\), we also have that \(K^{c}=\{\overline{a}\ |\ a\in K\}\) and \(H^{c}=\{\overline{b}\ |\ b\in H\}\). Therefore, \(K^{c}\cap H^{c}=\{\overline{x}\ |\ x\in K\cap H\}=\{\overline{a_{0}}, \overline{a_{1}},\ldots,\overline{a_{k}}\}\). This shows that \(v\not\in K\cup H\cup\{\overline{p}\}\) if and only if \(v\in(K\cup H\cup\{\overline{p}\})^{c}=(K^{c}\cap H^{c})\setminus\{\overline{p} \}=\{\overline{a_{1}},\ldots,\overline{a_{k}}\}\). Therefore, if we put \(\mathcal{A}:=\{\overline{a_{1}},\ldots,\overline{a_{k}}\}\), \[\Delta_{K}\cdot\Delta_{H}(p)=\Delta_{K\cap H}(p)\cdot\prod_{v\in\mathcal{A}}M_ {v}(p). \tag{6.2}\] On the other hand, \[\sum_{i=0}^{k}(-1)^{i}X^{i}\cdot\mathfrak{S}_{k-i}(M_{v}\ |\ v\not\in K\cup H)(p)=\sum_{i=0}^{k}(-1)^{i}X(p)^{i}\cdot \mathfrak{S}_{k-i}(M_{v}\ |\ v\in\{\overline{p}\}\cup\mathcal{A})(p). \tag{6.3}\] By the definition of \(M_{v}\), we have \(M_{\overline{p}}(p)=X(p)\). Therefore, for \(0\leq i\leq k-1\), \[\mathfrak{S}_{k-i}(M_{v}\ |\ v\in\{\overline{p}\}\cup\mathcal{A})(p)\] \[=\mathfrak{S}_{k-i}(M_{v}\ |\ v\in\mathcal{A})(p)+X(p)\cdot \mathfrak{S}_{k-i-1}(M_{v}\ |\ v\in\mathcal{A})(p).\] Substituting this into (6.3), we have \[\sum_{i=0}^{k}(-1)^{i}X(p)^{i}\cdot\mathfrak{S}_{k-i}(M_{v}\ |\ v \in\{\overline{p}\}\cup\mathcal{A})(p)\] \[= \sum_{i=0}^{k-1}(-1)^{i}X(p)^{i}\cdot\mathfrak{S}_{k-i}(M_{v}\ |\ v\in \mathcal{A})(p)+\sum_{i=0}^{k-1}(-1)^{i}X(p)^{i+1}\cdot\mathfrak{S}_{k-i-1}(M_ {v}\ |\ v\in\mathcal{A})(p)+(-1)^{k}X(p)^{k}\] \[= \mathfrak{S}_{k}(M_{v}\ |\ v\in\mathcal{A})(p)\] \[= \prod_{v\in\mathcal{A}}M_{v}(p)\quad(\text{by }|\mathcal{A}|=k).\] Combining (6.2) and (6.3), we obtain (6.1). Figure 12. This represents the following relation (also see Figure 13): \[\Delta_{\{2,3,6\}}\cdot\Delta_{\{3,5,6\}}=\Delta_{\{3,6\}}\cdot(\mathfrak{S}_ {1}(M_{1},M_{4})-X)=\Delta_{\{3,6\}}\cdot(M_{1}+M_{4}-X),\] because \(K\cap H=\{3,6\}\) and \(K\cup H=\{2,3,5,6\}\subset I\) (so \(V_{4}\setminus(K\cup H)=\{1,4\}\)). ## 7. Comparison of two ordinary cohomology rings \(H^{*}(Q_{4n})\) and \(H^{*}(Q_{4n+2})\) Since \(H^{odd}(Q_{2n})=0\) by [10], \(Q_{2n}\) is the equivariantly formal GKM manifold (see [1]). Therefore, its ordinary cohomology also can be computed by the quotient of \(H^{*}_{T}(Q_{2n})\) by \(H^{>0}(BT^{n+1})\). 
Thus, by using Theorem 5.1 and Proposition 3.6, we also obtain the ordinary cohomology of \(Q_{2n}\) in a different way from [10]: **Corollary 7.1**.: _The ordinary cohomology \(H^{*}(Q_{2n})\) is isomorphic to \(\mathbb{Z}[\mathcal{Q}_{2n}]/\mathcal{J}\), where \(\mathcal{J}\) is generated by_ \[M_{i+1}-M_{1}\] _for \(i=1,\ldots,n+1\)._ Recall that the cohomology ring formula of \(Q_{2n}\) depends on whether \(n\) is even or odd, i.e., by [10], \[H^{*}(Q_{4n})\simeq\mathbb{Z}[c,x]/\langle c^{2n+1}-2cx,x^{2}-c^{2n}x\rangle;\] \[H^{*}(Q_{4n+2})\simeq\mathbb{Z}[c,x]/\langle c^{2n+2}-2cx,x^{2}\rangle.\] In this final section, we give the combinatorial reason why this difference occurs by using Corollary 7.1. To do that, the following lemma is essential: **Lemma 7.2**.: _If \(K\subset V_{2n}\) is a subset with the property \((*)\) of the form_ \[K=\{i_{1},\ldots,i_{n+1}\},\] _then there is the following formula in \(\mathbb{Z}[\mathcal{Q}_{2n}]/\mathcal{J}\):_ \[\Delta_{K}=\Delta_{\{i_{1},\ldots,i_{n-1},\overline{i_{n}},\overline{i_{n+1}}\}}.\] Proof.: By definition of \(\mathcal{J}\), in \(\mathbb{Z}[\mathcal{Q}_{2n}]/\mathcal{J}\), we have \[M_{1}=M_{2}=\cdots=M_{n+1}=M_{n+2}.\] By Relation 2, \(M_{n+1}+M_{n+2}=M_{i}+M_{\overline{i}}\) for all \(i=1,\ldots,n\). Therefore, we also have that \[M_{1}=M_{n+3}=\cdots=M_{2n+2}.\] Because \(K\) satisfies the property \((*)\), for every \(a\in K\), the subset \(I:=K\setminus\{a\}\subset V_{2n}\) satisfies \(|I|=n\); moreover, there is a unique pair \(\{a,\overline{a}\}\subset V_{2n}\setminus I\). Therefore, we may apply Relation 3 to \(K\setminus\{a\}\). Together with \(M_{1}=M_{i}\) for all \(i\in V_{2n}\) as above, we have \[M_{1}^{n}=\Delta_{K^{c}}+\Delta_{((K\setminus\{a\})\cup\{\overline{a}\})^{c}}=\Delta_{K^{c}}+\Delta_{(K^{c}\setminus\{\overline{a}\})\cup\{a\}}. \tag{7.1}\] Note that this equation (7.1) holds for all \(K\subset V_{2n}\) which satisfy the assumption of this lemma. Therefore, we have \[M_{1}^{n}=\Delta_{\{i_{1},\ldots,i_{n+1}\}}+\Delta_{\{i_{1},\ldots,i_{n},\overline{i_{n+1}}\}}=\Delta_{\{i_{1},\ldots,i_{n},\overline{i_{n+1}}\}}+\Delta_{\{i_{1},\ldots,i_{n-1},\overline{i_{n}},\overline{i_{n+1}}\}}.\] Thus, \(\Delta_{\{i_{1},\ldots,i_{n+1}\}}=\Delta_{\{i_{1},\ldots,i_{n-1},\overline{i_{n}},\overline{i_{n+1}}\}}\).

Figure 13. This figure represents the term \(A=M_{1}+M_{4}-X\) in Figure 12. Note that \(A(3)=A(6)=-x_{2}\) by Figure 3. Moreover, \(A(1),A(2),A(4),A(5)\) might not be \(0\in H^{2}(BT^{3})\); however, \(\Delta_{\{3,6\}}(1)=\Delta_{\{3,6\}}(2)=\Delta_{\{3,6\}}(4)=\Delta_{\{3,6\}}(5)=0\).

Consequently, we have

**Corollary 7.3**.: _For \(K\subset V_{2m}\) which satisfies the assumption of Lemma 7.2, there are the following relations:_

* \(\Delta_{K}\cdot(M_{i}^{2n}-\Delta_{K})=0\) _in_ \(\mathbb{Z}[\mathcal{Q}_{4n}]/\mathcal{J}\) _if_ \(m=2n\)_;_
* \(\Delta_{K}^{2}=0\) _in_ \(\mathbb{Z}[\mathcal{Q}_{4n+2}]/\mathcal{J}\) _if_ \(m=2n+1\)_._

Proof.: Suppose \(m=2n+1\), i.e., \(m\equiv 1\mod 2\). By applying Lemma 7.2 iteratively, we have \(\Delta_{K^{c}}=\Delta_{K}\). Therefore, by \(K\cap K^{c}=\emptyset\) and Relation 1, we obtain the second relation in the statement. Suppose \(m=2n\), i.e., \(m\equiv 0\mod 2\). In this case, \(K=\{i_{1},\ldots,i_{2n+1}\}\). 
By applying Lemma 7.2 iteratively, we obtain \[\Delta_{K}=\Delta_{\{i_{1},\overline{i_{2}},\ldots,\overline{i_{2n+1}}\}}.\] By applying (7.1) to \(i_{1}\in\{\overline{i_{1}},i_{2},\ldots,i_{2n+1}\}^{c}=\{i_{1},\overline{i_{2}},\ldots,\overline{i_{2n+1}}\}\), it follows from this relation that \[M_{1}^{2n}=\Delta_{\{i_{1},\overline{i_{2}},\ldots,\overline{i_{2n+1}}\}}+\Delta_{\{\overline{i_{1}},\overline{i_{2}},\ldots,\overline{i_{2n+1}}\}}=\Delta_{K}+\Delta_{K^{c}}.\] Hence, we have \(\Delta_{K^{c}}=M_{1}^{2n}-\Delta_{K}\). Therefore, by \(K\cap K^{c}=\emptyset\) and Relation 1, we obtain the first relation in the statement.

Figure 14 and Figure 15 illustrate the difference between the ordinary cohomology rings for \(n=2\) and \(n=3\). For example, in Figure 14, \(K=\{3,5,6\}\) (see Figure 1). In this case, by applying Lemma 7.2, \(\Delta_{K}=\Delta_{H}\) for \(H=\{1,2,3\},\{2,4,6\},\{1,2,5\}\). This shows that \(K\cap H\neq\emptyset\); therefore, \(\Delta_{K}^{2}\neq 0\). However, in Figure 15, we can take such an \(H\) as \(K^{c}\); this gives \(\Delta_{K}^{2}=0\).

## Acknowledgment

The author was supported by JSPS KAKENHI Grant Number 21K03262. He would also like to thank Professor Fuichi Uchida (1938-2021), who was his supervisor during his master's course. He learned a lot from Professor Uchida, including about complex quadrics.
2304.09654
Uniform Generation of Temporal Graphs with Given Degrees
Uniform sampling from the set $\mathcal{G}(\mathbf{d})$ of graphs with a given degree-sequence $\mathbf{d} = (d_1, \dots, d_n) \in \mathbb N^n$ is a classical problem in the study of random graphs. We consider an analogue for temporal graphs in which the edges are labeled with integer timestamps. The input to this generation problem is a tuple $\mathbf{D} = (\mathbf{d}, T) \in \mathbb N^n \times \mathbb N_{>0}$ and the task is to output a uniform random sample from the set $\mathcal{G}(\mathbf{D})$ of temporal graphs with degree-sequence $\mathbf{d}$ and timestamps in the interval $[1, T]$. By allowing repeated edges with distinct timestamps, $\mathcal{G}(\mathbf{D})$ can be non-empty even if $\mathcal{G}(\mathbf{d})$ is empty, and as a consequence, existing algorithms are difficult to apply. We describe an algorithm for this generation problem which runs in expected time $O(M)$ if $\Delta^{2+\epsilon} = O(M)$ for some constant $\epsilon > 0$ and $T - \Delta = \Omega(T)$ where $M = \sum_i d_i$ and $\Delta = \max_i d_i$. Our algorithm applies the switching method of McKay and Wormald $[1]$ to temporal graphs: we first generate a random temporal multigraph and then remove self-loops and duplicated edges with switching operations which rewire the edges in a degree-preserving manner.
Daniel Allendorf
2023-04-19T13:44:38Z
http://arxiv.org/abs/2304.09654v2
# Uniform Generation of Temporal Graphs with Given Degrees

###### Abstract

Uniform sampling from the set \(\mathcal{G}(\mathbf{d})\) of graphs with a given degree-sequence \(\mathbf{d}=(d_{1},\ldots,d_{n})\in\mathbb{N}^{n}\) is a classical problem in the study of random graphs. We consider an analogue for temporal graphs in which the edges are labeled with integer timestamps. The input to this generation problem is a tuple \(\mathbf{D}=(\mathbf{d},T)\in\mathbb{N}^{n}\times\mathbb{N}_{>0}\) and the task is to output a uniform random sample from the set \(\mathcal{G}(\mathbf{D})\) of temporal graphs with degree-sequence \(\mathbf{d}\) and timestamps in the interval \([1,T]\). By allowing repeated edges with distinct timestamps, \(\mathcal{G}(\mathbf{D})\) can be non-empty even if \(\mathcal{G}(\mathbf{d})\) is empty, and as a consequence, existing algorithms are difficult to apply. We describe an algorithm for this generation problem which runs in expected linear time \(O(M)\) if \(\Delta^{2+\epsilon}=O(M)\) for some constant \(\epsilon>0\) and \(T-\Delta=\Omega(T)\) where \(M=\sum_{i}d_{i}\) and \(\Delta=\max_{i}d_{i}\). Our algorithm applies the switching method of McKay and Wormald [1] to temporal graphs: we first generate a random temporal _multigraph_ and then remove self-loops and duplicated edges with switching operations which rewire the edges in a degree-preserving manner.

Random Graph, Temporal Graph, Uniform Sampling, Degree Sequence, Switching Algorithm
expected time \(O(n)\) if the exponent satisfies \(\gamma>(21+\sqrt{61})/10\) [10, 8]. Alternatively, there are efficient solutions to various relaxations of the problem. For instance, we may allow the graph to match the sequence only in expectation [11], or use a Markov chain to approximate the uniform distribution to an arbitrary degree by increasing the number of transitions performed [12, 13, 14]. See also [15] for a survey of relevant techniques and results.

_Temporal graphs_ are capable of modeling not only the topology but also the time structure of networks [16]. Possibly the most common type of temporal graph augments each edge of a classical graph with an integer timestamp. Here, we work with the following definition, which is well suited to uniform generation.

[Temporal Graph] A temporal (multi-)graph \(G=(V,E)\) consists of a set of nodes \(V=\{v_{1},\ldots,v_{n}\}\) and a (multi-)set of temporal edges \(E=\{e_{1},\ldots,e_{m}\}\) where each temporal edge is a tuple \((\{u,v\},t)\in\{\{u,v\}:u,v\in V\}\times\mathbb{N}_{>0}\).

In terms of semantics, the presence of an edge \((\{u,v\},t)\) in the edge set \(E\) indicates that the nodes \(u\) and \(v\) are connected at time \(t\). This allows us to express new relationships between the nodes. For instance, we may consider a node \(v\) _reachable_ from a node \(u\) if and only if there exists a path from \(u\) to \(v\) which traverses its edges in ascending order of time. Note that temporal graphs of this type can be seen as an example of edge-labeled graphs (see [17]). While our findings here extend to general edge-labeled graphs, our labelings arise simply due to choosing a graph uniformly at random, whereas in the study of edge-labeled graphs one is more concerned with special kinds of labelings (for instance, see [18]).

For the purpose of modeling networks, it makes sense to restrict ourselves to _simple_ temporal graphs which exclude certain types of "unnatural" edges. Here, we call a temporal multigraph \(G=(V,E)\) _simple_ if the edge set \(E\) contains no loops and no multi-edges between the same nodes and with the same timestamp, i.e. iff \(u\neq v\) for all \((\{u,v\},t)\in E\) and \(e\neq e^{\prime}\) for all \(e,e^{\prime}\in E\). We remark that there are alternative definitions. A more restrictive possibility is to consider a temporal graph simple only if each pair of nodes is connected at most once via a temporal edge (for instance, see [19] for connectivity thresholds of such graphs). However, there are examples of networks in which repeated connections do indeed carry semantic value, and it is rather straightforward to generate a graph without repeated connections by assigning timestamps to the edges of a classical simple graph (see [20] for a model of this kind). The definition used here also suffices to ensure that a process which generates a random temporal multigraph outputs each simple temporal graph with the same probability (see section 2), which is crucial for uniform generation. Equipped with these definitions, we are now able to state the uniform generation problem for temporal graphs.
Given a tuple \(\mathbf{D}=(\mathbf{d},T)\in\mathbb{N}^{n}\times\mathbb{N}_{>0}\), say that a simple temporal graph \(G\) over the node set \(V=\{v_{1},\ldots,v_{n}\}\) realizes \(\mathbf{D}\) if the timestamps of all edges lie in the interval \([1,T]\) and the sum of the numbers of incident edges at node \(v_{i}\) over all \(T\) timestamps equals \(d_{i}\) for each \(1\leq i\leq n\). In addition, define the set \(\mathcal{G}(\mathbf{D})\) of realizations of \(\mathbf{D}\) as a simple temporal graph and consider the problem of sampling a realization \(G\in\mathcal{G}(\mathbf{D})\) uniformly at random. Observe that as a consequence of allowing repeated connections, a given tuple \(\mathbf{D}=(\mathbf{d},T)\) can be realizable as a simple temporal graph even if the sequence \(\mathbf{d}\) is not realizable as a classical simple graph. In fact, the classical generation problem corresponds to the case \(T=1\). This also implies that existing switching algorithms are difficult to apply. Even if we attempt to distribute the degrees among individual sequences \(\mathbf{d}_{t},1\leq t\leq T\) such that \(\mathbf{d}=\sum_{t}\mathbf{d}_{t}\) and generate a classical graph for each sequence, it is not trivial to obtain an overall temporal graph with the correct distribution as different choices of the sequences can correspond to vastly different numbers of realizations. In fact, it is not difficult to see that there are choices of the sequences \(\mathbf{d}_{t},1\leq t\leq T\) which are not realizable at all. Thus, one would expect that this generation problem requires switchings which operate on the temporal graph as a whole. Indeed, this is the key feature of the method which we investigate here. The resulting algorithm, which we call T-Gen, generates simple temporal graphs with bounded degrees. Our main result is as follows (see section 4 for the proof). Given a realizable tuple \(\mathbf{D}=(\mathbf{d},T)\) which satisfies \(\Delta^{2+\epsilon}=O(M)\) for a constant \(\epsilon>0\) and \(T-\Delta=\Omega(T)\), T-Gen outputs a uniform random sample \(G\in\mathcal{G}(\mathbf{D})\) in expected time \(O(M)\). As is customary, we assume an underlying sequence of tuples \((\mathbf{D})_{M}\) and give the asymptotic runtime as \(M\to\infty\). In particular, the conditions on the input tuple can be understood as being imposed on functions \(\Delta(M)\), \(T(M)\). The general idea of T-Gen is to apply the switching method of [1] to temporal graphs. To this end, we define a _temporal configuration model_ (see section 2) which samples a random temporal multigraph with the property that the probability of a given graph only depends on the contained loops and temporal multi-edges, i.e. multiple edges between the same nodes and with the same timestamp. In particular, this implies that a simple temporal graph output by this model has the uniform distribution. Usually, the obtained graph is not simple, but if the input tuple \(\mathbf{D}\) satisfies the conditions imposed in Theorem 2, then the number of non-simple edges is sufficiently small to allow for efficient removal. A property of the random model with interesting consequences is that the expected number of loops is not affected by the temporal component, whereas the expected number of temporal double-edges scales with \(O(1/T)\) (see Lemma 4). This shifted balance between the types of non-simple edges allows for a looser bound on the degrees in terms of the number of edges (e.g. \(\Delta^{2+\epsilon}=O(M)\) as opposed to \(\Delta^{4}=O(m)\)). 
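To make the first step of this approach concrete, the following is a minimal Python sketch of the urn-based sampling procedure behind the temporal configuration model (described formally in section 2). The function and variable names are our own, and the representation of edges as a multiset keyed by (node pair, timestamp) is an illustrative choice rather than part of the algorithm's specification.

```python
import random
from collections import Counter


def sample_temporal_multigraph(d, T, rng=random):
    """Pair up degree 'marbles' uniformly at random and stamp each edge with a
    uniform timestamp in [1, T]; loops and repeated temporal edges may occur
    and are handled by the switching stages described later."""
    if sum(d) % 2 != 0:
        raise ValueError("the degree sum M must be even")
    stubs = [i for i, di in enumerate(d) for _ in range(di)]
    rng.shuffle(stubs)  # a uniform pairing of the stubs
    edges = Counter()   # multiset of temporal edges ({u, v}, t)
    for k in range(0, len(stubs), 2):
        u, v = stubs[k], stubs[k + 1]
        t = rng.randint(1, T)
        edges[(frozenset((u, v)), t)] += 1
    return edges


def is_simple(edges):
    """True iff there are no loops and no repeated (node pair, timestamp)."""
    return all(len(uv) == 2 and w == 1 for (uv, _), w in edges.items())
```

In this representation, checking simplicity reduces to checking that every key has two distinct endpoints and multiplicity one, mirroring the definition of a simple temporal graph given above.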
A challenging aspect of generating simple temporal graphs is that the degree-sequences \(\mathbf{d}_{t},1\leq t\leq T\) implied by a random temporal multigraph may not be realizable. This necessitates the use of switching operations which rewire edges across different time slices of the graph and assign fresh timestamps to newly created edges (see Definition 7 and Figure 1 for an example). However, this in turn implies that the number of available timestamps for an edge affects the distribution of graphs after each switching, and to ensure uniformity, it becomes necessary to account for the number of timestamps when correcting the distribution. To discuss this matter, we briefly describe the technique used by [1] to correct the distribution after each switching. Generally speaking, when defining a switching operation \(\theta\) we fix two subsets \(\mathcal{S},\mathcal{S}^{\prime}\subseteq\mathcal{M}(\mathbf{d})\) of the set \(\mathcal{M}(\mathbf{d})\) of all multigraphs matching the sequence \(\mathbf{d}\). Specifying the edges rewired by \(\theta\) then associates each graph \(G\in\mathcal{S}\) with a subset \(\mathcal{F}(G)\subseteq\mathcal{S}^{\prime}\) of graphs in \(\mathcal{S}^{\prime}\) which can be produced by performing a type \(\theta\) switching on \(G\), and each graph \(G^{\prime}\in\mathcal{S}^{\prime}\) with a subset \(\mathcal{B}(G^{\prime})\subseteq\mathcal{S}\) of graphs on which we can perform a type \(\theta\) switching which produces \(G^{\prime}\). Given this setup, the goal is to start from a uniform random graph \(G\in\mathcal{S}\) and perform a type \(\theta\) switching to obtain a uniform random graph \(G^{\prime}\in\mathcal{S}^{\prime}\). To this end, let \(f(G)=|\mathcal{F}(G)|\), \(b(G^{\prime})=|\mathcal{B}(G^{\prime})|\), and assume that \(\mathcal{F}(G)\neq\emptyset\) for every \(G\in\mathcal{S}\) and \(\mathcal{B}(G^{\prime})\neq\emptyset\) for every \(G^{\prime}\in\mathcal{S}^{\prime}\). Then, if we start from a graph \(G\) uniformly distributed in \(\mathcal{S}\), and perform a uniform random type \(\theta\) switching on \(G\), the probability of producing a given graph \(G^{\prime}\in\mathcal{S}^{\prime}\) is \[\sum_{G\in\mathcal{B}(G^{\prime})}\frac{1}{|\mathcal{S}|f(G)}.\] which depends on \(G^{\prime}\) if \(\mathcal{F}(G)\) and \(\mathcal{B}(G^{\prime})\) vary over different choices of \(G\) and \(G^{\prime}\). To correct this, _rejection_ steps can be used which restart the algorithm with a certain probability. Before performing the switching, we _f-reject_ (forward reject) with probability \(1-f(G)/\overline{f}(\mathcal{S})\) where \(\overline{f}(\mathcal{S})\) is an upper bound on \(f(G)\) over all graphs \(G\in\mathcal{S}\), and after performing the switching, we _b-reject_ (backward reject) with probability \(1-\underline{b}(\mathcal{S}^{\prime})/b(G^{\prime})\) where \(\underline{b}(\mathcal{S}^{\prime})\) is a lower bound on \(b(G^{\prime})\) over all graphs \(G^{\prime}\in\mathcal{S}^{\prime}\). 
The probability of producing \(G^{\prime}\) is now \[\sum_{G\in\mathcal{B}(G^{\prime})}\frac{1}{|\mathcal{S}|f(G)}\frac{f(G)}{ \overline{f}(\mathcal{S})}\frac{\underline{b}(\mathcal{S}^{\prime})}{b(G^{ \prime})}=\frac{b(G^{\prime})}{|\mathcal{S}|\overline{f}(\mathcal{S})}\frac{ \underline{b}(\mathcal{S}^{\prime})}{b(G^{\prime})}=\frac{\underline{b}( \mathcal{S}^{\prime})}{|\mathcal{S}|\overline{f}(\mathcal{S})}\] which only depends on \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\), implying that \(G^{\prime}\) has the uniform distribution if \(G\) does. Of course, this way of correcting the distribution is efficient only if the typical values of \(f(G)\) and \(b(G^{\prime})\) do not deviate too much from \(\overline{f}(\mathcal{S})\) and \(\underline{b}(\mathcal{S}^{\prime})\). To avoid a high probability of restarting in cases where this does not hold, Gao and Wormald [9] introduced the idea of using additional switchings which partially equalize the probabilities. This is done by defining each phase of a switching algorithm as a markov process which in each step either performs a main kind of switching to remove a non-simple edge, or an additional switching to equalize the probabilities. T-Gen similarly uses additional switchings but without the use of a markov chain. Instead, we always perform the main kind of switching first and then an additional switching which targets specific edges involved in the main switching performed. Concretely, the issue due to timestamps is that the typical number of available timestamps for an edge is \(\Omega(T)\), whereas the corresponding lower bound is \(T-(\Delta-1)\) due to the possibility of a graph in which the edge in question has a multiplicity of \(\Delta\). Fortunately, the conditions imposed in Theorem 2 suffice to ensure that the highest multiplicity of any edge in the initial graph is bounded by a constant \(\eta=O(1)\) with high probability (see Lemma 5, section 2), and we can also show that any edges created by the switching algorithm itself only increase the multiplicity up to a constant \(\mu=O(1)\) (Lemma 6, section 3). Now, after performing a main kind of switching, we partition the subset \(\mathcal{S}^{\prime}\) which contains the obtained graph into the subsets \(\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) and \(\mathcal{S}^{\prime}\setminus\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) by the multiplicities \(\mathbf{m}\) of specific edges involved in the switching. We then equalize the probabilities of the graphs in \(\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) with the standard rejection step (which is now efficient due to \(\mu=O(1)\)), and reset the probability of graphs in \(\mathcal{S}^{\prime}\setminus\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) to zero by rejecting these graphs. To equalize the probabilities between graphs in \(\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) and \(\mathcal{S}^{\prime}\setminus\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\), we define auxiliary switching operations which map the graphs in \(\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) to graphs in \(\mathcal{S}^{\prime}\setminus\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) and an identity switching which maps any graph in \(\mathcal{S}^{\prime}_{\mathbf{m}<\mu}\) to itself, and specify a probability distribution over these two kinds of switchings which ensures that all graphs in \(\mathcal{S}^{\prime}\) are produced with the same probability via switchings which involve the specific edges. 
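As an illustration of the rejection scheme just described, the following is a minimal sketch (in Python, with placeholder names) of a single rejection-corrected switching step. The callables `f`, `b` and the two bounds stand in for the quantities \(f(G)\), \(b(G^{\prime})\), \(\overline{f}(\mathcal{S})\) and \(\underline{b}(\mathcal{S}^{\prime})\), which are defined separately for each type of switching in section 3.

```python
import random


def switching_step(G, pick_switching, f, f_upper, b, b_lower, rng=random):
    """Perform one switching with f- and b-rejection.

    Returns the new graph, or None to signal that the algorithm restarts.
    Accepting with probability f(G)/f_upper before the switching and
    b_lower/b(G') after it makes the probability of producing any fixed G'
    equal to b_lower / (|S| * f_upper), independent of G'.
    """
    if rng.random() >= f(G) / f_upper:      # f-reject with prob. 1 - f(G)/f_upper
        return None
    G_new = pick_switching(G)               # uniformly chosen applicable switching
    if rng.random() >= b_lower / b(G_new):  # b-reject with prob. 1 - b_lower/b(G')
        return None
    return G_new
```

The step is only efficient when \(f(G)\) and \(b(G^{\prime})\) are typically close to their bounds, which is precisely what the auxiliary switchings of T-Gen are designed to ensure.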
## 2 Temporal Configuration Model

The temporal configuration model samples a random temporal multigraph matching a given tuple \(\mathbf{D}=(\mathbf{d},T)\in\mathbb{N}^{n}\times\mathbb{N}_{>0}\) (provided that \(M=\sum_{i}d_{i}\) is even). It can be implemented as follows. First, for each node index \(i\in\{1,\ldots,n\}\), put \(d_{i}\) marbles labeled \(i\) into an urn. Then, starting from the empty graph \(G=(V,\emptyset)\) on the node set \(V=\{v_{1},\ldots,v_{n}\}\), add edges by iteratively performing the following steps until the urn is empty:

1. Draw two marbles from the urn uniformly at random (without replacement), and let \(i,j\) denote the labels of those marbles.
2. Draw a timestamp \(t\) uniformly at random from the set of timestamps \([1,T]\).
3. Add the temporal edge \((\{v_{i},v_{j}\},t)\) to the graph \(G\).

In the following, we analyze the output distribution of this random model. We first give some definitions to characterize edges in a temporal multigraph. Given two nodes \(v_{i},v_{j}\in V\) and a timestamp \(t\in[1,T]\), define \(w_{i,j,t}\) as the number of edges between \(v_{i}\) and \(v_{j}\) with timestamp \(t\) in the graph, and call \(w_{i,j,t}\) the _temporal multiplicity_ of the edge \((\{v_{i},v_{j}\},t)\). Then, if \(w_{i,j,t}\geq 2\), say that the edge is contained in a _temporal multi-edge_, and in the special cases \(w_{i,j,t}=2\) and \(w_{i,j,t}=3\), refer to the multi-edge as a _double-edge_ and _triple-edge_, respectively. In addition, define \(m_{i,j}=\sum_{t}w_{i,j,t}\) as the total number of edges between \(v_{i}\) and \(v_{j}\) over all timestamps, call \(m_{i,j}\) the _multiplicity_ of the node set \(\{v_{i},v_{j}\}\), and if \(m_{i,j}\geq 2\), say that \(\{v_{i},v_{j}\}\) is contained in an _ordinary multi-edge_. Finally, call an edge \((\{v_{i}\},t)\) which connects a node \(v_{i}\) to itself a _loop_ at \(v_{i}\), and in the cases where \(w_{i,i,t}=1\) and \(w_{i,i,t}=2\), refer to the edge as a _temporal single-loop_ and _temporal double-loop_, respectively.

Now, let \(\mathcal{M}(\mathbf{D})\) denote the set of temporal multigraphs matching a realizable tuple \(\mathbf{D}\), \(\mathbf{W}(G)\) the \(n\times n\times T\) matrix such that the entries \(\mathbf{W}_{i,j,t}\) where \(i\neq j\) contain the temporal multiplicities of the temporal multi-edges in a graph \(G\) and the entries \(\mathbf{W}_{i,i,t}\) the temporal multiplicities of the loops, and let \(\mathcal{S}(\mathbf{W})\subseteq\mathcal{M}(\mathbf{D})\) denote the subset of multigraphs with temporal multi-edge and loop multiplicities \(\mathbf{W}\). Then, we obtain the following result.

**Theorem 3**.: _Let \(G\) be a temporal multigraph output by the temporal configuration model on an input tuple \(\mathbf{D}\). Then, \(G\) is uniformly distributed in the set \(\mathcal{S}(\mathbf{W}(G))\subseteq\mathcal{M}(\mathbf{D})\)._

Proof of Theorem 3.: For a given tuple \(\mathbf{D}=(\mathbf{d},T)\), let \(P_{i}=\{i_{1},\ldots,i_{d_{i}}\}\) where \(1\leq i\leq n\), \(P=\bigcup_{1\leq i\leq n}P_{i}\), and define a _temporal configuration_ of \(\mathbf{D}\) as a partition of the set \(P\) into \(M/2\) subsets of size two in which each subset is assigned an integer \(t\in[1,T]\). Then, the temporal configuration model samples a temporal configuration uniformly at random and outputs the corresponding temporal multigraph (by identifying the nodes corresponding to the labels and replacing the subsets together with the assigned integers by temporal edges). 
Thus, the probability of a given graph \(G\) is proportional to the number of temporal configurations corresponding to \(G\), which equals the number of ways to label the edges in \(G\) with the labels in \(P\) which give a distinct temporal configuration. Denote this number by \(C_{P}(G)\), and observe that if \(H_{1}=(V,E_{1})\), \(H_{2}=(V,E_{2})\) are the subgraphs of the non-simple edges and simple edges in \(G=(V,E)\), respectively, then as \(E_{1}\cap E_{2}=\emptyset\) and \(E_{1}\cup E_{2}=E\), we have \(C_{P}(G)=C_{P_{1}}(H_{1})C_{P_{2}}(H_{2})\prod_{i}\binom{d_{i}}{k_{i}}\) where \(\mathbf{k}\) denotes the degree-sequence of \(H_{2}\) (arbitrarily), \(P_{1}=\bigcup_{i}\{i_{1},\ldots,i_{d_{i}-k_{i}}\}\) and \(P_{2}=\bigcup_{i}\{i_{1},\ldots,i_{k_{i}}\}\). In addition, we have \(C_{P_{2}}(H_{2})=\prod_{i}k_{i}!\) as all edges incident at the nodes in the simple temporal graph \(H_{2}\) are distinct, which implies that all possible labelings result in a distinct temporal configuration. Finally, observe that \(\mathbf{W}(G)\) determines both \(H_{1}\) and \(\mathbf{k}\), and thus \(C_{P}(G)\) only depends on \(\mathbf{W}(G)\). Note that the set of _simple_ temporal graphs matching \(\mathbf{D}\) corresponds to the special case \(\mathcal{G}(\mathbf{D})=\mathcal{S}(\mathbf{0}^{n\times n\times T})\subseteq \mathcal{M}(\mathbf{D})\). Thus, Theorem 3 implies that a simple temporal graph output by the random model is uniformly distributed in the set \(\mathcal{G}(\mathbf{D})\). In general, the probability of obtaining a simple temporal graph is small (see Lemma 4 below). Still, there are conditions under which the numbers and multiplicities of non-simple edges are manageable. We state these conditions in terms of the following properties of a degree sequence \(\mathbf{d}\in\mathbb{N}_{>0}^{n}\): \[\Delta=\max_{1\leq i\leq n}d_{i},\qquad\qquad M=\sum_{1\leq i\leq n}d_{i}, \qquad\qquad M_{2}=\sum_{1\leq i\leq n}d_{i}(d_{i}-1).\] i.e. \(\Delta\) is the maximum degree, \(M\) the first moment and \(M_{2}\) the second moment. We start by giving a condition under which the numbers and multiplicities of temporal multi-edges and temporal multi-loops is not too large. **Lemma 4**.: _Let \(\mathbf{D}=(\mathbf{d},T)\) be a tuple which satisfies \(\Delta^{2}=o(M)\) and \(\Delta=O(T)\), and \(G\) a graph output by the temporal configuration model when given \(\mathbf{D}\) as input. Then, the expected number of temporal double-edges in \(G\) is at most \(O(M_{2}^{2}/M^{2}T)\), the expected number of temporal single-loops is at most \(O(M_{2}/M)\), and with high probability there are no temporal double-loops or temporal triple-edges._ Proof of Lemma4.: It is straightforward to check that the probability that \(G\) contains \(m\) given temporal edges is \(O(M^{-m}T^{-m})\). 
Thus, the expected number of temporal double-edges in a graph output by the temporal configuration model is \[O\left(\sum_{1\leq i,j\leq n}\sum_{1\leq t\leq T}\frac{4\binom{d_{i}}{2} \binom{d_{j}}{2}}{M^{2}T^{2}}\right)=O\left(\frac{M_{2}^{2}}{M^{2}T}\right),\] the expected number of temporal single-loops is \[O\left(\sum_{1\leq i,j\leq n}\sum_{1\leq t\leq T}\frac{\binom{d_{i}}{2}}{MT} \right)=O\left(\frac{M_{2}}{M}\right),\] the expected number of temporal triple-edges is \[O\left(\sum_{1\leq i,j\leq n}\sum_{1\leq t\leq T}\frac{6\binom{d_{i}}{3} \binom{d_{j}}{3}}{M^{3}T^{3}}\right)=O\left(\frac{\Delta^{2}M_{2}^{2}}{M^{3}T ^{2}}\right)=O\left(\frac{\Delta^{2}}{M}\right)=o(1),\] and the expected number of temporal double-loops is \[O\left(\sum_{1\leq i\leq n}\sum_{1\leq t\leq T}\frac{3\binom{d_{i}}{4}}{M^{2} T^{2}}\right)=O\left(\frac{\Delta^{2}M_{2}}{M^{2}T}\right)=O\left(\frac{\Delta^{2}} {M}\right)=o(1).\qed\] In addition, the following condition ensures that there is only a small probability that a node is incident with many non-simple edges, or that the graph contains an ordinary multi-edge of high multiplicity. **Lemma 5**.: _Let \(\mathbf{D}=(\mathbf{d},T)\) be a tuple which satisfies \(\Delta^{2+\epsilon}=O(M)\) for a constant \(\epsilon>0\) and \(\Delta=O(T)\), and \(G\) a graph output by the temporal configuration model when given \(\mathbf{D}\) as input. Then, with high probability, the largest number of incident temporal double-edges at any node in \(G\) is at most \(\kappa=\lfloor 1+1/\epsilon\rfloor\), the largest number of incident temporal single-loops at any node in \(G\) is at most \(\lambda=\lfloor 1+1/\epsilon\rfloor\), and the highest multiplicity of any edge in \(G\) is at most \(\eta=\lfloor 2+2/\epsilon\rfloor\)._ Proof of Lemma5.: Define \(\delta=\frac{\epsilon}{2+\epsilon}\) and note that \(\Delta^{2+\epsilon}=O(M)\implies\Delta=O(M^{\delta/\epsilon})\), and if \(\epsilon>0\), then \(\Delta^{2}/M=O(M^{-\delta})=o(1)\). Let \(K_{m}\) denote the number of nodes incident with \(m\) temporal double-edges in a graph output by the temporal configuration model. Then \[\mathbb{E}[K_{m}] =O\left(\sum_{1\leq i\leq n}\sum_{1\leq j_{1},\ldots,j_{m}\leq n} \sum_{1\leq t_{1},\ldots,t_{m}\leq T}\frac{\binom{2m}{2,\ldots,2}\binom{d_{i} }{2m}\prod_{k=1}^{m}\binom{d_{j_{k}}}{2}}{M^{2m}T^{2m}}\right)\] \[=O\left(\frac{M_{2m}M_{2}^{m}}{M^{2m}T^{m}}\right)=O\left(\frac{ \Delta^{2m-1}}{M^{m-1}}\right).\] where \(M_{k}=\sum_{1\leq i\leq n}\prod_{1\leq j\leq k}(d_{i}-j+1)\) and the last equality follows by \(M_{k+1}<\Delta M_{k}\). Thus, if \(\Delta^{2+\epsilon}=O(M)\) for some constant \(\epsilon>0\), the probability of at least one node incident with more than \(\kappa=\lfloor 1+1/\epsilon\rfloor\) temporal double-edges is at most \[\sum_{m=\kappa+1}^{\lfloor\Delta/2\rfloor}\mathbb{E}[K_{m}] =O\left(\sum_{m=\kappa+1}^{\lfloor\Delta/2\rfloor}\frac{\Delta^{2 m-1}}{M^{m-1}}\right)\] \[=O\left(\sum_{m=\kappa+1}^{\lfloor\Delta/2\rfloor}M^{-\delta(m-1 -1/\epsilon)}\right)\] \[=O\left(\sum_{m=\kappa+2}^{\lfloor\Delta/2\rfloor}M^{-\delta(m-1 -1/\epsilon)}\right)+O\left(M^{-\delta(\underbrace{\kappa+1}_{\quad\quad\quad \quad\quad\quad-1-1/\epsilon)}}_{\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad ordinary multi-edges of high multiplicity and graphs which do not. Stage 2 (subsection 3.3) removes all temporal double-edges, i.e. double-edges which share the same timestamp. 
Doing this efficiently requires five kinds of switchings, two of which remove a temporal double-edge between two specified nodes and with a specified timestamp, and three of which are auxiliary switchings. Once all non-simple edges have been removed, the resulting simple temporal graph is output. ### Initial Conditions The initial conditions for the random multigraph \(G\) are as follows. Define \[B_{L}=\frac{M_{2}}{M},\qquad\quad B_{D}=\frac{M_{2}^{2}}{M^{2}T},\] let \(L=\sum_{i,t}\mathbf{W}_{i,i,t}(G)\) and \(D=\sum_{i\neq j,t}\mathbf{W}_{i,j,t}(G)\) denote the sums of the multiplicities of loops and temporal multi-edges of \(G\), respectively, and choose three constants \[\lambda\geq 1+1/\epsilon,\qquad\qquad\kappa\geq 1+1/\epsilon,\qquad\qquad\mu\geq 3 +2/\epsilon\] where \(\epsilon>0\) is a constant such that \(\Delta^{2+\epsilon}=O(M)\) (or set \(\lambda=\kappa=\mu=\Delta\) if no such constant exists). Then, \(G\) satisfies the initial conditions if \(L\leq B_{L}\), \(D/2\leq B_{D}\), there are no temporal multi-loops of multiplicity \(w\geq 2\) or temporal multi-edges of multiplicity \(w\geq 3\), and no node is incident with more than \(\lambda\) temporal single-loops or \(\kappa\) temporal double-edges. Observe that the lower bounds on the constants \(\lambda\) and \(\kappa\) are due to Lemma 5. The additional constant \(\mu\) is due to the following result (proof in subsection 4.2). If the input tuple \(\mathbf{D}=(\mathbf{d},T)\) satisfies \(\Delta^{2+\epsilon}=O(M)\) for a constant \(\epsilon>0\) and \(T-\Delta=\Omega(T)\), then with high probability none of the graphs visited during a given run of \(T\)-Gen contain an edge of multiplicity higher than \(\mu=\lfloor 3+2/\epsilon\rfloor\). As a simple temporal graph is allowed to contain ordinary multi-edges, the constant \(\mu\) cannot be enforced by rejecting violating initial graphs. Instead, we equalize the probabilities between graphs which contain such edges and graphs which do not via the auxiliary switchings mentioned above. We describe this approach in detail in subsection 3.2 and subsection 3.3. Finally, there are two special cases. First, if the initial multigraph \(G\) is simple, then this graph can be output without checking the preconditions or going through any of the stages. Second, if the input tuple does not satisfy \[M>16\Delta^{2}+4\Delta+2B_{L}+4B_{D},\qquad\qquad T>\Delta-1\] then T-Gen restarts until a simple graph is found and output. These requirements are satisfied if \(\Delta^{2+\epsilon}=O(M)\) and \(T-\Delta=\Omega(T)\) but we make them explicit here to ensure correctness on any input tuple. In all other cases, T-Gen restarts if the graph \(G\) does not satisfy the initial conditions. Otherwise, the algorithm enters Stage 1 to remove the temporal single-loops in \(G\). ### Stage 1: Removal of Temporal Single-Loops Stage 1 removes the temporal single-loops in the graph. Doing this efficiently requires multiple kinds of switching operations. In total, we use three kinds of switchings which we denote as TL, \(\mathsf{A}_{m,n}\) and \(\mathsf{I}\). We formally define each kind of switching below. The switching of the main kind written as TL removes a temporal single-loop at a specified node and with a specified timestamp. After performing this kind of switching, an \(\mathsf{A}_{m,n}\) auxiliary switching is performed with a certain probability. 
This switching adds up to two ordinary multi-edges with multiplicities \(\max\{m,n\}\geq\mu\) to the graph to equalize the probability of producing graphs with or without these kinds of edges. In addition, we define the identity switching \(\mathsf{I}\) which maps each graph to itself. This is done to formalize the event in which no auxiliary switching is performed. Formal definitions of the \(\mathsf{TL}\) and \(\mathsf{A}_{m,n}\) switchings are as follows. [TL switching at \(v_{1},t_{1}\)] For a graph \(G\) such that \((\{v_{1}\},t_{1})\) is a temporal single-loop, let \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) be edges and \(t_{4},t_{5},t_{6}\in[1,T]\) timestamps such that * none of the edges \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) is a loop or in a temporal double-edge, * the nodes \(v_{2},v_{3},v_{4},v_{5}\) are distinct from \(v_{1}\), and \(v_{4}\) is distinct from \(v_{5}\), and * none of the edges \((\{v_{1},v_{2}\},t_{4})\), \((\{v_{1},v_{3}\},t_{5})\), \((\{v_{4},v_{5}\},t_{6})\) exist. Then, a \(\mathsf{TL}\) switching replaces the edges \((\{v_{1}\},t_{1})\), \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) with \((\{v_{1},v_{2}\},t_{4})\), \((\{v_{1},v_{3}\},t_{5})\), \((\{v_{4},v_{5}\},t_{6})\) (see Figure 1). We stress that the integer subscripts of nodes are used in place of generic indices to reduce visual clutter and simplify the descriptions. Naturally, these labels may still refer to any node in the graph. [A switching at \(v_{1},v_{2},v_{3},v_{4},v_{5}\)] For a graph \(G\) such that \(\{v_{2},v_{4}\}\) and \(\{v_{3},v_{5}\}\) are non-edges, let \((\{v_{2},v_{2i+4}\},t_{i})\), \((\{v_{4},v_{2i+5}\},t_{m+i})\), \(1\leq i\leq m\) be incident edges at \(v_{2}\), \(v_{4}\), \((\{v_{3},v_{2m+2i+4}\},t_{2m+i})\), \((\{v_{5},v_{2m+2i+5}\},t_{2m+n+i})\), \(1\leq i\leq n\) incident edges at \(v_{3}\), \(v_{5}\), and \(t_{2m+2n+1},\ldots,t_{4m+4n}\in[1,T]\) timestamps such that * none of the edges \((\{v_{2},v_{2i+4}\},t_{i})\), \((\{v_{4},v_{2i+5}\},t_{m+i})\), \(1\leq i\leq m\), \((\{v_{3},v_{2m+2i+4}\},t_{2m+i})\) and none of \((\{v_{3},v_{2m+2i+5}\},t_{2m+n+i})\), \(1\leq i\leq n\) is a loop or in a temporal double-edge, * the nodes \(v_{1},\ldots,v_{2m+2n+5}\) are all distinct, and * none of the edges \((\{v_{2},v_{4}\},t_{2m+2n+i})\), \((\{v_{2i+4},v_{2i+5}\},t_{3m+2n+i})\), \(1\leq i\leq m\) and none of \((\{v_{3},v_{5}\},t_{4m+2n+i})\), \((\{v_{2m+2i+4},v_{2m+2i+5}\},t_{4m+3n+i})\), \(1\leq i\leq n\) exist. Then, an \(\mathsf{A}_{m,n}\) switching replaces the edges \((\{v_{2},v_{2i+4}\},t_{i})\), \((\{v_{4},v_{2i+5}\},t_{m+i})\), \(1\leq i\leq m\), \((\{v_{3},v_{2m+2i+4}\},t_{2m+i})\), \((\{v_{3},v_{2m+2i+5}\},t_{2m+n+i})\), \(1\leq i\leq n\) with \((\{v_{2},v_{4}\},t_{2m+2n+i})\), \((\{v_{2i+4},v_{2i+5}\},t_{3m+2n+i})\), \(1\leq i\leq m\), \((\{v_{3},v_{5}\},t_{4m+2n+i})\), \((\{v_{2m+2i+4},v_{2m+2i+5}\},t_{4m+3n+i})\), \(1\leq i\leq n\). In other words, the \(\mathsf{TL}\) switching chooses two edges and then rewires the specified loop and the two edges such that exactly the specified loop is removed and no other non-simple edges are created or removed. Likewise, the \(\mathsf{A}_{m,n}\) switching chooses \(m\) incident edges at two nodes \(v_{2},v_{4}\) each and \(n\) incident edges at two nodes \(v_{3},v_{5}\) each and then rewires the edges such Figure 1: The \(\mathsf{TL}\) switching removes a temporal single-loop with timestamp \(t_{1}\) at a node \(v_{1}\). 
A shaded region labeled with a timestamp contains all edges with this timestamp between the nodes. Red and green shades indicate non-simple and simple edges, respectively. that exactly \(m\) simple edges between the nodes \(v_{2},v_{4}\) and exactly \(n\) simple edges between the nodes \(v_{3},v_{5}\) are created and no non-simple edges are created or removed. After each TL switching, we perform an \(\mathsf{A}_{m,n}\) auxiliary switching or the identity switching. To decide which switching to perform we define a probability distribution over the different types of switchings which ensures uniformity. In total, the set of \(\mathsf{A}_{m,n}\) auxiliary switchings is \[\Theta_{\mathsf{A}}=\bigcup_{\begin{subarray}{c}0\leq m,n<\Delta\\ \mu\leq\max\{m,n\}\end{subarray}}\{\mathsf{A}_{m,n}\}.\] The switching to be performed is then sampled from the distribution \((\Theta_{\mathsf{A}}\cup\{\mathsf{I}\},P_{\mathsf{A}})\) where \[p_{\mathsf{A}}(\mathsf{A}_{m,n})=p_{\mathsf{A}}(\mathsf{I})\frac{\overline{f} _{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})}{\underline{b}_{\mathsf{A}_{m,n}}( \mathbf{W}^{\prime})},\hskip 28.452756ptp_{\mathsf{A}}(\mathsf{I})=1-\sum_{ \theta\in\Theta_{\mathsf{A}}}p_{\mathsf{A}}(\theta)\] for quantities \(\overline{f}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})\) and \(\underline{b}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})\) given further below. On a high level, Stage 1 runs in a loop until a rejection occurs or all temporal single-loops have been removed from \(G\). To this end, let \(\pi\) denote a permutation of the entries in \(\mathbf{W}(G)\) such that \(\mathbf{W}_{i,i,t}=1\). Then, Stage 1 iterates through the temporal single-loops in the order given by \(\pi\) and performs the following steps for each temporal single-loop. 1. Let \(G\) denote the current graph, \(\mathbf{W}=\mathbf{W}(G)\) and \((\{v_{1}\},t_{1})\) the loop. 2. Pick a uniform random TL switching \(S\) which removes \((\{v_{1}\},t_{1})\) from \(G\). 3. Restart (\(\mathbf{f}\)-reject) with probability \(1-\frac{f_{\mathsf{T}}(G)}{f_{\mathsf{T}}(\mathbf{W})}\). 4. Rewire the edges according to \(S\), let \(G^{\prime}\) denote the resulting graph and \(\mathbf{W}^{\prime}=\mathbf{W}(G^{\prime})\). 5. Let \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) denote the edges removed by \(S\). 6. Restart if \(m_{2,4}\geq\mu\) or \(m_{3,5}\geq\mu\). 7. Restart (\(\mathbf{b}\)-reject) with probability \(1-\frac{\underline{b}_{\mathsf{T}}(\mathbf{W}^{\prime};2)}{f_{\mathsf{T}}(G^{ \prime},v_{1}v_{2}v_{3}v_{4}v_{5};2)}\). 8. Choose a switching type \(\theta\sim(\Theta_{\mathsf{A}}\cup\{\mathsf{I}\},P_{\mathsf{A}})\). 9. If \(\theta=\mathsf{A}_{m,n}\) for some \(\mathsf{A}_{m,n}\in\Theta_{\mathsf{A}}\): 1. Restart if \(m_{2,4}\geq 1\) or \(m_{3,5}\geq 1\). 2. Pick a uniform random \(\mathsf{A}_{m,n}\) switching \(S^{\prime}\) which adds an edge with node set \(\{v_{2},v_{4}\}\) and multiplicity \(m\) and an edge with node set \(\{v_{3},v_{5}\}\) and multiplicity \(n\) to \(G^{\prime}\). 3. Restart (\(\mathbf{f}\)-reject) with probability \(1-\frac{f_{\mathsf{A}_{m,n}}(\mathbf{G}^{\prime})}{f_{\mathsf{A}_{m,n}}( \mathbf{W}^{\prime})}\). 4. Rewire the edges according to \(S^{\prime}\) and let \(G^{\prime\prime}\) denote the resulting graph. 5. Restart (\(\mathbf{b}\)-reject) with probability \(1-\frac{\underline{b}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})}{\underline{b}_{ \mathsf{A}_{m,n}}(G^{\prime\prime},v_{1}v_{2}v_{3}v_{4}v_{5})}\). 6. Set \(G^{\prime}\gets G^{\prime\prime}\). 10. 
Restart (\(\mathbf{b}\)-reject) with probability \(1-\frac{\underline{b}_{\mathsf{T}}(\mathbf{W}^{\prime};0)\underline{b}_{ \mathsf{T}}(\mathbf{W}^{\prime};1)}{b_{\mathsf{T}}(G^{\prime},v_{1};0)b_{ \mathsf{T}}(G^{\prime},v_{1}v_{2}v_{3};1)}\). 11. Set \(G\gets G^{\prime}\). To fully specify Stage 1, it remains to define the quantities used for the f- and b-rejection steps. For the f-rejection in step 3, define \(f_{\mathsf{TL}}(G)\) as the number of TL switchings which can be performed on the graph \(G\). The corresponding upper bound is \[\overline{f}_{\mathsf{TL}}(\mathbf{W})=M^{2}T^{3}.\] For the b-rejections in steps 7 and 10, define \(b_{\mathsf{TL}}(G^{\prime},v_{1}v_{2}v_{3}v_{4}v_{5};2)\) as the number of times-tamps \(t_{2},t_{3}\in[1,T]\) such that the edges \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) do not exist in \(G^{\prime}\), \(b_{\mathsf{TL}}(G^{\prime},v_{1}v_{2}v_{3};1)\) as the number of simple temporal edges \((\{v_{4},v_{5}\},t_{6})\) such that \(v_{4}\), \(v_{5}\) are distinct from \(v_{1}\), \(v_{2}\), \(v_{3}\), and \(b_{\mathsf{TL}}(G^{\prime},v_{1};0)\) as the number of distinct simple temporal edges \((\{v_{1},v_{2}\},t_{4})\), \((\{v_{1},v_{3}\},t_{5})\) incident at \(v_{1}\). The lower bounds on these quantities are \[\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)=(T-(\mu-1))^{2},\quad \underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};1)=M-2B_{L}-4B_{D}-4\Delta, \quad\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};0)=k_{1}(k_{1}-1)\] where \(k_{i}=d_{i}-\sum_{1\leq t\leq T}(2\mathbf{W}^{\prime}_{i,i,t}+\sum_{1\leq j \leq n:j\neq i}\mathbf{W}^{\prime}_{i,j,t})\). For the f-rejection in step \(9c\), define \(f_{\mathsf{A}_{m,n}}(G^{\prime})\) as the number of \(\mathsf{A}_{m,n}\) switchings which can be performed on the graph \(G^{\prime}\). The upper bound is \[\overline{f}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})=\Delta^{2(m+n)}T^{2(m+n )}.\] For the b-rejection in step \(9e\), define \(b_{\mathsf{A}_{m,n}}(G^{\prime\prime},v_{1}v_{2}v_{3}v_{4}v_{5})\) as the number of \(\mathsf{A}_{m,n}\) switchings which can produce the graph \(G^{\prime\prime}\). The corresponding lower bound is \[\underline{b}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})=(M-2B_{L}-4B_{D}-4(m+n +3)\Delta)^{m+n}(T-(\Delta-1))^{2(m+n)}.\] The correctness of Stage 1 is implied by the following result, the proof of which can be found in subsection 4.1. The graph \(G^{\prime}\) at the end of an iteration of Stage 1 is uniformly distributed in \(\mathcal{S}(\mathbf{W}^{\prime})\) given that the graph \(G\) at the start of the iteration is uniformly distributed in \(\mathcal{S}(\mathbf{W})\). Showing that Stage 1 is efficient requires showing that both the probability of restarting and the run time of all iterations is small. To this end, we show the following in subsection 4.2. The probability of not restarting in Stage 1 is \(\exp(-O(\Delta^{2}/M)-O(\Delta/T))\). The expected run time of Stage 1 is \(O(\Delta^{2})\). Stage 1 ends if all temporal single-loops have been removed. The algorithm then moves on to Stage 2 to remove the remaining temporal double-edges. ### Stage 2: Removal of Temporal Double-Edges Stage 2 uses five kinds of switchings which we denote as \(\mathsf{TD}_{1}\), \(\mathsf{TD}_{0}\), \(\mathsf{B}_{m,n}\), \(\mathsf{C}_{m,n,o,p}\) and \(\mathsf{I}\). The two main switchings \(\mathsf{TD}_{1}\) and \(\mathsf{TD}_{0}\) remove a temporal double-edge between two specific nodes and with a specific timestamp. 
The difference is that the \(\mathsf{TD}_{1}\) switching only removes one occurrence of the edge while the \(\mathsf{TD}_{0}\) switching erases both occurrences. This is done to equalize the probability between graphs in which the removed temporal double-edge is a non-edge, or single edge. After performing a \(\mathsf{TD}_{1}\) switching, the \(\mathsf{B}_{m,n}\) Figure 2: The \(\mathsf{TD}_{1}\) switching removes a temporal double-edge with timestamp \(t_{1}\) between two nodes \(v_{1},v_{2}\) while leaving a single-edge between the nodes. auxiliary switching may be performed, and after performing a \(\mathsf{TD}_{0}\) switching, the \(\mathsf{C}_{m,n,o,p}\) switching may be performed. These auxiliary switchings add ordinary multi-edges with multiplicity \(\max\{m,n\}\geq\mu\) or \(\max\{m,n,o,p\}\geq\mu\) to the graph to equalize the probabilities between graphs with or without these edges. We define the \(\mathsf{TD}_{1}\) switching as follows. [\(\mathsf{TD}_{1}\) switching at \((\{v_{1},v_{2}\},t_{1})\)] For a graph \(G\) such that \((\{v_{1},v_{2}\},t_{1})\) is contained in a temporal double-edge, let \((\{v_{3},v_{5}\},t_{2})\), \((\{v_{4},v_{6}\},t_{3})\) be edges and \(t_{4},t_{5},t_{6}\in[1,T]\) timestamps such that * none of the edges \((\{v_{3},v_{5}\},t_{2})\), \((\{v_{4},v_{6}\},t_{3})\) is contained in a temporal double-edge, * the nodes \(v_{3},v_{4},v_{5},v_{6}\) are distinct from \(v_{1}\) and \(v_{2}\), and \(v_{5}\) is distinct from \(v_{6}\), and * none of the edges \((\{v_{1},v_{3}\},t_{4})\), \((\{v_{2},v_{4}\},t_{5})\), \((\{v_{5},v_{6}\},t_{6})\) exist. Then, a \(\mathsf{TD}_{1}\) switching replaces the edges \((\{v_{1},v_{2}\},t_{1})\), \((\{v_{3},v_{5}\},t_{2})\), \((\{v_{4},v_{6}\},t_{3})\) with \((\{v_{1},v_{3}\},t_{4})\), \((\{v_{2},v_{4}\},t_{5})\), \((\{v_{5},v_{6}\},t_{6})\) (see Figure 2). The \(\mathsf{TD}_{0}\) switching can be defined analogously by using four edges in place of two to remove both edges contained in the temporal double-edge. To choose a \(\mathsf{TD}_{1}\) or \(\mathsf{TD}_{0}\) switching when removing a temporal double-edge, specify the probability distribution \((\{\mathsf{TD}_{1},\mathsf{TD}_{0}\},P)\) where \[p(\mathsf{TD}_{1})=p(\mathsf{TD}_{0})\frac{p_{\mathsf{C}}(\mathsf{I})}{p_{ \mathsf{B}}(\mathsf{I})}\frac{\overline{f}_{\mathsf{TD}_{1}}(\mathbf{W})b_{ \mathsf{TD}_{0}}(\mathbf{W}^{\prime})}{\overline{f}_{\mathsf{TD}_{0}}(\mathbf{ W})\overline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime})},\qquad\quad p(\mathsf{TD}_{0})=1-p( \mathsf{TD}_{1})\] for quantities \(\overline{f}_{\mathsf{TD}_{1}}(\mathbf{W})\), \(\overline{f}_{\mathsf{TD}_{0}}(\mathbf{W})\) and \(b_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime})=b_{\mathsf{TD}_{1}}(\mathbf{W}^{ \prime};0)b_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};1)b_{\mathsf{TD}_{1}}( \mathbf{W}^{\prime};2)\), \(b_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime})=b_{\mathsf{TD}_{0}}(\mathbf{W}^{ \prime};0)b_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};1)b_{\mathsf{TD}_{0}}( \mathbf{W}^{\prime};2)\) defined below and where \(p_{\mathsf{B}}(\mathsf{I})\) and \(p_{\mathsf{C}}(\mathsf{I})\) are the probabilities of not performing a \(\mathsf{B}_{m,n}\) and \(\mathsf{C}_{m,n,o,p}\) auxiliary switching, respectively. Continuing with the auxiliary switchings, we now define the \(\mathsf{B}_{m,n}\) switching. 
The \(\mathsf{C}_{m,n,o,p}\) switching can be defined analogously by expanding the number of ordinary multi-edges created from the two edges removed by the \(\mathsf{TD}_{1}\) switching to the four edges removed by the \(\mathsf{TD}_{0}\) switching. [\(\mathsf{B}_{m,n}\) switching at \(v_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\)] For a graph \(G\) such that \(\{v_{3},v_{5}\}\) and \(\{v_{4},v_{6}\}\) are non-edges, let \((\{v_{3},v_{2i+5}\},t_{i})\), \((\{v_{5},v_{2i+6}\},t_{m+i})\), \(1\leq i\leq m\) be incident edges at \(v_{3}\), \(v_{5}\), \((\{v_{4},v_{2m+2i+5}\},t_{2m+i})\), \((\{v_{6},v_{2m+2i+6}\},t_{2m+n+i})\), \(1\leq i\leq n\) incident edges at \(v_{4}\), \(v_{6}\), and \(t_{2m+2n+1},\ldots,t_{4m+4n}\in[1,T]\) timestamps such that * none of the edges \((\{v_{3},v_{2i+5}\},t_{i})\), \((\{v_{5},v_{2i+6}\},t_{m+i})\), \(1\leq i\leq m\), \((\{v_{4},v_{2m+2i+5}\},t_{2m+i})\), \((\{v_{6},v_{2m+2i+6}\},t_{2m+n+i})\), \(1\leq i\leq n\) is contained in a temporal double-edge, * the nodes \(v_{1},\ldots,v_{2m+2n+6}\) are all distinct, and * none of the edges \((\{v_{3},v_{5}\},t_{2m+2n+i})\), \((\{v_{2i+5},v_{2i+6}\},t_{3m+2n+i})\), \(1\leq i\leq m\) and none of the edges \((\{v_{4},v_{6}\},t_{4m+2n+i})\), \((\{v_{2m+2i+5},v_{2m+2i+6}\},t_{4m+3n+i})\), \(1\leq i\leq n\) exist. Then, a \(\mathsf{B}_{m,n}\) switching replaces the edges \((\{v_{3},v_{2i+5}\},t_{i})\), \((\{v_{5},v_{2i+6}\},t_{m+i})\), \(1\leq i\leq m\), \((\{v_{4},v_{2m+2i+5}\},t_{2m+i})\), \((\{v_{6},v_{2m+2i+6}\},t_{2m+n+i})\), \(1\leq i\leq n\) with \((\{v_{3},v_{5}\},t_{2m+2n+i})\), \((\{v_{2i+5},v_{2i+6}\},t_{3m+2n+i})\), \(1\leq i\leq m\), \((\{v_{4},v_{6}\},t_{4m+2n+i})\), \((\{v_{2m+2i+5},v_{2m+2i+6}\},t_{4m+3n+i})\), \(1\leq i\leq n\). The sets of \(\mathsf{B}_{m,n}\) and \(\mathsf{C}_{m,n,o,p}\) switchings are \[\Theta_{\mathsf{B}}=\bigcup_{\begin{subarray}{c}0\leq m,n<\Delta\\ \mu\leq\max\{m,n\}\end{subarray}}\{\mathsf{B}_{m,n}\}\qquad\qquad\qquad\Theta_{ \mathsf{C}}=\bigcup_{\begin{subarray}{c}0\leq m,n,o,p<\Delta\\ \mu\leq\max\{m,n,o,p\}\end{subarray}}\{\mathsf{C}_{m,n,o,p}\}\] and the associated type distributions are \((\Theta_{\mathsf{B}}\cup\{\{\},P_{\mathsf{B}})\) and \((\Theta_{\mathsf{C}}\cup\{\{\},P_{\mathsf{C}})\) where \[p_{\mathsf{B}}(\mathsf{B}_{m,n}) =p_{\mathsf{B}}(\mathsf{I})\frac{\overline{f}_{\mathsf{B}_{m,n}}( \mathbf{W}^{\prime})}{\underline{b}_{\mathsf{B}_{m,n}}(\mathbf{W}^{\prime})}, p_{\mathsf{B}}(\mathsf{I}) =1-\sum_{\theta\in\Theta_{\mathsf{B}}}p_{\mathsf{B}}(\theta),\] \[p_{\mathsf{C}}(\mathsf{C}_{m,n,o,p}) =p_{\mathsf{C}}(\mathsf{I})\frac{\overline{f}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime})}{\underline{b}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{ \prime})}, p_{\mathsf{C}}(\mathsf{I}) =1-\sum_{\theta\in\Theta_{\mathsf{C}}}p_{\mathsf{C}}(\theta).\] The main loop of Stage 2 is as follows. Let \(\pi\) denote a permutation of the entries in \(\mathbf{W}(G)\) such that \(\mathbf{W}_{i,j,t}=2\) and \(i\neq j\). Then, Stage 2 iterates through the temporal double-edges in the order given by \(\pi\) and performs the following steps. 1. Let \(G\) denote the current graph, \(\mathbf{W}=\mathbf{W}(G)\) and \((\{v_{1},v_{2}\},t_{1})\) the temporal double-edge. 2. Choose a switching type \(\theta\sim(\{\mathsf{TD}_{0},\mathsf{TD}_{1}\},P)\). 3. Pick a uniform random \(\theta\) switching \(S\) which removes \((\{v_{1},v_{2}\},t_{1})\) from \(G\). 4. Restart (**f-reject**) with probability \(1-\frac{f_{\theta}(G)}{\overline{f}_{\theta}(\mathbf{W})}\). 5. 
Rewire the edges according to \(S\), let \(G^{\prime}\) denote the resulting graph and \(\mathbf{W}^{\prime}=\mathbf{W}(G^{\prime})\). 6. If \(\theta=\mathsf{TD}_{1}\): 1. Let \((\{v_{3},v_{5}\},t_{2})\), \((\{v_{4},v_{6}\},t_{3})\) denote the edges removed by \(S\). 2. Restart if \(m_{3,5}\geq\mu\) or \(m_{4,6}\geq\mu\). 3. Restart (**b-reject**) with probability \(1-\frac{\underline{b}_{\mathsf{TD}_{0}(\mathbf{W}^{\prime},2)}}{\underline{b}_ {\mathsf{TD}_{0}(G^{\prime},v_{1}\ldots v_{6};2)}}\). 4. Choose a switching type \(\theta_{\mathsf{B}}\sim(\Theta_{\mathsf{B}}\cup\{\{\},P_{\mathsf{B}})\). 5. If \(\theta=\mathsf{B}_{m,n}\) for some \(\mathsf{B}_{m,n}\in\Theta_{\mathsf{B}}\): 1. Restart if \(m_{3,5}\geq 1\) or \(m_{4,6}\geq 1\). 2. Pick a uniform random \(\mathsf{B}_{m,n}\) switching \(S^{\prime}\) which adds an edge with node set \(\{v_{3},v_{5}\}\) and multiplicity \(m\) and an edge with node set \(\{v_{4},v_{6}\}\) and multiplicity \(n\) to \(G^{\prime}\). 3. Restart (**f-reject**) with probability \(1-\frac{\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime},0)}{\underline{b}_ {\mathsf{TD}_{1}}(\mathbf{W}^{\prime},1)}\). 4. Restart (**b-reject**) with probability \(1-\frac{\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime},1)}{\underline{b}_ {\mathsf{TD}_{1}}(G^{\prime},v_{1}v_{2};0)\mathsf{b}_{\mathsf{TD}_{1}}(G^{ \prime},v_{1}v_{2}v_{3}v_{4};1)}\). 7. Else if \(\theta=\mathsf{TD}_{0}\): 1. Let \((\{v_{3},v_{7}\},t_{2})\), \((\{v_{4},v_{8}\},t_{3})\), \((\{v_{5},v_{9}\},t_{4})\), \((\{v_{6},v_{10}\},t_{5})\) denote the edges removed by \(S\). 2. Restart if \(m_{3,7}\geq\mu\), \(m_{4,8}\geq\mu\), \(m_{5,9}\geq\mu\) or \(m_{6,10}\geq\mu\). 3. Restart (**b-reject**) with probability \(1-\frac{\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime},2)}{\underline{b}_ {\mathsf{TD}_{0}}(G^{\prime},v_{1}\ldots v_{10};2)}\). 4. Choose a switching type \(\theta_{\mathsf{C}}\sim(\Theta_{\mathsf{C}}\cup\{\{\},P_{\mathsf{C}})\). 5. If \(\theta=\mathsf{C}_{m,n,o,p}\) for some \(\mathsf{C}_{m,n,o,p}\in\Theta_{\mathsf{C}}\): 1. Restart if \(m_{3,7}\geq 1\), \(m_{4,8}\geq 1\), \(m_{5,9}\geq 1\) or \(m_{6,10}\geq 1\). 2. Pick a uniform random \(\mathsf{C}_{m,n,o,p}\) switching \(S^{\prime}\) which adds an edge with node set \(\{v_{3},v_{7}\}\) and multiplicity \(m\), an edge with node set \(\{v_{4},v_{8}\}\) and multiplicity \(n\), an edge with node set \(\{v_{5},v_{9}\}\) and multiplicity \(o_{\mathsf{n}}\) and an edge with node set \(\{v_{6},v_{10}\}\) and multiplicity \(p\) to \(G^{\prime}\). 3. Restart (**f-reject**) with probability \(1-\frac{\underline{f}_{\mathsf{C}_{m,n,o,p}}(G^{\prime})}{\underline{f}_{ \mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime})}\). 4. Rewire the edges according to \(S^{\prime}\) and let \(G^{\prime\prime}\) denote the resulting graph. 5. Restart (**b-reject**) with probability \(1-\frac{\underline{b}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime})}{\underline{b}_ {\mathsf{C}_{m,n,o,p}}(G^{\prime},v_{1}\ldots v_{10})}\). * Set \(G^{\prime}\gets G^{\prime\prime}\). * Restart (**b-reject**) with probability \(1-\frac{\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};0)\underline{b}_{ \mathsf{TD}_{0}}(\mathbf{W}^{\prime};1)}{\underline{b}_{\mathsf{TD}_{0}}(G^{ \prime},v_{1}v_{2};0)\underline{b}_{\mathsf{TD}_{0}}(G^{\prime},v_{1}\ldots v _{6};1)}\). * Set \(G\gets G^{\prime}\). It remains to define the quantities required for the f- and b-rejection steps. 
For the f-rejection in step 4, define \(f_{\mathsf{TD}_{1}}(G)\) and \(f_{\mathsf{TD}_{0}}(G)\) as the number of \(\mathsf{TD}_{1}\) and \(\mathsf{TD}_{0}\) switchings which can be performed on the graph \(G\), respectively. The corresponding upper bounds are \[\overline{f}_{\mathsf{TD}_{1}}(\mathbf{W})=M^{2}T^{3},\qquad\qquad\overline{f }_{\mathsf{TD}_{0}}(\mathbf{W})=M^{4}T^{6}.\] For the b-rejections in steps \(6c\) and \(7c\), define \(b_{\mathsf{TD}_{1}}(G^{\prime},v_{1}\ldots v_{6};2)\) as the number of timestamps \(t_{2},t_{3}\in[1,T]\) such that \((\{v_{3},v_{5}\},t_{2})\), \((\{v_{4},v_{6}\},t_{3})\) do not exist in \(G^{\prime}\) and \(b_{\mathsf{TD}_{0}}(G^{\prime},v_{1}\ldots v_{10};2)\) as the number of timestamps \(t_{2},t_{3},t_{4},t_{5}\in[1,T]\) such that \((\{v_{3},v_{7}\},t_{2})\), \((\{v_{4},v_{8}\},t_{3})\), \((\{v_{5},v_{9}\},t_{4})\), \((\{v_{6},v_{10}\},t_{5})\) do not exist in \(G^{\prime}\). The lower bounds are \[\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};2)=(T-(\mu-1))^{2},\qquad \qquad\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};2)=(T-(\mu-1))^{4}.\] For the b-rejections in steps \(6f\) and \(7f\), define \(b_{\mathsf{TD}_{1}}(G^{\prime},v_{1}\ldots v_{4};1)\) as the number of simple edges \((\{v_{5},v_{6}\},t_{6})\) in \(G^{\prime}\) such that \(v_{5}\), \(v_{6}\) are distinct from \(v_{1},v_{2},v_{3},v_{4}\) and \(b_{\mathsf{TD}_{0}}(G^{\prime},v_{1}\ldots v_{6};1)\) as the number of distinct simple edges \((\{v_{7},v_{8}\},t_{10})\), \((\{v_{9},v_{10}\},t_{11})\) in \(G^{\prime}\) such that \(v_{7}\), \(v_{8}\), \(v_{9}\), \(v_{10}\) are distinct from \(v_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\). Then, define \(b_{\mathsf{TD}_{1}}(G^{\prime},v_{1}v_{2};0)\) as the number of simple edges \((\{v_{1},v_{3}\},t_{4})\), \((\{v_{2},v_{4}\},t_{5})\) incident at \(v_{1}\) and \(v_{2}\) in \(G^{\prime}\) and \(b_{\mathsf{TD}_{0}}(G^{\prime},v_{1}v_{2};0)\) as the number of distinct simple edges \((\{v_{1},v_{3}\},t_{4})\), \((\{v_{2},v_{4}\},t_{5})\), \((\{v_{1},v_{5}\},t_{6})\), \((\{v_{2},v_{6}\},t_{7})\) incident at \(v_{1}\) and \(v_{2}\) in \(G^{\prime}\). The lower bounds are \[\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};1) =M-4B_{D}-4\Delta,\qquad\quad\underline{b}_{\mathsf{TD}_{0}}( \mathbf{W}^{\prime};1)=(M-4B_{D}-4\Delta)^{2},\] \[\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};0) =k_{1}k_{2},\qquad\qquad\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^ {\prime};0)=k_{1}(k_{1}-1)k_{2}(k_{2}-1).\] For the f-rejections in steps \(6eiii\) and \(7eiii\), define \(f_{\mathsf{B}_{m,n}}(G^{\prime})\), \(f_{\mathsf{C}_{m,n,o,p}}(G^{\prime})\) as the number of \(\mathsf{B}_{m,n}\), \(\mathsf{C}_{m,n,o,p}\) switchings which can be performed on \(G^{\prime}\), respectively. In addition, define \[\overline{f}_{\mathsf{B}_{m,n}}(\mathbf{W}^{\prime})=\Delta^{2(m+n)}T^{2(m+n)}, \qquad\qquad\overline{f}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime})=\Delta^{2( m+n+o+p)}T^{2(m+n+o+p)}.\] For the b-rejections in steps \(6ev\) and \(7ev\), define \(b_{\mathsf{B}_{m,n}}(G^{\prime\prime},v_{1}\ldots v_{6})\) and \(b_{\mathsf{C}_{m,n,o,p}}(G^{\prime\prime},v_{1}\ldots v_{10})\) as the number of \(\mathsf{B}_{m,n}\) and \(\mathsf{C}_{m,n,o,p}\) switchings which can produce the graph \(G^{\prime\prime}\), respectively. 
The corresponding lower bounds are \[\underline{b}_{\mathsf{B}_{m,n}}(\mathbf{W}^{\prime}) =(M-4B_{D}-4(m+n+3)\Delta)^{m+n}(T-(\Delta-1))^{2(m+n)},\] \[\underline{b}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime}) =(M-4B_{D}-4(m+n+o+p+5)\Delta)^{m+n+o+p}(T-(\Delta-1))^{2(m+n+o+p)}.\] We show the following in subsection 4.1 and subsection 4.2.

**Lemma 14**.: _The graph \(G^{\prime}\) at the end of an iteration of Stage 2 is uniformly distributed in \(\mathcal{S}(\mathbf{W}^{\prime})\) given that the graph \(G\) at the start of the iteration is uniformly distributed in \(\mathcal{S}(\mathbf{W})\)._

**Lemma 15**.: _The probability of not restarting in Stage 2 is \(\exp(-O(\Delta^{3}/MT)-O(\Delta^{2}/T^{2}))\)._

**Lemma 16**.: _The expected run time of Stage 2 is \(O(\Delta^{3}/T)\)._

Once Stage 2 ends, the final graph is simple and can be output.

## 4 Proof of Theorem 2

It remains to show Theorem 2. We start with the uniformity of the output distribution.

### Uniformity of T-Gen

To show Lemma 9, we first show that the upper and lower bounds specified for Stage 1 hold. To this end, let \(\mathcal{M}_{0}(\mathbf{D})\subseteq\mathcal{M}(\mathbf{D})\) denote the set of temporal multigraphs which satisfy the initial conditions specified in subsection 3.1. Then, we obtain the following results.

**Lemma 17**.: _For all \(G\in\mathcal{S}(\mathbf{W})\subseteq\mathcal{M}_{0}(\mathbf{D})\) we have_ \[f_{\mathsf{TL}}(G)\leq\overline{f}_{\mathsf{TL}}(\mathbf{W}).\]

Proof.: By Definition 7, a TL switching involves two edges \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) and three timestamps \(t_{4},t_{5},t_{6}\in[1,T]\). The total number of choices for the edges and timestamps constitutes an upper bound on the number of switchings which can be performed, and there are at most \(M^{2}\) choices for the (oriented) edges and at most \(T^{3}\) choices for the timestamps.

**Lemma 18**.: _For all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{0}(\mathbf{D})\) where \(m_{2,4}<\mu\) and \(m_{3,5}<\mu\) we have_ \[\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)\leq b_{\mathsf{TL}}(G^{\prime},v_{1}v_{2}v_{3}v_{4}v_{5};2)\leq T^{2}\] _and for all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{0}(\mathbf{D})\) we have_ \[\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};1)\leq b_{\mathsf{TL}}(G^{\prime},v_{1}v_{2}v_{3};1)\leq M,\] \[\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};0)=b_{\mathsf{TL}}(G^{\prime},v_{1};0).\]

Proof.: The first set of inequalities follows since there are at most \(T\) and at least \(T-(\mu-1)\) available timestamps for each edge with multiplicity at most \(\mu-1\). For the second set of inequalities, there are at most \(M\) choices for an edge, at most \(2B_{L}+4B_{D}\) choices such that the edge is not simple, and at most \(4\Delta\) choices such that some of the nodes are not distinct. For the equality, observe that \(\mathbf{W}^{\prime}\) (in addition to \(\mathbf{D}\)) determines the number of incident simple temporal edges at all nodes.

**Lemma 19**.: _For all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{0}(\mathbf{D})\) we have_ \[f_{\mathsf{A}_{m,n}}(G^{\prime})\leq\overline{f}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime}).\]

Proof.: The number of edges and timestamps needed for the \(\mathsf{A}_{m,n}\) switching is \(2(m+n)\) and there are at most \(\Delta\) choices for each incident edge, and at most \(T\) choices for each timestamp.
**Lemma 20**.: _For all \(G^{\prime\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{0}(\mathbf{D})\) we have_ \[\underline{b}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})\leq b_{\mathsf{A}_{m,n}}(G^{\prime\prime},v_{1}\ldots v_{5})\leq M^{m+n}T^{2(m+n)}.\]

Proof.: The number of switchings which can produce a given graph corresponds to the number of choices for edges and timestamps needed to reverse the switching. Reversing the \(\mathsf{A}_{m,n}\) switching requires \(m+n\) edges and \(2(m+n)\) timestamps. For each edge, there are at most \(M\) choices, at most \(2B_{L}+4B_{D}\) choices such that the edge is not simple, and at most \(2\Delta\) choices for each node already chosen such that the nodes are not distinct. For each timestamp, there are at most \(T\) choices, and at least \(T-(\Delta-1)\) choices such that the edge does not exist in \(G^{\prime\prime}\).

Proof of Lemma 9.: Let \(c_{i,j}=\sum_{1\leq t\leq T}\mathbf{1}_{w_{i,j,t}>0}\) denote the number of distinct temporal edges between two given nodes \(v_{i},v_{j}\) and \(N(v_{i})\) the (multi-)set of incident temporal edges at a given node \(v_{i}\). Then by Lemma 17, after the f-rejection in step 3 a given graph \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\) is produced with probability \[\sum_{\begin{subarray}{c}(\{v_{1},v_{2}\},t_{4})\neq(\{v_{1},v_{3}\},t_{5})\in N(v_{1})\\ v_{1}\neq v_{2},w_{1,2,4}=1\\ v_{1}\neq v_{3},w_{1,3,5}=1\end{subarray}}\sum_{\begin{subarray}{c}(\{v_{4},v_{5}\},t_{6})\in E\\ v_{4}\neq v_{5},w_{4,5,6}=1\\ v_{4}\notin\{v_{1},v_{2}\}\\ v_{5}\notin\{v_{1},v_{3}\}\end{subarray}}\frac{(T-c_{2,4})(T-c_{3,5})}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\] where \(v_{1}\) is the node at which the temporal single-loop was removed. Thus, in particular, \(G^{\prime}\) is produced via switchings where we fix the three created edges \((\{v_{1},v_{2}\},t_{4})\), \((\{v_{1},v_{3}\},t_{5})\), \((\{v_{4},v_{5}\},t_{6})\) to three given edges which satisfy the conditions with probability \[\frac{(T-c_{2,4})(T-c_{3,5})}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\propto b_{\mathsf{TL}}(G^{\prime},v_{1}v_{2}v_{3}v_{4}v_{5};2)\] and by Lemma 18 after steps 6 and 7 with probability proportional to \(\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)=T-(\mu-1)\) if \(m_{2,4},m_{3,5}<\mu\) (which implies \(c_{2,4},c_{3,5}<\mu\)) and probability \(0\) if any of \(m_{2,4}\geq\mu\), \(m_{3,5}\geq\mu\). Now, if \(m_{2,4}=m_{3,5}=0\), we perform a type \(\mathsf{A}_{m,n}\) switching with probability \(p_{\mathsf{A}}(\mathsf{A}_{m,n})\), or restart the algorithm with this probability if \(0<m_{2,4},m_{3,5}<\mu\).
Thus, if \(0\leq m_{2,4},m_{3,5}<\mu\), the probability of producing \(G^{\prime}\) via switchings which create the three fixed edges is \[p_{\mathsf{A}}(\mathsf{I})\frac{\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}.\] If instead \(\mu\leq m_{2,4}<\min\{d_{2},d_{4}\}\) or \(\mu\leq m_{3,5}<\min\{d_{3},d_{5}\}\), then \(G^{\prime}\) is produced only via an \(\mathsf{A}_{m_{2,4},m_{3,5}}\) switching on a graph with \(m_{2,4}=m_{3,5}=0\) and by Lemma 19 and Lemma 20, after the f- and b-rejections in steps \(9c\) and \(9e\), the probability of producing \(G^{\prime}\) in this way is \[p_{\mathsf{A}}(\mathsf{A}_{m_{2,4},m_{3,5}})\frac{\underline{b}_{\mathsf{A}_{m_{2,4},m_{3,5}}}(\mathbf{W}^{\prime})}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}.\] It is now straightforward to verify that the probabilities \(p_{\mathsf{A}}(\mathsf{I})\) and \(p_{\mathsf{A}}(\mathsf{A}_{m,n})\) as specified for Stage 1 equalize the expressions given above. Thus, after step 9, the probability of producing a given graph \(G^{\prime}\) via switchings which create the three fixed edges no longer depends on \(c_{2,4}\) or \(c_{3,5}\). It only remains to show that step 10 equalizes the probabilities over all choices of the three edges \((\{v_{1},v_{2}\},t_{4})\), \((\{v_{1},v_{3}\},t_{5})\) and \((\{v_{4},v_{5}\},t_{6})\). To this end, observe that the probability of producing \(G^{\prime}\) via switchings where we only fix the choices of \((\{v_{1},v_{2}\},t_{4})\) and \((\{v_{1},v_{3}\},t_{5})\) is \[\sum_{\begin{subarray}{c}(\{v_{4},v_{5}\},t_{6})\in E\\ v_{4}\neq v_{5},w_{4,5,6}=1\\ v_{4}\notin\{v_{1},v_{3}\}\\ v_{5}\notin\{v_{1},v_{3}\}\end{subarray}}p_{\mathsf{A}}(\mathsf{I})\frac{\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\propto b_{\mathsf{TL}}(G^{\prime},v_{1}v_{2}v_{3};1)\] and by Lemma 18 after the second b-rejection in step 10, this probability is proportional to \(\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};1)\). Finally, to show that the probability is equal over all choices of \((\{v_{1},v_{2}\},t_{4})\), \((\{v_{1},v_{3}\},t_{5})\), observe that by Lemma 18, we have \[\sum_{\begin{subarray}{c}(\{v_{1},v_{2}\},t_{4})\neq(\{v_{1},v_{3}\},t_{5})\in N(v_{1})\\ v_{1}\neq v_{2},w_{1,2,4}=1\\ v_{1}\neq v_{3},w_{1,3,5}=1\end{subarray}}p_{\mathsf{A}}(\mathsf{I})\frac{\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};1)\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\propto b_{\mathsf{TL}}(G^{\prime},v_{1};0)=\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};0)\] and thus \(G^{\prime}\) is produced with probability \[p_{\mathsf{A}}(\mathsf{I})\frac{\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};0)\,\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};1)\,\underline{b}_{\mathsf{TL}}(\mathbf{W}^{\prime};2)}{\overline{f}_{\mathsf{TL}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\] which only depends on \(\mathbf{W}\), \(\mathbf{W}^{\prime}\) and \(\mathbf{D}\).

The proof of Lemma 14 requires showing that the upper and lower bounds for Stage 2 are correct. To this end, let \(\mathcal{M}_{1}(\mathbf{D})\subseteq\mathcal{M}_{0}(\mathbf{D})\) denote the set of temporal multigraphs which satisfy the initial conditions and which are output by Stage 1, i.e. which contain no temporal single-loops. Then, we obtain the following results.
**Lemma 21**.: _For all \(G\in\mathcal{S}(\mathbf{W})\subseteq\mathcal{M}_{1}(\mathbf{D})\) we have_ \[f_{\mathsf{TD}_{1}}(G) \leq\overline{f}_{\mathsf{TD}_{1}}(\mathbf{W}),\] \[f_{\mathsf{TD}_{0}}(G) \leq\overline{f}_{\mathsf{TD}_{0}}(\mathbf{W}).\]

Proof.: There are at most \(M^{2}\) choices for the (oriented) edges \((\{v_{3},v_{5}\},t_{2})\), \((\{v_{4},v_{6}\},t_{3})\) and at most \(T^{3}\) choices for the timestamps \(t_{4},t_{5},t_{6}\in[1,T]\) needed to perform a TD\({}_{1}\) switching. The TD\({}_{0}\) switching uses twice as many edges and timestamps.

**Lemma 22**.: _For all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{1}(\mathbf{D})\) where \(m_{3,5}<\mu\) and \(m_{4,6}<\mu\) we have_ \[\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};2)\leq b_{\mathsf{TD}_{1}}(G^{\prime},v_{1}\ldots v_{6};2)\leq T^{2},\] _for all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{1}(\mathbf{D})\) where \(m_{3,7},m_{4,8},m_{5,9},m_{6,10}<\mu\) we have_ \[\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};2)\leq b_{\mathsf{TD}_{0}}(G^{\prime},v_{1}\ldots v_{10};2)\leq T^{4}\] _and for all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{1}(\mathbf{D})\) we have_ \[\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};1) \leq b_{\mathsf{TD}_{1}}(G^{\prime},v_{1}\ldots v_{4};1)\leq M,\] \[\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};1) \leq b_{\mathsf{TD}_{0}}(G^{\prime},v_{1}\ldots v_{6};1)\leq M^{2},\] \[\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};0) =b_{\mathsf{TD}_{1}}(G^{\prime},v_{1}v_{2};0),\] \[\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};0) =b_{\mathsf{TD}_{0}}(G^{\prime},v_{1}v_{2};0).\]

Proof.: The first two sets of inequalities follow since there are at most \(T\) and at least \(T-(\mu-1)\) available timestamps for each edge. For the third and fourth sets of inequalities, observe that there are at most \(M\) choices for an edge, at most \(4B_{D}\) choices such that the edge is not simple, and at most \(4\Delta\) choices such that the nodes are not distinct. The equalities follow from the observation that \(\mathbf{W}^{\prime}\) (in addition to \(\mathbf{D}\)) determines the number of incident simple temporal edges at each node.

**Lemma 23**.: _For all \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{1}(\mathbf{D})\) we have_ \[f_{\mathsf{B}_{m,n}}(G^{\prime})\leq\overline{f}_{\mathsf{B}_{m,n}}(\mathbf{W}^{\prime}),\] \[f_{\mathsf{C}_{m,n,o,p}}(G^{\prime})\leq\overline{f}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime}).\]

Proof.: The number of edges and timestamps needed for the \(\mathsf{B}_{m,n}\) switching is \(2(m+n)\), the number of edges and timestamps needed for the \(\mathsf{C}_{m,n,o,p}\) switching is \(2(m+n+o+p)\), and there are at most \(\Delta\) choices for each incident edge, and at most \(T\) choices for each timestamp.

**Lemma 24**.: _For all \(G^{\prime\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\subseteq\mathcal{M}_{1}(\mathbf{D})\) we have_ \[\underline{b}_{\mathsf{B}_{m,n}}(\mathbf{W}^{\prime})\leq b_{\mathsf{B}_{m,n}}(G^{\prime\prime},v_{1}\ldots v_{6})\leq M^{m+n}T^{2(m+n)},\] \[\underline{b}_{\mathsf{C}_{m,n,o,p}}(\mathbf{W}^{\prime})\leq b_{\mathsf{C}_{m,n,o,p}}(G^{\prime\prime},v_{1}\ldots v_{10})\leq M^{m+n+o+p}T^{2(m+n+o+p)}.\]

Proof.: The number of edges and timestamps needed to reverse the \(\mathsf{B}_{m,n}\) switching is \(m+n\) and \(2(m+n)\), the number of edges and timestamps needed to reverse the \(\mathsf{C}_{m,n,o,p}\) switching is \(m+n+o+p\) and \(2(m+n+o+p)\).
There are at most \(M\) choices for each edge, at most \(4B_{D}\) choices such that the edge is not simple, and at most \(2\Delta\) choices for each node already chosen such that the nodes are not distinct. Finally, there are at most \(T\) and at least \(T-(\Delta-1)\) choices for each timestamp such that the edge does not exist in \(G^{\prime\prime}\).

Proof of Lemma 14.: Using Lemma 21, Lemma 22, Lemma 23, and Lemma 24 in an argument similar in style to the proof of Lemma 9, after removing a temporal double-edge between two nodes \(v_{1}\) and \(v_{2}\) in step 5, a given graph \(G^{\prime}\in\mathcal{S}(\mathbf{W}^{\prime})\) is produced with probability \[p(\mathsf{TD}_{1})p_{\mathsf{B}}(\mathsf{I})\frac{\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};0)\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};1)\underline{b}_{\mathsf{TD}_{1}}(\mathbf{W}^{\prime};2)}{\overline{f}_{\mathsf{TD}_{1}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\] if \(m_{1,2}(G^{\prime})=1\) and with probability \[p(\mathsf{TD}_{0})p_{\mathsf{C}}(\mathsf{I})\frac{\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};0)\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};1)\underline{b}_{\mathsf{TD}_{0}}(\mathbf{W}^{\prime};2)}{\overline{f}_{\mathsf{TD}_{0}}(\mathbf{W})|\mathcal{S}(\mathbf{W})|}\] if \(m_{1,2}(G^{\prime})=0\). Specifying the probabilities \(p(\mathsf{TD}_{1})\) and \(p(\mathsf{TD}_{0})\) as done for Stage 2 then suffices to equalize the probabilities over all graphs in \(\mathcal{S}(\mathbf{W}^{\prime})\).

We are now able to show the following.

**Lemma 25**.: _Given a tuple \(\mathbf{D}=(\mathbf{d},T)\) as input, T-Gen outputs a uniform random sample \(G\in\mathcal{G}(\mathbf{D})\)._

Proof.: If the initial graph \(G\) is simple, then the claim follows by Theorem 3. Otherwise the initial graph \(G\) is uniformly distributed in the set \(\mathcal{S}(\mathbf{W}(G))\subseteq\mathcal{M}(\mathbf{D})\) for some \(\mathbf{W}(G)\neq\mathbf{0}^{n\times n\times T}\). If \(G\) satisfies the initial conditions, then all entries \(\mathbf{W}(G)_{i,i,t}\) are either \(0\) or \(1\) and all entries \(\mathbf{W}(G)_{i,j,t}\) such that \(i\neq j\) are either \(0\) or \(2\). Now, it is straightforward to check that each iteration of Stage 1 corresponds to a map \(\mathcal{S}(\mathbf{W})\to\mathcal{S}(\mathbf{W}^{\prime})\) where \(\mathbf{W}^{\prime}\) is the matrix obtained from \(\mathbf{W}=\mathbf{W}(G)\) by setting exactly one entry \(\mathbf{W}_{i,i,t}=1\) to \(0\), and Stage 1 ends once all such entries have been set to \(0\). Similarly, each iteration of Stage 2 corresponds to a map \(\mathcal{S}(\mathbf{W})\to\mathcal{S}(\mathbf{W}^{\prime})\) where \(\mathbf{W}^{\prime}\) is the matrix obtained from \(\mathbf{W}\) by setting exactly two entries \(\mathbf{W}_{i,j,t}=\mathbf{W}_{j,i,t}=2\) where \(i\neq j\) to \(0\), and Stage 2 ends once all such entries have been set to \(0\). Thus, after Stage 1 and Stage 2 end, the final graph \(G\) is a simple temporal graph with \(\mathbf{W}(G)=\mathbf{0}^{n\times n\times T}\), and by Lemma 9 and Lemma 14, this graph is uniformly distributed in \(\mathcal{S}(\mathbf{0}^{n\times n\times T})=\mathcal{G}(\mathbf{D})\) as claimed.

We move on to the run time proof.

### Runtime of T-Gen

The following additional results are needed for the proofs of Lemma 6, Lemma 10, Lemma 15, Lemma 11, and Lemma 16.
**Lemma 26**.: _Given that \(\Delta^{2+\epsilon}=O(M)\) for a constant \(\epsilon>0\) and \(T-\Delta=\Omega(T)\), we have \(p_{\mathsf{A}}(\mathsf{I})=1-o(\Delta^{-1})\)._

Proof.: Let \(k=m+n\). Then, the probability of choosing a type \(\mathsf{A}_{m,n}\) switching is at most \[p_{\mathsf{A}}(\mathsf{A}_{m,n})=p_{\mathsf{A}}(\mathsf{I})\frac{\overline{f}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})}{\underline{b}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})}<\frac{\Delta^{2k}T^{2k}}{(M-B_{L}-B_{D}-4(k+3)\Delta)^{k}(T-(\Delta-1))^{2k}}.\] Now, if \(\Delta^{2+\epsilon}=O(M)\) and \(T-\Delta=\Omega(T)\), then \[\frac{\Delta^{2k}T^{2k}}{(M-B_{L}-B_{D}-4(k+3)\Delta)^{k}(T-(\Delta-1))^{2k}}=O\left(\Delta^{-\epsilon k}\right)=o(\Delta^{-k/\mu})\] by \(B_{L}+B_{D}+4(k+3)\Delta=O(\Delta^{2})=o(M)\) and \(\epsilon>\frac{1}{\mu}\). Thus, the type \(\mathsf{I}\) switching is chosen with probability at least \[p_{\mathsf{A}}(\mathsf{I})=1-\sum_{\begin{subarray}{c}0\leq m,n<\Delta\\ \mu\leq\max\{m,n\}\end{subarray}}p_{\mathsf{A}}(\mathsf{A}_{m,n})>1-\sum_{\mu\leq k<2\Delta}\binom{k+1}{1}o\left(\Delta^{-k/\mu}\right)=1-o\left(\Delta^{-1}\right)\] as claimed.

**Lemma 27**.: _Given that \(\Delta^{2+\epsilon}=O(M)\) for a constant \(\epsilon>0\) and \(T-\Delta=\Omega(T)\), we have \(p_{\mathsf{B}}(\mathsf{I})=1-o(\Delta^{-1})\) and \(p_{\mathsf{C}}(\mathsf{I})=1-o(\Delta^{-1})\)._

Proof.: By an argument similar to the proof of Lemma 26.

Proof of Lemma 6.: By Lemma 5, the highest multiplicity of any ordinary multi-edge in the initial graph is at most \(\eta=\lfloor 2+2/\epsilon\rfloor\) with high probability. To complete the proof, we show that with high probability no two edges created due to switchings share the same node set, which in turn implies that the highest multiplicity of any ordinary multi-edge is at most \(\eta+1=\lfloor 3+2/\epsilon\rfloor:=\mu\). First, observe that Lemma 26 and Lemma 27 imply that the probability of performing at least one auxiliary switching in either of Stage 1 or Stage 2 is at most \((B_{L}+B_{D})o(\Delta^{-1})=o(1)\). With the remaining probability, only \(\mathsf{TL}\), \(\mathsf{TD}_{1}\), and \(\mathsf{TD}_{0}\) switchings are performed. The \(\mathsf{TL}\) switching creates three edges with node sets \(\{v_{1},v_{2}\}\), \(\{v_{1},v_{3}\}\) and \(\{v_{4},v_{5}\}\) where \(v_{1}\) is incident with a temporal single-loop, and where \(v_{2}\) and \(v_{4}\), \(v_{3}\) and \(v_{5}\) are determined by choosing the switching uniformly at random, which implies that these nodes are incident with edges \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) chosen uniformly at random among all edges which satisfy the conditions. Likewise, the \(\mathsf{TD}_{1}\) switching creates three edges with node sets \(\{v_{1},v_{3}\}\), \(\{v_{2},v_{4}\}\) and \(\{v_{5},v_{6}\}\) where \(v_{1}\) and \(v_{2}\) are incident with a temporal double-edge, \(v_{3}\) and \(v_{5}\), \(v_{4}\) and \(v_{6}\) are incident with edges satisfying the conditions chosen uniformly at random, and the \(\mathsf{TD}_{0}\) switching adds two such sets of edges. Now, observe that when performing a \(\mathsf{TL}\) switching at some node \(v_{1}\), there are at least \(M-2B_{L}-4B_{D}-2\Delta\) choices for the edge \((\{v_{2},v_{4}\},t_{2})\) and at least \(M-2B_{L}-4B_{D}-3\Delta\) choices for the second edge \((\{v_{3},v_{5}\},t_{3})\).
Thus, the probability that a given node \(v\) with degree \(d\) takes on the role of one of \(v_{2}\), \(v_{3}\), \(v_{4}\), \(v_{5}\) in such a switching chosen uniformly at random is at most \[\frac{2d}{M-2B_{L}-4B_{D}-2\Delta}+\frac{2d}{M-2B_{L}-4B_{D}-3\Delta}=O\left(\frac{\Delta}{M}\right)\] due to \(d\leq\Delta\) and \(O(B_{L}+B_{D}+\Delta)=o(M)\implies M-O(B_{L}+B_{D}+\Delta)=\Omega(M)\). Similar calculations give the same asymptotic probability of a given node being involved in a \(\mathsf{TD}_{1}\) or \(\mathsf{TD}_{0}\) switching. This leads to the following bounds on the probability of creating two edges with the same node set in terms of the number of iterations of a stage performed. Fix any stage, and any node \(v\), and let \(i\) denote the number of iterations performed overall, and \(j\) the number of iterations performed such that the non-simple edge removed is incident at \(v\). Then, the number of edges created overall is \(O(i)\) and the expected number of edges created which are incident at \(v\) is \(O(j+i\Delta/M)\). Thus, the probability that the next switching performed at \(v\) creates an edge which shares the same node set as any edges created prior is \(O(i\Delta^{2}/M^{2})\) for edges not incident at \(v\), and \(O(j\Delta/M+i\Delta^{2}/M^{2})\) for edges incident at \(v\). Now, recall that by the initial conditions, there are at most \(\lambda=O(1)\) incident temporal single-loops and at most \(\kappa=O(1)\) incident temporal double-edges at any node, and at most \(B_{L}=M_{2}/M=O(\Delta)\) temporal single-loops and at most \(B_{D}=M_{2}^{2}/MT=O(\Delta^{2}/T)\) temporal double-edges overall. Then, starting with Stage 1, the probability of creating two edges which share the same node set due to any of the \(\mathsf{TL}\) switchings is at most \[\sum_{1\leq i\leq B_{L}}\sum_{1\leq j\leq\lambda}\left(O\left(i\frac{\Delta^{2}}{M^{2}}\right)+O\left(j\frac{\Delta}{M}+i\frac{\Delta^{2}}{M^{2}}\right)\right) <B_{L}\lambda\left(O\left(B_{L}\frac{\Delta^{2}}{M^{2}}\right)+O\left(\lambda\frac{\Delta}{M}\right)\right)\] \[=O\left(\frac{\Delta^{4}}{M^{2}}+\frac{\Delta^{2}}{M}\right)\] \[=o(1)\] by \(B_{L}=O(\Delta)\), \(\lambda=O(1)\), and \(\Delta^{2+\epsilon}=O(M)\). Likewise, for Stage 2, the probability of creating two edges which share the same node set due to any of the \(\mathsf{TD}_{1}\) or \(\mathsf{TD}_{0}\) switchings is at most \[\sum_{1\leq i\leq B_{D}} \sum_{1\leq j\leq\kappa}\left(O\left((B_{L}+i)\,\frac{\Delta^{2}}{M^{2}}\right)+O\left((\lambda+j)\,\frac{\Delta}{M}+(B_{L}+i)\,\frac{\Delta^{2}}{M^{2}}\right)\right)\] \[<B_{D}\kappa\left(O\left((B_{L}+B_{D})\,\frac{\Delta^{2}}{M^{2}}\right)+O\left((\lambda+\kappa)\,\frac{\Delta}{M}\right)\right)\] \[=O\left(\frac{\Delta^{6}}{T^{2}M^{2}}+\frac{\Delta^{5}}{TM^{2}}+\frac{\Delta^{3}}{TM}\right)\] \[=o(1)\] by \(B_{D}=O(\Delta^{2}/T)\), \(\kappa=O(1)\), \(\Delta=O(T)\) and \(\Delta^{2+\epsilon}=O(M)\).

Proof of Lemma 10.: By Lemma 26 the probability that Stage 1 restarts in steps \(9a-f\) is smaller than \(B_{L}o(\Delta^{-1})=o(1)\) and by Lemma 6 we can assume that the highest multiplicity of an edge in \(G\) is at most \(\mu\). Hence, the probability of not restarting is at most the probability of not f- or b-rejecting in steps 3, 7 or 10 under this assumption.
The probability of an f-rejection in step 3 equals the probability of choosing edges \((\{v_{2},v_{4}\},t_{2})\), \((\{v_{3},v_{5}\},t_{3})\) and timestamps \(t_{4},t_{5},t_{6}\in[1,T]\) which do not fulfill the conditions defined for the \(\mathsf{TL}\) switching. For each edge, this probability is at most \((2B_{L}+4B_{D}+3\Delta)/M\) as there are at least \(M\) choices for each (oriented) edge, at most \(2B_{L}\) choices for a loop, at most \(4B_{D}\) choices for an edge contained in a temporal double-edge and at most \(3\Delta\) choices for an edge such that \(v_{2},v_{3},v_{4},v_{5}\) is not distinct from \(v_{1}\) or \(v_{4}\) is not distinct from \(v_{5}\). For the timestamps, we can assume that \(m_{1,2},m_{1,3},m_{4,5}\leq\mu\) so for each timestamp the probability of a rejection is at most \((\mu-1)/T\). Then, the probability of not f-rejecting in a given iteration is at least \[\frac{(M-2B_{L}-4B_{D}-3\Delta)^{2}(T-(\mu-1))^{3}}{M^{2}T^{3}}=\left(1-O\left(\frac{\Delta}{M}\right)\right)^{2}\left(1-O\left(\frac{\mu}{T}\right)\right)^{3}.\] In addition, by Lemma 18, the probability of a b-rejection in step 7 is at most \((\mu-1)/T\) for each timestamp, and the probability of a b-rejection in step 10 is at most \((2B_{L}+4B_{D}+4\Delta)/M\). Thus, the probability of not b-rejecting in a given iteration is at least \[\frac{(M-2B_{L}-4B_{D}-4\Delta)(T-(\mu-1))^{2}}{MT^{2}}=\left(1-O\left(\frac{\Delta}{M}\right)\right)\left(1-O\left(\frac{\mu}{T}\right)\right)^{2}.\] Finally, by the initial conditions there are at most \(B_{L}=M_{2}/M<\Delta\) iterations of Stage 1, and by using \(\mu=O(1)\), the probability that the algorithm does not restart in Stage 1 is at least \[\left(\left(1-O\left(\frac{\Delta}{M}\right)\right)^{2}\left(1-O\left(\frac{\mu}{T}\right)\right)^{3}\right)^{B_{L}}=\exp\left(-O\left(\frac{\Delta^{2}}{M}\right)-O\left(\frac{\Delta}{T}\right)\right).\qed\]

Proof of Lemma 11.: By the initial conditions, there are at most \(B_{L}=M_{2}/M<\Delta\) iterations of Stage 1. Thus, the claim follows if an iteration runs in expected time \(O(\Delta)\). The steps of an iteration which require attention are the f-rejection in step 3, the b-rejections in steps 7 and 10, choosing an auxiliary switching in steps \(8-9\), the f-rejection in step \(9c\), and the b-rejection in step \(9f\). First, note that as observed by [1], there is a simple trick to implement an f-rejection step at little additional cost. In the case of step 3, it suffices to choose the two edges and three timestamps required for the TL switching uniformly at random and restart if those choices do not satisfy the conditions given in Definition 7. Then, since there are \(\overline{f}_{\text{TL}}(\mathbf{W})=M^{2}T^{3}\) ways to choose two (oriented) edges and three timestamps and \(f_{\text{TL}}(G)\) choices yield a switching which can be performed on the current graph \(G\), we restart with exactly the desired probability \(1-f_{\text{TL}}(G)/\overline{f}_{\text{TL}}(\mathbf{W})\). The b-rejections in steps 7 and 10 require computing the quantities \(b_{\text{TL}}(G^{\prime},v_{1}v_{2}v_{3}v_{4}v_{5};2)\), and \(b_{\text{TL}}(G^{\prime},v_{1}v_{2}v_{3};1)\), \(b_{\text{TL}}(G^{\prime},v_{1};0)\). Computing \(b_{\text{TL}}(G^{\prime},v_{1}v_{2}v_{3}v_{4}v_{5};2)\) only requires look-ups of the multiplicities of \(\{v_{2},v_{4}\}\) and \(\{v_{3},v_{5}\}\) which take time \(O(1)\) if the implementation maintains a data structure to store the multiplicity of the edges in the graph.
To compute \(b_{\text{TL}}(G^{\prime},v_{1}v_{2}v_{3};1)\), it suffices to iterate through the lists of incident edges at nodes \(v_{1}\), \(v_{2}\), \(v_{3}\) and subtract the number of simple edges which collide with those nodes from the total number of simple edges, which takes time \(O(\Delta)\). Additionally, computing \(b_{\text{TL}}(G^{\prime},v_{1};0)\) can be implemented in time \(O(1)\) by maintaining the number of simple edges incident at each node. Computing the type distribution for steps \(8-9\) naively would take time \(\Theta(\Delta^{2})\), but there is a simple trick to speed up the computation. Re-purposing the proof of Lemma 26, we see that \[\sum_{\begin{subarray}{c}0\leq m,n<\Delta\\ \mu\leq\max(m,n)\\ m+n=k\end{subarray}}p_{\text{A}}(\text{A}_{m,n})<\binom{k+1}{1}p_{\text{A}}(\text{A}_{k,0})<\binom{k+1}{1}\frac{\overline{f}_{\text{A}_{k,0}}(\mathbf{W})}{\underline{b}_{\text{A}_{k,0}}(\mathbf{W}^{\prime})}=:\overline{p}_{\text{A}}(\text{A}_{k}).\] Thus, in time \(O(\Delta)\), we can compute \[\underline{p}_{\text{A}}(\text{I}):=1-\sum_{\mu\leq k<2\Delta-1}\overline{p}_{\text{A}}(\text{A}_{k})\] as a lower bound on the probability of performing the identity switching, and with this probability step 9 can be skipped. Otherwise, if one of the auxiliary switchings \(\text{A}_{m,n}\) where \(m+n=k\) is chosen, then we compute the exact probabilities of these at most \(k+1=O(\Delta)\) switchings, and either choose a switching in accordance with the exact probabilities or restart the algorithm with probability \[1-\sum_{\begin{subarray}{c}0\leq m,n<\Delta\\ \mu\leq\max\{m,n\}\\ m+n=k\end{subarray}}\frac{p_{\mathsf{A}}(\mathsf{A}_{m,n})}{\overline{p}_{\mathsf{A}}(\mathsf{A}_{k})}\] to correct for the overestimate. Finally, by Lemma 26 the probability that steps \(9a-f\) are executed is at most \(o(\Delta^{-1})\), so it suffices if these steps can be implemented in time \(O(\Delta^{2})\). For the f-rejection in step \(9c\), we first pick the incident edges of \(v_{2}\) and \(v_{4}\) and the timestamps uniformly at random, and then restart if these choices do not satisfy the conditions in Definition 8. This results in a probability of not restarting of \(f_{\mathsf{A}_{m,n}}(G^{\prime})/(d_{2}^{m}d_{4}^{n}T^{2(m+n)})\), and to reach the desired probability \(f_{\mathsf{A}_{m,n}}(G^{\prime})/\overline{f}_{\mathsf{A}_{m,n}}(\mathbf{W}^{\prime})\), it suffices to restart with probability \(1-(d_{2}^{m}d_{4}^{n}T^{2(m+n)})/(\Delta T)^{2(m+n)}\) afterwards. To implement the b-rejection in step \(9f\), we need to compute \(b_{\mathsf{A}_{m,n}}(G^{\prime\prime},v_{1}v_{2}v_{3}v_{4}v_{5})\) which is the number of \(\mathsf{A}_{m,n}\) switchings which can produce the graph \(G^{\prime\prime}\). As this is equal to the number of ways to reverse an \(\mathsf{A}_{m,n}\) switching on the graph \(G^{\prime\prime}\), we can compute this quantity as the number of choices for \(m\) and \(n\) edges between \(v_{2},v_{4}\) and \(v_{3},v_{5}\), respectively, \(m+n\) additional edges, and \(2(m+n)\) timestamps which satisfy the conditions. Computing the number of choices for the edges between \(v_{2},v_{4}\) and \(v_{3},v_{5}\) can be done in time \(O(1)\) by looking up the number of simple edges between these nodes. Choosing an additional edge, and checking if it satisfies the conditions, e.g.
if it is simple, and does not share nodes with the other edges, takes time \(O(1)\) for the first edges chosen and time \(O(\Delta)\) for the last edges, so for all \(m+n<2\Delta-1\) edges this takes at most time \(O(\Delta^{2})\). Finally, computing the number of available timestamps again only requires looking up the multiplicities of \(2(m+n)=O(\Delta)\) edges and takes time \(O(\Delta)\).

Proof of Lemma 15.: By Lemma 27 the probability that Stage 2 restarts in steps \(6ei-vi\) or \(7ei-vi\) is smaller than \(B_{D}o(\Delta^{-1})=o(1)\) and by Lemma 6 we can assume that the highest multiplicity of an edge in \(G\) is at most \(\mu\). Hence, the probability of not restarting is at most the probability of not f- or b-rejecting in steps \(4\), \(6c\), \(6f\), \(7c\) or \(7f\) under this assumption. Furthermore, it is straightforward to check that the probability of f- or b-rejecting in a given iteration is larger if \(\theta=\mathsf{TD}_{0}\) so we focus on this case. Using a similar argument as for Lemma 10, the probability of not f-rejecting in step 4 of a given iteration is at least \[\frac{(M-4B_{D}-5\Delta)^{4}(T-(\mu-1))^{6}}{M^{4}T^{6}}=\left(1-O\left(\frac{\Delta}{M}\right)\right)^{4}\left(1-O\left(\frac{\mu}{T}\right)\right)^{6}.\] In addition, by Lemma 22 and Lemma 23, the probability of a b-rejection in step \(6c\) or \(7c\) is at most \((\mu-1)/T\) for each timestamp, and the probability of a b-rejection in step \(6f\) or \(7f\) is at most \((4B_{D}+4\Delta)/M\). Then, the probability of not b-rejecting in a given iteration is at least \[\frac{(M-4B_{D}-4\Delta)^{2}(T-(\mu-1))^{4}}{M^{2}T^{4}}=\left(1-O\left(\frac{\Delta}{M}\right)\right)^{2}\left(1-O\left(\frac{\mu}{T}\right)\right)^{4}.\] Finally, by the initial conditions there are at most \(B_{D}=M_{2}^{2}/MT<\Delta^{2}/T\) iterations of Stage 2, and thus the probability that the algorithm does not restart in Stage 2 is at least \[\left(\left(1-O\left(\frac{\Delta}{M}\right)\right)^{4}\left(1-O\left(\frac{\mu}{T}\right)\right)^{6}\right)^{B_{D}}=\exp\left(-O\left(\frac{\Delta^{3}}{MT}\right)-O\left(\frac{\Delta^{2}}{T^{2}}\right)\right).\qed\]

Proof of Lemma 16.: By the initial conditions, there are at most \(B_{D}=M_{2}^{2}/MT<\Delta^{2}/T\) iterations of Stage 2, so the claim follows if an iteration runs in expected time \(O(\Delta)\). We focus on the f-rejection in step 4, the b-rejections in steps \(6c\), \(6f\) and \(7c\), \(7f\), choosing an auxiliary switching in steps \(6d-e\) and \(7d-e\), the f-rejections in steps \(6eiii\) and \(7eiii\), and the b-rejections in steps \(6ev\) and \(7ev\). To implement the f-rejection in step 4, it again suffices to choose the two (or four) edges and three (or six) timestamps needed for the \(\mathsf{TD}_{1}\) or \(\mathsf{TD}_{0}\) switching uniformly at random and restart if those choices do not satisfy the conditions given in Definition 12. By similar arguments as in the proof of Lemma 11, the quantities needed for the b-rejections in steps \(6c\), \(6f\) and \(7c\), \(7f\) can be computed in time \(O(\Delta)\). To choose an auxiliary switching in steps \(6d-e\) the method described in the proof of Lemma 11 can be re-used.
For steps \(7d-e\), define \(\overline{p}_{\mathsf{C}}(\mathsf{C}_{k})\) by observing that \[\sum_{\begin{subarray}{c}0\leq m,n,o,p\leq\Delta\\ \mu<\max\{m,n,o,p\}\\ m+n+o+p=k\end{subarray}}p_{\mathsf{C}}(\mathsf{C}_{m,n,o,p})<\binom{k+1}{3}\frac{\overline{f}_{\mathsf{C}_{k,0,0,0}}(\mathbf{W})}{\underline{b}_{\mathsf{C}_{k,0,0,0}}(\mathbf{W}^{\prime})}=:\overline{p}_{\mathsf{C}}(\mathsf{C}_{k}).\] Then, we can compute \[\underline{p}_{\mathsf{C}}(\mathsf{I}):=1-\sum_{\mu\leq k<4\Delta-3}\overline{p}_{\mathsf{C}}(\mathsf{C}_{k})\] in time \(O(\Delta)\). To efficiently choose one of the switchings where \(m+n+o+p=k\), observe that \(p_{\mathsf{C}}(\mathsf{C}_{m,n,o,p})\) only depends on \(k\) and not the individual choices of \(m,n,o,p\), thus, it suffices to compute the exact probability for one of the switchings, and the exact number of switchings, and then follow the steps described in the proof of Lemma 11. By Lemma 27 the probability that steps \(6ev\) or \(7ev\) are executed is at most \(o(\Delta^{-1})\), so it suffices if these steps can be implemented in time \(O(\Delta^{2})\). This is possible by following the steps described towards the end of the proof of Lemma 11 with the necessary modifications.

The following shows the efficiency of T-Gen.

**Lemma 28**.: _T-Gen runs in expected time \(O(M)\) for a tuple \(\mathbf{D}=(\mathbf{d},T)\) which satisfies \(\Delta^{2+\epsilon}=O(M)\) for a constant \(\epsilon>0\) and \(T-\Delta=\Omega(T)\)._

Proof.: By Lemma 4 and Lemma 5 the algorithm restarts at most \(O(1)\) times before finding a multigraph which satisfies the initial conditions. In addition, if the initial multigraph satisfies the conditions, then by Lemma 10 and Lemma 15, the algorithm restarts at most \(O(1)\) times during Stage 1 and Stage 2. The run time of the temporal configuration model is \(O(M)\), and if the initial multigraph satisfies the conditions, then by Lemma 11 and Lemma 16, the combined runtime of both Stage 1 and Stage 2 is \(O(\Delta^{3}/T)+O(\Delta^{2})=O(M)\).

### Proof of Theorem 2

Proof of Theorem 2.: The claim follows by Lemma 25 and Lemma 28.
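As a closing illustration of the implementation idea used in the proofs of Lemma 11 and Lemma 16, the sketch below (plain Python; the candidate set is a toy stand-in, not an actual switching count) demonstrates the uniform-proposal trick for f-rejections: proposing uniformly from an easily enumerable superset of size \(\overline{f}\) and restarting on invalid proposals accepts with probability exactly \(f/\overline{f}\), and accepted proposals are uniform over the valid choices.

```python
import random
from collections import Counter

def propose_with_f_reject(superset, is_valid, rng):
    """Pick uniformly from `superset` (size f_bar); return None (= restart)
    if the pick is invalid, so acceptance probability is f / f_bar and
    accepted picks are uniform over the valid choices."""
    choice = rng.choice(superset)
    return choice if is_valid(choice) else None

# toy example: f_bar = 10 candidate "switchings", only 4 of them are valid
superset = list(range(10))
valid = {2, 3, 5, 7}
rng = random.Random(1)
counts, restarts, trials = Counter(), 0, 100_000
for _ in range(trials):
    pick = propose_with_f_reject(superset, lambda c: c in valid, rng)
    if pick is None:
        restarts += 1
    else:
        counts[pick] += 1

print("acceptance rate ~ f/f_bar =", 1 - restarts / trials)       # ~ 0.4
print("accepted picks ~ uniform over the valid set:", dict(counts))
```

This is why the f-rejection steps add essentially no cost beyond drawing the proposal itself.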
2306.14732
Krylov complexity of modular Hamiltonian evolution
We investigate the complexity of states and operators evolved with the modular Hamiltonian by using the Krylov basis. In the first part, we formulate the problem for states and analyse different examples, including quantum mechanics, two-dimensional conformal field theories and random modular Hamiltonians, focusing on relations with the entanglement spectrum. We find that the modular Lanczos spectrum provides a different approach to quantum entanglement, opening new avenues in many-body systems and holography. In the second part, we focus on the modular evolution of operators and states excited by local operators in two-dimensional conformal field theories. We find that, at late modular time, the spread complexity is universally governed by the modular Lyapunov exponent $\lambda^{mod}_L=2\pi$ and is proportional to the local temperature of the modular Hamiltonian. Our analysis provides explicit examples where entanglement entropy is indeed not enough, however the entanglement spectrum is, and encodes the same information as complexity.
Pawel Caputa, Javier M. Magan, Dimitrios Patramanis, Erik Tonni
2023-06-26T14:33:40Z
http://arxiv.org/abs/2306.14732v1
# Krylov complexity of modular Hamiltonian evolution ###### Abstract We investigate the complexity of states and operators evolved with the modular Hamiltonian by using the Krylov basis. In the first part, we formulate the problem for states and analyse different examples, including quantum mechanics, two-dimensional conformal field theories and random modular Hamiltonians, focusing on relations with the entanglement spectrum. We find that the modular Lanczos spectrum provides a different approach to quantum entanglement, opening new avenues in many-body systems and holography. In the second part, we focus on the modular evolution of operators and states excited by local operators in two-dimensional conformal field theories. We find that, at late modular time, the spread complexity is universally governed by the modular Lyapunov exponent \(\lambda_{L}^{mod}=2\pi\) and is proportional to the local temperature of the modular Hamiltonian. Our analysis provides explicit examples where entanglement entropy is indeed not enough, however the entanglement spectrum is, and encodes the same information as complexity. ## I Introduction and summary In recent years quantum complexity has become a new exciting area within quantum many-body systems, quantum gravity and quantum field theory, see e.g. [1; 2; 3]. It provides a new perspective on the structure of quantum states as well as quantum dynamics, complementing that of quantum information. It is also instrumental in understanding black holes in holography [4] and quantum gravity [5; 6; 7; 8]. In this last context, it was argued by Susskind that "entanglement is not enough" [9], in particular if one seeks to understand aspects of the long time regime of chaotic systems and black holes. Complexity measures were then proposed as fine-grained probes of dynamics at late times. For this reason, most research in this direction has focused on properties of a real time, Hamiltonian evolution of complexity measures, trying to test their supremacy to entanglement measures. On the other hand, it is known that entanglement entropy contains only a small fraction of information about bipartite entangled states, and that the entanglement spectrum is a much more fine-grained measure of their structure [10]. In this vein, it is natural to ask for more direct relations between complexity and entanglement, and better characterize what types of entanglement measures are enough for the study of holography and quantum black holes. This is one of our main motivations in this work. To this end, we focus on arguably the most promising (in relation to the problems mentioned above) definition of complexity called, Krylov [11] or spread complexity [12], that can be applied both to quantum operators and states. This measure was inspired by pioneering works on operator size, quantum chaos and thermalisation in many-body systems [13; 14], and it has produced a burst of activity and interest in recent years [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. As shown in [12], Krylov or spread complexity defines complexity as the minimal amount of spread of the wavefunction in the Hilbert space. Such minimization is universally accomplished for a finite amount of time by the so-called Krylov basis, that arises via the Lanczos recursion method [53] (to be reviewed below). 
Some highlights of Krylov complexity in many body systems are the demonstration of the exponential growth with the universal Lyapunov exponent, together with the idea that Krylov complexity bounds the growth of out-of-time-ordered correlators [11; 54], the derivation of the linear growth regime of complexity [15], the geometric approach and connection to generalized coherent states [20], the ability to codify quantum chaos and fine-grained properties of the spectrum [12; 35; 42], and the recent holographic demonstrations that it can reproduce the volume of black hole interiors, a.k.a. the volumes of Einstein-Rosen bridges, see [34; 42; 43] for the JT gravity cases, and [36] for the case of general relativity in general dimensions. Given these recent developments, and with the aim of exploring the relation between entanglement and complexity, in this work we expand the Krylov or Lanczos approach in two new directions. First, generalising previous work on the time evolution of the thermofield double state (TFD) [12], we define and study the complexity of modular evolution in generic bipartite entangled states. We show that this measure is controlled by the entanglement spectrum of the reduced density matrix. Equivalently it is controlled by the modular Lanczos spectrum, which interestingly contains the very same information, and might be taken as a new characterization of the entanglement structure. Indeed the first modular Lanczos coefficient is the entanglement entropy itself, while the second is the (square) of capacity of entanglement [55; 56]. We will then analyze this quantity in several examples. For random states, the modular Hamiltonian is random, and we discuss how entanglement entropy is codified in the plateau of the modular complexity evolution, and how modular complexity is also sensitive to the Page curve [57]. In the second part we will discuss modular growth and evolution of quantum operators. In holography this paves a way towards a precise measure of complexity of bulk reconstruction. Exploiting again the power of generalised coherent states as well as modular two-point correlators in two-dimensional (2d) CFTs, we will derive a universal growth of spread complexity of modular evolution characterised by Lyapunov exponent \(\lambda^{mod}=2\pi\) and the scrambling time governed by an effective local temperature of the modular Hamiltonian for a single interval as well as two intervals in free fermion CFT. Overall, this approach makes it clear why "entanglement is not enough" [9], while at the same time it also suggests that a slight but insightful modification may solve the issue at stake, and that indeed entanglement spectrum is enough. ## II Spread Complexity For completeness, we begin with a brief review of the spread complexity [12]. The starting point of the discussion is the unitary evolution of an initial quantum state \(\ket{\psi_{0}}\) with time-independent Hamiltonian \(H\) \[\ket{\Psi(t)}=e^{-iHt}\ket{\psi_{0}}. \tag{1}\] Generically, this evolution spreads the state \(\ket{\psi_{0}}\) in the Hilbert space of the model, making it more complex. While the amount of the spread depends on the choice of basis, we can quantify the complexity of this process by minimizing the spread of the wavefunction over all choices of basis. The result of this minimization, at least for a finite period of time, brings us to the so-called Krylov basis. 
This basis, denoted below by \(\ket{K_{n}}\), is obtained via the Gram-Schmidt orthogonalisation procedure on the subspace of all the powers of \(H\) applied to \(\ket{\psi_{0}}\). The iterative procedure to achieve this is called the Lanczos algorithm [53] and it can be written as \[\ket{A_{n+1}}=\left(H-a_{n}\right)\ket{K_{n}}-b_{n}\ket{K_{n-1}}, \tag{2}\] where \(\ket{K_{n}}=b_{n}^{-1}\ket{A_{n}}\), \(b_{0}=0\) and the first vector coincides with our initial state \(\ket{K_{0}}=\ket{\psi_{0}}\). The key role in this story is played by Lanczos coefficients \(a_{n}\) and \(b_{n}\) that control the dynamics and are defined as \[a_{n}=\bra{K_{n}}H\ket{K_{n}},\qquad b_{n}=\bra{A_{n}}\ket{A_{n}}^{1/2}. \tag{3}\] The algorithm stops as soon as any of the \(b_{n}=0\), which signifies that no more independent basis vectors can be constructed. After running this iterative algorithm, we can expand the state in the Krylov basis \[\ket{\Psi(t)}=\sum_{n}\psi_{n}(t)\ket{K_{n}}. \tag{4}\] By construction, the coefficients of this expansion satisfy a discrete Schrodinger equation \[i\partial_{t}\psi_{n}(t)=a_{n}\psi_{n}(t)+b_{n}\psi_{n-1}(t)+b_{n+1}\psi_{n+1} (t), \tag{5}\] that also highlights the fact that the Hamiltonian is tridiagonal in the Krylov basis, with tridiagonal elements given by the Lanczos coefficients. Finally, if we are able to solve this equation, the spread complexity is computed as the average value of \(n\) in the probability distribution \(p_{n}(t)\equiv|\psi_{n}(t)|^{2}\), namely \[\mathcal{C}(t)=\sum_{n}n\,p_{n}(t). \tag{6}\] Clearly, solving (5) is the main step and it requires the knowledge of the Lanczos coefficients. They are in fact encoded in the return amplitude (the Loschmidt amplitude) \[S(t)=\bra{\Psi(t)}\Psi(0)\rangle=\bra{\psi_{0}}e^{iHt}\ket{\psi_{0}}=\sum_{n }\mu_{n}\frac{t^{n}}{n!}. \tag{7}\] Its moments \(\mu_{n}=\bra{\psi_{0}}(iH)^{n}|\psi_{0}\rangle\) allow us to extract Lanczos coefficients that are related via polynomial equations e.g., the first two are (see more in Appendix A) \[a_{0}=-i\mu_{1},\qquad b_{1}^{2}=\mu_{1}^{2}-\mu_{2}. \tag{8}\] Inversely, the knowledge of the Lanczos coefficients allows the computation of the moments of the Hamiltonian. Therefore, since the Lanczos coefficients play such a pivotal role, it is important to understand their physical meaning and how different phenomena are encoded in their scaling with \(n\). We conclude this introduction with two remarks. Firstly, an important class of initial states \(\ket{\psi_{0}}\) is given by the TFD state [58]. Denoting by \(\ket{n}\) the eigenstate of the Hamiltonian with energy \(E_{n}\) this state reads \[\ket{\psi_{\beta}}=\frac{1}{\sqrt{Z(\beta)}}\sum_{n}e^{-\frac{\beta}{2}E_{n} }\ket{n}_{L}\otimes\ket{n}_{R}, \tag{9}\] where \(Z(\beta)\) is the partition function at temperature \(T=1/\beta\). The TFD state is the canonical purification of the thermal density matrix \(\rho=e^{-\beta H}\). It is then interesting to consider the time evolution of (9) with the Hamiltonian of a single copy, say \(H_{L}\), especially in the context of black holes [59; 60]. For this evolution the return amplitude becomes the analytically continued partition function \[S(t)=\frac{Z(\beta-it)}{Z(\beta)}, \tag{10}\] whose modulus squared is the spectral form factor, a key object in the field of quantum chaos [61]. 
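The recursion (2)-(3) and the complexity (6) are straightforward to evaluate numerically. The sketch below (NumPy; an illustration written for this review, not code from the paper) runs the Lanczos recursion with full re-orthogonalisation for stability and computes \(\mathcal{C}(t)\) from the tridiagonal representation of the Hamiltonian, here for a randomly generated Hermitian matrix and initial state.

```python
import numpy as np

def lanczos(H, psi0, tol=1e-12):
    """Lanczos recursion (2)-(3): returns the coefficients (a, b), with b[0] = b_1,
    using full re-orthogonalisation against the Krylov basis for stability."""
    K = [psi0 / np.linalg.norm(psi0)]
    a, b = [], []
    for n in range(len(psi0)):
        w = H @ K[n]
        a.append(np.real(np.vdot(K[n], w)))
        w = w - a[n] * K[n]
        if n > 0:
            w = w - b[n - 1] * K[n - 1]
        for v in K:                              # re-orthogonalise
            w = w - np.vdot(v, w) * v
        norm = np.linalg.norm(w)
        if norm < tol:
            break
        b.append(norm)
        K.append(w / norm)
    return np.array(a), np.array(b)

def spread_complexity(a, b, times):
    """C(t) = sum_n n |psi_n(t)|^2, with psi_n(t) from the tridiagonal form (eq. 5)."""
    dim = len(a)
    Htri = np.diag(a) + np.diag(b[: dim - 1], 1) + np.diag(b[: dim - 1], -1)
    evals, evecs = np.linalg.eigh(Htri)
    phi0 = evecs.conj().T[:, 0]                  # overlaps of |K_0> with eigenvectors
    n = np.arange(dim)
    return np.array([np.sum(n * np.abs(evecs @ (np.exp(-1j * evals * t) * phi0)) ** 2)
                     for t in times])

# small demo: random Hermitian H and random initial state
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 40)) + 1j * rng.normal(size=(40, 40))
H = (A + A.conj().T) / 2
psi0 = rng.normal(size=40) + 1j * rng.normal(size=40)
a, b = lanczos(H, psi0)
print(spread_complexity(a, b, times=[0.0, 0.5, 1.0]))
```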
This way, the Lanczos coefficients as well as spread complexity are directly probing the spectrum of the evolving Hamiltonian, and they codify the fine-grained aspects such as spectral rigidity and the universality class of the chaotic model [42; 12; 35]. Indeed, for chaotic systems with no degeneracies the Lanczos spectrum of this process contains exactly the same information as the spectrum itself. The main idea of this work is to generalise this TFD example to reduced density matrices and modular Hamiltonian evolution. Secondly, the Krylov complexity of the operator growth [11] can be studied in a complete analogy with the discussion above. The only non-trivial step is the choice of the inner-product in the space of operators that allows us to map Heisenberg evolution of an operator \(\mathcal{O}(t)\) to a state \(|\mathcal{O}(t)\). The crucial information about the operator growth is then captured by the return amplitude that corresponds to a two-point correlator (\(\mathcal{O}|\mathcal{O}(t)\)). Along these lines, below we will consider operator growth as well as the dynamics of CFT states excited by local operators under modular Hamiltonian evolution. They will involve return amplitudes based on modular two-point functions in 2d CFTs. ## II Modular spread complexity We now consider spread complexity of modular Hamiltonian evolution. As reviewed above, we start with a pure state \(\ket{\Psi_{0}}\) in some Hilbert space \(\mathcal{H}\). We then pick a sub-system \(A\) and its complement \(A^{c}\), and assume a Hilbert space decomposition \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{A^{c}}\), so that we can write \(\ket{\Psi_{0}}\) in the Schmidt form \[\ket{\Psi_{0}}=\sum_{j}\sqrt{\lambda_{j}}\ket{j}_{A}\ket{j}_{A^{c}}, \tag{11}\] where \(\ket{j}\) are basis vectors in \(A\) (and the complement). As usual, we define the reduced matrix \(\rho_{A}\) of the sub-region \(A\) as well as the modular Hamiltonian \(H_{A}\) by \[\rho_{A}=\mathrm{Tr}_{A^{c}}\left(\ket{\Psi_{0}}\bra{\Psi_{0}}\right)\equiv e ^{-H_{A}}. \tag{12}\] The Schmidt coefficients in (11) describe the spectrum \(\lambda_{j}\) of \(\rho_{A}\) or the spectrum \(\mathcal{E}_{j}\) of the modular Hamiltonian \(H_{A}\) \[\lambda_{j}\equiv e^{-\mathcal{E}_{j}},\qquad\sum_{j}\lambda_{j}=1, \tag{13}\] and, by analogy with thermal states, we can define the modular partition function at inverse temperature \(\beta=n\) as \[\tilde{Z}(n)=\mathrm{Tr}(\rho_{A}^{n})=\sum_{j}e^{-n\mathcal{E}_{j}}. \tag{14}\] Conventionally, we normalise \(\mathrm{Tr}(\rho_{A})=\tilde{Z}(1)=1\). Finally, we define the modular evolution of the initial state (11) as \[\ket{\Psi(s)}=e^{-isH_{A}\otimes 1_{A^{c}}}\ket{\Psi_{0}}, \tag{15}\] where \(s\) is the modular time. Note that we perform this evolution with \(H_{A}\otimes 1_{A^{c}}\) and not with the total modular Hamiltonian \(H_{mod}=H_{A}\otimes 1_{A^{c}}-1_{A}\otimes H_{A^{c}}\); indeed, \(\ket{\Psi_{0}}\) is invariant under the evolution with \(H_{mod}\)[62]. By analogy with the TFD state (evolution with \(H_{L}\) vs \(H_{L}-H_{R}\)), this leads to a non-trivial evolution of the state \(\ket{\Psi_{0}}\). In the following, our goal will be to quantify the spread complexity of this state in various models and shed light on the Lanczos coefficients in this evolution. 
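To make the objects just introduced concrete, the following sketch (NumPy; illustrative, not from the paper) extracts the entanglement spectrum \(\lambda_{j}\) of a generic bipartite pure state from a singular value decomposition and evaluates the modular partition function (14), including at complexified index, which is the modular return amplitude used below.

```python
import numpy as np

def entanglement_spectrum(psi, dim_A, dim_Ac):
    """Schmidt coefficients lambda_j of a bipartite pure state (eq. 11)."""
    M = psi.reshape(dim_A, dim_Ac)             # |Psi> = sum_{a,b} M[a,b] |a>_A |b>_Ac
    sv = np.linalg.svd(M, compute_uv=False)
    lam = sv**2
    return lam[lam > 1e-14] / lam.sum()        # normalise, drop numerical zeros

def modular_Z(lam, n):
    """Z~(n) = Tr(rho_A^n) = sum_j lambda_j^n, for real or complex n (eq. 14)."""
    return np.sum(lam.astype(complex) ** n)

# random bipartite state on a 4 x 8 system
rng = np.random.default_rng(0)
psi = rng.normal(size=32) + 1j * rng.normal(size=32)
psi /= np.linalg.norm(psi)
lam = entanglement_spectrum(psi, 4, 8)

print("Tr rho_A    =", modular_Z(lam, 1).real)                 # = 1
print("entropy S_A =", -np.sum(lam * np.log(lam)))             # = modular a_0 (see below)
print("|Z~(1-is)|  =", abs(modular_Z(lam, 1 - 1j * 2.0)))      # return amplitude at s = 2
```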
For that we use the Lanczos algorithm to construct an orthonormal basis \(\ket{K_{n}}\) and expand our state as in (4), where the expansion coefficients \(\psi_{n}(s)\) satisfy (5) with Lanczos coefficients \(a_{n}\) and \(b_{n}\) encoded in the modular return amplitude \[S(s)\equiv\bra{\Psi(s)}\Psi_{0}=\sum_{j}\lambda_{j}^{1-is}=\tilde{Z}(1-is). \tag{16}\] This object is closely related to the Renyi entropies of the reduced density matrix \(\rho_{A}\) defined for integer \(n\) as \[S_{A}^{(n)}=\frac{1}{1-n}\log(\mathrm{Tr}\rho_{A}^{n}), \tag{17}\] and we have the relation to the analytically continued Renyi with replica index \(n=1-is\) \[S(s)=\exp\left(is\,S_{A}^{(1-is)}\right). \tag{18}\] We conclude that the Lanczos procedure, based on the moments of \(S(s)\), will involve interesting combinations of quantum information measures. Indeed, already from (8), we can see that for the modular Hamiltonian \(H_{A}\), the moments \(a_{0}\) and \(b_{1}^{2}\) will be simply the von Neumann entropy \(S_{A}\) and the capacity of entanglement \(C_{E}\)[63; 64; 65; 55; 66] respectively. At the conceptual level, since spread complexity is a functional of the survival amplitude, and this is a functional of the entanglement spectrum, we conclude that, while entanglement is not enough (it is just \(a_{0}\)), entanglement spectrum is enough. Going in the reverse direction, since the Renyi entropies can be found from the modular survival amplitude, and this is a functional of the modular Lanczos spectrum, we also conclude that Lanczos spectrum is enough. This construction then provides a solid bridge between entanglement and complexity, as we further develop below. ## III Examples It is useful to consider a few simple, analytical examples. Let us start from a qubit state \(\ket{\Psi_{0}}=\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11}\) where \(A\) and \(A^{c}\) are the first and second spins respectively and \(p\in[0,1]\). Tracing out the second Hilbert space we obtain the return amplitude \[S(s)=\mathrm{Tr}(\rho_{1}^{(1-is)})=p^{(1-is)}+(1-p)^{(1-is)}, \tag{19}\] with moments (see definition (7)) \[\mu_{k}=(-i)^{k}\left(p\log^{k}(p)+(1-p)\log^{k}(1-p)\right). \tag{20}\] From them we extract the non-vanishing Lanczos coefficients \[a_{0} = -p\log(p)-(1-p)\log(1-p)=S_{1},\] \[b_{1}^{2} = p(1-p)\left(\log(1-p)-\log(p)\right)^{2}=C_{E}(\rho_{1}),\] \[a_{1} = -p\log(1-p)-(1-p)\log(p), \tag{21}\] and confirm the relation with entanglement entropy and capacity of entanglement. At present, we do not have a sharp quantum information interpretation for \(a_{1}\) and we hope to return to this issue in the future. Next, we derive the two solutions of the Schrodinger equation (5) with these Lanczos coefficients and they are \[\psi_{0}(s) = p^{1+is}+(1-p)^{1+is}=S(s)^{*},\] \[\psi_{1}(s) = \mp\sqrt{p(1-p)}((1-p)^{is}-p^{is}), \tag{22}\] with \(\mp\) corresponding to \(\pm\) in \(b_{1}\) (recall that \(b_{n}\)'s (3) are always positive so this sign depends on the difference between \(\log(1-p)\) and \(\log(p)\)). By construction, the coefficient \(\psi_{0}(s)\) and \(S(s)\) are related by the simple complex conjugation. Finally, the modular spread complexity (6) is given by \[\mathcal{C}(s)=4p(1-p)\sin^{2}\left(\frac{s}{2}\log\frac{1-p}{p}\right). \tag{23}\] In this simple example with Krylov space dimension equal to 2, we have a relation between the modular spread complexity and modular spectral form factor: \(\mathcal{C}(s)=|\psi_{1}(s)|^{2}=1-|\psi_{0}(s)|^{2}\), which is not true in general for long times [42]. 
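The one-qubit expressions above can be verified directly. The snippet below (NumPy; an illustrative check) confirms that \(a_{0}\) and \(b_{1}^{2}\) in (21) reproduce the entanglement entropy and the capacity of entanglement, and that (23) agrees with \(1-|S(s)|^{2}\) for this two-dimensional Krylov space.

```python
import numpy as np

p = 0.3
lam = np.array([p, 1 - p])                  # entanglement spectrum of one qubit
E = -np.log(lam)                            # modular energies

a0 = np.sum(lam * E)                        # first modular Lanczos coefficient
b1_sq = np.sum(lam * E**2) - a0**2          # second one, squared
print(np.isclose(a0, -p*np.log(p) - (1-p)*np.log(1-p)))     # = S_1  (eq. 21)
print(np.isclose(b1_sq, p*(1-p)*np.log((1-p)/p)**2))        # = C_E  (eq. 21)

s = np.linspace(0.0, 20.0, 400)
S_mod = p**(1 - 1j*s) + (1 - p)**(1 - 1j*s)                  # return amplitude (19)
C_closed = 4*p*(1-p)*np.sin(0.5*s*np.log((1-p)/p))**2        # eq. (23)
print(np.allclose(1 - np.abs(S_mod)**2, C_closed))           # C(s) = 1 - |psi_0(s)|^2
```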
Clearly, the complexity growth is determined by the value of \(p\). In particular, for maximally entangled state \(p=1/2\), the \(b_{1}\) as well as \(C(s)\) vanish (see Appendix B for another example). More generally for flat entanglement spectrum we have \[\mathrm{Tr}(\rho_{A}^{n})=\mathrm{dim}(\mathcal{H}_{A})^{1-n}, \tag{24}\] and we only get non-trivial \(a_{0}=S_{A}\) and all \(b_{n}=0\). This is a physically sensible result. In this context we only need one number to understand the structure of the state. In the thermodynamic limit of physical systems, such as those appearing in quantum gravity, it might seem that we have flat entanglement spectrum at micro-canonical sectors. This is only an artefact of the thermodynamic limit. In reality the spectrum is chaotic, and the eigenvalues, although close to the average flat value, show no degeneracy and resemble the spectrum of a random matrix. In this scenario the Lanczos spectrum is completely different, as we discuss below. Another simple example consists of two coupled harmonic oscillators [67]. After tracing one of them, we get the entanglement spectrum and modular partition function \[\lambda_{k}=(1-\xi)\xi^{k},\qquad\tilde{Z}(n)=\frac{(1-\xi)^{n}}{1-\xi^{n}}, \tag{25}\] where \(0<\xi<1\) is related to the details of the coupling between the oscillators. Following the above procedure, we can derive a general form for the Lanczos coefficients \[a_{n} = -n\frac{1+\xi}{1-\xi}\log(\xi)-\log(1-\xi)-\frac{\xi}{1-\xi}\log( \xi),\] \[b_{n} = n\frac{\sqrt{\xi}}{1-\xi}\log(1/\xi), \tag{26}\] where again \(a_{0}=S_{1}\) and \(b_{1}^{2}=C_{E}(\rho_{1})\). Observe that these Lanczos coefficients are governed by the SL(2,R) symmetry algebra and our modular evolution of the state can be mapped to a coherent state of this Lie group. This allows us to recycle the derivations in [20] and derive the modular spread complexity \[\mathcal{C}(s)=\frac{4\xi}{(1-\xi)^{2}}\sin^{2}\left(\frac{s}{2}\log(\xi) \right). \tag{27}\] The entanglement spectrum is equivalent to the thermal spectrum of a single oscillator, i.e. writing \(\xi=\exp(-\beta\omega)\) we have \(\lambda_{k}=e^{-\beta E_{k}}/Z(\beta)\) with \(E_{k}\) being the energy of a single harmonic oscillator with frequency \(\omega\). Even though we have an infinite dimensional Krylov basis, modular spread complexity oscillates. However, we can formally send \(\omega\to i\tilde{\omega}\) (complex \(\log(\xi)\)) and observe exponential growth of the modular spread complexity. Finally, we consider 2d CFT where the trace of the reduced density matrix of a single interval \(A=[u,v]\) can be computed using the replica trick as a correlator of twist operators inserted at the end-points of \(A\)[68] \[\tilde{Z}(n)=\langle\sigma_{n}(u)\tilde{\sigma}_{n}(v)\rangle=\exp(-\left(n-1 /n\right)W), \tag{28}\] where \(W\) contains the CFT central charge \(c\) and details of the interval as well as geometry of the underlying CFT and is directly related to entanglement entropy \(S_{A}=2W\) (e.g. \(W=\frac{c}{6}\log((u-v)/\epsilon)\) for the vacuum in a line). For our discussion, we neglected an overall non-universal constant in (28). However, it is crucial that we keep the cut-off \(\epsilon\) small, but finite. The analytic continuation gives the modular partition function \[\tilde{Z}(1-is)=\exp\left(-\frac{s^{2}W}{s^{2}+1}+i\frac{s(s^{2}+2)W}{s^{2}+1 }\right), \tag{29}\] therefore, the corresponding modular spectral form factor \(|\tilde{Z}(1-is)|^{2}\) decays to a plateau with value \(\exp(-S_{A})\). 
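The single-interval formulas (28)-(29) can be probed in the same spirit. The sketch below (NumPy; illustrative, with \(W\) set to an arbitrary numerical value rather than a specific CFT cutoff) reads off \(a_{0}=S_{A}\) and \(b_{1}^{2}\) from derivatives of \(\log\tilde{Z}(n)\) at \(n=1\), and confirms that the modular spectral form factor decays to the plateau \(e^{-S_{A}}\).

```python
import numpy as np

W = 10.0                                        # S_A = 2W for the single-interval example (28)
Ztilde = lambda n: np.exp(-(n - 1.0/n) * W)     # Tr rho_A^n, analytically continued

# moments of the modular Hamiltonian from numerical derivatives of log Ztilde at n = 1
eps = 1e-4
dlog = (np.log(Ztilde(1+eps)) - np.log(Ztilde(1-eps))) / (2*eps)
d2log = (np.log(Ztilde(1+eps)) - 2*np.log(Ztilde(1.0)) + np.log(Ztilde(1-eps))) / eps**2
S_A = -dlog                                     # a_0 = <H_A> = S_A = 2W
C_E = d2log                                     # b_1^2 = capacity of entanglement
print(S_A, C_E)                                 # both close to 2W = 20

s = np.linspace(0.0, 40.0, 400)
SFF = np.abs(Ztilde(1 - 1j*s))**2               # modular spectral form factor from (29)
print(SFF[-1], np.exp(-S_A))                    # late-s plateau ~ exp(-S_A)
```

The value \(b_{1}^{2}\approx S_{A}\) found here is consistent with the early quadratic growth of the modular spread complexity discussed next.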
By expanding Lanczos coefficients for large \(W\) (or large central charge \(c\), see Appendix C), we can show that spread complexity grows quadratically for initial modular time, proportionally to the entanglement entropy \(S_{A}\) \[\mathcal{C}(s)\sim S_{A}\,s^{2}. \tag{30}\] For later times, at finite cut-off \(\epsilon\), we also expect a period of linear growth and saturation to a plateau (analogous to the spectral form factor (29)). Verifying this expectation numerically would be interesting and we leave it for future work. Next we move to more general qualitative arguments in the context of random matrix theory. ## IV Random modular Hamiltonians Further relations between entanglement entropy and entanglement spectrum on one hand, and spread complexity and the Lanczos spectrum on the other, arise by considering the example of random pure states. Given a pure state and a bipartition of the system into \(A\) and \(A^{c}\), a putative ensemble of pure states (defining the particular notion of random state) naturally defines an ensemble of modular Hamiltonians \(H_{A}\). This ensemble defines a particular notion of random modular Hamiltonian. The analysis of the Lanczos approach for random matrices was recently developed in [69; 35]. The application of these constructions to modular evolution goes as follows. We first notice that, in the context of random states, the Lanczos coefficients of a reduced subsystem are random parameters, and the first goal is to compute their statistics. This can be accomplished with two assumptions. First we need take the thermodynamic limit, where the dimension \(N\) of the subsystem \(A\) goes to infinity. Without loss of generality we assume that this dimension is smaller than the dimension of \(A^{c}\). In this limit the average values reliably inform us of the typical values associated with individual instances of the random modular Hamiltonian. Second we need to choose as initial state the vector \((1,0,\cdots,0)\). The reason is that for this state we know how to compute the Jacobian of the transformation between the original form of the random modular Hamiltonian and the tridiagonal form. It is given by [35] \[J=\prod_{n=1}^{N-1}b_{n}^{(N-n)\beta-1}\;, \tag{31}\] where \(\beta\) is the Dyson index of the ensemble of random matrices. This Jacobian should be thought as the analogue of the Vandermonde determinant for the change of variables that takes us to the diagonal form of the matrix. Equivalently, if the ensemble is invariant under a certain group of unitaries, we are free to take any initial state that follows from the previous one by applying a unitary belonging to such a group. In the thermodynamic limit, see [69; 70; 71; 35], it becomes natural to label the Lanczos coefficients in terms of \(x\equiv n/N\), namely as \(a(x)\equiv a_{n=xN}\) and \(b(x)\equiv b_{n=xN}\). The reason is that in this limit, on average over the ensemble, the Lanczos coefficients become continuous functions in the interval \(x\in[0,1]\). We can now obtain the relation between these functions and the modular spectrum. We cut the Krylov chain into shorter segments of a given length \(L\), such that \(L\to\infty\) and \(L/N\to 0\) in the thermodynamic limit. This is a block approximation of the Hamiltonian whose density of states is the sum of the densities of each block. Given the continuity assumption, \(a_{n}\) and \(b_{n}\) can be taken as constants in each block, equal to \(a(x)\) and \(b(x)\). 
The different Hamiltonian blocks are then Toeplitz matrices of size \(L\), with diagonal elements given by certain \(a\) and off-diagonal elements given by certain \(b\). These matrices have eigenvalues \(E_{k}=2\,b\,\cos(k\pi/(L+1))+a\), with \(k=1,\cdots,L\), and their density of states read \[\rho_{a,b}(E)=\frac{1/L}{|dE_{k}/dk|}=\frac{H(4\,b^{2}-(E-a)^{2})}{\pi\,\sqrt{ 4\,b^{2}-(E-a)^{2}}}\;. \tag{32}\] Here \(H(x)\) is the Heaviside step function and we normalized the density of states by dividing by \(L\). The total (normalized) density of states is the sum over all blocks. In the thermodynamic or continuum limit this becomes [35] \[\rho(E)=\int_{0}^{1}dx\,\frac{H(4\,b(x)^{2}-(E-a(x))^{2})}{\pi\,\sqrt{4\,b(x) ^{2}-(E-a(x))^{2}}}\,. \tag{33}\] This formula relates the average Lanczos coefficients to the modular spectrum, in particular to the modular density of states, where we remind that \(\lambda=e^{-E}\) (see (13)). Deviations from this formula were also found in [35], further providing a relation between the average Lanczos coefficients and the potential defining the ensemble of random matrices. Generically, in chaotic systems the wavefunction in the Krylov basis (4) reaches a stationary regime. In this regime the probabilities fluctuate around a mean value \(\tilde{p}(x)\). For special initial states we might have \(\tilde{p}(x)=1\), namely constant in \(x\), but this is not the generic situation as can be established numerically in simple scenarios [35; 12]. It is thus natural to inquire for the form of the stationary distribution \(\tilde{p}(x)\). Indeed, in terms of the distribution of energies of the initial state \[\mid\langle\psi|E\rangle\mid^{2}\equiv P(E)\;, \tag{34}\] this is derived in [69] as follows. Assume \(P(E)\) is a continuous function of the energy. For the modular state evolution that we are considering, this implies a continuous entanglement spectrum with small fluctuations around the average. Using (32), the number of states in the interval between \(x\) and \(x+dx\) and in the interval between \(E\) and \(E+dE\) is \[\tilde{\rho}(x,E)\,dx\,dE=\frac{N\,dx\,dE}{\pi\,\sqrt{4b(x)^{2}-(E-a(x))^{2}}}. \tag{35}\] The long-time average probability distribution in the Krylov basis is just the the convolution of this density with the distribution of energies of the initial state (which is conserved in time). This reads \[\bar{p}(x)=\int dE\,P(E)\,\tilde{\rho}(x,E)\,. \tag{36}\] For the modular evolution of states the initial state was (11). The distribution of energies in the initial state is then \(P(E)=\lambda=e^{-E}\), and we arrive at \[\bar{p}(x)=\,N\,I_{0}(2b(x))e^{-a(x)}\;. \tag{37}\] The plateau of the modular spread complexity and the Shannon entropy \(H_{\rm Shannon}\) in the Krylov basis (dubbed K-entropy in [15]) follow from this probability distribution \(\bar{p}(x)\). This is an explicit function once we have derived the Lanczos spectrum from the modular density of states using (33). It turns out that the result for the Shannon entropy is quite insensitive to the specific ensemble of random modular Hamiltonian, i.e on the specific Lanczos coefficients \(a(x)\) and \(b(x)\). Indeed \[H_{\rm Shannon}=-\sum_{n}\bar{p}_{n}\ln\bar{p}_{n}\approx\ln N, \tag{38}\] up to subleading corrections in the thermodynamic limit. This means that the dimension of the Hilbert space explored by the random modular evolution is the same as the number of non-zero eigenvalues in the reduced density matrix, counted by its leading density of states. 
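The \(E\)-integral leading to (37) is the standard integral representation of the modified Bessel function \(I_{0}\); a quick numerical check at a representative point \(x\) (the values of \(a(x)\) and \(b(x)\) below are illustrative) reads:

```python
import numpy as np
from scipy import integrate, special

a, b = 1.3, 0.7    # illustrative values of a(x), b(x) at a fixed x

# integrate P(E) = exp(-E) against the block density of states (32)/(35) over the band;
# the inverse-square-root edge behaviour is passed to quad as an algebraic weight
val, _ = integrate.quad(lambda E: np.exp(-E)/np.pi, a - 2*b, a + 2*b,
                        weight='alg', wvar=(-0.5, -0.5))

print(val, special.i0(2*b)*np.exp(-a))   # both sides of (37), up to the normalisation N
```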
Notice that this same result applies for the complementary subsystem \(A^{c}\). Although we assumed the dimension of \(A\) was smaller than that of \(A^{c}\), the modular Hamiltonian and modular spectrum are the same up to zeros. In particular the number of non-zero eigenvalues is the same, and the saturation will happen at \(\log N\) as well for \(A^{c}\), where we remind that \(N\) is the dimension of the smaller subsystem \(A\). Finally, we can turn things around. Starting from the Lanczos coefficients \(a(x)\) and \(b(x)\), we can find the stationary distribution of the modular spread complexity \(\bar{p}(x)\) and from there we can obtain the initial probability distribution in the energy basis as \[P(E)=\int dx\,\bar{p}(x)\,\tilde{\rho}(x,E)\,. \tag{39}\] We are led to the following conclusions. The first is that we could use these results in the context of the Page curve [57] (recall also that the relevance of the capacity of entanglement, that is our Lanczos coefficient \(b_{1}\), to the Page curve was already discussed in [65; 66]). In this scenario, for random states drawn from the Haar measure the modular density of states is known and of compact support [57]. Although (33) cannot be solved in closed form in this case, the Lanczos \(b(x)\) coefficients decay to zero as they should and the modular spread complexity follows the regimes described in [12]. In particular, the spread complexity will saturate at a value controlled by the dimension of the smallest subsystem. For example, the entropy will be precisely \(\log N\) in the leading approximation, where \(N\) is such dimension. The plateau of modular spread complexity then draws a complexity Page curve in the same way as the entanglement entropy. The second conclusion concerns the slogan "entanglement is not enough" [9]. This was put forward to motivate the introduction of the notion complexity in quantum gravity. The present construction transparently shows why this is true when for the word "entanglement" we more precisely understand entanglement entropy itself. The reason is that entanglement entropy is the first entry of the Lanczos spectrum. But one needs the full spectrum of Lanczos coefficients to predict the long time dynamics of the wavefunction of the system. Spread complexity, which serves to characterize these dynamics, is also a functional of the whole spectrum. Clearly then, entanglement entropy is not enough. It is however not true if we slightly, but insightfully, modify the slogan so that it refers to the entanglement spectrum. As we have derived, there is a precise relation (one follows from the other and vice-versa), between the entanglement or modular spectrum, the modular Lanczos coefficients, the modular survival amplitude and the modular spread complexity. In this precise sense, we reach again the conclusion that the entanglement spectrum seems to be enough in the context of quantum gravity. The Lanczos modular spectrum and associated survival amplitude and modular complexity are enough as well. ## VI Modular growth and evolution of primary operators In this final section we discuss the operator growth and spread complexity of operators under the modular flow with the total modular Hamiltonian \(H_{mod}=H_{A}\otimes 1_{A^{c}}-1_{A}\otimes H_{A^{c}}\) (\(A^{c}\) being the complement of \(A\)). 
Namely, we consider the following modular evolution [72; 73; 62] \[\mathcal{O}(s)=e^{iH_{mod}s}\mathcal{O}(0)e^{-iH_{mod}s}\equiv e^{i\mathcal{L}_{mod}s}\mathcal{O}(0), \tag{40}\] where the modular Liouvillian (super-operator) acts on operators by taking the commutator \(\mathcal{L}_{mod}\equiv[H_{mod},\cdot]\). This modular flow of operators has been a central topic in a variety of recent works in QFT and holography [74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87] but, to our knowledge, its complexity remains relatively unexplored. To make progress, for simplicity, we first consider \(H_{mod}\) for static, universal examples where \(A\) is a single interval in the vacuum of a 2d CFT defined either on the line or on the circle, leaving more complicated cases to future work (see more in Appendix D). In the final part of this section we also mention the result for two disjoint intervals on the line for the free massless Dirac field in its ground state. We start with the modular evolution of the highest weight state \(\left|h\right>\) (eigenstate of the CFT Hamiltonian, i.e., \(\left|h\right>\equiv\lim_{z\to 0}\mathcal{O}(z)\left|0\right>\) in radial quantisation of the Euclidean formalism) with the total modular Hamiltonian of an interval \(A\) in 2d CFT \[\left|\psi(s)\right>=e^{-isH_{mod}}\left|h\right>. \tag{41}\] The total modular Hamiltonian is a well-defined operator in the continuum and, in 2d CFTs, it can be written as a linear combination of the SL(2,R) generators (see e.g. [88; 89]) \[H_{mod}=\sigma_{-1}L_{-1}+\sigma_{0}L_{0}+\sigma_{1}L_{1}+a.c., \tag{42}\] where the anti-chiral (a.c.) part is similarly expressed in terms of global \(\tilde{L}\)'s (for simplicity, we will focus on the chiral part) and the coefficients depend on the CFT and interval geometry (see Appendix D). For this reason (41) is simply a coherent state and falls into the Lie-algebra symmetry examples considered in [12; 20] where, using the Baker-Campbell-Hausdorff formula, the spread complexity can be evaluated as a simple function of general \(\sigma_{i}\)'s (see (94) in Appendix D). Before we write it down, note that we may think about this state simply in the context of spread complexity of states [12] or as a state representing operator growth [11] with a particular choice of the inner product that corresponds to the return amplitude \[S(s)=\langle h|e^{isH_{mod}}|h\rangle. \tag{43}\] By using the procedures discussed in [12; 20], we find that the Lanczos coefficients from (43) have the SL(2,R) form: \(a_{n}=\gamma(n+\Delta)\) and \(b_{n}=\alpha\sqrt{n(n+2\Delta-1)}\) with \(\alpha=\sqrt{\sigma_{1}\sigma_{-1}}\) and \(\gamma=\sigma_{0}\). Interestingly, in all the examples where (42) holds, the coefficients satisfy \(\sigma_{1}\sigma_{-1}-\sigma_{0}^{2}/4=\pi^{2}\) and this combination is directly linked to the Lyapunov exponent defined from the Krylov complexity [11]. For example, for a single interval \(A=[a,b]\) in a 2d CFT on a circle of size \(L\) we obtain \[\mathcal{C}(s)\simeq\frac{2h}{\sin^{2}\left(\frac{\pi(b-a)}{L}\right)}\sinh^{2}(\pi s). \tag{44}\] Clearly, at late modular time \(s\gg 1\), the spread complexity grows exponentially with Lyapunov exponent \(\lambda_{L}^{mod}=2\pi\). We will see below that this is in fact a universal behaviour also for local operator growth. The size of the entangling interval \(b-a\) governs the scrambling time at late time (see also below).
We should also point that, this result that uses \(\sigma_{i}\)'s from [88; 89] in the general formula (94), does not seem to have a well-defined (naive) limit of \(L\rightarrow\infty\). As already pointed out in [20; 22], the spread complexity of coherent states can be written as an expectation value of \(L_{0}\). When passing from the cylinder to the plane, the derivative of the exponential map will bring the appropriate factor of \(L\) that cures this (that is why we used \(\simeq\)). Next, for 2d CFTs, we consider modular Hamiltonian evolution of states locally excited by a primary operator \(\mathcal{O}(l)\) of conformal dimension \(h\) placed inside the interval \(A=[a,b]\), i.e., \(l\in A\). This state is defined as \[\left|\psi(s)\right>=\mathcal{N}e^{-iH_{mod}s}e^{-\epsilon H}\mathcal{O}(l) \left|0\right>, \tag{45}\] where \(\left|0\right>\) is the ground state of the entire system bipartite as \(A\cup A^{c}\) and \(H_{mod}\) is the total modular Hamiltonian associated with \(A\) in this state. We remark that, again for the sake of simplicity, we only consider a chiral part of the 2d CFT. Note that the local operator is first smeared with the CFT Hamiltonian by an amount \(\epsilon\) in Euclidean time such that the energy of the excitation is finite \(E_{\mathcal{O}}\sim h/\epsilon\) and factor \(\mathcal{N}\) is the normalisation of this initial state with the operator. The standard Hamiltonian evolution of these states has been extensively studied in the past [90; 91; 92; 93] (see for corresponding spread complexity in Appendix D) but here we will be interested in the modular evolution instead. Before we proceed, it is important to point that, since the operators \(\mathcal{O}(l)\) are inserted in \(A\), the actions of \(H_{mod}\) and \(H_{A}\otimes 1_{A^{c}}\) on them are identical. Hence, our discussion in the following also holds for (45) with \(H_{mod}\) replaced by \(H_{A}\) and we will use them interchangeably in our formulas. In particular, the modular correlators that will be used in our return amplitudes (see below) are identical for these two modular evolutions. Let us then recall a few basic facts about \(H_{A}\). In the chiral 2d CFTs and for some particular states and bipartitions (e.g. when the CFT is defined either on the line or on the circle and is in its ground state), the modular Hamiltonian can be written as \[H_{A}=2\pi\int_{a}^{b}\beta_{0}(u)T(u)du,\qquad\beta_{0}(u)=\frac{1}{w^{\prime} (u)}, \tag{46}\] where \(T(u)\) is the chiral component of the 2d CFT energy-momentum tensor and the weight function \(\beta_{0}(u)\) (often called local inverse temperature) encodes the dependence on the state and of the bipartition for the specific cases we are considering. For instance, for the ground state of a CFT on the line or on a circle of length \(L\), we have respectively \[w(u)=\log\left(\frac{u-a}{b-u}\right),\quad w(u)=\log\left(\frac{\sin[\pi(u-a)/ L]}{\sin[\pi(b-u)/L]}\right). \tag{47}\] The modular evolution generated by (46) for a primary operator \(\mathcal{O}\) of conformal dimension \(h\) is \[\mathcal{O}(s,u)\equiv e^{isH_{A}}\mathcal{O}(u)e^{-isH_{A}}, \tag{48}\] and it can be written as [74; 75; 83] \[\mathcal{O}(s,u)=\left(\frac{\beta_{0}(\xi(s,u))}{\beta_{0}(u)}\right)^{h} \mathcal{O}\big{(}\xi(s,u)\big{)}, \tag{49}\] where \({\cal O}(u)\) is the initial configuration of the field at \(s=0\) and \(\xi(s,u)\) satisfies the following differential equation \[\partial_{s}\xi(s,u)=2\pi\beta_{0}(x)\,\partial_{u}\xi(s,u),\qquad\xi(0,u)=u. 
\tag{50}\] The solution of this equation reads \[\xi(s,u)\equiv w^{-1}\big{(}w(u)+2\pi s\big{)}, \tag{51}\] in terms of \(w(u)\) defined in (46) and its inverse function. Then, the modular evolution (49) can be expanded in powers of \(s\) as follows \[{\cal O}(s,u)=\sum_{n=0}^{\infty}\frac{(2\pi\,s)^{n}}{n!}\,\widetilde{\cal O}_ {n}(u). \tag{52}\] By employing (50), the first three (non-trivial) operators in this expansion are \[\widetilde{\cal O}_{1}(u) = \beta_{0}(u)\,{\cal O}^{\prime}(u)+h\,\beta_{0}^{\prime}(u)\,{ \cal O}(u),\] \[\widetilde{\cal O}_{2}(u) = \beta_{0}(u)^{2}\,{\cal O}^{\prime\prime}(u)+(2h+1)\beta_{0}(u) \beta_{0}^{\prime}(u)\,{\cal O}^{\prime}(u)\] \[+h\big{[}h\,\beta_{0}^{\prime}(u)^{2}+\beta_{0}(u)\beta_{0}^{ \prime\prime}(u)\big{]}{\cal O}(u),\] \[\widetilde{\cal O}_{3}(u) = \beta_{0}(u)^{3}\,{\cal O}^{\prime\prime\prime}(u)+3(h+1)\beta_{0 }(u)^{2}\beta_{0}^{\prime}(u)\,{\cal O}^{\prime\prime}(u) \tag{53}\] \[+\beta_{0}(u)\big{[}(3h^{2}+3h+1)\beta_{0}^{\prime}(u)^{2}\] \[+(3h+1)\beta_{0}(u)\beta_{0}^{\prime\prime}(u)\big{]}\,{\cal O}^{ \prime}(u)\] \[+h\big{[}h^{2}\beta_{0}^{\prime}(u)^{3}+(3h+1)\beta_{0}(u)\beta_{0 }^{\prime}(u)\beta_{0}^{\prime\prime}(u)\] \[+\beta_{0}(u)^{2}\beta_{0}^{\prime\prime}(u)\big{]}{\cal O}(u).\] It is straightforward to write also \(\widetilde{\cal O}_{n}(u)\) with \(n>3\), but their expressions are rather complicated to be reported here. Clearly, the growth of the operator (in operator space) due to the modular evolution (48) is determined also by the weight function \(\beta_{0}(u)\) occurring in the modular Hamiltonian (46) and its non-trivial derivatives provide additional contributions of the initial field configuration \({\cal O}(u)\) into \(\widetilde{\cal O}_{n}(u)\). Indeed, setting \(\beta_{0}(u)=\) const in (53) simplifies the expressions in a considerable way. On the other hand, the actual operator size and Krylov complexity [11] is usually computed based on the return amplitude that, after an appropriate choice of the inner-product, may become a two-point correlator (computable with (49)). Below, we will add to these intuitions by computing the spread complexity of (45) and find that it indeed depends on the local temperature \(\beta_{0}(u)\). Now, let us get back to the computation of the modular spread complexity of (45). The crucial ingredient is again the return amplitude that can be written as a special modular two-point correlator \[S(s)=\frac{\langle{\cal O}^{\dagger}(0,u_{1}){\cal O}(s,u_{2})\rangle}{\langle {\cal O}^{\dagger}(0,u_{1}){\cal O}(0,u_{2})\rangle}, \tag{54}\] which satisfies \(S(0)=1\) by construction. The insertion points of the operators in the initial state are (see Appendix D) \[u_{1}=l+i\epsilon,\qquad u_{2}=l-i\epsilon. \tag{55}\] The two-point correlators of the operators after modular flow can be found e.g. in [74; 75; 86; 87]. Their general form is \[\frac{\langle{\cal O}(s_{1},u_{1}){\cal O}(s_{2},u_{2})\rangle}{\langle{\cal O }(0,u_{1}){\cal O}(0,u_{2})\rangle}=\left(\frac{e^{w(u_{1})}-e^{w(u_{2})}}{e^{ w(u_{1})+\pi s_{12}}-e^{w(u_{2})-\pi s_{12}}}\right)^{2h}, \tag{56}\] where \(s_{12}\equiv s_{1}-s_{2}\) and \(w(u)\) is defined in (46). The modular correlator (56) satisfies the KMS condition with inverse temperature \(\beta_{KMS}=1\). Using (56), we can write our return amplitude with general \(w(u)\) as \[S(s)=\left(\frac{e^{-\pi s}(1-B)}{e^{-2\pi s}-B}\right)^{2h},\qquad B=e^{w(u_{ 2})-w(u_{1})}, \tag{57}\] where \(B\), via \(w(u)\), depends on the details of the bipartition. 
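Before moving on, we note that the coefficients in (53) can be cross-checked symbolically: by (49)-(51) the modular flow acts on local operators through the first-order operator \(2\pi\beta_{0}(\xi)\partial_{\xi}\), so that \(\widetilde{\mathcal{O}}_{n}(u)\) follows from \(n\) applications of \(\beta_{0}\partial\) to \((\beta_{0}(\xi)/\beta_{0}(u))^{h}\mathcal{O}(\xi)\), evaluated at \(\xi=u\). A minimal sympy sketch of this check is:

```python
import sympy as sp

x, u, h = sp.symbols('xi u h')
b = sp.Function('beta0')      # local inverse temperature beta_0
Op = sp.Function('O')         # primary operator, treated here as a commuting function

def flow(expr):
    # one application of beta_0(xi) d/dxi, the generator of the modular flow (49)-(51)
    return b(x)*sp.diff(expr, x)

seed = (b(x)/b(u))**h * Op(x)
O1 = flow(seed).subs(x, u)
O2 = flow(flow(seed)).subs(x, u)

O1_paper = b(u)*sp.Derivative(Op(u), u) + h*sp.Derivative(b(u), u)*Op(u)
O2_paper = (b(u)**2*sp.Derivative(Op(u), u, 2)
            + (2*h + 1)*b(u)*sp.Derivative(b(u), u)*sp.Derivative(Op(u), u)
            + h*(h*sp.Derivative(b(u), u)**2 + b(u)*sp.Derivative(b(u), u, 2))*Op(u))

print(sp.simplify(sp.expand(O1 - O1_paper)))   # expect 0
print(sp.simplify(sp.expand(O2 - O2_paper)))   # expect 0
```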
This return amplitude again falls into the SL(2,R) symmetry class and we can derive universal Lanczos coefficients \(a_{n}\) and \(b_{n}\) for arbitrary \(B\) (or \(w(u)\)) and compute the modular spread complexity. To derive the correct Lanczos coefficients, it is important to perform this computation for general \(B\) and take small \(\epsilon\) in (55) only at the end (instead of first expanding \(S(s)\) in \(\epsilon\) and then trying to derive moments). The Lanczos coefficients are \[a_{n} = \frac{2\pi i(B+1)}{B-1}(n+h),\] \[b_{n} = \frac{2\pi\sqrt{B}}{\sqrt{-(B-1)^{2}}}\sqrt{n(n+2h-1)}, \tag{58}\] where the factor of \(i\) and the signs are chosen such that they are real for our physical insertion points (55). Finally, the spread complexity for finite \(\epsilon\) can be written compactly for arbitrary \(w(u)\) as \[{\cal C}(s)=\frac{2h}{\sin^{2}\left(\frac{i(w(u_{1})-w(u_{2}))}{2}\right)}\sinh^{2}(\pi s). \tag{59}\] Comparing with (44), we see that the \(\sinh^{2}(\pi s)\) dependence on modular time is universally the same, but the pre-factor in the present case involves the details of the insertion of the local operator \({\cal O}(l)\). Interestingly, the small \(\epsilon\) expansion leads to \[{\cal C}(s)=2h\frac{\beta_{0}(l)^{2}}{\epsilon^{2}}\sinh^{2}(\pi s)+O(\epsilon^{0}), \tag{60}\] which depends on \(\beta_{0}(l)\), while the sub-leading orders also contain the derivatives of \(\beta_{0}(l)\) (see (110)). We remark that the universal dependence on \(s\) is consistent with the analyticity properties of the two-point function (56) and, since the KMS inverse temperature is \(\beta_{KMS}=1\) for the modular evolution, it can be understood as \(\sinh^{2}(\pi s/\beta_{KMS})\). Moreover, at late modular time \(s\), we find \[{\cal C}(s)\sim e^{\lambda_{L}^{mod}(s-s_{*})}, \tag{61}\] where the modular Lyapunov exponent \(\lambda_{L}^{mod}\) and scrambling time \(s_{*}\) for the local operator are determined by the local temperature of the modular Hamiltonian respectively as \[\lambda_{L}^{mod}=2\pi,\qquad s_{*}=\frac{1}{\pi}\log\left(\sqrt{\frac{2}{h}}\frac{\epsilon}{\beta_{0}(l)}\right). \tag{62}\] It is interesting to point out that, since the spatial bipartition is symmetric w.r.t. the center of the interval, the coefficient of the modular spread complexity \(\beta_{0}(l)^{2}\) (or the scrambling time) is maximal for \(l\) in the middle of the entangling region \(l=(a+b)/2\) whereas it is suppressed (vanishes) close to the boundary points of the entangling interval (see e.g. (100)). These are our main results in this section. Similarly to the bound on the Lyapunov exponent from Krylov complexity [11; 54], we conjecture that our modular exponent provides a bound on the modular chaos (see e.g. [79]). For more intervals, general modular Hamiltonians become more complicated and non-universal, so the analysis is beyond the scope of this work. Nevertheless, for the free massless Dirac fermion in the vacuum, the modular Hamiltonian of disjoint intervals and the corresponding two-point modular correlators are known explicitly [75]. More precisely, we can consider the local fermion operator \(\Psi(l)\) with \(h=1/2\), in either of the two intervals \([a_{1},b_{1}]\cup[a_{2},b_{2}]\), evolved with the modular Hamiltonian of this union region. The two-point function of the modular flow of \(\Psi\) is known in this case [75; 86] (see also e.g.
[87]) and, somewhat surprisingly, it turns out that the corresponding return amplitude can again be written as (57) with \[w(u)=\log\left(-\frac{(u-a_{1})(u-a_{2})}{(u-b_{1})(u-b_{2})}\right). \tag{63}\] This is sufficient to determine the modular spread complexity that, in the leading \(\epsilon\), becomes \[\mathcal{C}(s)=\frac{\beta_{0}^{loc}(l)^{2}}{\epsilon^{2}}\sinh^{2}(\pi s), \qquad\beta_{0}^{loc}(u)=\frac{1}{w^{\prime}(u)}, \tag{64}\] in terms of (63). In fact the local part of this modular Hamiltonian (that also contains a non-local piece) of these two disjoint intervals can again be written in the form (46) with \(\beta_{0}^{loc}(u)\); hence it governs the scrambling time. ## V Discussion and Outlook In this work we have expanded Krylov complexity technology to the context of modular evolution. In particular, we have studied the relations between the entanglement spectrum, the Lanczos spectrum, and the notions of Krylov and spread complexity in various concrete examples. On one hand, this construction transparently shows why "entanglement is not enough" [9]. In fact, from the complexity perspective of this story, entanglement entropy is just the first Lanczos coefficient, namely \(a_{0}=S_{E}\). However, to understand the evolution of the wavefunction, and consequently of spread complexity, we need to know the full modular Lanczos spectrum. One can make an analogous statement about the TFD state, where the thermal entropy is related to the first Lanczos coefficient, but all the higher coefficients are also crucial to determine the evolution of complexity. On the other hand, the full Lanczos spectrum is obtained from the entanglement spectrum, providing concrete evidence that entanglement spectrum may be enough in certain scenarios. From a different standpoint, from the Lanczos spectrum one can determine the entanglement spectrum up to degeneracies. In fact one can obtain all the moments of the modular Hamiltonian, and therefore the modular flow and all Renyi entropies. This way, the analysis of the modular Lanczos coefficients opens up a new window on the study of entanglement measures. As we show, the entanglement entropy is the first Lanczos coefficient while the capacity of entanglement is the second. It would be interesting if also the higher \(n\) Lanczos coefficients contain similar information theoretic interpretations (perhaps along the lines of entanglement monotones [94]) and we leave this problem for the future investigation. Then, we found that the modular growth of operators exhibits universal modular Lyapunov exponent \(\lambda_{L}^{mod}=2\pi\), related to the \(\beta_{KMS}\) and analyticity of the return amplitude, as well as the scrambling time sensitive to the local temperature of the CFT modular Hamiltonians. Going beyond our universal examples is certainly very important. For example, numerics for modular Hamiltonians in lattice models [95; 96; 97], would clarify the aforementioned relation between entanglement, complexity and modular chaos. In addition, we remark that in our analysis of spread complexity, by definition, we work with the standard, natural inner product in Hilbert space. For Krylov complexity of operators [11], the freedom of choosing a different inner product (e.g. Wightman) provides a different modular Krylov complexity. A systematic study and better understanding of sensitivity of modular complexity and modular chaos to these choices is an interesting open problem. Another important direction concerns holography. 
In holographic theories we expect the relation [77] \[H_{mod}^{bdy}=\frac{\hat{A}_{ext}}{4G_{N}}+\hat{S}_{\text{Wald-like}}+H_{mod}^ {bulk}\,, \tag{65}\] for the boundary modular Hamiltonian in terms of bulk quantities (\(\hat{A}_{ext}\) computes the area of the Ryu-Takayanagi [98] extremal surface \(\mathcal{S}\) in the bulk and \(\hat{S}_{\text{Wald-like}}\) can be expressed by expectation values of local operators on \(\mathcal{S}\)[77]). In the semiclassical limit, we also have that in the entanglement wedge of region \(R_{b}\) the commutators \([H^{bdy}_{mod},\phi_{R}]\) and \([H^{bdy}_{bulk},\phi_{R}]\) are the same, for any local operator \(\phi_{R}\) in \(R_{b}\). Equivalently, in the low energy limit (in the code-subspace) we have \[\phi_{R}(s)=e^{is\mathcal{L}_{bdy}}\phi_{R}=e^{is\mathcal{L}_{bulk}}\phi_{R}, \tag{66}\] and the Krylov subspace with modular Liouvillians \(\mathcal{L}_{bdy}\) or \(\mathcal{L}_{bulk}\) will be the same. It is then interesting to study the complexity of bulk reconstruction joining the results developed in [78], the analysis of the Lanczos approach for generalized free fields described in [16], and the present techniques. This might naturally be extended to the complexity of extracting information from the black hole interior, following the islands construction [99]. Finally, it would also be interesting to extend our discussion to the analysis of the black hole micro-states put forward in [36; 100], which are insightful examples of the so-called PETP states [101]. We hope to report on it in the near future. ## Acknowledgements We are grateful to Vijay Balasubramanian, Jan Boruch, Anatoly Dymarsky, Nima Lashkari, Sinong Liu, Joan Simon, Qingyue Wu and Claire Zukowski for many conversations on Krylov and spread complexity, and useful comments on the present draft. We also wish to thank Roberto Auzzi, Shira Chapman, Aldo Cotrone, Dongsheng Ge, Francesco Gentile, Mihail Mintchev, Giuseppe Mussardo, Giuseppe Policastro, Domenico Seminara. PC and DP are supported by NAWA "Polish Returns 2019" PPN/PPO/2019/1/00010/U/0001 and NCN Sonata Bis 9 2019/34/E/ST2/00123 grants. The work of JM is supported by CONICET, Argentina. ET is grateful to the Henri Poincare Institute (Paris) and to the CTP at MIT (Boston) for hospitality and financial support during part of this work.
2303.11094
Does gravitational confinement sustain flat galactic rotation curves without dark matter?
The short answer is $\textit{probably no}$. Specifically, this paper considers a recent body of work which suggests that general relativity requires neither the support of dark matter halos, nor unconventional baryonic profiles, nor any infrared modification, to be consistent after all with the anomalously rapid orbits observed in many galactic discs. In particular, the gravitoelectric flux is alleged to collapse nonlinearly into regions of enhanced force, in an analogue of the colour-confining chromoelectric flux tube model which has yet to be captured by conventional post-Newtonian methods. However, we show that the scalar gravity model underpinning this proposal is wholly inconsistent with the nonlinear Einstein equations, which themselves appear to prohibit the linear confinement-type potentials which could indicate a disordered gravitational phase. Our findings challenge the fidelity of the previous Euclidean lattice analyses. We confirm by direct calculation using a number of perturbation schemes and gauges that the next-to-leading order gravitoelectric correction to the rotation curve of a reasonable baryonic profile would be imperceptible. The `gravitoelectric flux collapse' programme was also supported by using intragalactic lensing near a specific galactic baryon profile as a field strength heuristic. We recalculate this lensing effect, and conclude that it has been overstated by three orders of magnitude. As a by-product, our analysis suggests fresh approaches to (i) the fluid ball conjecture and (ii) gravitational energy localisation, both to be pursued in future work. In summary, whilst it may be interesting to consider the possibility of confinement-type effects in gravity, we may at least conclude here that confinement-type effects $\textit{cannot play any significant part}$ in explaining flat or rising galactic rotation curves without dark matter halos.
W. E. V. Barker, M. P. Hobson, A. N. Lasenby
2023-03-20T13:29:29Z
http://arxiv.org/abs/2303.11094v1
# Does gravitational confinement sustain flat galactic rotation curves without dark matter? ###### Abstract The short answer is _probably no_. Specifically, this paper considers a recent body of work which suggests that general relativity requires neither the support of dark matter halos, nor unconventional baryonic profiles, nor any infrared modification, to be consistent after all with the anomalously rapid orbits observed in many galactic discs. In particular, the gravitoelectric flux is alleged to collapse nonlinearly into regions of enhanced force, in an analogue of the colour-confining chromoelectric flux tube model which has yet to be captured by conventional post-Newtonian methods. However, we show that the scalar gravity model underpinning this proposal is wholly inconsistent with the nonlinear Einstein equations, which themselves appear to prohibit the linear confinement-type potentials which could indicate a disordered gravitational phase. Our findings challenge the fidelity of the previous Euclidean lattice analyses: we propose that the question of confinement demands a gauge-invariant lattice implementation. We confirm by direct calculation using a number of perturbation schemes and gauges that the next-to-leading order gravitoelectric correction to the rotation curve of a reasonable baryonic profile would, in fact, be imperceptible. The 'gravitoelectric flux collapse' programme was also supported by using intragalactic lensing near a specific galactic baryon profile as a field strength heuristic. We recalculate this lensing effect, and conclude that it has been overstated by three orders of magnitude. As a by-product, our analysis suggests fresh approaches to (i) the fluid ball conjecture and (ii) gravitational energy localisation, both to be pursued in future work. _In summary, whilst it may be interesting to consider the possibility of confinement-type effects in gravity, such an investigation should be done thoroughly, without relying on heuristics: that task is neither attempted in this work nor accomplished by the key works referenced. Pending such analysis, we may at least conclude here that confinement-type effects cannot play any significant part in explaining flat or rising galactic rotation curves without paradigmatic dark matter halos._ ## I Introduction A substantial body of work has recently accumulated [1; 2; 3; 4; 5; 6; 7; 8; 9], asserting that overlooked effects - otherwise native to quantum chromodynamics - are nonlinearly implied by general relativity, and are in fact manifest among the observed astrophysical and cosmological phenomena. It is proposed that non-Abelian graviton-graviton interactions at next-to-leading order can qualitatively reshape (yet still be adequately described by) the weak-field regime. The effect is best seen in the dominant _gravitoelectric_ portion of the weak field, i.e. the familiar Newtonian part sourced by static mass-energy in a manner analogous to electrical charge in Maxwell's theory. If the gravitoelectric flux lines collapse under their own 'weight', regions of appreciably rarefied and enhanced force will appear. In what follows, we will use the broad term _'gravitoelectric flux collapse'_ (GEFC) to refer to this effect, and to flag the associated literature (e.g. 'GEFC/[1], etc.)1. 
Footnote 1: We stress that _GEFC_ is merely a convenient label, and not intended to fairly capture all the effects proposed in GEFC/[1; 2; 3; 4; 5; 6; 7; 8; 9]. Does gravity exist in a state of magnetic disorder? In contrast to the asymptotic freedom of QCD, Coulombic (i.e. _Newtonian_) gravity appears to dominate at _long_ distances. As the length-scale associated with the curvature decreases, nonlinearities certainly emerge, as evidenced by the precession observed in Mercury's orbit [20]. As with QCD, gravitational nonlinearity is evident in the Lagrangian (from the dependence of \(-\frac{1}{2\kappa}R\) on the metric inverse), but we are puzzled that the quadratic Maxwell structure seen above is _missing_.
While GR might not therefore qualify as a Yang-Mills theory on these grounds, it is undeniably a gauge theory, and we observe moreover that the various natural gauge-theoretic reformulations of GR [21; 22] (and gravitational theory as a whole [22; 23]) all point to a local symmetry of the Poincaré group \(\mathbb{R}^{1,3}\rtimes\text{SO}^{+}(1,3)\), which is non-Abelian2. Footnote 2: Although again, since \(\mathbb{R}^{1,3}\rtimes\text{SO}^{+}(1,3)\) is not compact, it may seem less suitable than \(\text{SU}(N)\) as a basis for Yang–Mills theory. So far, so good, in likening GR to a theory of 'classical chromodynamics'. The classical electrodynamic comparison, swapping a _chromoelectric_ for a _gravitoelectric_ field, seems even more encouraging. Gravitoelectromagnetism (GEM) constitutes a well-established, Lorentz-invariant and classical correspondence between linear GR and electromagnetism, in which mass-energy is interpreted as the electrical charge [24; 25]. GEM thus offers a convenient, linear foundation -- perhaps less naturally present in QCD [26] -- upon which the inherent nonlinearities of the theory can be turned up and examined. But do gravitons carry the GEM mass-energy charge, as gluons carry colour? They _do_, but only at the expense of general covariance. In developing a GEFC picture of 'heavy flux', it seems hard to avoid an appeal to _gravitational energy_, for which there is no preferred, generally-applicable localisation scheme [24; 27]. One is in fact spoiled for a choice of gravitational stress-energy pseudotensor with which to perform the self-coupling [28; 29]. It is not obvious that this will be fatal to a speculative GEFC programme. For example, flux collapse could be interpreted covariantly (e.g. through the curvature as a natural field strength), while some 'compensating' gauge-dependence is implicated in the details of how the chosen pseudotensor acts as a source. So, a classical confinement model for GR does not seem out of the question. What about quantum effects? QCD is _strictly_ a quantum theory, but a complete quantum theory of gravity is currently missing [30]. We are puzzled again, but it is helpful to remember that quantum field theory in curved spacetime is nonetheless very well understood. Experience of particle production in curved spacetime then suggests GEFC will not be so susceptible to string-breaking as QCD [31; 32; 33]. Although it is not limited by the \(\sim\) MeV quark mass, suppression by the Planck density means that conditions for appreciable gravity-induced particle production are met only in the most extreme scenarios, such as inefficient inflationary reheating [34; 35], and the Hawking effect near small (i.e. hot) black holes [36]. If the linear regime is less fragile, we might expect long-range, linear gravitational potentials to be commonplace in nature. If that were true, GEFC ought to apply quite intuitively over the galactic plane: radial field lines (as sourced by a typical galactic baryon profile) should be drawn downwards to be embedded in the disc, where their bunching would enhance the centripetal force on the stars at the periphery and hence -- in a grand astrophysical analogue of the meson Regge trajectories -- _accelerate galactic orbits_. By now this begins to sound potentially exciting.
As established by the seminal 21 cm observations of Rubin et al [37; 38; 39] and, later, others [40; 41], we know that most spiral galaxies exhibit flat or rising rotation curves which are inconsistent with the (traditionally modelled [42; 43]) weak gravitation of their optically determined baryon content. A missing or _dark_ matter component which might account for this was earlier proposed by Zwicky [44; 45], based on the motions of seven galaxies in the Coma cluster3. The current paradigm of course stipulates that most late-type baryonic discs are sitting at the centre of a heavier, more extensive dark matter halo [47; 48; 49], whose presence may also be inferred by lensing [50]. Footnote 3: See English and Spanish translations in [46]. Current GEFC models promise an alternative to this paradigm. By accounting for rotation curves within the strict context of general relativity (GR), GEFC/[4; 5; 6] is supposed to be supported by lensing calculations GEFC/[7], an observed correlation between missing matter effects and ellipticity GEFC/[2], and an extension of its principle to galaxy clusters (specifically the Bullet cluster [51]) GEFC/[1; 5; 7], in promising to _eliminate_ the original need for the dark matter, whose particle composition continues to remain so elusive [52]. The role of (_cold_) dark matter (CDM) on cosmological scales is also central to the prevailing \(\Lambda\)CDM cosmic concordance model [53; 54; 55; 56] -- but here too, GEFC is put forward to restore consistency. The enhanced force effect is apparently shown in GEFC/[8] to be adequate in driving structure formation without the need for CDM. The rarefied force effect is moreover suggested in GEFC/[5] as an alternative to dark _energy_ (viz the cosmological constant \(\Lambda\)), as suggested by the relevant SNIa observations [57; 58; 59]. Most recently GEFC/[9] concludes that the effects are also consistent with the observed power spectrum of temperature anisotropies in the cosmic microwave background (CMB), without the need for any dark ingredients [54; 55; 56]. Notwithstanding the theoretical appeal as we have motivated it above, the _extent_ to which GEFC/[1; 2; 3; 4; 5; 6; 7; 8; 9] credits confinement-like effects with the observed phenomena would seem to warrant a level of skepticism. In particular, it is not clear how such significant behaviours can have been consistently missed in the long history of numerical relativity [60; 61], or in the well-developed post-Newtonian formalism [42; 43]. If we are not too concerned with string-breaking effects, it seems prudent to pay closest attention to the _onset_ of confinement, and ask how this can come about in the weak-field environment of the galactic disc. In this paper, we will attempt to refute the main structural elements of the proposed GEFC programme, as it is currently presented. Much of our commentary follows from a close reading (and sincere attempts at reproduction) of GEFC[4] and GEFC/[7], but it will become clear that our findings also disallow the better part of the techniques which support the broader literature GEFC/[1; 3; 5; 6; 8; 9]4. Footnote 4: We note a certain parallel with a previous attempt to explain rotation curves using purely GR effects, by Cooperstock and Tieu [62] – that model was cogently shown to be non-viable by Korzyński [63]. In particular we note that GEFC has, hitherto and from its inception (see GEFC/[1; 3; 4; 5; 6; 8]), used a _scalar_ model of gravitation as a proxy for GR. 
This model is described in greatest detail in GEFC/[4], from which we understand the scalar to be a nonlinear extension of the gravitoelectrostatic potential. Now, post-Newtonian scalar models of gravity have been proposed in the past -- most notably by Nordström and later Einstein -- but they are not faithful to the phenomena [64]. We will show that the GEFC scalar is no different in this regard: it does not descend from GR by any principled means, and has no clear redeeming feature beyond the attractive force law expected of an even-spin representation of the Lorentz group. In GEFC/[4] the scalar model is implemented on a Euclidean lattice to produce remarkable -- ostensibly gravitational -- effects, such as linear potentials between point masses. These results are recapitulated in GEFC/[3; 6]. However, notwithstanding the non-relation between the GEFC scalar and GR, we are not convinced that the specific lattice techniques used in that work have a physical grounding. We also address the outstanding phenomenological claims of GEFC, insofar as they pertain to galactic rotation curves in GEFC/[4; 5; 6; 7]. We attempt to 'steel-man'5 the GEFC hypothesis by discarding the faulty scalar model, and directly probing nonlinear GR for the claimed phenomena in the presence of reasonable, lenticular baryon profiles. We are disappointed to find _no such phenomena_ at next-to-leading order, though we consider a range of gauges and perturbation schemes. In GEFC/[7] the effect of graviton self-interaction on rotation curves is actually modelled by considering the gravitoelectric field lines as the trajectories of massless gravitons, which are then gravitationally lensed by the galactic density distribution in the same way as photon trajectories; that the paths of electric field lines in GR follow precisely those of null geodesics has been discussed previously by Padmanabhan [65]. The modified gravitoelectric field at any point, and hence the force on a test particle, is then determined by calculating the flux of the lensed field lines through a small surface at that point. Based on this interesting method, however, our own calculations will indicate lensing effects _three orders of magnitude smaller_ than those claimed in GEFC/[7]. Footnote 5: We use the term 'steel-man' in contrast to the more commonplace 'strawman', to mean that the strongest or most promising interpretation of the GEFC proposal should be considered, where possible. The remainder of this paper is organised as follows. We conclude this section by introducing some conventions in Sections I.1 and I.2. In Section II we consider the physical meaning of the scalar gravity model which underpins GEFC/[1; 3; 4; 5; 6; 8], and which is specifically studied using lattice methods in GEFC/[4]. We speculate as to how substantial differences may arise between the phenomenology of this model, and of GR. We also try, using standard parameterised post-Newtonian (PPN) methods, to account for how the correct Einstein-Infeld-Hoffmann potential comes to be produced by this model. Our attempt to understand the meaning of the lattice method itself is confined to Appendix A. In Section III we attempt to 'steel-man' the GEFC effect by studying next-to-leading order GR. We tackle the nonlinear gravitoelectric effect directly, by constructing the leading nonlinear correction to the gravitoelectromagnetic (GEM) formalism. This is ostensibly equivalent to the level of approximation used in GEFC/[4].
We consider the consistency of our GEM formulation with the PPN formalism in Appendix D. Finally in Section IV we recalculate the lensing effects presented in GEFC/[7]. We consider both the profile of GEFC/[7] and the independently motivated Miyamoto-Nagai profile of [66]. Conclusions follow in Section V. ### Setup and conventions We will use the 'West coast' signature \((+,-,-,-)\). We sometimes use an overbar for background quantities, and the exact (dimensionless) metric perturbation will be \[g_{\mu\nu}\equiv\bar{g}_{\mu\nu}+h_{\mu\nu},\quad g^{\mu\nu}\equiv\bar{g}^{ \mu\nu}-h^{\mu\nu}+\mathcal{O}(h^{2}). \tag{1}\] Greek indices on perturbations refer strictly to the background metric (and to the metric in any non-perturbative context): we will prefer a flat background \(\bar{g}_{\mu\nu}=\eta_{\mu\nu}\) for most purposes, and introduce a timelike vector field \(\bar{u}^{\mu}\bar{u}_{\mu}\equiv 1\) to define'static' on the background. Indices from the middle of the alphabet, e.g. \(\mu\), \(\nu\), run from 0 to 3 and from the beginning, e.g. \(\alpha\), \(\beta\), run from 1 to 3 over the spatial coordinates \(x^{\alpha}\). The time coordinate is \(t\equiv x^{0}\). For a flat background our coordinates are usually Cartesian Lorentz coordinates, and we also use vector notation such as \([\mathbf{x}]\equiv x^{a}\) so that \(|\mathbf{x}|^{2}\equiv-\eta_{a\beta}x^{\alpha}x^{\beta}\), and \(\dot{\mathbf{x}}\equiv\partial_{x}\mathbf{x}\), but \([\mathbf{\nabla}]\equiv\partial_{a}\). We also use standard cylindrical coordinates with radius \(R\) (not to be confused with the Ricci scalar), azimuthal angle \(\varphi\), and \(z\) anchored to \(z\equiv x^{3}\), and spherical polar coordinates sharing the azimuth \(\varphi\) but with polar angle \(\vartheta\) and radius \(r\). Occasionally, we also suppress indices on four-vectors, e.g. \([x]\equiv x^{\mu}\). The total Einstein-Hilbert action, with matter added, is taken to be \[S_{T}\equiv\int\mathrm{d}^{4}x\sqrt{-g}\left[-\frac{1}{2\kappa}R+L_{M}\right], \tag{2}\] with \(g\equiv\det g_{\mu\nu}\). We can divide up the total action and Lagrangian into (kinetic) gravitational and matter parts \(S_{T}\equiv S_{G}+S_{M}\) and \(\mathcal{L}_{T}\equiv\mathcal{L}_{G}+\mathcal{L}_{M}\), where \[\mathcal{L}_{G}\equiv-\frac{1}{2\kappa}\sqrt{-g}R,\quad\mathcal{L}_{M}\equiv \sqrt{-g}L_{M}, \tag{3}\] with the Ricci tensor \(R^{\mu}_{\nu}\equiv{R^{\mu\sigma}}_{\nu\sigma}\) and scalar \(R\equiv R^{\mu}_{\mu}\), derived from the Riemann tensor and Christoffel symbols according to \[R_{\rho\sigma\mu}^{\phantom{\mu\nu}\nu}\equiv 2\big{(}\partial_{[ \sigma}\Gamma^{\nu}_{\rho]\mu}+\Gamma^{\lambda}_{[\rho|\mu}\Gamma^{\nu}_{[\rho |\lambda]}\big{)}, \tag{4a}\] \[\Gamma^{\mu}_{\nu\sigma}\equiv\frac{1}{2}g^{\mu\lambda}\big{(} \partial_{\nu}g_{\sigma\lambda}+\partial_{\sigma}g_{\nu\lambda}-\partial_{ \lambda}g_{\sigma\nu}\big{)}. \tag{4b}\] The Einstein field equations (EFEs) which follow from a variation of (2) with respect to \(g^{\mu\nu}\) equate the Einstein \(G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\,R\) and energy-momentum tensors \[G_{\mu\nu}=\kappa T_{\mu\nu}\,,\quad T_{\mu\nu}\equiv\frac{2}{\sqrt{-g}}\frac {\delta S_{M}}{\delta g^{\mu\nu}}. \tag{5}\] The Newton and Einstein constants and the Planck mass are related by \(8\pi G\equiv\kappa\equiv 1/{M_{\rm Pl}}^{2}\), and we naturally take the fundamental speed \(c=1\). 
The nature of the sources considered in GEFC/[1; 3; 4; 5; 6; 8] suggests that we can confine ourselves to perfect fluid lagrangia for which the stress-energy tensor takes the form \[T^{\mu\nu}=(\rho+P)\,u^{\mu}u^{\nu}-Pg^{\mu\nu}. \tag{6}\] In (6) we define the rest-mass energy density \(\rho\) of the fluid, including all internal chemical, kinetic and thermal contributions, \(P\) is the rest pressure, and \(u^{\mu}\) is the four-velocity \(u^{\mu}\equiv\mathrm{d}x^{\mu}/\mathrm{d}s\) of the fluid. Note that we are encouraged in GEFC/[4] and GEFC/[7] to assume \(P=0\) to all perturbative orders. A particularly convenient way to label the perturbative gravitational effects of such a fluid is via the velocity, assuming that velocity is non-relativistic, the source has suitably virialised under its own gravitation and that a variety of other reasonable statistical conditions are met [42]. If the coordinate velocity is denoted \(v^{\mu}\equiv u^{a}/u^{0}\) and we assume \(\gamma_{\nu}\equiv\big{(}1-|\mathbf{v}|^{2}\big{)}^{-1/2}\approx 1\), we will accordingly refer to \(\mathcal{O}\left(|\mathbf{v}|^{2n}\right)\) as \(\mathcal{O}\left(\epsilon^{n}\right)\), in keeping with the conventions of the PPN formalism [43]. Of course, the coordinate velocity in this case need not be that expressed in (6), if we relax \(P=0\). Indeed, careful arguments have shown [42] that \(\mathcal{O}\left(P/\rho\right)\) is generally _synonymous_ with \(\mathcal{O}\left(\epsilon\right)\), and we will briefly recall in Sections I.2 and II.4 that the same is true of \(\mathcal{O}\left(h\right)\). An additional assumption suggested in GEFC/[1; 3; 4; 5; 6; 8] is that of _staticity_. Any number of flagrant inconsistencies are readily seen to arise when we try to combine \(\mathbf{v}=0\) with \(P=0\) at nonlinear orders. On these grounds such assumptions ought to be disqualifying, and a reasonable approach to verifying the claims of GEFC/[1; 3; 4; 5; 6; 8] might be to include pressure, or rotation, or both. In fact, we do not believe such onerous extensions are necessary: we will show over Sections II to IV that GEFC suffers more fundamental problems than susceptibility to the Jeans instability. In demonstrating this, we observe that (i) the'static dust' picture will cause contradictions to arise at various points in the analysis, which must be overlooked if any comparison with GEFC/[1; 3; 4; 5; 6; 8] is to be made, and (ii) the \(\mathcal{O}\left(\epsilon^{n}\right)\) PPN formalism may be formally retained in what follows, even though there are (somehow) no velocities. ### Linearised general relativity Two different types of coordinate transformation connect quasi-Minkowskian systems to each other: global Lorentz transformations \(x^{\prime\mu}=\Lambda^{\mu}_{\nu}x^{\nu}\) and infinitesimal general coordinate transformations \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}(x)\), under which \(h^{\prime}_{\mu\nu}=\Lambda_{\mu}\ell\Lambda_{\nu}{}^{\sigma}h_{\rho\sigma}\) and \(h^{\prime}_{\mu\nu}=h_{\mu\nu}-\partial_{\mu}\xi_{\nu}-\partial_{\nu}\xi_{ \mu}+\mathcal{O}\left(\left(\partial\xi\right)^{2}\right)\), respectively. 
This suggests that instead of considering a slightly curved spacetime to represent the general-relativistic weak field, one can reinterpret \(h_{\mu\nu}\) simply as a special-relativistic symmetric rank-2 tensor field that represents a weak gravitational field on a Minkowski background spacetime and possesses the gauge freedom \[h_{\mu\nu}\to h_{\mu\nu}-\partial_{\mu}\xi_{\nu}-\partial_{\nu}\xi_{\mu}+ \mathcal{O}\left(\left(\partial\xi\right)^{2}\right). \tag{7}\] Expanding the Einstein equations (5) to first-order in \(h_{\mu\nu}\) to yield \(G^{(1)}_{\mu\nu}\equiv R^{(1)}_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}R^{(1)}=\kappa T _{\mu\nu}+\mathcal{O}\left(\epsilon^{2}\right)\), one obtains the linearised field equations \[\begin{split} G^{(1)}_{\mu\nu}&\equiv-\frac{1}{2} \left(\mathbf{\square}\,\hbar_{\mu\nu}+\eta_{\mu\nu}\partial_{\rho}\partial_{ \sigma}h^{\rho\sigma}-\partial_{\nu}\partial_{\rho}h^{\rho}_{\mu}-\partial_{ \mu}\partial_{\rho}h^{\rho}_{\nu}\right)\\ &=\kappa T_{\mu\nu}+\mathcal{O}\left(\epsilon^{2}\right),\end{split} \tag{8}\] where \(\hbar_{\mu\nu}\equiv h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}h\) is the trace reverse6 of \(h_{\mu\nu}\), with \(h\equiv\eta_{\mu\nu}h^{\mu\nu}\), and \(\mathbf{\square}\equiv\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\) is the d'Alembertian operator. As expected, the LHS of (8) is invariant under the gauge transformation (7). By choosing \(\xi^{\mu}(x)\) to satisfy \(\mathbf{\square}\,\xi^{\mu}=\partial_{\rho}\hbar^{\mu\rho}\), one may impose the Lorenz gauge condition \(\partial_{\rho}\bar{h}^{\mu\rho}=0\); note that this gauge condition is preserved by any further gauge transformation of the form (7) provided that the functions \(\xi^{\mu}\) satisfy \(\mathbf{\square}\,\xi^{\mu}=0\). In the Lorenz gauge, the linearised field equations (8) reduce to the simple form Footnote 6: The trace reverse should not be confused with background quantities: we will apply it only to symbols denoting perturbative quantities. \[\mathbf{\square}\,\mathbf{\bar{h}}^{\mu\nu}=-2\kappa T^{\mu\nu}+\mathcal{O}\left( \epsilon^{2}\right). \tag{9}\] The general solution to the inhomogeneous wave equation (9) is most easily obtained by using a Green's function approach, in a similar manner to that employed for solving the analogous problem in electromagnetism. Denoting spatial 3-vectors by \(\mathbf{x}\), this yields \[\begin{split}\bar{h}^{\mu\nu}(\mathbf{x})&=-4G\int\frac{T^ {\mu\nu}(t-|\mathbf{x}-\mathbf{x}^{\prime}|,\mathbf{x}^{\prime})}{|\mathbf{x}-\mathbf{x}^{\prime}| }\,\mathrm{d}^{3}x^{\prime}\\ &\quad+\mathcal{O}\left(\epsilon^{2}\right).\end{split} \tag{10}\] For a stationary source, \(\partial_{0}T^{\mu\nu}=0\), such that the time dependence vanishes and retardation is irrelevant, so (10) reduces to \[\bar{h}^{\mu\nu}(\mathbf{x})=-4G\int\frac{T^{\mu\nu}(\mathbf{x}^{\prime})}{|\mathbf{x}-\mathbf{x} ^{\prime}|}\,\mathrm{d}^{3}x^{\prime}+\mathcal{O}\left(\epsilon^{2}\right). \tag{11}\] Following on from our discussion in Section I.1, for a _stationary_, non-relativistic source with \(P=0\), we approximate the energy-momentum tensor as \[\begin{split} T^{00}=\rho+\mathcal{O}\left(\epsilon^{2}\right), \quad T^{a0}=\rho v^{\alpha}+\mathcal{O}\left(\epsilon^{5/2}\right),\\ T^{a\beta}=\mathcal{O}\left(\epsilon^{2}\right).\end{split} \tag{12}\] Indeed, this is also consistent with the Lorenz gauge condition \(\partial_{\rho}\bar{h}^{\mu\rho}=0\), which implies that \(\partial_{\alpha}\bar{h}^{\beta\alpha}=-\partial_{0}\bar{h}^{\beta\beta}\), which vanishes for stationary systems. 
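To illustrate (11) concretely (a minimal numerical sketch with \(G=1\); the uniform sphere and the grid resolution are purely illustrative, not a galactic profile), one may evaluate the integral for a simple stationary source and compare with the expected exterior behaviour \(-4GM/|\mathbf{x}|\):

```python
import numpy as np

# uniform-density sphere of mass M and radius R_s as the stationary source, G = 1
M, R_s = 1.0, 1.0
rho = M/(4*np.pi*R_s**3/3)

# crude 3d quadrature of  hbar^{00}(x) = -4 G int rho(x')/|x - x'| d^3x'   (eq. 11, T^{00} = rho)
ng = 60
grid = np.linspace(-R_s, R_s, ng)
dV = (grid[1] - grid[0])**3
X, Y, Z = np.meshgrid(grid, grid, grid, indexing='ij')
inside = X**2 + Y**2 + Z**2 <= R_s**2

x_obs = np.array([3.0, 0.0, 0.0])                       # exterior field point
r = np.sqrt((X - x_obs[0])**2 + (Y - x_obs[1])**2 + (Z - x_obs[2])**2)
hbar00 = -4*rho*np.sum(dV/r[inside])

print(hbar00, -4*M/np.linalg.norm(x_obs))               # agree to ~per-cent grid accuracy
```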
In the linearised theory, there is an inconsistency between the field equations (8) and the equations of motion for matter in a gravitational field. From (8), one quickly finds that \(\partial_{\mu}T^{\mu\nu}=0\), which should be contrasted with the requirement from the full GR field equations that \(\mathbf{\nabla}_{\mu}T^{\mu\nu}=0\). The latter requirement leads directly to the geodesic equation of motion for the worldline \(x^{\mu}(s)\) of a test particle, namely \[\frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}s^{2}}+\Gamma^{\mu}_{\nu\sigma}\frac{ \mathrm{d}x^{\nu}}{\mathrm{d}s}\frac{\mathrm{d}x^{\sigma}}{\mathrm{d}s}=0, \tag{13}\] whereas the former requirement leads to the equation of motion \(\mathrm{d}^{2}x^{\mu}/\mathrm{d}s^{2}=0\), which means that the gravitational field has _no effect_ on the motion of the particle and so clearly contradicts the geodesic postulate. Despite this inconsistency, the effect of weak gravitational fields on test particles may still be computed by inserting the linearised connection coefficients into the geodesic equations (13) - we will make use of this approach in Section III.1. ## II Analysis of the scalar model In this opening section, we will attempt to show that the scalar model of gravity, which has been used as a basis for many GEFC calculations (see GEFC/[1; 3; 4; 5; 6]), is not descriptive of GR within the regime of its application. ### The matter coupling We begin our analysis by considering the incorporation of matter sources into the gravity model. A starting point of the GEFC approach is a perturbative expansion of (2) in the field \(\varphi_{\mu\nu}\equiv M_{\mathrm{Pl}}h_{\mu\nu}\), proposed in GEFC/[4] to take the specific form \[\begin{split}\mathcal{L}_{T}&=[\partial\varphi \partial\varphi]+\frac{\sqrt{2}}{M_{\mathrm{Pl}}}\left[\varphi\partial\varphi \partial\varphi\right]+\frac{2}{M_{\mathrm{Pl}}^{2}}\left[\varphi^{2} \partial\varphi\partial\varphi\right]\\ &-\frac{\sqrt{2}}{M_{\mathrm{Pl}}}\varphi_{\mu\nu}\bar{T}^{\mu\nu} -\frac{1}{M_{\mathrm{Pl}}^{2}}\varphi_{\mu\nu}\varphi_{\lambda\sigma}\bar{T}^ {\mu\nu}\eta^{\sigma\lambda}+\dots\,,\end{split} \tag{14}\] where the notation \([\cdot]\) with indices suppressed denotes particular contractions following from the Einstein-Hilbert term, and the customary perturbation is in powers of \(\varphi_{\mu\nu}/M_{\mathrm{Pl}}\). More or less equivalent series to (14) are proposed in GEFC/[1; 3; 5; 6; 8]. The lowest order terms in (14) are of course the massless Fierz-Pauli theory, coupled to a matter current. At higher perturbative orders however, we are tempted to move from the outset to an adjacent theory with a modified matter sector. Our reasons for this are illustrated by a formal expression7 which we can construct for the perturbative expansion of (2) Footnote 7: see e.g. [67] for a similar approach \[\begin{split}& S_{M}\equiv-\sum_{n=0}^{\infty}\frac{1}{n!}\left[ \int\mathrm{d}^{4}x\frac{\varphi^{\mu\nu}}{M_{\mathrm{Pl}}}\frac{\delta}{ \delta\bar{g}^{\mu\nu}}\right]^{n}\mathcal{S}_{M}=\mathcal{S}_{M}\\ &-\sum_{n=1}^{\infty}\frac{1}{n!}\left[\int\mathrm{d}^{4}x\frac{ \varphi^{\mu\nu}}{M_{\mathrm{Pl}}}\frac{\delta}{\delta\bar{g}^{\mu\nu}} \right]^{n-1}\int\mathrm{d}^{4}x\frac{\sqrt{-\delta}\varphi^{\rho\sigma}}{2M _{\mathrm{Pl}}}\tilde{T}_{\rho\sigma}\,.\end{split} \tag{15}\] Even assuming (as sometimes applies) that \(L_{M}\) contains no derivatives of \(g_{\mu\nu}\), it would then seem from Eq. 
(15) that a perturbative expansion roughly of the form (14) would require the curious condition on (or off) the background \[\left[\frac{\partial}{\partial g^{\mu\nu}}\right]^{n}\left(\sqrt{-g}T_{\rho\sigma}\right)={}^{(n)}X_{\dots\,,\rho\sigma}{}^{\kappa\lambda}T_{\kappa\lambda}\,, \tag{16}\] where \({}^{(n)}X_{\dots\,,\rho\sigma}{}^{\kappa\lambda}\) is some suitably symmetrized and indexed density concomitant of (the undifferentiated) \(g_{\mu\nu}\). Possibly (16) can be satisfied by the cosmological constant, though without any other matter present this would appear restrictive. The other option, that \(T^{\mu\nu}\) is independent of \(g_{\mu\nu}\), is also restrictive; it does not apply even for a spin-0 boson. Moreover this option contradicts the expectation that \(\mathcal{L}_{M}\) be a covariant density: the only ansatz in that case is \(\mathcal{L}_{M}=c_{1}\sqrt{-g}T+c_{2}\det T_{\mu\nu}\,,\) where \(T\equiv T_{\mu}^{\mu}\), and this ansatz is not consistent with the EFEs in (5). Accordingly, and without detailed knowledge of the matter sector, we are not wholly confident that the matter coupling in GEFC/[1; 3; 4; 5; 6; 8] is safe8. We will return to this issue in Section II.4, where we do have such detailed knowledge, but select instead a conventional relativistic point particle action under the standard PPN perturbation scheme. Footnote 8: It is also possible that the stress energy tensor is being re-introduced to the Lagrangian via a solution to the lowest-order field equations: while we are not able to confirm this in the case of (14), we use a similar approach in Section III.3. ### The non-relativistic scalar For the moment the representation of the matter sector does not impede our discussion, since (15) can be used to obtain all the corrections in (14) to the gravitational sector, and it is this sector which is principally targeted in GEFC/[4]. There and elsewhere in GEFC/[1; 3; 6; 8] it is argued that for static spacetimes, the gravitational field may be represented by the _single_ degree of freedom \[\varphi^{\mu\nu}=2\left(2\bar{u}^{\mu}\bar{u}^{\nu}-\bar{g}^{\mu\nu}\right)\varphi, \tag{17}\] which in the static, perturbative context is the Newtonian scalar potential, obeying \(2M_{\rm Pl}\mathbf{\nabla}^{2}\varphi=\rho+\mathcal{O}\left(\epsilon^{2}\right)\). However it is made clear in GEFC[4] that \(\varphi\) also dominates in some non-perturbative static regime of physical relevance, and so we are effectively being invited to promote (17) to an isotropic Cartesian line element \[\mathrm{d}s^{2}=\left(1+\frac{2\varphi}{M_{\rm Pl}}\right)\mathrm{d}t^{2}-\left(1-\frac{2\varphi}{M_{\rm Pl}}\right)\mathrm{d}\mathbf{x}^{2}, \tag{18}\] which is signature-preserving within the range \(\left|\varphi/M_{\rm Pl}\right|<1/2\). The ansatz (17) is apparently _substituted directly into_ (14), and the static assumption \(\dot{\varphi}\equiv\partial_{t}\varphi=0\) imposed to obtain a scalar Euclidean lattice action up to the required perturbative order9 - which for much of GEFC[4] entails the equivalent of a \(\mathcal{O}\left(\epsilon^{2}\right)\) correction, or \(\mathcal{O}\left(\varphi^{2}/{M_{\rm Pl}}^{2}\right)\). But if the relevant solutions are indeed non-perturbative, why truncate (14) at all? 
We might not be sure, for the reasons discussed in Section II.1, how (14) relates to \(\mathcal{L}_{M}\), but we can skip ahead on the gravitational side of the action by substituting the line element (18) _directly into_ the fully nonlinear \(\mathcal{L}_{G}\) as it is written in (3) to give a novel action \(\mathcal{S}_{G}\equiv\int\mathrm{d}t\mathrm{d}^{3}x\tilde{\mathcal{L}}_{G}\). Doing so, we find that the lattice calculations in GEFC[4] are really attempting to probe (under the assumption of staticity) the following non-relativistic theory Footnote 9: Our understanding of the approach is also based on the relevant section in [1]. \[\tilde{\mathcal{L}}_{G}\equiv-\frac{3\left(1-\frac{2\varphi}{M_{\rm Pl}}\right)\dot{\varphi}^{2}+\left(1-\frac{6\varphi}{M_{\rm Pl}}\right)\left|\nabla\varphi\right|^{2}}{\left(1-\frac{2\varphi}{M_{\rm Pl}}\right)\sqrt{1-\frac{4\varphi^{2}}{{M_{\rm Pl}}^{2}}}}. \tag{19}\] We will discover in Section II.3 that the theory (19) is essentially arbitrary, and not concretely related to nonlinear gravitostatics. Certainly, it is inconsistent even at lowest order with the linearised EFEs in (5). But does it even impart stability to the static surfaces identified by the lattice? To answer this we will very briefly consider the quantum mechanical implications of (19). Our approach in doing so, whilst providing convenient visualisation of the problem in Fig. 1, should not be taken too seriously in the context of the strictly classical GEFC proposal. Bearing this caveat in mind, we imagine that a high-powered Euclidean lattice calculation converges on a static background \(\bar{\varphi}\) (not to be confused in this context with \(\bar{g}_{\mu\nu}=\eta_{\mu\nu}\)), to which the various interesting solutions in GEFC[4] are presumably approximations. Now (19) evidently imposes a non-relativistic theory of fluctuations around \(\bar{\varphi}\), whose propagator in the representation of momentum \(p^{\mu}\), with energy \(E\equiv p^{0}\), reads10 Footnote 10: To avoid discussion of the second quantization of a ghost, we define the propagator here as being just the Green's function of the equation of motion, or the inverse of the perturbative Lagrangian. The customary shift introduced by the \(+i\epsilon\) term should not be necessary in this case, since the poles are rotated. Independent of the GEFC analysis, it is of interest to plot Eq. (20) in Fig. 1, since the ghost propagator is not frequently illustrated in the literature. \[D(x_{1}-x_{2})_{F}=\lim_{\epsilon\to 0}\int\frac{\mathrm{d}E \mathrm{d}^{3}p}{(2\pi)^{4}}ie^{-ip_{\mu}\left(x_{1}^{\mu}-x_{2}^{\mu}\right)} \sqrt{1-\frac{4\bar{\varphi}^{2}}{{M_{\rm Pl}}^{2}}}\] \[\times\left[-6E^{2}-2\frac{\left(1-6\bar{\varphi}/M_{\rm Pl} \right)}{\left(1-2\bar{\varphi}/M_{\rm Pl}\right)}|\mathbf{p}|^{2}-\mu\left(\bar{ \varphi},\mathbf{\nabla}^{2}\bar{\varphi}\right)^{2}+i\epsilon\right]^{-1},\] \[\mu\left(\bar{\varphi},\mathbf{\nabla}^{2}\bar{\varphi}\right)^{2} \equiv\frac{4\left(1+\bar{\varphi}/M_{\rm Pl}+6\bar{\varphi}^{2}/{M_{\rm Pl }}^{2}\right)}{\left(1-2\bar{\varphi}/M_{\rm Pl}\right)\left(1-4\bar{\varphi} ^{2}/{M_{\rm Pl}}^{2}\right)}\frac{\mathbf{\nabla}^{2}\bar{\varphi}}{M_{\rm Pl}}. \tag{20}\] The pathologies in (20) appear to be quite severe. For weak fields we might expect \(\mathbf{\nabla}^{2}\bar{\varphi}\) to be small outside the matter source on the lattice, however it may be coupled, suppressing the effective mass \(\mu\). 
In that case the residue about \(|\mathbf{p}|=0\) suggests that no unitary quantum theory lives on the portion of the background in which the light cone structure of (18) is preserved. This ghost is invisible on the lattice, which appears furthermore to be shielded from gradient instabilities within the range \(-1/2<\bar{\varphi}/M_{\rm Pl}<1/6\). By analogy to the Newtonian potential, we might expect the lattice to prefer \(\bar{\varphi}/M_{\rm Pl}<0\) anyway, possibly accounting for the excellent numerical results of GEFC[4]. There follows a brief window where the lattice solutions would not be destabilised by classical waves of the dynamical theory, before such waves exit the light cone of \(\bar{g}_{\mu\nu}\). In general we expect sources and the nonlinear aspects of (19) to sometimes induce a substantial \(\mu\), which could be analysed for any tachyonic character. These observations are illustrated in Fig. 1. It is clear from the above analysis that the theory (19) diverges wildly from the GR phenomena if the assumption of staticity is relaxed. In the context of the wider literature, the possibility of a _linear_ gravitational potential would ordinarily lead to the same conclusion in the static case; however that phenomenon is actually proffered in GEFC[4] as being innate to GR, and so in the static case we must proceed more carefully. ### The non-relation to GR How can we decide whether static extrema of (19) are really representing the gravitostatic limit of GR? That action entails only one vacuum equation of motion \[c_{j}\,q_{j}=0, \tag{21}\] where \([q_{j}]\equiv\left(\ddot{\varphi},\dot{\varphi}^{2}/\varphi,\mathbf{\nabla}^{2}\varphi,|\mathbf{\nabla}\varphi|^{2}/\varphi\right)\) and \(c_{j}\) is a rational function in \(\varphi/M_{\rm Pl}\) whose components are \[c_{1}\equiv 6\left(1-2\varphi/M_{\rm Pl}\right)\left(1-4\varphi^{2}/{M_{\rm Pl}}^{2}\right), \tag{22a}\] \[c_{2} \equiv 12\left(\varphi/M_{\rm Pl}\right)^{2}\left(1-2\varphi/M_{\rm Pl}\right), \tag{22b}\] \[c_{3} \equiv 2\left(1-6\varphi/M_{\rm Pl}\right)\left(1-4\varphi^{2}/{M_{\rm Pl}}^{2}\right), \tag{22c}\] \[c_{4} \equiv 4\left(\varphi/M_{\rm Pl}\right)\left(1+\varphi/M_{\rm Pl}+6\varphi^{2}/{M_{\rm Pl}}^{2}\right). \tag{22d}\] However, once a gauge such as (18) is chosen the EFEs in (5) can impose up to _six_ such equations, in addition to _four_ constraints on the initial data: these had better all be consistent with (21), otherwise we will no longer be studying gravity in any regime whatever. Taking for example the line element (18) substituted into the vacuum equation \(G_{\mu}^{\mu}=0\) and accompanying constraint \(G^{\mu\nu}\bar{u}_{\mu}\bar{u}_{\nu}=0\), we obtain the system \[c_{aj}q_{j}=0, \tag{23}\] where \(c_{aj}\) is a \(2\times 4\) matrix which can be diagonalised over the first \(2\times 2\) block. Now if we assume staticity, so \(\dot{\varphi}=\ddot{\varphi}=0\), we need retain only the second \(2\times 2\) block \(c_{ab}\), writing \(c_{ab}q_{b}=0\) where \([q_{b}]\equiv\left(\nabla^{2}\varphi,|\nabla\varphi|^{2}/\varphi\right)\). However direct calculation yields \[\det c_{ab}\propto\frac{\left(\varphi/M_{\rm Pl}\right)\left(7+6\varphi/M_{\rm Pl}\right)}{\left(1-2\varphi/M_{\rm Pl}\right)\left(1-4\varphi^{2}/{M_{\rm Pl}}^{2}\right)^{3}}, \tag{24}\] so the EFEs do not actually seem to admit any nontrivial static solutions under the GEFC ansatz. On this basis it would appear that (18) is too restrictive for nonlinear gravity, and this is not surprising. 
In the perturbative case the field \(\varphi\) essentially corresponds to the principal PPN potential, which cannot be considered in isolation at any PN order [42]. Note that \(\det c_{ab}\to 0\) as \(\varphi/M_{\rm Pl}\to 0\). In this limit we recover the only link between the GEFC scalar and gravity, namely the static vacuum condition \(\nabla^{2}\varphi=0\). We can now attempt to diagnose a potential issue in (19), and so also in the scalar approach of GEFC[1, 3, 4, 5, 6, 8]. The discrepancy between (21) and the EFEs in (5) appears to occur because the GEFC ansatz is substituted before variations (or equivalently lattice path integrals) are performed. These are in general non-commuting operations: in reverse order they may well yield the phenomena described in GEFC[1, 3, 4, 5, 6, 8] which, however colourful, seem less likely to be gravitational in origin. Figure 1: Propagator of the scalar gravity model which underpins GEFC[1, 3, 4, 5, 6, 8], with a variety of static background potentials \(\bar{\varphi}\), and with the healthy Klein–Gordon propagator (mass \(m\)) shown below for comparison. Discarding all d.o.f. in the nonlinear Einstein–Hilbert action beyond the isotropic metric perturbation \(\varphi\) leads to the non-relativistic theory (19), which does not appear to be healthy, though it may seem so on a Euclidean lattice under the assumption of staticity. ### Why the two-body potential _looks_ correct As a final check on the scalar model which underpins GEFC/[1, 3, 4, 6, 8], we consider the significance of the observation in GEFC/[4] that the perturbative interpretation of (19) recovers the parameterised post-Newtonian (PPN) energy of a system of (proper) point masses \(m_{n}^{\ast}\) at \(\mathbf{x}_{n}\). We firstly point out that the perturbative ansatz (17) is already consistent with the standard PPN gauge [42] at \(\mathcal{O}(\epsilon)\), for which \(\bar{g}_{\mu\nu}=\eta_{\mu\nu}\). In that gauge the components of the (dimensionless) metric perturbation \(h_{\mu\nu}\equiv\varphi_{\mu\nu}/M_{\rm Pl}\) in (1), with the PPN parameters fixed to those of GR, are \[h_{00} =-2U+2\left\{\Phi_{2}+U^{2}\right\}+\mathcal{O}\left(\epsilon^{3}\right), \tag{25a}\] \[h_{\alpha\beta} =2U\eta_{\alpha\beta}+\mathcal{O}\left(\epsilon^{2}\right), \tag{25b}\] \[h_{0\alpha}=-4V_{\alpha}\,+\frac{1}{2}\partial_{\alpha}\dot{X}+\mathcal{O}\left(\epsilon^{5/2}\right), \tag{25c}\] where we use \(\left\{\cdot\right\}\) to signify those contributions which originate in the \(\mathcal{O}\left(\epsilon^{2}\right)\) correction to \(h_{00}\), and with PPN potentials defined by adapting the PPN conventions in [42] to our choice of signature \[U \equiv\frac{\kappa}{8\pi}\int\frac{\mathrm{d}^{3}x^{\prime}{\rho^{\ast}}^{\prime}}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|},\quad X\equiv\frac{\kappa}{8\pi}\int\mathrm{d}^{3}x^{\prime}{\rho^{\ast}}^{\prime}|\mathbf{x}-\mathbf{x}^{\prime}|, \tag{26}\] \[\Phi_{2} \equiv\frac{\kappa}{8\pi}\int\frac{\mathrm{d}^{3}x^{\prime}{\rho^{\ast}}^{\prime}U^{\prime}}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|},\quad V^{\alpha}\equiv\frac{\kappa}{8\pi}\int\frac{\mathrm{d}^{3}x^{\prime}{\rho^{\ast}}^{\prime}{v^{\prime}}^{\alpha}}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|},\] where the source fluid has total rest mass \(M\), rest mass density \(\rho\) and we introduce the conserved density \(\rho^{\ast}\equiv\rho\sqrt{-g}u^{0}\). Recall the coordinate velocity is \(v^{\prime\alpha}\equiv u^{\alpha}/u^{0}\) and the four-velocity is \(u^{\mu}\). 
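To make the definitions in (26) concrete, the following minimal sketch evaluates \(U\), \(\Phi_{2}\) and \(V^{\alpha}\) for a small set of point masses, assuming \(\kappa=8\pi G\) with \(G=1\); the particle data are illustrative assumptions and the divergent self-interaction contributions are simply omitted.

```python
import numpy as np

# Minimal sketch of the PPN potentials in (26) for a set of point masses.
# kappa = 8 pi G with G = 1; particle data are assumed for illustration,
# and (infinite) self-interaction terms are omitted.
G = 1.0
masses = np.array([1.0, 0.5])
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.05, 0.0], [0.0, -0.1, 0.0]])

def U(x):
    return G * sum(m / np.linalg.norm(x - p) for m, p in zip(masses, pos))

def U_at_particle(n):
    # Potential at particle n sourced by the *other* particles only.
    return G * sum(m / np.linalg.norm(pos[n] - p)
                   for k, (m, p) in enumerate(zip(masses, pos)) if k != n)

def Phi2(x):
    return G * sum(m * U_at_particle(k) / np.linalg.norm(x - p)
                   for k, (m, p) in enumerate(zip(masses, pos)))

def V(x):
    return G * sum(m * v / np.linalg.norm(x - p)
                   for m, p, v in zip(masses, pos, vel))

x = np.array([1.0, 1.0, 0.0])
print("U =", U(x), " Phi2 =", Phi2(x), " V =", V(x))
```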
It is clear that the standard PPN gauge (25a) may deviate from (1) above \(\mathcal{O}\left(\epsilon\right)\). To put this another way, careful (and well-tested) consideration of nonlinear effects suggests departure from (1) and GEFC[4]. Let us now consider the PN energy associated with a collection of point sources, i.e. \[\rho^{\ast}=\sum_{n=1}^{N}m_{n}^{\ast}\delta^{3}(\mathbf{x}-\mathbf{x}_{n}),\quad M\equiv\int\mathrm{d}^{3}x\rho^{\ast}=\sum_{n=1}^{N}m_{n}^{\ast}. \tag{27}\] No compressional energy is involved, and so the relativistic matter Lagrangian in (3) will simply be \[\mathcal{L}_{M}=-\rho^{\ast}/u^{0}=-\rho^{\ast}\left[g_{00}+2g_{0\alpha}v^{\alpha}+g_{\alpha\beta}v^{\alpha}v^{\beta}\right]^{\frac{1}{2}}. \tag{28}\] We recall from (28) why the \(\mathcal{O}\left(\epsilon^{2}\right)\) part of \(h_{\alpha\beta}\) is allowed to be suppressed in (25b) when considering the first nonlinear corrections to the dynamics, but why the same is not (immediately) true for \(h_{00}\). In other words, we could imagine that \(h_{00}=h_{\alpha\alpha}+\mathcal{O}\left(\epsilon^{3}\right)\), in line with (17), and retain only \(h_{00}\) to next-to-leading order through the calculations. Accordingly, expanding (28) to \(\mathcal{O}\left(\epsilon^{2}\right)\) under the scheme Eqs. (25a) to (25c), we find quite directly \[\mathcal{L}_{M} =\frac{1}{2}\rho^{\ast}\left(1+3U\right)\left|\mathbf{v}\right|^{2}+\frac{1}{8}\rho^{\ast}|\mathbf{v}|^{4} \tag{29}\] \[\quad-\rho^{\ast}\mathbf{v}\cdot\left(4\mathbf{V}+\frac{1}{2}\mathbf{\nabla}\dot{X}\right)\] \[\quad-\rho^{\ast}\left(1-U-\frac{1}{2}U^{2}+\left\{\Phi_{2}+U^{2}\right\}\right)+\mathcal{O}\left(\epsilon^{3}\right).\] The \(\mathcal{O}\left(\epsilon^{2}\right)\) expansion of \(\mathcal{L}_{G}\) as it is defined in (3) is more challenging, but by shaking out all surface terms and reducing the potentials (see Appendix B) we eventually find that the Einstein-Hilbert contribution has the form \[\mathcal{L}_{G} =\frac{1}{2}\rho^{\ast}\mathbf{v}\cdot\left(4\mathbf{V}+\frac{1}{2}\mathbf{\nabla}\dot{X}\right) \tag{30}\] \[\quad+\rho^{\ast}\left(\frac{1}{2}-\frac{1}{2}U-U^{2}+\left\{\Phi_{2}+U^{2}\right\}\right)+\mathcal{O}\left(\epsilon^{3}\right),\] where again we track the \(\mathcal{O}\left(\epsilon^{2}\right)\) contribution to \(h_{00}\) through the calculation and retain it in braces. By comparing Eqs. (29) and (30) we now see that the gravitational and matter corrections stemming from the \(\mathcal{O}\left(\epsilon^{2}\right)\) correction to the gravitational field have an _equal and opposite_ effect on the total action. In other words, the leading PN correction to the GEFC scalar _does not survive_ in the \(\mathcal{O}\left(\epsilon^{2}\right)\) corrections to the phenomena when they are properly calculated, which are instead quadratic in the \(\mathcal{O}\left(\epsilon\right)\) Newtonian potential. What are these phenomena in the context of GEFC/[4], i.e. static point sources? The total Lagrangian obtained from adding Eqs. (29) and (30) is \[\mathcal{L}_{T} =\frac{1}{2}\rho^{\ast}\left(1+3U\right)\left|\mathbf{v}\right|^{2}+\frac{1}{8}\rho^{\ast}|\mathbf{v}|^{4} \tag{31}\] \[\quad-\frac{1}{2}\rho^{\ast}\mathbf{v}\cdot\left(4\mathbf{V}+\frac{1}{2}\mathbf{\nabla}\dot{X}\right)-\frac{1}{2}\rho^{\ast}\left(1-U+U^{2}\right)\] \[\quad+\mathcal{O}\left(\epsilon^{3}\right).\] To discover the true potential energy associated with the two-static-point-source setup in GEFC[4], we can substitute (27) into (31) with \(N=2\). 
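The cancellation of the brace terms between (29) and (30) can be checked mechanically. Below is a small symbolic sketch, assuming the potentials and velocity contractions may be treated as independent symbols (with \(v2=|\mathbf{v}|^{2}\), \(v4=|\mathbf{v}|^{4}\) and \(Vv=\mathbf{v}\cdot(4\mathbf{V}+\tfrac{1}{2}\mathbf{\nabla}\dot{X})\)); it confirms that \(\Phi_{2}\) drops out of the sum and that (31) follows.

```python
import sympy as sp

# Symbolic check: adding the matter expansion (29) to the gravitational
# expansion (30) cancels the brace terms {Phi2 + U^2} and reproduces (31).
# The symbols below are assumed independent, purely for this algebraic check.
rho, U, Phi2, v2, v4, Vv = sp.symbols("rho U Phi2 v2 v4 Vv")

L_M = (sp.Rational(1, 2) * rho * (1 + 3 * U) * v2 + sp.Rational(1, 8) * rho * v4
       - rho * Vv
       - rho * (1 - U - sp.Rational(1, 2) * U**2 + (Phi2 + U**2)))
L_G = (sp.Rational(1, 2) * rho * Vv
       + rho * (sp.Rational(1, 2) - sp.Rational(1, 2) * U - U**2 + (Phi2 + U**2)))

L_T = sp.expand(L_M + L_G)
print(L_T)  # note that Phi2 has dropped out entirely
# Difference from the quoted form of (31) should be identically zero:
print(sp.simplify(L_T - (sp.Rational(1, 2) * rho * (1 + 3 * U) * v2
                         + sp.Rational(1, 8) * rho * v4
                         - sp.Rational(1, 2) * rho * Vv
                         - sp.Rational(1, 2) * rho * (1 - U + U**2))))
```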
Integration over the Cauchy surface to remove the Dirac functions, once suitably regularised self-energies have been discarded, leads to a reduced Lagrangian over the time coordinate \[\begin{split}\int\mathrm{d}^{3}x\,\mathcal{L}_{T}&=\dots\\ &\quad+\frac{\kappa m_{1}^{\ast}m_{2}^{\ast}}{8\pi|\mathbf{x}_{1}-\mathbf{x}_{2}|}\left(1-\frac{\kappa(m_{1}^{\ast}+m_{2}^{\ast})}{16\pi|\mathbf{x}_{1}-\mathbf{x}_{2}|}\right)+\mathcal{O}\left(\epsilon^{3}\right),\end{split} \tag{32}\] where the ellipsis stands for the kinetic terms. The post-Newtonian kinetic corrections, which we suppress, are such that variation of (32) with respect to \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) yields the two-body Einstein-Infeld-Hoffman (EIH) equations [68]. The final term in (32), with its own post-Newtonian correction, is the negative of the static two-point potential that we sought. This potential is also put forward in GEFC/[4] as evidence in favour of the scalar model discussed in Sections II.2 and II.3. The steps by which it is extracted from Eqs. (1) and (14) appear to run as follows. The field \(\varphi\) is calculated up to 'some' \(\mathcal{O}\left(\epsilon^{2}\right)\) correction by adding the \(\mathcal{O}\left(\epsilon^{2}\right)\) expansion of (19) to the \(\mathcal{O}\left(\epsilon\right)\) (i.e. Fierz-Pauli) matter current in (14), and solving the resulting field equation. The influence of the higher-order couplings in (14) appears to be neglected, and it is the \(\mathcal{O}\left(\epsilon\right)\) Fierz-Pauli coupling term which is finally recycled (up to self energies) to give a statement of the potential complete with a \(\mathcal{O}\left(\epsilon^{2}\right)\) correction. As discussed in Section II.1, we lose confidence in the Fierz-Pauli coupling \(\mathcal{L}_{M}=-\sqrt{2}h_{\mu\nu}\tilde{T}^{\mu\nu}\) at PN orders, preferring the well-known Lagrangian formulation for relativistic point particles \(\mathcal{L}_{M}=-\rho^{\ast}/u^{0}\) (which incorporates PN corrections covariantly). Given that the correct potential is produced in GEFC[4], something must therefore be compensating for the use of the Fierz-Pauli coupling. The likely culprit is the use of the first PN correction to \(\varphi\) within the strict context of the GEFC scalar model (1). As we have shown in Section II.3, the scalar model does not describe gravitostatics at PN orders for reasons which have nothing to do with the proposed matter coupling. In studying the dynamical Lagrangian, we are allowed to imagine that the PN correction to \(h_{\mu\nu}\) still adheres to the scalar model, but as we have just witnessed that correction _cancels_ in the analysis and does not contribute to the PN potential correction, which happens to just comprise squared Newtonian terms. It would appear in summary that the correct PPN potential is produced in GEFC/[4] because an even number of physically unsound steps have been introduced. As a final observation, it seems wise to be cautious about how convincing such a result _could_ have been. Any PN model, adjusting for self-energy diagrams, must necessarily correct the Newtonian tadpole with an EIH-like term in (32), with the only freedom being in the magnitude of that same correction. ## III Nothing new at second order We have outlined in Section II some concerns about the specific scalar gravity model used in GEFC/[1; 3; 4; 5; 6; 8]. However our analysis is insufficient to rule out the broader _principle_ of GEFC. 
In this section, we would therefore like to 'steel-man' the GEFC proposal by discarding the scalar model, whilst still exploring the nonlinear but perturbative regime at the level of rigour just set in Section II.4. We will discuss speculative extensions to the strong gravity regime in Section V. Our treatment in this section will also facilitate a transition to studying _axisymmetric_ GEFC applications, i.e. to galactic rotation curves, which we continue to discuss in Section IV. ### Axisymmetric spacetime We first set up a metric, without reference to the PPN gauge, that is possibly the most general needed for a static axisymmetric system, viz: \[\begin{split} g^{00}=\left(1+a_{1}\right)^{2},\quad g^{11}=-\left(1+b_{1}\right)^{2},\\ g^{22}=-\left(1+d_{1}\right)^{2},\quad g^{33}=-\left(1+c_{1}\right)^{2},\end{split} \tag{33}\] where \(a_{1}\) through \(d_{1}\) are functions of (cylindrical) \(R\) and \(z\). We then set \[d_{1}=-\frac{a_{1}}{1+a_{1}}. \tag{34}\] The point of this is that it aligns the implied metric with the zero-rotation case of that used by Cooperstock & Tieu [62], who say that 'their metric is in the most general form necessary'. Nothing in the exact Einstein equations appears to call for this particular value of \(d_{1}\), but on the other hand there are no obvious problems that emerge from imposing it, and since it simplifies the exact equations considerably, we use it here. Next we carry out an \(\mathcal{O}\left(\varepsilon\right)\) linearisation of the exact Einstein equations implied by this metric. Specifically, we assume each of \(a_{1}\), \(b_{1}\) and \(c_{1}\) and the matter density \(\rho\) is \(\mathcal{O}\left(\varepsilon\right)\) and then expand the Einstein equations to \(\mathcal{O}\left(\varepsilon\right)\). Note that in contrast to Section II.4 we will choose to work with the covariantly conserved density \(\rho\). This then implies the relations \(b_{1}=c_{1}=d_{1}=-a_{1}\), together with the single remaining relation \(\mathbf{\nabla}^{2}a_{1}=4\pi G\rho+\mathcal{O}\left(\varepsilon^{2}\right)\), showing us that \(-a_{1}\) is the Newtonian potential. The above can be recapitulated, but with the expansions taken to \(\mathcal{O}\left(\varepsilon^{2}\right)\) instead. Thus we now set up our metric as \[\begin{split} g^{00}=\left(1+a_{1}^{f}+a_{1}^{s}\right)^{2},\quad g^{11}=-\left(1-a_{1}^{f}+b_{1}^{s}\right)^{2},\\ g^{33}=-\left(1-a_{1}^{f}+c_{1}^{s}\right)^{2}.\end{split} \tag{35}\] The superscripts \(f\) and \(s\) refer to \(\mathcal{O}\left(\varepsilon\right)\) (first order) and \(\mathcal{O}\left(\varepsilon^{2}\right)\) (second order) quantities. Notice that we already substitute for \(d_{1}\) in terms of \(a_{1}\) as given in (34) in the exact Einstein and geodesic equations, hence we do not need to specify an ansatz for this part of the metric. It is clear that the pressure only enters at \(\mathcal{O}\left(\varepsilon^{2}\right)\), and following from our discussion in Section I.1 we will retain it until Section III.2 to strengthen the 'steel-man' approach to GEFC. Thus we are assuming that the density \(\rho\) is specified in advance, and then the pressure \(P\), the 'potential' \(a_{1}\) and other quantities will be derived from it. In terms of its physical meaning, \(\rho\) is the eigenvalue associated with the timelike eigenvector of the matter stress-energy tensor, and hence is physically well-defined and gauge invariant. If we were taking another approach to the equations, e.g. 
by assuming a given equation of state, then it would be sensible to have \(\rho\)'s defined at different orders of solution, but we do not need that here. Given these choices, the Einstein equations are automatically satisfied at \(\mathcal{O}\left(\varepsilon\right)\), and at \(\mathcal{O}\left(\varepsilon^{2}\right)\) we are able to reorganise part of them into the following interesting expression: \[\mathbf{\nabla}^{2}a_{1}^{s}=-12\pi\left(a_{1}^{f}\rho+P\right)+\left|\mathbf{\nabla} a_{1}^{f}\right|^{2}+\mathcal{O}\left(\varepsilon^{3}\right). \tag{36}\] This looks like a fully physical equation, and in principle enables us to find the \(\mathcal{O}\left(\varepsilon^{2}\right)\) contribution to the potential, \(-a_{1}^{s}\), once the \(\mathcal{O}\left(\varepsilon\right)\) one has been found _exactly_ from the Poisson equation \[\mathbf{\nabla}^{2}a_{1}^{f}=-4\pi\rho, \tag{37}\] and also assuming that we can find the pressure (see below for more on the latter). Before continuing, it is worth quickly verifying that the spacetime solution we are constructing in Eqs. (36) and (37) is consistent with the PPN result in Eqs. (25a) to (25c). This check can be performed by expanding \(u^{0}\) and \(\sqrt{-g}\) to find \[\rho=\left(1-3U\right)\rho^{*}+\mathcal{O}\left(\varepsilon^{2} \right), \tag{38a}\] \[\frac{\kappa}{8\pi}\int\frac{\mathrm{d}^{3}x^{\prime}\rho^{ \prime}}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}=U-3\Phi_{2}+\mathcal{O}\left( \varepsilon^{3}\right). \tag{38b}\] By (temporarily) substituting \(P=0\) into (36) and applying the useful identity (109), we can then recover (25a) by inverting the metric (35) to \(\mathcal{O}\left(\varepsilon^{2}\right)\). If we then want to find out the effect of this \(\mathcal{O}\left(\varepsilon^{2}\right)\) correction on the rotation curve of the galaxy, we need to be careful since the relation between the rotation curve velocity and \(a_{1}\) may itself be modified by \(\mathcal{O}\left(\varepsilon^{2}\right)\) effects. Indeed, now expanding the exact geodesic equations (13) for massive particles to \(\mathcal{O}\left(\varepsilon^{2}\right)\) using the ansatz (35) above, we get the following result for the circular velocity: \[\begin{split}|\mathbf{v}|^{2}&=-R\frac{\partial}{\partial R }\left(a_{1}^{f}+a_{1}^{s}\right)\\ &\quad+R\frac{\partial a_{1}^{f}}{\partial R}\left(a_{1}^{f}+R \frac{\partial a_{1}^{f}}{\partial R}\right)+\mathcal{O}\left(\epsilon^{3} \right).\end{split} \tag{39}\] The first term is what we may expect, but the second is new, and would also need to be taken into account. Finally we discuss the pressure, and whether this can successfully be found from the equations. It is easy to find the following relation from the \(\mathcal{O}\left(\epsilon^{2}\right)\) equations: \[\frac{\partial P}{\partial R}=\rho\,\frac{\partial a_{1}^{f}}{\partial R}+ \mathcal{O}\left(\epsilon^{3}\right). \tag{40}\] Finding the equivalent relation for the \(z\) derivative of the pressure, we need to consider the \(R\) and \(z\) derivatives of \(b_{1}^{s}\). We will not go through the details here, but it turns out that once one has fixed \(d_{1}\) to the value in (34), then both \(b_{1}\) derivatives are available explicitly, and we can commute on these to get a constraint. 
This yields \[\frac{\partial P}{\partial z}=\rho\,\frac{\partial a_{1}^{f}}{\partial z}+ \mathcal{O}\left(\epsilon^{3}\right), \tag{41}\] and then the consistency relation between these derivatives implies the result that \[\frac{\partial\rho}{\partial R}\frac{\partial a_{1}^{f}}{\partial z}=\frac{ \partial\rho}{\partial z}\frac{\partial a_{1}^{f}}{\partial R}+\mathcal{O} \left(\epsilon^{3}\right), \tag{42}\] i.e. that the shape of the density distribution has to be the same as the shape of the \(-a_{1}^{f}\) potential, which does not seem possible for a realistic distribution. This will be discussed elsewhere, and may be of some relevance to the _fluid ball conjecture_[69]. In any case, the consistent derivatives would allow us to reconstruct \(P\) if we wished to, hence all the elements needed for explicitly calculating the \(\mathcal{O}\left(\epsilon^{2}\right)\) potential from (36) are available. In general, this would have to be done via numerical evaluation of integrals, but it is of interest to see the machinery working in a completely analytic case, and so in Appendix C we calculate the \(\mathcal{O}\left(\epsilon^{2}\right)\) GR correction to the Newtonian potential for a uniform density sphere. Of course, the spherical case is not expected to lead to a GEFC effect: it is the breaking of the spherical symmetry which allows the collapse process, and this is supposed to be indicated in GEFC[2] by the correlation between the assumed size of the dark matter halo and the optical ellipticity of the host galaxy. By testing the result (36) for the spherical case (for which an exact solution is known), we can show that it does legitimately _correct_ the Newtonian approximation, and we illustrate this in Fig. 19. We then remember that the formula (36) should also be valid for a general axisymmetric situation, and notice that there is _no hint_ in this expression that cases with extreme flattening will lead to anything special. Direct application of (36) to the axisymmetric and flattened case is of course also possible, but quite involved. In order to render the analysis tractable, and to reconnect with the chromoelectric analogy in Section I, we will instead address the flattened case in Section III.4 using a heuristic nonlinear extension of the GEM formalism, which we now develop in Sections III.2 to III.3. ### Gravitoelectromagnetism Our initial axisymmetric analysis in Section III.1 and Appendix C does not suggest GEFC phenomena, but nor is it grounded in any systematic perturbation scheme for _general_ spacetimes: for this we might look to the PPN formalism introduced in Section II.4. Alternatively, the GEM formalism would seem to be naturally suited to the GEFC hypothesis. Since GEFC is a proposed graviton self-coupling effect, we may imagine a nonlinear extension of GEM in which the gravitoelectric charge (mass-energy) is augmented by a contribution from the gravitoelectric field strength density. In keeping with the 'steel-man' directive, we therefore now transition to the GEM formalism in the hope that a hidden GEFC effect will become apparent. GEM provides a useful, notionally-familiar description of linearised general relativity (GR), by drawing a close analogy with classical electromagnetism (EM). 
We will limit our discussion to non-relativistic stationary matter sources, for which one may obtain GEM field equations _and_ a GEM 'Lorentz' force law that are fully consistent and have forms precisely analogous to their counterparts in EM, which is not possible for more general time-dependent scenarios. In particular, these assumptions regarding the matter source are appropriate for modelling rotation curves in galaxies. In the standard approach to such modelling, one assumes the more restrictive static, Newtonian limit for the matter source, in which a test particle is subject only to the gravitoelectric force derived from the usual gravitational potential produced by the galactic density distribution. This usual approach fails to predict the flat rotation curves observed in many galaxies in terms of their visible matter distribution, as discussed in Section I. The GEM formalism for linear GR with a stationary, non-relativistic source is based on the simple ansatz of relabelling the components of \(\bar{h}^{\mu\nu}\) as11 Footnote 11: Conventions in the literature vary up to a multiplicative constant for the definition of the gravitomagnetic vector potential \(A^{\alpha}\). These factors variously modify the analogues of the EM field equations and the Lorentz force law, with no scaling choice allowing all the GEM and EM equations to be perfectly analogous. Here, we follow the convention used in [25]. \[\begin{split}\bar{h}^{00}=4\Phi+\mathcal{O}\left(\epsilon^{2}\right),\qquad\bar{h}^{0\alpha}=A^{\alpha}+\mathcal{O}\left(\epsilon^{5/2}\right),\\ \bar{h}^{\alpha\beta}=\mathcal{O}\left(\epsilon^{2}\right),\end{split} \tag{43}\] where we have defined the gravitational scalar potential \(\Phi\) and spatial gravitomagnetic vector potential \(A^{\alpha}\). On lowering indices, the corresponding components of \(h_{\mu\nu}\) are \(h_{00}=h_{11}=h_{22}=h_{33}=2\Phi+\mathcal{O}\left(\epsilon^{2}\right)\) and \(h_{0\alpha}=A_{\alpha}+\mathcal{O}\left(\epsilon^{5/2}\right)\). Thus, the linear GEM potentials in (43) can be _approximately_ defined in terms of the PPN potentials in (26) \[\Phi\equiv-U+\mathcal{O}\left(\epsilon^{2}\right),\quad A^{\alpha}\equiv-4V^{\alpha}+\mathcal{O}\left(\epsilon^{5/2}\right). \tag{44}\] Just as we resurrected \(P\) within Section III.1 and Appendix C, we see from (44) and (26) that GEM allows us to resurrect the fluid velocity - though in making a fair comparison to the GEFC proposal we will suppress this velocity again in Section III.4. It should be remembered that raising or lowering a spatial (Roman) index introduces a minus sign with our adopted metric signature. Thus the numerical value of \(A_{\alpha}\) is minus that of \(A^{\alpha}\), the latter being the \(\alpha\)th component of the spatial vector \(\mathbf{A}\). It is also worth noting that both \(\Phi\) and \(A_{\alpha}\) are dimensionless, thereby yielding dimensionless components \(h_{\mu\nu}\), which is consistent with our choice of coordinates \([x^{\mu}]=(t,x^{\alpha})\) having dimensions of length. Indeed, reverting for the moment to the viewpoint in which \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\) defines the metric of a (slightly) curved spacetime, one may write the line element in the limit of a stationary, non-relativistic source in quasi-Minkowski coordinates as \[\mathrm{d}s^{2}=(1+2\Phi)\,\mathrm{d}t^{2}-2\,\mathbf{A}\cdot\mathrm{d}\mathbf{x}\,\mathrm{d}t-(1-2\Phi)\,\mathrm{d}\mathbf{x}^{2}+\mathcal{O}\left(\epsilon^{2}\right). \tag{45}\]
Determining the geodesics of this line element provides a straightforward means of calculating the trajectories of test particles in the weak gravitational field of a stationary, non-relativistic source. In particular, one need not assume that the test particles are slowly moving, and so the trajectories of photons may also be found by determining the null geodesics of the line element (45). In Section II.4 we chose to work exactly with the density \(\rho^{*}\), which satisfies the Euclidean conservation law \(\partial_{\mu}\left(\rho^{*}u^{\mu}/u^{0}\right)=0\). In Section III.1 and Appendix C we chose to work instead with the physical density \(\rho\equiv\rho^{*}/\sqrt{-g}u^{0}\), which appears in the perfect fluid stress-energy tensor (6) and which satisfies the covariant conservation law \(\mathbf{\nabla}_{\mu}\left(\rho u^{\mu}\right)=0\). Within this section and Sections III.3 and III.4, we will further complicate matters slightly (for later convenience) by introducing the density \[\begin{split}\rho^{\dagger}\equiv T_{00}&=\rho^{*}\left(1-5U\right)+\mathcal{O}\left(\epsilon^{3}\right)\\ &=\rho\left(1-2U\right)+\mathcal{O}\left(\epsilon^{3}\right).\end{split} \tag{46}\] With the identifications Eqs. (43) and (46), we may choose to write the linearised field equations (9) in the Lorenz gauge for a stationary, non-relativistic source _exactly_ in the scalar/vector form \[\mathbf{\nabla}^{2}\Phi\equiv\frac{\kappa}{2}\rho^{\dagger},\quad\mathbf{\nabla}^{2}\mathbf{A}\equiv 2\kappa\mathbf{j}^{\dagger}, \tag{47}\] where we have defined the momentum density (or matter current density) \(\mathbf{j}^{\dagger}\equiv\rho^{\dagger}\mathbf{v}\), and the (linear) Lorenz gauge condition \(\partial_{\rho}\bar{h}^{\mu\rho}=0\) itself becomes \(\mathbf{\nabla}\cdot\mathbf{A}=0\). Moreover, the general solutions to the equations (47) may be read off directly from (11), (12) and (43) to yield (26). Clearly, the first equations in (47) and (26) recover, respectively, the Poisson equation and its solution for the gravitational potential, familiar from Newtonian gravity, whereas the second pair of equations determine the gravitomagnetic vector potential that describes the 'extra' (weak) gravitational field predicted in linearised GR, which is produced by the _motion_ of the fluid elements in a stationary, non-relativistic source. One may take the analogy between linearised GR and EM further by defining the _gravitoelectric_ and _gravitomagnetic_ fields \(\mathbf{E}\equiv-\mathbf{\nabla}\Phi\) and \(\mathbf{B}\equiv\mathbf{\nabla}\times\mathbf{A}\). Using the equations (47), it is straightforward to verify that the fields \(\mathbf{E}\) and \(\mathbf{B}\) satisfy the _gravitational Maxwell equations_ \[\begin{split}\mathbf{\nabla}\cdot\mathbf{E}&=-\frac{\kappa}{2}\rho^{\dagger},\quad\mathbf{\nabla}\cdot\mathbf{B}=0,\\ \mathbf{\nabla}\times\mathbf{E}&=\mathbf{0},\quad\mathbf{\nabla}\times\mathbf{B}=-2\kappa\mathbf{j}^{\dagger}.\end{split} \tag{48}\] The gravitoelectric field \(\mathbf{E}\) describes the standard (Newtonian) gravitational field produced by a static mass distribution, whereas the gravitomagnetic field \(\mathbf{B}\) is the 'extra' gravitational field produced by _moving_ fluid elements in the stationary, non-relativistic source. 
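The integral solutions to (47) can be illustrated directly. The following sketch evaluates \(\Phi\) and \(\mathbf{A}\) by quadrature for a rigidly rotating, uniform-density sphere; the sphere parameters, angular velocity and the choice \(G=1\) (so \(\kappa=8\pi\)) are assumptions made purely for illustration.

```python
import numpy as np

# Sketch of the integral solutions to (47): Phi = -(kappa/8pi) int rho/|x-x'|,
# A = -(kappa/2pi) int j/|x-x'|, for an assumed rigidly rotating uniform sphere.
G = 1.0
kappa = 8.0 * np.pi * G
R_s, M, Omega = 1.0, 1.0, 0.05
rho0 = M / (4.0 / 3.0 * np.pi * R_s**3)

n = 40
xs = np.linspace(-R_s, R_s, n)
dV = (xs[1] - xs[0])**3
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
inside = (X**2 + Y**2 + Z**2) <= R_s**2
rho = np.where(inside, rho0, 0.0)
# Rigid rotation about the z-axis: v = Omega x r = (-Omega*y, Omega*x, 0).
jx, jy, jz = -rho * Omega * Y, rho * Omega * X, np.zeros_like(X)

def fields(x_obs):
    r = np.sqrt((X - x_obs[0])**2 + (Y - x_obs[1])**2 + (Z - x_obs[2])**2)
    Phi = -(kappa / (8.0 * np.pi)) * np.sum(rho / r) * dV
    A = [-(kappa / (2.0 * np.pi)) * np.sum(j / r) * dV for j in (jx, jy, jz)]
    return Phi, np.array(A)

Phi, A = fields((0.0, 2.0, 0.0))   # an exterior point in the equatorial plane
print("Phi =", Phi, " (compare -G M / r =", -G * M / 2.0, ")")
print("A   =", A)                  # dominated by the x-component at this point
```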
The equation of motion for a test particle is the geodesic equation (13) in the metric (45), from which one may determine the trajectories of either massive particles, irrespective of their speed, or massless particles, by considering timelike or null geodesics, respectively. In line with the PPN assumptions set out in Section I.1, we will assume here, however, that the test particle is massive and _slow-moving_, i.e. its speed \(v\) is sufficiently small that we may neglect terms in \(v^{2}\) and higher. Hence we may take \(\gamma_{v}\equiv\left(1-v^{2}\right)^{-1/2}\approx 1\), so that the 4-velocity of the particle may be written \(u^{\mu}\equiv\gamma_{v}(1,\mathbf{v})\approx(1,\mathbf{v})\). This immediately implies that \(\ddot{x}^{0}=0\) and, moreover, that \(\mathrm{d}t/\mathrm{d}s=1\), so one may consider only the spatial components of (13) and replace dots with derivatives with respect to \(t\). Expanding the summation in (13) into terms containing respectively two time components, one time and one spatial component, and two spatial components, neglecting the purely spatial terms since their ratio with respect to the purely temporal term is of order \(v^{2}\), expanding the connection coefficients to first-order in \(h_{\mu\nu}\) and remembering that for a stationary field \(\partial_{0}h_{\mu\nu}=0\), one obtains \[\begin{split}\frac{\mathrm{d}v^{\alpha}}{\mathrm{d}t}&=-\frac{1}{2}\delta^{\alpha\beta}\partial_{\beta}h_{00}-\delta^{\alpha\gamma}\left(\partial_{\gamma}h_{0\beta}-\partial_{\beta}h_{0\gamma}\right)v^{\beta}\\ &+\mathcal{O}\left(\epsilon^{2}\right).\end{split} \tag{49}\] Recalling that one inherits a minus sign on raising or lowering a spatial (Roman) index, this equation of motion may be rewritten in vector form in terms of GEM fields as \[\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}=-\mathbf{\nabla}\Phi+\mathbf{v}\times(\mathbf{\nabla}\times\mathbf{A})=\mathbf{E}+\mathbf{v}\times\mathbf{B}+\mathcal{O}\left(\epsilon^{2}\right), \tag{50}\] which recovers the _gravitational Lorentz force law_ for slow-moving massive particles in the gravitational field of a stationary non-relativistic source. The first term on the right-hand side gives the standard Newtonian result for the motion of a test particle in the field of a static non-relativistic source, whereas the second term gives the 'extra' force felt by a _moving_ test particle in the presence of the 'extra' field produced by _moving_ fluid elements in the stationary non-relativistic source. ### Second-order general relativity As mentioned in Section I, GEFC/[7] has proposed a separate approach to using GR to model galaxy rotation curves without dark matter, which neglects gravitomagnetic forces entirely but instead includes the effects of graviton self-interaction. To include this effect, at least to leading order in the self-interaction, one must consider second-order GR. We again closely follow the approach used in [25]. 
In this approach, one again assumes a weak gravitational field, but now expands the Einstein equations (5) to second-order in \(h_{\mu\nu}\) to yield \(G^{(1)}_{\mu\nu}+G^{(2)}_{\mu\nu}=\kappa T_{\mu\nu}\), where the second-order Einstein tensor is given by \[G^{(2)}_{\mu\nu}=R^{(2)}_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}R^{(2)}-\frac{1}{2} h_{\mu\nu}R^{(1)}+\frac{1}{2}\eta_{\mu\nu}h^{\rho\sigma}R^{(1)}_{\rho\sigma}, \tag{51}\] where \(R^{(2)}_{\mu\nu}\) denotes the terms in the Ricci tensor that are second order in \(h_{\mu\nu}\), and \(R^{(1)}\) and \(R^{(2)}\) denote the terms in the Ricci scalar that are first and second order in \(h_{\mu\nu}\), respectively. One may show, however, that, unlike \(G^{(1)}_{\mu\nu}\), the quantity \(G^{(2)}_{\mu\nu}\) is _not_ invariant under the gauge transformation (7). Before addressing this shortcoming, it is useful to perform a trivial rearrangement of the second-order field equations to yield \(G^{(1)}_{\mu\nu}=\kappa\left(T_{\mu\nu}+t_{\mu\nu}\right)\), where we have defined \(t_{\mu\nu}\equiv-\kappa^{-1}G^{(2)}_{\mu\nu}\), which may then be interpreted as the energy-momentum of the linearised gravitational field to leading order in the field self-interaction. This interpretation prompts one to take seriously the fact that the energy-momentum of a gravitational field at a point in spacetime has no real meaning in GR, since at any particular event one can always transform to a free-falling frame in which gravitational effects disappear. A convenient opportunity to distance oneself from the gauge-dependent nature of gravitational energy arises when one is concerned only with the corrective back-reaction to spacetimes dominated at \(\mathcal{O}(h_{\mu\nu})\) by _gravitational radiation_[24]. One can in such cases, at each point in spacetime, average \(G^{(2)}_{\mu\nu}\) at a 'mesoscale' granularity (i.e. between the back-reaction and wavenumber scales) in order to probe the physical curvature of the spacetime, which gives a gauge-invariant measure of the gravitational field strength. Denoting this averaging process by \(\left\langle\cdot\right\rangle\), the second-order field equations should then read \(G^{(1)}_{\mu\nu}+\left\langle G^{(2)}_{\mu\nu}\right\rangle=\kappa T_{\mu\nu}\), or equivalently \(G^{(1)}_{\mu\nu}=\kappa\left(T_{\mu\nu}+\left\langle t_{\mu\nu}\right\rangle\right)\). Of course, GEFC is not proposed to be radiative in origin; but in order to provide measurable phenomena it _must_ be gauge-invariant. For the remainder of this section and in Section III.4 therefore, we will employ the radiative average \(\left\langle\cdot\right\rangle\) to arrive at heuristic proxies for second-order gravitoelectrostatic corrections of the kind apparently implicated in GEFC. In this way we will be able to correct certain model Newtonian galactic rotation curves by means of analytically tractable integrals, and confirm explicitly that such corrections are of no astrophysical significance. In particular, this \(\mathcal{O}\left(\epsilon^{2}\right)\) approach will not be tied in any way to axisymmetry, as was the case with our earlier attempt in Section III.1 and Appendix C. Corrections obtained in this way will not be faithful to GR, but they introduce 'radiative' errors which are no greater than \(\mathcal{O}\left(\epsilon^{2}\right)\), and which are therefore too small to conceal the claimed GEFC phenomena. 
In summary, we are asserting that _'perturbative calculations give perturbative results'_ - a tautology which we are forced to explore directly since it does not appear to be adequately addressed in GEFC/[1; 2; 3; 5; 6; 7; 8; 9]. In moving forward, we nonetheless take care in Appendix D to track the error introduced by the radiative average12. Footnote 12: It is worth noting that although the radiative average is usually envisaged as being taken over some small spacetime 'patch' at each point, the formalism does not require this interpretation. To fulfil its practical usefulness in calculations, it is necessary only for the averaging to allow one to assume that the first derivatives of any function of spacetime position vanish (at least on scales smaller than the averaging scale). For spacetimes with particular symmetries, one may thus equally well average over larger regions that contain the Killing congruences of the spacetime. For example, in a static, spherically-symmetric spacetime, one may average over a thin spherical shell at each radius. Similarly, in a stationary, axisymmetric spacetime that is symmetric about the centre-plane \(z=0\) (which is a reasonable approximation for galactic systems), each averaging region may have the form of two 'halo'-shaped tubes of small cross-section centred on the coordinate curves \(R=R_{0}\), \(\varphi=\varphi_{0}\) and \(z=\pm z_{0}\). In both cases, in Cartesian Lorentz coordinates the first derivatives of any spacetime function of position will average to zero over such regions, as required, and one is also prevented from adopting any coordinate system that constitutes a free-fall frame over the whole of such a region, thereby yielding gauge-invariant results. For now let us assume that the solution to the second-order field equations has the form \(h_{\mu\nu}=\ell_{\mu\nu}+\delta h_{\mu\nu}\), where \(\ell_{\mu\nu}\) is the solution to the first-order (linear) field equations \(G^{(1)}_{\mu\nu}=\kappa T_{\mu\nu}\) and \(|\delta h_{\mu\nu}|\ll|\ell_{\mu\nu}|\) is a small perturbation to it. We will assume as described above that \(\ell_{\mu\nu}\) is 'susceptible' to radiative averaging, without itself being radiative, and that this operation has some physical justification. Since \(G^{(1)}_{\mu\nu}\) is linear in \(h_{\mu\nu}\), one may write \(G^{(1)}_{\mu\nu}(h)=G^{(1)}_{\mu\nu}(\ell)+G^{(1)}_{\mu\nu}(\delta h)\), where the function arguments are merely a shorthand for the various gravitational field pseudotensors, rather than denoting their traces. Since \(G^{(2)}_{\mu\nu}\) is non-linear (quadratic) in \(h_{\mu\nu}\), one instead has \[\begin{split} G^{(2)}_{\mu\nu}(h)&=G^{(2)}_{\mu\nu}(\ell)+\frac{\partial G^{(2)}_{\mu\nu}}{\partial h_{\rho\sigma}}\Bigg{|}_{\ell}\delta h_{\rho\sigma}+\mathcal{O}\left(\epsilon^{4}\right)\\ &=G^{(2)}_{\mu\nu}(\ell)+\mathcal{O}\left(\epsilon^{3}\right).\end{split} \tag{52}\] Adopting a 'mean-field' approach, one ignores the final term on the RHS, and so the second-order field equations may be written symbolically as \[G^{(1)}_{\mu\nu}(\delta h)+\left\langle G^{(2)}_{\mu\nu}(\ell)\right\rangle=\mathcal{O}\left(\epsilon^{3}\right), \tag{53}\] where the two terms on the LHS are of order \(\mathcal{O}(\delta h)\) and \(\mathcal{O}(\ell^{2})\), respectively, and hence both second-order small. Equation (53) thus determines the correction \(\delta h_{\mu\nu}\) to the solution \(\ell_{\mu\nu}\) of the linearised GR field equations that occurs due to the leading-order graviton self-interaction. 
It now remains only to determine the form of \(\left\langle G^{(2)}_{\mu\nu}(\ell)\right\rangle\); again our calculation closely follows that in [25]. First, since \(\ell_{\mu\nu}\) is the solution to the linearised GR field equations, one may express the last two terms on the right-hand side of (51), which depend on the first-order Ricci tensor and Ricci scalar, in terms of the matter energy-momentum tensor as \[\begin{split} G^{(2)}_{\mu\nu}(\ell)&=R^{(2)}_{\mu\nu}(\ell)-\frac{1}{2}\eta_{\mu\nu}R^{(2)}(\ell)\\ &\quad+\frac{1}{2}\kappa\left(\bar{\ell}_{\mu\nu}T+\eta_{\mu\nu}\ell^{\rho\sigma}T_{\rho\sigma}\right).\end{split} \tag{54}\] One then requires only an expression for \(R^{(2)}_{\mu\nu}(\ell)\), from which \(R^{(2)}(\ell)\) can be found by contraction. Expanding connection coefficients to second-order in \(\ell_{\mu\nu}\), substituting them into the usual expression for the Ricci tensor and keeping only those terms quadratic in \(\ell_{\mu\nu}\), one finds \[\begin{split} R^{(2)}_{\mu\nu}(\ell)&=\frac{1}{4}\partial_{\mu}\ell^{\rho\sigma}\partial_{\nu}\ell_{\rho\sigma}-\frac{1}{2}\ell^{\rho\sigma}\big{(}\partial_{\mu}\partial_{\sigma}\ell_{\nu\rho}\\ &\quad+\partial_{\nu}\partial_{\sigma}\ell_{\mu\rho}-\partial_{\mu}\partial_{\nu}\ell_{\rho\sigma}-\partial_{\rho}\partial_{\sigma}\ell_{\mu\nu}\big{)}\\ &\quad-\frac{1}{2}\partial^{\sigma}\ell_{\nu}^{\rho}\big{(}\partial_{\rho}\ell_{\sigma\mu}-\partial_{\sigma}\ell_{\rho\mu}\big{)}\\ &\quad-\frac{1}{2}\left(\partial_{\sigma}\ell^{\rho\sigma}-\frac{1}{2}\partial^{\rho}\ell\right)\left(\partial_{\mu}\ell_{\nu\rho}+\partial_{\nu}\ell_{\mu\rho}-\partial_{\rho}\ell_{\mu\nu}\right).\end{split} \tag{55}\] Although the third group of terms on the right-hand side is not manifestly symmetric in \(\mu\) and \(\nu\), this symmetry is easy to verify. In fact, in subsequent calculations it is convenient to maintain manifest symmetry by writing out this term again with \(\mu\) and \(\nu\) reversed and multiplying both terms by one-half. To evaluate the averaged expression \(\left\langle R^{(2)}_{\mu\nu}\right\rangle\), one merely notes that first derivatives average to zero. Thus, for any function of spacetime position \(a(x)\), one has \(\left\langle\partial_{\mu}a\right\rangle=0\). This has the important consequence that \(\left\langle\partial_{\mu}(ab)\right\rangle=\left\langle(\partial_{\mu}a)b\right\rangle+\left\langle a(\partial_{\mu}b)\right\rangle=0\), and hence we may swap derivatives in products and inherit only a minus sign, i.e. \(\left\langle(\partial_{\mu}a)b\right\rangle=-\left\langle a(\partial_{\mu}b)\right\rangle\). One first makes use of this result to rewrite products of first derivatives in (55) in terms of second derivatives. 
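As a quick numerical sanity check of this swap rule, one can use an assumed periodic toy function, for which first derivatives genuinely average to zero over a full period:

```python
import numpy as np

# Toy illustration of the averaging rule: if <d(a)/dx> = 0, then
# <(da/dx) b> = -<a (db/dx)>.  Averaging is over one full period of
# assumed periodic test functions a(x) and b(x).
x = np.linspace(0.0, 2.0 * np.pi, 20001)
a = np.sin(3.0 * x)
b = np.cos(5.0 * x) + 0.2 * np.sin(2.0 * x)

da = np.gradient(a, x)
db = np.gradient(b, x)
avg = lambda f: f.mean()

print(avg(np.gradient(a * b, x)))   # ~ 0: a total derivative averages away
print(avg(da * b), -avg(a * db))    # approximately equal, as claimed
```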
Using the first-order field equations to substitute for terms of the form \(\boldsymbol{\square}\,\ell_{\mu\nu}\), and then applying the averaging result once more to rewrite terms containing second derivatives as products of first derivatives, one finally obtains \[\begin{split}\left\langle R^{(2)}_{\mu\nu}(\ell)\right\rangle&=-\frac{1}{4}\bigg{\langle}\partial_{\mu}\ell_{\rho\sigma}\partial_{\nu}\ell^{\rho\sigma}-2\partial_{\sigma}\ell^{\rho\sigma}\partial_{(\mu}\ell_{\nu)\rho}\\ &\quad+2\partial_{\rho}\ell\,\partial_{(\mu}\ell^{\rho}_{\nu)}-\partial_{\mu}\ell\,\partial_{\nu}\ell\\ &\quad+\kappa\left(2\ell_{\mu\nu}T+2\ell T_{\mu\nu}-\eta_{\mu\nu}\ell T-4\ell_{\rho(\mu}T^{\rho}_{\nu)}\right)\bigg{\rangle}.\end{split} \tag{56}\] Contracting this expression, and once again making use of the averaging result and the first-order field equations, one quickly finds that \(\left\langle R^{(2)}(\ell)\right\rangle=\frac{1}{2}\kappa\left\langle\ell^{\rho\sigma}T_{\rho\sigma}\right\rangle\). Combining these expressions and writing the result (mostly) in terms of the trace reverse field, one thus finds \[\begin{split}\left\langle G^{(2)}_{\mu\nu}(\ell)\right\rangle&=-\frac{1}{4}\bigg{\langle}(\partial_{\mu}\bar{\ell}_{\rho\sigma})\partial_{\nu}\bar{\ell}^{\rho\sigma}-2(\partial_{\sigma}\bar{\ell}^{\rho\sigma})\partial_{(\mu}\bar{\ell}_{\nu)\rho}\\ &\quad-\frac{1}{2}(\partial_{\mu}\bar{\ell})\partial_{\nu}\bar{\ell}-\kappa\left(4\bar{\ell}_{\rho(\mu}T^{\rho}_{\nu)}+\eta_{\mu\nu}\ell^{\rho\sigma}T_{\rho\sigma}\right)\bigg{\rangle}.\end{split} \tag{57}\] It may be verified by direct substitution that this expression is indeed invariant under the gauge transformation (7) (with \(h_{\mu\nu}\) replaced by \(\ell_{\mu\nu}\)), as required. One may then substitute (57) and the expression for \(G^{(1)}_{\mu\nu}\) in the linearised GR field equations (8) (with \(\bar{h}_{\mu\nu}\) replaced by \(\delta\bar{h}_{\mu\nu}\)) into (53) to obtain an equation for the trace-reversed correction \(\delta\bar{h}_{\mu\nu}\) in an arbitrary gauge. Since both terms in (53) are separately invariant to the gauge transformation (7) (with \(h_{\mu\nu}\) replaced by \(\delta h_{\mu\nu}\) or \(\ell_{\mu\nu}\), respectively), however, one can impose the separate Lorenz gauge conditions \(\partial_{\rho}\delta\bar{h}^{\mu\rho}=0\) and \(\partial_{\rho}\bar{\ell}^{\mu\rho}=0\), which yields \[\begin{split}\boldsymbol{\square}\,\delta\bar{h}_{\mu\nu}&=\frac{1}{2}\Big{\langle}\kappa\left(4\bar{\ell}_{\rho(\mu}T^{\rho}_{\nu)}+\eta_{\mu\nu}\ell^{\rho\sigma}T_{\rho\sigma}\right)\\ &\quad-\partial_{\mu}\bar{\ell}_{\rho\sigma}\partial_{\nu}\bar{\ell}^{\rho\sigma}+\frac{1}{2}\partial_{\mu}\bar{\ell}\,\partial_{\nu}\bar{\ell}\Big{\rangle}+\mathcal{O}\left(\epsilon^{3}\right).\end{split} \tag{58}\] ### Second-order gravitoelectrostatics Equipped with the apparatus from Section III.3, we here develop a GEM formalism for _second-order_ GR, thereby including the leading-order graviton self-interactions while avoiding the heuristic approach of GEFC[7] which considers the lensing of gravitoelectric field lines. We will consider the question of lensing separately in Section IV. We will again confine our attention to stationary, non-relativistic matter sources. 
By analogy with the GEM ansatz (43), in which we now replace \(h_{\mu\nu}\) with \(\mathscr{L}_{\mu\nu}\), one may make a corresponding identification for the corrections \(\delta h_{\mu\nu}\), such that \[\delta\bar{h}^{00}=4\delta\Phi,\qquad\delta\bar{h}^{0\alpha}=\delta A^{\alpha},\qquad\delta\bar{h}^{\alpha\beta}=\mathcal{O}\left(\epsilon^{2}\right), \tag{59}\] and we again approximate the energy-momentum tensor of a stationary, non-relativistic source using (12). In this paper we do not consider the general case, which includes matter currents and the resulting gravitomagnetic field; rather, we make contact with the GEFC approach by considering the more restrictive case of a static matter source, for which one instead assumes the space-time components of the matter energy-momentum tensor to vanish, \(T^{\alpha 0}=0\). In this case, there exists only the gravitoelectric field derived from the Newtonian potential, and one must strictly also assume the fluid to support pressure in order to establish an equilibrium configuration for the galaxy. Following GEFC and our discussion in Section I.1, however, we will ignore this pressure contribution in determining the correction to the Newtonian potential resulting from leading-order graviton self-interactions. In this simplified case, one need consider only the 00-component of the general result (58) in the absence of any time-dependence or source motions. Thus, with no sum on \(\alpha\), one has that \(\mathscr{L}_{00}=\mathscr{L}_{\alpha\alpha}=2\Phi\), \(\bar{\mathscr{L}}_{00}=4\Phi\), \(\bar{\mathscr{L}}_{\alpha\alpha}=0\), \(T_{00}=\rho^{\dagger}\), \(T_{\alpha\alpha}=0\) and time derivatives \(\partial_{0}\) of any quantity vanish, such that \(\boldsymbol{\square}=-\nabla^{2}\). This yields the remarkably simple result \[\boldsymbol{\nabla}^{2}\delta\Phi=-\frac{9\kappa}{4}\Phi\rho^{\dagger}+\mathcal{O}\left(\epsilon^{3}\right), \tag{60}\] which is the principal fruit of the radiative averaging process. Thus, in principle, for any specified density distribution \(\rho^{\dagger}\), one need only determine the Newtonian gravitational potential \(\Phi\) using the first equation in (26) and then substitute this result into (60), to which, by analogy, the solution is given by \[\delta\Phi=\frac{9\kappa}{16\pi}\int\frac{\mathrm{d}^{3}x^{\prime}\Phi^{\prime}{\rho^{\dagger}}^{\prime}}{|\mathbf{x}-\mathbf{x}^{\prime}|}+\mathcal{O}\left(\epsilon^{3}\right)=-\frac{9}{2}\Phi_{2}+\mathcal{O}\left(\epsilon^{3}\right)\,, \tag{61}\] where we compare again with (26). The resulting \(\mathcal{O}\left(\epsilon^{2}\right)\) solution for the gravitational potential is then simply \(\Phi+\delta\Phi\). Let us pause for a moment to reconnect with the PPN formulation in Section II.4. The correction (61) and definitions (44) imply \[\tilde{h}_{00}=-2U-\frac{9}{2}\Phi_{2}+\mathcal{O}\left(\epsilon^{3}\right), \tag{62}\] which could be consistent with (25a), but turns out not to be when the trace \(\delta\tilde{h}\) is also calculated from (58). This discrepancy might arise because (25a) encodes the standard PPN gauge, whilst the gauge choices made en route to (58) are only linearly equivalent to the harmonic gauge and are not, to our knowledge, used beyond this work. As shown in [43], the \(\mathcal{O}\left(\epsilon^{3}\right)\) corrections to \(h_{00}\) do not differ between the standard PPN and harmonic coordinate gauges when pressures and accelerations are neglected.
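As a concrete illustration of (60)-(61), the following Python sketch (our own, with an arbitrary toy mass and radius; not taken from the paper or its appendices) evaluates \(\delta\Phi\) for a uniform-density sphere by radial quadrature in geometric units \(G=c=1\), where \(\kappa=8\pi\).

```python
import numpy as np

# Minimal numerical sketch (not from the paper) of the correction delta_Phi in
# Eq. (61) for a uniform-density sphere, in geometric units G = c = 1 so that
# kappa = 8*pi.  The mass M and radius R_s below are illustrative choices.
M, R_s = 1.0e-6, 1.0
kappa = 8.0 * np.pi
r = np.linspace(1e-4, 5.0, 4000)
rho = np.where(r <= R_s, M / (4.0 / 3.0 * np.pi * R_s**3), 0.0)

# Newtonian potential of the uniform sphere (exact closed form).
Phi = np.where(r <= R_s, -M * (3 * R_s**2 - r**2) / (2 * R_s**3), -M / r)

# Eq. (61): delta_Phi(x) = (9 kappa / 16 pi) Int d^3x' Phi' rho' / |x - x'|.
# For spherical symmetry the angular integral gives the usual shell result.
f = Phi * rho
inner = np.array([np.trapz(f[:i + 1] * r[:i + 1]**2, r[:i + 1]) for i in range(len(r))])
outer = np.trapz(f * r, r) - np.array(
    [np.trapz(f[:i + 1] * r[:i + 1], r[:i + 1]) for i in range(len(r))])
delta_Phi = (9 * kappa / (16 * np.pi)) * 4 * np.pi * (inner / r + outer)

i0 = np.argmin(np.abs(r - R_s))
print("Phi(R_s)       =", Phi[i0])
print("delta_Phi(R_s) =", delta_Phi[i0])
print("fractional correction ~", delta_Phi[i0] / Phi[i0])
```

The fractional correction comes out at \(\mathcal{O}(\Phi)\), i.e. one order higher in \(\epsilon\) than the Newtonian potential itself, as expected.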
In fact, we show in Appendix D that it is the radiative averaging procedure, rather than the gauge choice, which causes the deviation from PPN. Moving forward, as a first example we connect with Section III.1 by calculating in Appendix C the \(\mathcal{O}\left(\epsilon^{2}\right)\) gravitational potential of a sphere of uniform density. As a result of the radiative average discussed in Section III.3, our result in (101) is not required to be strictly faithful to the exact results Eqs. (101b) and (101c) which follow from the treatment in Section III.1, but we are satisfied that the corrections are comparable in magnitude. An example system for which one may even more straightforwardly derive an analytical result is two static point particles - as considered already in Section II.4 - for which the density is given by (27) with \(N=2\), and for which \(m_{n}^{\dagger}\) may be solved for in terms of \(m_{1}^{*}\) via (46). Substituting \(\rho^{\dagger}\) into the integral solution in the first equation in (47), one immediately obtains the well-known result \[\Phi=-\frac{Gm_{1}^{\dagger}}{|\mathbf{x}-\mathbf{x}_{1}|}-\frac{Gm_{2}^{\dagger}}{|\mathbf{x}-\mathbf{x}_{2}|}. \tag{63}\] Substituting this expression and that for \(\rho^{\dagger}\) into (61), and ignoring the infinite self-energy terms, then gives \[\begin{split}\delta\Phi&=-\frac{9G^{2}}{2}\frac{m_{1}^{\dagger}m_{2}^{\dagger}}{|\mathbf{x}_{1}-\mathbf{x}_{2}|}\left(\frac{1}{|\mathbf{x}-\mathbf{x}_{1}|}+\frac{1}{|\mathbf{x}-\mathbf{x}_{2}|}\right)\\ &+\mathcal{O}\left(\epsilon^{3}\right).\end{split} \tag{64}\] The above analysis may be easily extended to an arbitrary number \(N\) of point particles. As with (101), we do not really expect a precise agreement between (64) and an equivalent exact formula following from our considerations in Section II.4 -- once again however, the corrections are of a comparable magnitude. For modelling galaxy rotation curves while retaining some analytical simplicity, however, one must consider axisymmetric density distributions of the kind introduced already in Section III.1. In this case, the integral solution in the first equation in (26) may be written in cylindrical polar coordinates with azimuthal symmetry: \[\begin{split}\Phi(R,z)=-2G\int_{0}^{\infty}\mathrm{d}R^{\prime}\int_{-\infty}^{\infty}&\mathrm{d}z^{\prime}\,\rho^{\dagger}(R^{\prime},z^{\prime})\\ &\times R^{\prime}\sqrt{\frac{m}{RR^{\prime}}}K(m),\end{split} \tag{65}\] where \(K(m)\) is a complete elliptic integral function of the first kind and \(m\equiv 4RR^{\prime}/\left[(R+R^{\prime})^{2}+(z-z^{\prime})^{2}\right]\). Moreover, the derivatives \(\partial\Phi/\partial R\) and \(\partial\Phi/\partial z\) may also be expressed analytically as \[\frac{\partial\Phi}{\partial R} =G\int_{0}^{\infty}\mathrm{d}R^{\prime}\int_{-\infty}^{\infty}\mathrm{d}z^{\prime}\,\rho^{\dagger}(R^{\prime},z^{\prime})\] \[\times\frac{R^{\prime}}{R}\sqrt{\frac{m}{RR^{\prime}}}\left[K(m)+\frac{1}{2}\left(\frac{R}{R^{\prime}}-\frac{2-m}{m}\right)\frac{mE(m)}{1-m}\right], \tag{66a}\] \[\frac{\partial\Phi}{\partial z} =\frac{G}{2}\int_{0}^{\infty}\mathrm{d}R^{\prime}\int_{-\infty}^{\infty}\mathrm{d}z^{\prime}\,\rho^{\dagger}(R^{\prime},z^{\prime})\] \[\times\left(\frac{z-z^{\prime}}{R}\right)\sqrt{\frac{m}{RR^{\prime}}}\frac{mE(m)}{1-m}, \tag{66b}\] where \(E(m)\) denotes a complete elliptic integral of the second kind.
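For an axisymmetric source, (65) can be evaluated directly with standard quadrature and elliptic-integral routines. The sketch below is our own (the Gaussian-disc density, scale lengths and field point are illustrative assumptions, not the paper's choices); scipy's ellipk takes the parameter \(m\) defined below (65), and the logarithmic singularity of \(K\) at \(R^{\prime}=R\), \(z^{\prime}=z\) is integrable and is flagged to the integrator via the points argument.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# Illustrative evaluation (not the paper's code) of the axisymmetric potential
# in Eq. (65).  Units: G = 1.  The Gaussian disc below is an arbitrary choice.
G = 1.0
Rd, zd = 1.0, 0.1                                  # assumed scale length / thickness
rho = lambda Rp, zp: np.exp(-Rp / Rd) * np.exp(-0.5 * (zp / zd) ** 2)

def Phi(R, z, Rmax=20.0, zmax=2.0):
    """Double quadrature of Eq. (65); the integrable log singularity of K(m)
    at R' = R, z' = z is passed to quad as a known break point."""
    def inner(Rp):
        m = lambda zp: 4 * R * Rp / ((R + Rp) ** 2 + (z - zp) ** 2)
        f = lambda zp: rho(Rp, zp) * Rp * np.sqrt(m(zp) / (R * Rp)) * ellipk(m(zp))
        return quad(f, -zmax, zmax, limit=200, points=[z])[0]
    val, _ = quad(inner, 1e-8, Rmax, limit=200, points=[R])
    return -2.0 * G * val

print(Phi(2.0, 0.5))    # potential at an off-plane field point
```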
By analogy, the \(\mathcal{O}\left(\epsilon^{2}\right)\) correction (61) may immediately be written as \[\begin{split}\delta\Phi(R,z)=9G\int_{0}^{\infty}\mathrm{d}R^{ \prime}&\int_{-\infty}^{\infty}\mathrm{d}z^{\prime}\,\Phi(R^{ \prime},z^{\prime})\,\rho^{\dagger}(R^{\prime},z^{\prime})\\ &\times R^{\prime}\sqrt{\frac{m}{RR^{\prime}}}K(m)+\mathcal{O} \left(\epsilon^{3}\right).\end{split} \tag{67}\] Some analytical density-potential pair solutions to (65) exist [70; 71], most notably for uniform density spheroids (plus some non-axisymmetric examples, such as a uniform density triaxial ellipsoid [72]). In principle, one could model a galaxy using a very flattened uniform density prolate spheroid to obtain an analytical expression for \(\Phi(R,z)\), or perhaps the closely related Miyamoto-Nagai density distribution [66] employed by Ludwig [73] - we will consider this distribution further in Section IV, in the context of intragalactic lensing. The resulting expression for \(\Phi(R,z)\) may then be substituted into (67) to obtain \(\delta\Phi(R,z)\), but no analytical solution exists in this latter case, even if the density is uniform. Thus, there seems no alternative to evaluating (67) numerically. Since none of the analytical density-pair solutions to (65) are a particularly good approximation to a real galaxy, one might instead consider an alternative form for the specified density distribution that is more appropriate, but in that case one must perform _both_ integrals (65) and (67) numerically, where the integrand of the latter is itself described only numerically as output from the former. Either approach is reasonable depending on the required level of approximation of realistic galactic density profiles. In fact, since we are most interested here in galaxy rotation curves it is useful before performing any numerical integrations first to derive a direct expression for \(|\mathbf{v}(R,z)|^{2}=R\partial\Phi_{T}(R,z)/\partial R\), where we have defined the total gravitational potential up to \(\mathcal{O}\left(\varepsilon^{2}\right)\) as \(\Phi_{T}(R,z)\equiv\Phi(R,z)+\delta\Phi(R,z)\) and we follow the approach in GEFC/[7] of neglecting the second term in the \(\mathcal{O}\left(\varepsilon^{2}\right)\) massive particle equations of motion (39). Following Ludwig in [73], we note that observations of the rotation velocity are typically measured along the galactic equatorial plane, so one may consider merely \(|\mathbf{v}(R,0)|^{2}=R\partial\Phi_{T}(R,0)/\partial R+\mathcal{O}\left( \varepsilon^{3}\right)\). In particular, from (66a), one has \[\begin{split}|\mathbf{v}(R,0)|^{2}=G&\int_{0}^{\infty }\mathrm{d}R^{\prime}\int_{-\infty}^{\infty}\mathrm{d}z^{\prime}\,\left[1- \frac{9\Phi(R^{\prime},z^{\prime})}{2}\right]\\ &\times\rho^{\dagger}(R,z^{\prime})\,F(R,R^{\prime},z^{\prime})+ \mathcal{O}\left(\varepsilon^{3}\right),\end{split} \tag{68}\] where for convenience we have defined the function \[\begin{split} F(R,R^{\prime},z^{\prime})\equiv R^{\prime}& \sqrt{\frac{n}{RR^{\prime}}}\left[K(n)\right.\\ &+\left.\frac{1}{2}\left(\frac{R}{R^{\prime}}-\frac{2-n}{n}\right) \frac{nE(n)}{1-n}\right],\end{split} \tag{69}\] in which \(n\equiv 4RR^{\prime}/\left[(R+R^{\prime})^{2}+z^{\prime 2}\right]\). In principle, one may obtain the rotation curve \(|\mathbf{v}(R,0)|\) in the equatorial plane of the galaxy for any specified density distribution \(\rho^{\dagger}(R,z)\) by (numerically) evaluating the double integral (68), where \(\Phi(R,z)\) in the integrand is itself given by (65). 
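As a quick consistency check on the kernel defined in (69), the following snippet (ours, with illustrative values only) confirms numerically that \(F(R,R^{\prime},0)\) reduces to the simplified closed form that appears below as (72).

```python
import numpy as np
from scipy.special import ellipk, ellipe

# Illustrative check (not from the paper) that F(R, R', z') of Eq. (69),
# evaluated at z' = 0, equals the simplified kernel quoted later in Eq. (72).
def F_general(R, Rp, zp):
    n = 4 * R * Rp / ((R + Rp) ** 2 + zp ** 2)
    return Rp * np.sqrt(n / (R * Rp)) * (
        ellipk(n) + 0.5 * (R / Rp - (2 - n) / n) * n * ellipe(n) / (1 - n))

def F_simplified(R, Rp):
    n0 = 4 * R * Rp / (R + Rp) ** 2
    return 2 * Rp / (R + Rp) * (ellipk(n0) + (R + Rp) / (R - Rp) * ellipe(n0))

for R, Rp in [(2.0, 1.0), (1.0, 3.0), (5.0, 4.0)]:
    print(F_general(R, Rp, 0.0), F_simplified(R, Rp))   # the two columns agree
```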
Again following Ludwig in [73], one may avoid the evaluation of double integrals by analytically approximating the integrals over \(z^{\prime}\) in (65) and (68) under the assumption of a vertically symmetric galactic density distribution and a thin-disc approximation of the form \[\rho^{\dagger}(R,z)\approx\rho^{\dagger}(R,0)\exp\left(-\frac{z^{2}}{2\Delta ^{2}(R)}\right), \tag{70}\] where \(\Delta(R)\) is a radially-dependent characteristic disc width with some assumed form, and we necessarily lose strict contact with the order of \(\varepsilon\), transitioning to the notation (\(\approx\)). For small values of \(\Delta(R)\), one can estimate integrals over \(z^{\prime}\) analytically using the Laplace approximation, which amounts to setting \(z^{\prime}=0\) in the integrand and multiplying by the volume \(\sqrt{2\pi}\,\Delta(R)\) of the Gaussian factor in (70); this yields \[\begin{split}|\mathbf{v}(R,0)|^{2}\approx\sqrt{2\pi}GR\int_{0}^{ \infty}&\left[1-\frac{9\Phi(R^{\prime},0)}{2}\right]\rho^{ \dagger}(R^{\prime},0)\\ &\times\Delta(R^{\prime})\,F(R,R^{\prime},0)\,\mathrm{d}R^{\prime }.\end{split} \tag{71}\] where the function \(F(R,\,R^{\prime},0)\) may be written in the simplified form \[\begin{split} F(R,R^{\prime},0)=\frac{2R^{\prime}}{R+R^{\prime }}&\left[K\,\left(\frac{4RR^{\prime}}{(R+R^{\prime})^{2}}\right) \right.\\ &+\frac{R+R^{\prime}}{R-R^{\prime}}\,E\left(\frac{4RR^{\prime}} {(R+R^{\prime})^{2}}\right)\,\Bigg{]},\end{split} \tag{72}\] and \(\Phi(R,0)\) in the integrand of (71) is itself given by \[\begin{split}\Phi(R,0)\approx-4\sqrt{2\pi}G\int_{0}^{\infty}& \frac{R^{\prime}\rho^{\dagger}(R^{\prime},0)\Delta(R^{\prime})}{R+R^{\prime }}\\ &\times K\left(\frac{4RR^{\prime}}{(R+R^{\prime})^{2}}\right)\, \mathrm{d}R^{\prime}.\end{split} \tag{73}\] It is worth noting that GEFC/[7] also assumes a vertically symmetric thin disc approximation for the galactic density distribution, but of a slightly different form to that in (70). In particular, GEFC/[7] assumes the fully separable distribution \[\rho^{\dagger}(R,z)=\rho_{0}^{\dagger}\exp\left(-\frac{R}{h_{R}}\right)\exp \left(-\frac{|z|}{h_{z}}\right), \tag{74}\] where \(\rho_{0}^{\dagger}\equiv\rho^{\dagger}(0,0)\) and both the radial and vertical factors are exponentials characterised by the constant scale lengths \(h_{R}\) and \(h_{z}\), respectively. One may adopt an approach analogous to the Laplace approximation used above, whereby one sets \(z^{\prime}=0\) in the integrand in (68) but now multiplies by the volume \(2h_{z}\) of the exponential \(z\)-dependent factor in (74). In this case, the expressions (71) and (73) are replaced by \[\begin{split}|\mathbf{v}(R,0)|^{2}\approx 2h_{z}GR\rho_{0}^{ \dagger}\int_{0}^{\infty}\left[1-\frac{9\Phi(R^{\prime},0)}{2}\right]\\ \times\exp\left(-\frac{R^{\prime}}{h_{R}}\right)\,F(R,R^{\prime},0)\,\mathrm{d}R^{\prime},\end{split} \tag{75a}\] \[\begin{split}\Phi(R,0)\approx-8h_{z}G\rho_{0}^{\dagger}\int_{0}^{ \infty}\frac{R^{\prime}}{R+R^{\prime}}\exp\left(-\frac{R^{\prime}}{h_{R}}\right) \\ \times K\left(\frac{4RR^{\prime}}{(R+R^{\prime})^{2}}\right)\, \mathrm{d}R^{\prime}.\end{split} \tag{75b}\] It is of interest to compare the results obtained from the above equations with those of GEFC/[7]; that work instead uses an approach based on gravitational lensing, which we analyse separately in Section IV. One could work directly with the equations (75a) and (75b), but the integral in (75a) is numerically challenging to perform directly. 
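The Laplace step used above to collapse the \(z^{\prime}\) integrals can be sanity-checked in isolation. The snippet below (ours; the test kernel g and the width \(\Delta\) are arbitrary) compares \(\int g(z^{\prime})\exp(-z^{\prime 2}/2\Delta^{2})\,\mathrm{d}z^{\prime}\) with \(\sqrt{2\pi}\,\Delta\,g(0)\).

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check (not from the paper) of the Laplace-type approximation
# used to pass from (68) to (71): for a thin vertical profile, the z' integral
# of g(z') exp(-z'^2 / 2 Delta^2) is approximately sqrt(2 pi) * Delta * g(0).
Delta = 0.05                                    # assumed thin-disc width
g = lambda zp: 1.0 / (1.0 + zp ** 2)            # arbitrary smooth test kernel

exact, _ = quad(lambda zp: g(zp) * np.exp(-zp**2 / (2 * Delta**2)), -np.inf, np.inf)
laplace = np.sqrt(2 * np.pi) * Delta * g(0.0)
print(exact, laplace, abs(exact - laplace) / exact)   # relative error ~ Delta^2
```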
As a first step, however, one may evaluate more straightforwardly the integral (75b) and the corresponding expression for \(\delta\Phi(R,0)\), which is given by \[\begin{split}\delta\Phi(R,0)&\approx 36h_{z}G\rho_{0}^{ \dagger}\int_{0}^{\infty}\frac{R^{\prime}}{R+R^{\prime}}\,\Phi(R^{\prime},0)\\ &\times\exp\left(-\frac{R^{\prime}}{h_{R}}\right)\,K\left(\frac{4 RR^{\prime}}{(R+R^{\prime})^{2}}\right)\,\mathrm{d}R^{\prime}.\end{split} \tag{76}\] The integrands in (75b) and (76) each contain an integrable singularity at \(R^{\prime}=R\), where the argument of \(K\) becomes unity; this singularity occurs because the Green's function must reproduce a delta function at this point. The singularity is easily accommodated by breaking the integral into two parts and using a standard one-sided open quadrature formula on either side of the singularity. Following GEFC[7], we consider a galaxy having the density distribution (74) with total (baryonic) mass \(M^{\dagger}\equiv 4\pi h_{z}h_{R}^{2}\rho_{0}^{\dagger}=3\times 10^{11}\) M\({}_{\odot}\), radial scale length \(h_{R}=1.5\) kpc and vertical scale length \(h_{z}=0.03h_{R}\). It is worth noting that our use of the Laplace method to approximate integrals over \(z^{\prime}\) analytically means that the expressions (75b) and (76) are independent of the value of \(h_{z}\), and depend only on \(M^{\dagger}\) and \(h_{R}\). Since the Laplace method is valid only in the thin-disc approximation \(h_{z}\ll h_{R}\), we expect this to be reasonably accurate for our choice of \(h_{z}=0.03h_{R}\). In Figure 2, we plot the fractional correction \(\delta\Phi(R,0)/\Phi(R,0)\) to the Newtonian potential as a function of galactic radius (in kpc) that arises from the leading-order graviton self-interaction, as obtained by performing the integrals (75b) and (76) numerically. As one can see from the figure, \(\delta\Phi(R,0)/\Phi(R,0)\sim\mathcal{O}\left(10^{-5}\right)\) over the entire range in galactic radius. Such a small correction will lead to a similarly small fractional correction to the orbital velocity \(|\mathbf{v}(R,0)|\) and so we conclude that the leading-order graviton self-interaction has a negligible effect on galaxy rotation curves. This is in stark contrast to the findings in GEFC[7], derived using a gravitational lensing approach. It will be of interest to determine how the GEFC calculation leads to such a different conclusion, and we will explore this issue further in Section IV.4. As a check of the numerical calculation, it is straightforward to calculate the resulting rotation curve, which is plotted in Figure 3 and agrees well with that of GEFC[7] in the absence of the supposed self-interaction correction. We reiterate in closing that the explicit rotation curve corrections obtained within this section are _proxies_ for the genuine \(\mathcal{O}\left(\epsilon^{2}\right)\) effects implied by GR. However, as we anticipated in Section III.3, nothing in the analysis suggests that the GEFC phenomena are preferentially hiding in the physics which is thrown out by radiative averaging. An attempt to obtain the genuine \(\mathcal{O}\left(\epsilon^{2}\right)\) rotation curve would likely yield similar, uninteresting results, but without enjoying the simple perscription in (60): if this claim is to be refuted, the calculation should be performed. By this point we hope it is apparent that such calculations are not necessary. 
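For reference, a minimal version of the numerical calculation described above is sketched below (ours, not the authors' code): \(\Phi(R,0)\) from (75b) and \(\delta\Phi(R,0)\) from (76) for the stated \(M^{\dagger}\) and \(h_{R}\), with the integrable \(K\) singularity at \(R^{\prime}=R\) passed to the integrator as a known break point. The constant \(G/c^{2}\) in kpc per solar mass is an approximate value we supply so that the potentials are dimensionless.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# Sketch (our own, not the paper's code) of the numbers behind Figure 2:
# Phi(R,0) from (75b) and delta_Phi(R,0) from (76) for the exponential disc
# (74), with potentials made dimensionless via G/c^2 in kpc per solar mass.
G_over_c2 = 4.79e-17                        # kpc / M_sun (approximate)
M_dag, h_R = 3.0e11, 1.5                    # baryonic mass [M_sun], scale [kpc]
A = G_over_c2 * M_dag / (4 * np.pi * h_R**2)   # = h_z * G * rho0_dag / c^2 [1/kpc]
Rmax = 40.0 * h_R                           # truncation radius for the integrals

def kernel(R, Rp):
    return Rp / (R + Rp) * np.exp(-Rp / h_R) * ellipk(4 * R * Rp / (R + Rp) ** 2)

def Phi0(R):
    # Eq. (75b); the K singularity at R' = R is integrable ('points' hint).
    val, _ = quad(lambda Rp: kernel(R, Rp), 1e-6, Rmax, points=[R], limit=400)
    return -8.0 * A * val

Rgrid = np.linspace(0.05, 20.0, 120)
Phigrid = np.array([Phi0(R) for R in Rgrid])

def dPhi0(R):
    # Eq. (76), with Phi(R',0) interpolated from the precomputed grid.
    f = lambda Rp: np.interp(Rp, Rgrid, Phigrid, right=0.0) * kernel(R, Rp)
    val, _ = quad(f, 1e-6, Rmax, points=[R], limit=400)
    return 36.0 * A * val

for R in (2.0, 5.0, 10.0, 20.0):
    print(R, dPhi0(R) / Phi0(R))            # fractional correction
```

With these inputs the ratio \(\delta\Phi/\Phi\) should come out at the \(10^{-5}\) level across the radial range, consistent with Figure 2.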
Particularly, the adjustments to the rotation curve introduced by the baryon profile clearly dwarf the non-linear phenomena; our analysis makes it clear that one may rearrange the \(\mathcal{O}\left(\epsilon^{2}\right)\) effects entirely by demanding that a given profile be represented by \(\rho\), \(\rho^{*}\) or \(\rho^{\dagger}\) -- galactic baryon distributions, even if they can be measured, are not likely to be consistent to such precision across galaxies with flat and rising rotation curves [47]. ## IV Intragalactic lensing In Section III we attempted to'steel-man' the case for GEFC near galactic discs, by discarding the GEFC scalar graviton of GEFC[1, 3, 4, 5, 6, 8] and studying the \(\mathcal{O}\left(\epsilon^{2}\right)\) phenomena in the context of actual GR. However the main claims of GEFC/[7], i.e. the most substantial exploration of galactic disc GEFC, are not directly based on either of these methods. The methods used there instead concern the lensing of light rays in galaxies. These are meant to show how gravitational field lines are distorted in such a way that the (cylindrical) radial gravitational force near the edge of the galaxy declines like \(1/R\) rather than the Newtonian \(1/R^{2}\). The key claim made in GEFC/[7] is that if one calculates the geodesic paths of photons emitted radially from the nucleus of an axisymmetric disc galaxy, then those emitted close to the disc are deflected such that they end up moving _parallel_ to the disc by the time the edge of the galaxy is reached. These photon paths are meant to model gravitational field lines, meaning that the spreading of the field lines is just in one rather than two dimensions, leading to the \(1/R\) force dependency. We therefore devote this final section to exploring this claim, before concluding in Section V. Figure 2: The fractional correction \(\delta\Phi(R,0)/\Phi(R,0)\) to the Newtonian potential as a function of galactic radius (in kpc), which arises from the leading-order graviton self-interaction, for a galaxy having the density distribution (74) with total (baryonic) mass \(M^{\dagger}=4\pi h_{z}h_{R}^{2}\rho_{0}^{\dagger}=3\times 10^{11}\) M\({}_{\odot}\), radial scale length \(h_{R}=1.5\) kpc and vertical scale length \(h_{z}=0.03h_{R}\). ### Exact lensing within the linearised background The calculations of the photon/graviton paths in GEFC/[7] are carried out by using the small angle deflection formula \[\delta\beta=\frac{4GM_{G}}{h}, \tag{77}\] for a 'field line' passing near a point mass \(M_{G}\) with an impact parameter \(h\), and with (in this case polar) angle of the ray tangent \(\beta\). This deflection is then integrated along paths using a mass distribution model in which the galaxy is decomposed into slices in the form of concentric rings. We will return to consider these calculations later, but in the meantime we note that clearly, since the GEFC/[7] deflection effects are just added up along a path assuming this simple formula, it will be sufficient for our own comparison purposes to carry out the calculations for a photon path in a linearised gravitational background for an axisymmetric system. If we do the photon path calculations _exactly_ in this linearised background, this must certainly capture any effects which the GEFC[7] approach is able to capture. The non-linearity which is proposed in GEFC/[7] to be responsible for the rotation curve effects, would then come about from the gravitational field lines suffering distortion as they propagate within this background. 
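For orientation, the scale of the individual deflections entering (77) can be estimated directly; the snippet below (ours, with illustrative galactic values for \(M_{G}\) and \(h\)) evaluates \(4GM_{G}/(c^{2}h)\).

```python
# Order-of-magnitude check (illustrative, not from the paper) of the
# small-angle formula (77), delta_beta = 4 G M_G / (c^2 h).
G_over_c2 = 4.79e-17          # kpc per solar mass (approximate)
M_G = 3.0e11                  # enclosed mass treated as a point [M_sun]
h = 1.5                       # impact parameter [kpc]

delta_beta = 4 * G_over_c2 * M_G / h                 # radians
print(delta_beta, delta_beta * 206265, "arcsec")     # ~ 4e-5 rad ~ 8 arcsec
```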
The way in which we carry out the lensing calculations merits some explanation in terms of the effective metric used. In the first instance, we employ our axisymmetric formulation from Section III.1. Working with (33) and (13), we derive the _exact_ equations for particle and photon motion. Thus this is _exact_ lensing within what has the possibility, at least, of being an exact setup for a general static axisymmetric system. We then insert \(a_{1}\), \(b_{1}\) etc. into the lensing equations, expand these to \(\mathcal{O}\left(\varepsilon\right)\), and insert the results just given in Section III.1 for the values of \(a_{1}\) through \(d_{1}\) into these equations, in order to calculate the lensing deflection for a given \(a_{1}\). At the relevant order, clearly the equivalent metric which we can think of as giving rise to the lensing, is therefore \[\begin{split}\mathrm{d}s^{2}&=\left(1-2a_{1}\right) \mathrm{d}t^{2}\\ &\quad-\left(1+2a_{1}\right)\left(\mathrm{d}R^{2}+R^{2}\mathrm{d }\varphi^{2}+\mathrm{d}z^{2}\right),\end{split} \tag{78}\] in cylindrical polar coordinates. We have concluded in Section II that exact static spacetimes corresponding to (78) cannot be used, but the ansatz is nonetheless consistent with the line elements Eqs. (17) and (18), or Eqs. (25a) and (25b) to \(\mathcal{O}\left(\varepsilon^{2}\right)\). The coefficient \(a_{1}\) is meant to be small, and to be a function of \(R\) and \(z\). The physical setup envisaged is a static axisymmetric mass distribution, with no pressure or rotation (i.e. \(P=\mathbf{v}=0\)). As discussed in Section I.1 this would of course collapse ordinarily, but we assume that the static distribution is possible since we are treating the density as small, of the same order as \(a_{1}\), and that the pressure that would be needed for support in the absence of rotation (which GEFC/[7] assumes), would come in at the next order in both quantities. We would like to use a density profile for the galaxy that is continuously differentiable so that there are no possible issues with lack of analycity in the calculations. Also we would like the density profile to result in an explicit analytic expression for the Newtonian potential, so that when we integrate the photon path numerically, we can evaluate the equations of motion for the photon without having to perform a numerical integral in order to get the potential and its gradients at the position where the photon is. To do this we will use a Miyamoto-Nagai (MN) profile for the density and potential. In this approach to density profiles [66], one uses a fairly simple potential distribution given by \[a_{1}=\frac{GM}{\sqrt{R^{2}+\left(a+\sqrt{b^{2}+z^{2}}\right)^{2}}}. \tag{79}\] Here \(a\) and \(b\) are characteristic scales in the \(R\) and \(z\) directions. Note that within Section IV we will be using a system of units where length is measured in kiloparsecs, appropriate to galactic scales. The density profile which is implied by the Poisson equation (26b) is then \[\begin{split}&\rho(R,z)=\frac{Mb^{2}}{4\pi}\\ &\times\frac{aR^{2}+\left(a+3\sqrt{b^{2}+z^{2}}\right)\left(a+ \sqrt{b^{2}+z^{2}}\right)^{2}}{\left[R^{2}+\left(a+\sqrt{b^{2}+z^{2}}\right)^ {2}\right]^{5/2}\left(b^{2}+z^{2}\right)^{3/2}},\end{split} \tag{80}\] which agrees with the \(\rho(R,z)\) given in [73]. Recall that when integrated over space (without the determinant of the imposed spatial metric), the \(\rho\) defined here yields the overall'mass' \(M\) used in the potential. 
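The Miyamoto-Nagai pair (79)-(80) can be checked numerically; the sketch below (ours, with \(G=M=1\) and the scale lengths used later) verifies by finite differences that \(\nabla^{2}a_{1}=-4\pi G\rho\) at a few sample points, i.e. that \(a_{1}\) is minus the Newtonian potential of the density (80). The sign convention is our reading of (26b).

```python
import numpy as np

# Numerical sanity check (illustrative) that the Miyamoto-Nagai pair (79)-(80)
# satisfies the axisymmetric Poisson equation, Laplacian(a1) = -4 pi G rho.
G, M, a, b = 1.0, 1.0, 1.5, 0.045

def a1(R, z):
    return G * M / np.sqrt(R**2 + (a + np.sqrt(b**2 + z**2))**2)

def rho(R, z):
    s = np.sqrt(b**2 + z**2)
    num = a * R**2 + (a + 3 * s) * (a + s)**2
    den = (R**2 + (a + s)**2)**2.5 * s**3
    return M * b**2 / (4 * np.pi) * num / den

def laplacian_a1(R, z, h=1e-4):
    # cylindrical Laplacian: d2/dR2 + (1/R) d/dR + d2/dz2
    d2R = (a1(R + h, z) - 2 * a1(R, z) + a1(R - h, z)) / h**2
    dR = (a1(R + h, z) - a1(R - h, z)) / (2 * h)
    d2z = (a1(R, z + h) - 2 * a1(R, z) + a1(R, z - h)) / h**2
    return d2R + dR / R + d2z

for R, z in [(1.0, 0.2), (3.0, 0.05), (0.5, 1.0)]:
    print(-laplacian_a1(R, z) / (4 * np.pi * G * rho(R, z)))   # ratio ~ 1
```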
Our choices here differ from those made in GEFC[7], which uses a density distribution which is the product of exponentials in the \(R\) and \(z\) directions. This leads to a cusp in density, and therefore a lack of analycity along the galactic plane. Furthermore, for a thick exponential disc we are likely to need a remaining integral to be done numerically in order to get the potential and its derivatives at a given point, whereas, as we have seen, the Miyamoto-Nagai density/potential pair are both simple analytic expressions. We will return below to any differences with the current analysis this causes, but we will aim to make the example galaxy we use as much like the one used in GEFC/[7] as possible in terms of its overall properties, such as mass, and the typical scales in the \(R\) and \(z\) directions. ### Gravitational lensing calculations As described above, we are going to carry out the calculations in the case where the gravitational fields are treated at \(\mathcal{O}\left(\varepsilon\right)\), and the 'non-linearity' is brought in by considering the lensing of light rays, which act as a proxy for 'gravitational field lines'. So the aim is that we send out light rays from the origin and see how much they bend before heading off to infinity. Reverting to the Cartesian coordinates, we parameterise the photon momentum \(p^{\mu}\) (with energy \(E=p\)) as \[\begin{split}& p^{0}=p,\quad p^{1}=p\cos\alpha\cos\,\beta,\\ & p^{2}=-p\sin\alpha\cos\beta,\quad p^{3}=p\sin\beta.\end{split} \tag{81}\] Then treating the lensing exactly, but within the linearised gravitational fields, one can demonstrate the following general results for motion in the \((R,z)\) plane, where \(\alpha=\varphi=0\). Introducing the affine parameter \(\lambda\) along the photon path, in place of the interval \(s\) \[\begin{split}\frac{\mathrm{d}\beta}{\mathrm{d}\lambda}& =2p\left(-\sin\beta\frac{\partial a_{1}}{\partial R}+\cos\beta \frac{\partial a_{1}}{\partial z}\right),\\ \frac{\mathrm{d}p}{\mathrm{d}\lambda}&=p^{2}\left( \cos\beta\frac{\partial a_{1}}{\partial R}+\sin\beta\frac{\partial a_{1}}{ \partial z}\right),\\ \frac{\mathrm{d}R}{\mathrm{d}\lambda}&=p\cos\beta, \quad\frac{\mathrm{d}z}{\mathrm{d}\lambda}=p\sin\beta,\quad\frac{\mathrm{d} \vartheta}{\mathrm{d}\lambda}=\frac{p\cos(\beta+\vartheta)}{\sqrt{R^{2}+z^{2} }},\end{split} \tag{82}\] where \(\vartheta\) is the conventional polar angle of spherical coordinates. Note that \(\beta\) and \(\vartheta\) are defined in opposite'senses', and respectively parameterise position and deflection, but are both polar in nature. We first carry out a numerical evolution of these equations, and then seek to find an analytic approximation to the results, to help with understanding their physical meaning. We will do this using the Miyamoto-Nagai potential for \(a_{1}\), since here there are no discontinuities in derivatives, no infinitesimal mass sheet in the \(z=0\) plane, and everything concerning the potential and density distributions themselves is analytic. For the galaxy parameters, we will choose values yielding a similar ellipticity and overall dimensions as used in GEFC[7], but with them relating to the MN potential and density distribution, rather than one which has a product of exponentials in \(R\) and \(z\) for the density. So our values are \(M=3\times 10^{11}M_{\odot}\), \(a=1.5\,\mathrm{kpc}\) and \(b=0.045\,\mathrm{kpc}\). The rotation curve we would get for such a galaxy is shown in Fig. 4. 
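A minimal integration of the exact-lensing system (82) in the potential (79) is sketched below (ours, not the paper's code), in kpc units with \(G=c=1\) and the example-galaxy parameters; the \(\vartheta\) equation is omitted since it is not needed for the deflection. The resulting change in \(\beta\) should come out at the few-thousandths-of-an-arcsecond level, cf. Fig. 6.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch (our own, not the paper's code) integrating the lensing ODEs (82) in
# the Miyamoto-Nagai potential (79), in kpc units with G = c = 1, so that the
# mass enters as GM/c^2 in kpc.  Parameters follow the example galaxy.
GM = 3.0e11 * 4.79e-17          # ~ 1.44e-5 kpc for M = 3e11 M_sun (assumed G/c^2)
a, b = 1.5, 0.045               # MN scale lengths [kpc]

def grad_a1(R, z):
    s = np.sqrt(b**2 + z**2)
    q = (R**2 + (a + s)**2) ** 1.5
    return -GM * R / q, -GM * (a + s) * z / (s * q)     # (da1/dR, da1/dz)

def rhs(lam, y):
    beta, p, R, z = y
    daR, daz = grad_a1(R, z)
    dbeta = 2 * p * (-np.sin(beta) * daR + np.cos(beta) * daz)
    dp = p**2 * (np.cos(beta) * daR + np.sin(beta) * daz)
    return [dbeta, dp, p * np.cos(beta), p * np.sin(beta)]

beta0 = 18.0 / 206265.0          # initial inclination: 18 arcsec in radians
sol = solve_ivp(rhs, (0.0, 5.0), [beta0, 1.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-14)
print("change in beta [arcsec]:", (sol.y[0, -1] - beta0) * 206265)
```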
It can be seen that such a galaxy produces high rotation velocities, over \(400\,\mathrm{km\,s^{-1}}\) at the peak. Nevertheless, it is not completely unreasonable, and we will use it as the example for our tests. The contours for density are shown in Fig. 5, and we can see that the \(a\!:\!b\) ratio of \(\sim 33\) has given a highly flattened galaxy. Again, this seems acceptable for a test, since it is best to look for effects in an object that stands the best chance of yielding something interesting, whilst not being impossible. Fig. 6 shows the change in the inclination angle \(\beta\) when the photon is launched with a starting inclination (to the \(x\)-axis) of \(18\) arcsec. The vertical scale is in arcsec and shows that the 'flattening' is by just \(0.008^{\prime\prime}\) for this case. In terms of the trajectory, this is completely imperceptible, and we do not show the trajectory in the \((x,z)\) plane for this example, since it looks just like a radial straight line. To explore the parameter space of this very small effect, and seek to find if we can get much larger values of the deflection, it makes sense to attempt to get an analytical approximation to the numerical results. We will then at least know the dependencies of the deflection on quantities such as the mass and two 'principal radii' of the galaxy. We can do this by inserting the MN results (79) for the \(a_{1}\) derivatives into the expression for \(\mathrm{d}\beta/\mathrm{d}\lambda\) in equation (82), and then expanding in small quantities. Note we assume that both the deflection _and_ the initial angle to the \(x\)-axis are small -- this matches the type of case we were looking at just now in the full numerical integration. Integrating along an affine path length \(\lambda\) and assuming an initial angle of \(\beta_{0}\) we find the following expression for the deflection in \(\beta\): \[\Delta\beta=-\frac{2\,M\beta_{0}a\left(\sqrt{a^{2}+2ab+b^{2}+\lambda^{2}}-a-b\right)}{(a+b)b\sqrt{a^{2}+2ab+b^{2}+\lambda^{2}}}. \tag{83}\] To assess the quality of the approximation, we can evaluate this expression for the same parameters as used to create Fig. 6. In fact we do not show a fresh plot for this case, since the expression just given matches the full numerical result over the whole range better than the eye can discern the difference. We should also get approximations for the \(R\) and \(z\) coordinates of the photon, since in principle it is these that measure the trajectory and from which we should derive the deflection (though we would expect this to match what we get from the momentum angle \(\beta\) very closely in these cases). For \(z\) we get the interesting expression \[\begin{split}& z(\lambda)=\beta_{0}\lambda-\frac{2a\beta_{0}M}{(a+b)b}\\ &\quad\times\left[\ln\left(\frac{(a+b)}{\lambda+\sqrt{a^{2}+2ab+b^{2}+\lambda^{2}}}\right)(a+b)+\lambda\right].\end{split} \tag{84}\] We plot this against the exact numerical answer, for the same case as in Fig. 6, in Fig. 7. Here we can see a slight difference between the exact result and the analytical approximation, but it is clearly small. To see whether this \(z\)-result ties in with the \(\beta\) result of Fig. 6, we need to understand also how \(R\) evolves. Figure 4: Rotation curve, in \(\,\mathrm{km\,s^{-1}}\), for the example galaxy. In fact, at the accuracy being used here, we can take \[R=\lambda, \tag{85}\] and this is verified in the following plot, Fig. 8, which shows
the difference between the exact numerical \(R\) and the affine parameter \(\lambda\) as the latter goes over the range of integration used for the photon path, i.e. from 0 to 5. It can be seen that there is less than 1 part in \(10^{7}\) deviation between the two over this range. This means that the angle of motion, \(\mathrm{d}z/\mathrm{d}R\) can be obtained just by differentiating equation (84) w.r.t. \(\lambda\). Then evaluating this for the parameters of the galaxy, and for \(\lambda=5\), we find that the difference from the initial \(\beta_{0}\) is \(\approx 0.008\,\mathrm{arcsec}\), matching the result for \(\beta\) shown in Fig. 6. Thus the various approximations are all consistent for a case such as the present one. ### Parameter values for significant deflection In order to get a \(1/R\) instead of \(1/R^{2}\) behaviour for the force in the galactic plane, GEFC/[7] needs the photon paths (which are being used as proxies for 'gravitational field lines') to be bent enough that they end up moving roughly parallel to the plane once the edge of the galaxy is reached, as in the top right plot of Fig. 3 in GEFC/[7], for example. Although we have seen that for what appears to be a similar example galaxy to the one GEFC/[7] uses, the actual photon path deviation is many orders of magnitude below what GEFC/[7] proposes, it is of interest to see what sort of parameter values, specifically for \(M\), \(a\) and \(b\), we would need in order to get this type of behaviour happening. To do this, we can use our analytical approximations to get an estimate of what sort of values are required, and then check these out with the exact integrations, since it is likely that there will be some deviations between the two, given the extreme values of the parameters required. To get motion parallel to the plane, we need the deflection angle \(\Delta\beta\) in equation (83) to equal minus the initial angle, \(\beta_{0}\), itself. Solving this equation and assuming large \(\lambda\), i.e. that this is happening for the eventual asymptotic motion of the photon, we find that we need \[M\to\frac{(a+b)\,b}{2a}. \tag{86}\] For a highly flattened galaxy, with \(a\gg b\), we have \(M\to b/2\). This is very revealing. For a \(b\) of \(0.045\,\mathrm{kpc}\) as above, this means the mass needs to be \(\sim 4.7\times 10^{14}\,M_{\odot}\), i.e. of the scale of a large cluster of galaxies! Figure 5: Isodensity contours for the example galaxy discussed in the text. This has a Miyamoto–Nagai profile with \(M=3\times 10^{11}\,M_{\odot}\), \(a=1.5\,\mathrm{kpc}\) and \(b=0.045\,\mathrm{kpc}\). The outer contour is \(1/10^{\mathrm{h}}\) of the central density. Figure 6: Change in the \(\beta\) angle of emitted photon as a function of affine parameter, in an exact numerical calculation. The units of the \(\beta\) (vertical) axis are arcsec. Figure 7: Comparison of \(z\) taken from the exact numerical integration (red) with the approximation given in equation (84) (blue), where \(\lambda\) is the affine parameter for the photon path, covering the interval 0 to 5. Note that the undisturbed trajectory \(z=\lambda\beta_{0}\) has been subtracted from each curve so that the residuals can be compared. Ignoring the obvious problems with this, let us see what plots of the photon paths look like for this case, using first our analytical approximation. In Fig. 
9 we show plots of paths in the \((z,R)\) plane for photons fired out at a range of initial \(\beta\) angles, going in 11 steps between \(\beta_{0}=-0.001\) and \(\beta_{0}=+0.001\). (Note these angles are in radians, not arcsec.) We can see that indeed the paths become almost flattened. In the current analytical approximation, one finds that to get complete flattening, one needs \(M\approx 1.15\,b/2\). If one goes beyond this, then an interesting 'focussing' effect becomes visible. These two cases are shown in Fig. 10. We now need to look at how the trajectories behave if we carry out exact numerical integration, rather than using our analytical approximation. In Fig. 11 we show the result of the exact numerical calculation in blue, and of the analytical approximation in red, for two different values of \(M\). At the top we have the result for the initial case, corresponding to Fig. 9, where \(M\) was put to \(b/2\). The red curves here are thus the same as in Fig. 9. We can see that the exact calculation gives _less_ deflection than the approximate one, although the two sets of curves are not wholly dissimilar. In the bottom panel of Fig. 11, we show the equivalent but for \(M=0.88\,b\). The point of choosing this \(M\) is that for this value the exact curves (blue) become asymptotically parallel to the disc. Meanwhile, the larger deflection of the approximate curves (red) means that they turn round and refocus in this case. We have not shown it here, but if we continue increasing the mass, then the exact curves start to refocus as well, which is not surprising, and like the approximate curves, they appear to refocus exactly, i.e. all the curves go through the same point. This will be discussed further below. Another thing which it is useful to do at this point, is to illustrate where the types of trajectories we are plotting lie in relation to the disc of the galaxy itself. This is actually quite hard to show since the galaxy is very flattened, and the rays themselves are being emitted at angles very close to 0. Thus it is not possible to discern anything on a plot which has the same scaling for the \(R\) and \(z\) axes. In Fig. 12, we show a 'compromise' plot, where the \(z\)-axis scale is sufficiently expanded that the galactic density contours are clearly visible -- the outer contour here represents 1/20th of the central density, so gives some feel for the extent of the galaxy on the plot. The blue curves are the same as those shown in the right panel of Fig. 11, i.e. they are the exact curves for the case where \(M=0.88\,b\), which leads to the trajectories just being flattened. We see that the photon paths we are looking at are indeed very close to the disc of the galaxy. Finally, in terms of the exact integrations, we show a plot for \(M=1.2\,b\), which leads to refocussing even in the exact case, but where we have covered a wider range of initial angles. This is shown in Fig. 13, We can see here that the rays closest to the disc show refocussing, those slightly further out are 'just flattened' and those further out still are relatively undeflected.This behaviour seems at first sight realistic, and is _not_ what we obtain from the analytical approximation. We show this for the same range of initial \(\beta\) but for a slightly smaller \(M\) of \(0.88\,b\) (since otherwise the refocussing happens too quickly), in Fig. 14. Here it is clear that each trajectory has exactly the same'shape', with just a different vertical scaling. 
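The numbers behind this discussion are easy to reproduce; the snippet below (ours, with an approximate value of \(G/c^{2}\) in kpc per solar mass) evaluates the analytic deflection (83) at \(\lambda=5\) for the realistic galaxy mass, and converts the geometric masses \(M\sim b\) required for flattening back into solar masses.

```python
import numpy as np

# Illustrative numbers (our own, not from the paper): the analytic deflection
# (83) for the example galaxy, and the geometric masses M ~ b that flatten the
# trajectories, converted back to solar masses.
G_over_c2 = 4.79e-17                  # kpc per solar mass (approximate)
a, b = 1.5, 0.045                     # MN scale lengths [kpc]
lam = 5.0                             # affine path length [kpc]
beta0 = 18.0 / 206265.0               # 18 arcsec in radians

# Eq. (83) for the realistic galaxy mass M = 3e11 M_sun (in kpc, G = c = 1).
M_real = 3.0e11 * G_over_c2
root = np.sqrt(a**2 + 2 * a * b + b**2 + lam**2)
dbeta = -2 * M_real * beta0 * a * (root - a - b) / ((a + b) * b * root)
print("deflection [arcsec]:", dbeta * 206265)          # ~ -0.008

# Eq. (86): critical geometric mass for asymptotic flattening, plus the
# cluster-scale masses M = b/2, 0.88 b, 1.2 b discussed in the text.
print("critical M [M_sun]:", (a + b) * b / (2 * a) / G_over_c2)
for f in (0.5, 0.88, 1.2):
    print(f, "* b ->", f * b / G_over_c2, "M_sun")      # ~ 5e14 - 1e15
```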
This is already evident from the form of approximation in equations (83) and (84), of course, which just scale directly Figure 8: Deviation of \(R\) taken from the exact numerical integration with the approximation \(R=\lambda\), where \(\lambda\) is the affine parameter for the photon path, covering the interval 0 to 5. Figure 9: Photon paths in the \((R,z)\) plane calculated using the analytical approximation for a range of initial \(\beta_{0}\) in the case \(M=b/2\). Note the two axes have been scaled independently. proportional to the initial angle \(\beta_{0}\). So we might think that this is less realistic behaviour, and fails to capture what the exact results are telling us, and the type of behaviour we would need for the GEFC[7] hypothesis to be true. However, the regime we are operating in for \(M\sim b\) is completely unachievable in practice -- it would need, as already stated, an object of the mass of a rich cluster of galaxies confined to a region with typical scales of \(1.5\,\mathrm{kpc}\) horizontally, and \(0.045\,\mathrm{kpc}\) vertically. Basically, as we can already see from \(M\sim b\), we are talking about something that is effectively a 'black hole' in the \(z\) direction, and it is only an object like this which can lead to any of the interesting effects seen here. If we were to plot the photon trajectories for realistic masses of a few times \(10^{11}M_{\odot}\), then we would just Figure 11: Left: The blue curve shows the results of the exact numerical calculation for \(M=b/2\), while the red curve is for the analytic approximation results for the same case. Right: same, but for the case \(M=0.88\,b\), which is just enough to give flattened field lines in the exact case. Figure 10: Left: same as for Fig. 9 but for \(M=1.15(b/2)=0.575\,b\). This is just enough to flatten the trajectories at infinity. Right: same but for \(M=b\), where an interesting ‘focussing’ effect is visible. get perfect looking radial lines for any initial \(\beta_{0}\) for both the exact and analytical approximation cases, and no effects of the type that GEFC/[7] describe would be visible. We plot such a case in Fig. 15, which is for the \(M\) corresponding to our original example galaxy above, i.e. \(M=3\times 10^{11}M_{\odot}\).This is for the exact numerical integrations, but exactly the same curves at the limit of resolution would be found for the analytical approximation here, and in particular there is no change in the'shape' as the disc is approached. We promised earlier to discus the fact that the rays computed via the exact rather than approximate method, all appear to go through the same point when the refocusing occurs. This is not surprising from the approximate formula, since as already noted all the trajectories have the same shape here, but is perhaps surprising from the point of view of the exact calculations, since the curves are not all just vertically scaled versions of one another in this case. This would be worth investigating, except that of course this case, with an object approaching an effective black hole in the vertical direction, would need to be investigated using the fully non-linear Einstein equations, and not within the simple linearised ansatz for the fields (78) which we have been using here. ### Postmortem of GEFC lensing In the context of our findings in Sections IV.2 and IV.3 it is of interest to understand where the calculations of lensing in GEFC/[7] may have gone astray. 
The example calculation for which GEFC/[7] gives some details, and for which we can attempt to follow through what is happening, is for the effects of a annulus of matter on the path of a photon emitted radially from the centre of the galaxy. This is discussed in section II.B. of GEFC/[7], which says that 'the dominant bending comes from the rings with mid-planes at \(z=0\), henceforth referred to as "central rings"". The total effect of the galaxy is then found by adding the effects from all the different types of rings and slices together. What we will do here to compare, is to repeat our calculations above, but this time computing the lensing caused by an annulus of matter stretching from \(R^{\prime}\) to \(R^{\prime}+\Delta R\) in the \(R\) direction, and effectively infinitesimally thin in the \(z\) direction, since instead of a 3d density distribution \(\rho(R,z)\) we will just ascribe to it a surface density distribution \(\Sigma(R)\), with \(R\) evaluated at \(R^{\prime}\) for the annulus of interest. This in fact differs somewhat from the setup envisaged by GEFC/[7] for the annulus, which is shown as the blue object in Fig. 2 of GEFC/[7]. Here the vertical height of the annulus is given by the \(z\) of the photon track at that point. However, we shall show below that the actual calculation was based on finding the Newtonian potential of the annulus assuming it is concentrated along \(z=0\). Hence we shall follow this line in working out our results, and from these demonstrate that in fact it is allowable to take this approach for a non-infinitesimally thin annulus as well. So we will work out the Newtonian potential for such a ring in the \(z=0\) plane, and then use it (in the guise of \(a_{1}\)) in the Figure 14: Same as for Fig. 13, but for the analytic approximation results for the case \(M=0.88\,b\). Here the curves do not change in shape, except for a vertical scaling, as one goes out to high initial \(\beta\). Figure 12: This figure attempts to show some of the rays we have already discussed, in relation to the disc of the galaxy itself. The black lines are isodensity contours for the example galaxy, where the outermost contour corresponds to 1/20th of the central density. The blue lines show the same trajectories as the blue lines in Fig. 11, i.e. they are the exact numerical calculations for the case \(M=0.88\,b\). The vertical scale is much larger than the horizontal scale, hence the galaxy no longer looks flattened, but even so it is difficult to see the individual trajectories. Figure 13: Same as for Fig. 12 but for \(M=1.2\,b\), and for a wider range of initial \(\beta\) values. Here the exact results show that the trajectories change shape as one moves outwards, with those closest to the \(x\)-axis refocusing, while those at the outside are defelected much less. Figure 15: Exact trajectories (blue) shown relative to the galactic disc (black contours) for realistic galaxy mass of \(M=3\times 10^{11}M_{\odot}\). Now no deflections at all are visible. formula for the rate of change of photon inclination \(\beta\) given in equation (82). We can then do an exact numerical integration as above, to get an answer for the bending that will be suffered by a radially moving photon due to the annulus. Having established what the exact results are, we can then go through a process of approximation to get an explicit approximate answer, and check that this works to a sufficient level of accuracy. 
Finally, we can then check this approximate answer against what GEFC/[7] says the result is for the same case, and see how the answers compare. The first step is to get the Newtonian potential of the annulus. For this we can use e.g. equation (34) of the paper by Cohl & Tohline, [74], which is for precisely this case. We find \[a_{1\,\mathrm{ring}}(R,z)=\frac{2G}{\sqrt{R}}\,\Delta\,R\,\sqrt{R^{\prime}}\, \Sigma(R^{\prime})\,m\,K(m)+\mathcal{O}\left(\epsilon^{2}\right)\,, \tag{87}\] where \[m\equiv\sqrt{\frac{4\,RR^{\prime}}{(R+R^{\prime})^{2}+z^{2}}}, \tag{88}\] and \(K\) is a complete elliptic function of the first kind. Note one can verify by direct differentiation that this function satisfies \(\mathbf{\nabla}^{2}a_{1\,\mathrm{ring}}=0\) away from the annulus. We now carry out a direct numerical integration using the equations in (82), with this new potential. Fig. 16 shows how the photon inclination angle \(\beta\) changes from its initial angle as it passes by the annulus, which is located at \(R^{\prime}=2\,\mathrm{kpc}\). The parameters for the annulus used here are fairly arbitrary, since we just want to show indicative effects, but in detail they are that the width in the \(R\) direction is \(\Delta\,R=0.01\) and the surface density \(\Sigma\) is \(10^{-5}\) in the system where the unit of length (which therefore gives all the other units) is \(1\,\mathrm{kpc}\). (This corresponds to about \(2\times 10^{11}M_{\odot}\,\mathrm{kpc}^{-2}\).) The initial \(\beta\) angle is \(0.0025\,\mathrm{rads}\) and we see that the total deflection happens more or less impulsively as the photon passes the annulus, and has a value of about \(1.2\times 10^{-6}\,\mathrm{rads}\). We can now go about finding the (exact) deflection angles for a range of parameters, such as the ring radius \(R^{\prime}\) and the initial angle \(\beta_{0}\) and use these as the 'truth' in a comparison with an approximate solution which we would also like to find. To carry out the latter, quite an involved chain of approximations is necessary, starting from the exact formulae in which (87) is inserted into (82). This requires being able to approximate the elliptic \(K(m)\) function and the complete elliptic function of the second kind \(E(m)\) that appears in its derivative, in the case where the argument \(m\) defined in (88) is close to \(1\). This comes about since at the point where the photon is just passing the annulus, we can take \(R\approx R^{\prime}\) and \(z\) small, hence \(m\) will be just below \(1\). After this we need to be able to integrate the resulting expression for \(\hat{\beta}\) over the photon path to get the total deflection. We omit these details here, and just give the result, to second order in the \(z\) at closest approach. We get \[\Delta\beta=-\Delta\,R\,\,\Sigma(R^{\prime})\left(4\pi-4\left(\ln 2-\ln \theta\right)\theta-\frac{3\pi}{2}\theta^{2}\right), \tag{89}\] where \(\theta\) is used to denote the small quantity \(z/R^{\prime}\). This deflection looks as though it might be singular as \(z\) (and therefore \(\theta\)) tends to \(0\), but \(\ln\theta\) is multiplied by \(\theta\) and in fact the expression tends smoothly to the result \(\Delta\beta=-4\pi\,\,\Delta\,R\,\,\Sigma(R^{\prime})\). Moreover, the first term in the brackets in (89) (i.e. 
\(4\pi\)) will in general be much larger than the others, hence we can predict from this there will be a relatively small dependence of the deflection angle on either the \(z\) at closest approach or the \(R^{\prime}\) of the annulus location, and therefore also on the initial inclination angle \(\beta_{0}\). This is borne out by what we find with the exact calculations. In Fig. 17 we show exact calculations (red) for the final deflection angle, for a range of initial \(\beta\)'s chosen to give the range of \(z\)'s at closest approach as shown on the horizontal axis. The annulus is at a fixed radius of \(R^{\prime}=2\,\mathrm{kpc}\) but we get a very similar plot for other choices of \(R^{\prime}\). In blue we show our approximate answer (89), which clearly yields a good approximation. In particular both the exact and approximate answers confirm that the first (constant) term in (89) dominates, and that the deflection goes smoothly to this value as the \(z\) at closest approach goes to \(0\). Given this agreement between the behaviour of the approximate and exact results, we can take it that this is what actually happens. Initially, at least, this behaviour may seem somewhat surprising. If we were considering the passage of photons past a point source, we know that the deflection would go reciprocally with the distance of closest approach, and therefore be singular. Extending the point source to an annulus, our intuitive guess for the result might be that the singularity is softened to become logarithmic in the closest approach distance, rather than reciprocal, but to still be singular. What we seem Figure 16: In red, exact calculation of the deviation of photon inclination angle \(\beta\) from its initial value, as a function of \(R\), for a matter distribution consisting of an infinitesimally thin annulus at \(R^{\prime}=2\,\mathrm{kpc}\). The initial \(\beta\) angle is \(0.0025\,\mathrm{rads}\) for this example. In blue, a representation of the leading term of the approximate answer for the total deflection, equation (89), is shown. to have found is that instead there is no singularity, and the value of the deflection is roughly constant with the \(z\) of closest approach. In fact such behaviour is widely known about already, for a case which initially looks totally different, but is in fact basically the same as we are seeing here. This case is that for lensing by _cosmic strings_. In [75], Vilenkin showed how a line-like topological defect (which might be formed in an early universe phase transition) could cause lensing of light rays by an amount which (as long as the radius of curvature of the string was much bigger than the distance of closest approach) did not vary with impact parameter for the photon passing the string, but just changed with sign according to whether the photon passed one side or another of the string. The amount of the lensing was by a deflection angle \(\Delta\beta\) given by \[\Delta\beta=4\pi\mu, \tag{90}\] where \(\mu\) is the mass per unit length of the string. In the cosmic string literature, such behaviour is attributed to the string effectively causing a 'wedge' to be taken out of an otherwise flat cylindrical spacetime surrounding the string, with a 'defect angle' of the wedge of \(2\Delta\beta\), and then the remaining spacetime having the cut edges glued back together in a form of spacetime surgery to form what is called a 'conical' spacetime. 
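The agreement between the exact annulus deflection and the leading term of (89) can be reproduced with a short calculation. The sketch below (ours, not the paper's code) integrates (82) in the ring potential (87) for the parameters quoted above and compares the final deflection with \(-4\pi\,\Delta R\,\Sigma\), the cosmic-string-like value of (90); note that (87)-(88) use \(m\) for the elliptic modulus, whereas scipy's ellipk takes the parameter \(m=k^{2}\), hence the k2 variable in the code.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipk

# Sketch (our own, not the paper's code): photon deflection by the thin
# annulus potential (87), compared with the leading value 4 pi DeltaR Sigma
# from (89)-(90).  Parameters follow the example in the text (kpc, G = c = 1).
Rring, dR, Sigma = 2.0, 0.01, 1.0e-5   # annulus radius, width, surface density

def grad_a1_ring(R, z, h=1e-5):
    def a1(Rq, zq):
        k2 = 4.0 * Rq * Rring / ((Rq + Rring)**2 + zq**2)   # parameter m = k^2
        return 2.0 / np.sqrt(Rq) * dR * np.sqrt(Rring) * Sigma * np.sqrt(k2) * ellipk(k2)
    return ((a1(R + h, z) - a1(R - h, z)) / (2 * h),
            (a1(R, z + h) - a1(R, z - h)) / (2 * h))

def rhs(lam, y):
    beta, p, R, z = y
    daR, daz = grad_a1_ring(R, z)
    return [2 * p * (-np.sin(beta) * daR + np.cos(beta) * daz),
            p**2 * (np.cos(beta) * daR + np.sin(beta) * daz),
            p * np.cos(beta), p * np.sin(beta)]

beta0 = 0.0025                          # initial inclination [rad]
sol = solve_ivp(rhs, (0.0, 4.0), [beta0, 1.0, 1e-3, 0.0],
                rtol=1e-9, atol=1e-13, max_step=0.01)
print("numerical deflection [rad]:", sol.y[0, -1] - beta0)
print("-4 pi dR Sigma            :", -4 * np.pi * dR * Sigma)   # ~ -1.26e-6
```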
One can picture that this could indeed give rise to the behaviour of light rays as described, since these would travel in straight lines in the still-flat remaining spacetime, but nevertheless, rays on opposite sides would appear to converge together after passage of the string. In our current case, we have a much more prosaic example of the same phenomenon. The 'cosmic string' is now the thin annulus, and as long as our photon is on a path that takes it much closer to the annulus in terms of \(z\) at closest approach than the annulus radius at that position, then we can expect the same logic to apply, and for the photon to be deflected by a fixed angle of \(4\pi\) times the mass per unit length of the annulus. Since the latter is \(A\,R\Sigma(R^{\prime})\) (remember \(\Delta R\) is the annulus width and \(\Sigma(R^{\prime})\) its surface density at radius \(R^{\prime}\)), then we expect a deflection of \(-4\pi\Delta R\)\(\Sigma(R^{\prime})\), exactly as found in the first term of (89). Of course in the current case we are not obliged to think in terms of'spacetime surgery' and topology -- just the weak field forces on the passing photons are enough to give us what we need, and indeed more generally one can give a full treatment of cosmic strings in terms of gauge fields in flat space (as in electromagnetism), rather than in terms of topological surgery upon spacetime -- see [76] for a discussion of this approach. Having got this satisfactory confirmation and justification for our result, we now turn to the answer that GEFC/[7] gets for this case, which is the only one for which an explicit answer is given. What we need to compare with is equation (6) in GEFC/[7], namely \[\delta\beta(R,z)=\frac{GM}{\pi}E(R,z), \tag{91}\] where the following definition is given for the quantity \(E(R,z)\) \[E(R,z)\equiv 2\int_{0}^{\pi}\frac{d\psi}{\sqrt{\left(2R\sin\left(\psi/2\right) \right)^{2}+z^{2}}}, \tag{92}\] which is described as 'the complete elliptical integral of the first kind'. This seems like a non-standard designation (and we emphasise that it would conflict with our conventions both in Section III.3 and above in this section), but nevertheless, since an explicit expression for \(E(r,z)\) is given, we can carry out the indicated integral and obtain \[E(R,z)=\frac{4K\left(2R/\sqrt{z^{2}+4R^{2}}\right)}{\sqrt{z^{2}+4R^{2}}}, \tag{93}\] where the \(K\) here is the complete elliptic integral of the first kind. Clearly what we are doing here with \(E(R,z)\) is finding the average inverse distance from a point (\(R,z,\varphi=0\)) in cylindrical coordinates, to the infinitesimally thin ring (\(R,z=0,\varphi\)) as \(\varphi\) varies over \(0\) to \(2\pi\). Indeed, comparing with equation (87), in which we need to set \(R^{\prime}=R\), we see that \(-E(R,z)GM/(2\pi)\), where \(M\) is the mass of the ring, will be the Newtonian potential at the point (\(R,z,\psi=0\)). The GEFC/[7] answer for the angular deflection (91) then appears to be twice this Newtonian potential. It is not clear why GEFC/[7] believes that this is the way in which to get the deflection, and in particular it disagrees with Figure 17: In red, exact calculation of the final deviation of photon inclination angle \(\beta\) from its initial value, where the initial values are varied so as to give the range of \(z\)’s at closest approach. (These \(z\)’s form the horizontal axis.) The example matter distribution consisting of an infinitesimally thin annulus at \(R^{\prime}=2\,\mathrm{kpc}\) is being used. 
In blue, the approximate answer equation (89) is shown. In black we show the result that follows from the equations given by GEFC/[7] for this case. our result (89) by being _singular_ as \(z\to 0\). We can see this by expanding \(E(R,z)\) in \(z\), for which we get \[\begin{split} E(R,z)&\approx\frac{6\ln 2-2\ln z+2\ln R }{R}\\ &\quad+\frac{-3\ln 2+\ln z-\ln R+1}{8R^{3}}z^{2}+...\end{split} \tag{94}\] which has a logarithmic singularity for small \(z\). By contrast, our answer, backed up by the exact numerical calculations, tends to a constant for small \(z\). To show the comparison between the GEFC and our answers, then in Fig. 17 we have include a curve showing the prediction of the GEFC[7] result (91), calculated using the same ring mass. Now this looks like a big discrepancy, and a possible source of why GEFC[7] says that the rays become parallel near the galaxy disc edge, whereas as we have seen, this would require densities about 1000 times larger than typical galactic densities. However, some caveats are in order. As we have shown, the actual GEFC[7] calculation seems to be assuming an infinitesimally thin ring, but that paper also contains a figure describing the setup and showing the ring vertical height being equal to the current \(z\) of the photon path. Since we have shown that one can wind down the \(z\) of the photon path to be as close as one likes to an infinitesimal ring, this is not a problem as long as \(z\) is small compared to the \(R^{\prime}\) of the ring. Our answer should still apply. More significantly, linked to this, is the fact that the mass of the ring as used in GEFC[7] incorporates the height, i.e. for a fixed galactic density then in both that approach, and in applying ours to what is being done there, then (assuming a height \(z\) over which the ring density does not vary much vertically as compared to its value at \(z=0\)), the ring mass should be taken as proportional to \(z\). This willipe out the singularity shown in Fig. 17, since the GEFC[7] curve will now go like \(z\ln z\) at small \(z\), while ours will now go as \(z\). These still differ in ratio by a factor of \(\ln z\), but the absolute value of the discrepancy will not be large, and it seems difficult to understand how factors of order \(10^{3}\) in the lensing could arise. ## V Conclusions This paper has sought to demonstrate an effect which we term '_gravitoelectric flux collapse_' (GEFC), and which is proposed in GEFC/[1; 2; 3; 4; 5; 6; 7; 8; 9]. GEFC promises, among other things, to render galactic dark matter halos redundant by explaining flat and rising rotation curves via purely general-relativistic effects. To this end we have attempted to reproduce the remarkable results of GEFC[4] and GEFC[7]. We have enjoyed little success, and cannot conclude that the GEFC programme, in its current form, has a sound physical basis. In particular we repeat certain observations which were made along the way:- 1. We found in Sections II.1 and II.2 that the scalar gravity model which seems to underpin GEFC[1; 3; 4; 5; 6; 8; 9] is essentially arbitrary, and not necessarily descriptive of GR. Superficially, the model would seem from Section II.3 to be _inconsistent_ with the nonlinear, static, vacuum EFEs. Its consistency with the Einstein-Infeld-Hoffman potential appears from Section II.4 to be coincidental, and not too unlikely. 2. The theoretical basis for the lattice techniques used to probe gravitational potentials in GEFC/[1; 4] does not appear fully watertight, as discussed in Appendix A. 3. 
At next-to-leading-order near a typical galactic baryon profile, usual tensorial GR does not appear to support substantial GEFC-type effects as proposed in GEFC[4; 5; 6; 7]. This was verified throughout Section III using a variety of perturbation schemes and gauge choices. 4. The lensing effects claimed in GEFC/[7], which are used as a heuristic for GEFC-type phenomena, appear from Section IV to have been overstated by three orders of magnitude. As mentioned in Section I, it is not clear how many of the other interesting effects promoted in GEFC[1; 3; 5; 6; 8; 9] can be salvaged if the points raised above are not adequately addressed. In terms of mapping a road forwards, we are particularly interested in establishing clarity on the following question: _'How are non-perturbative phenomena expected to emerge from closed, perturbative methods?'_ In our calculations, for example, we encounter no warning that the perturbative approach is failing, such as divergent or unbounded quantities. It is then not too surprising that we recover only small corrections to the Newtonian phenomena. Despite this outlook, we recall that the above methods have, as a by-product, suggested a couple of interesting research avenues:- 1. The result (42) in Section III.1 may be of relevance to the fluid ball conjecture (Lichnerowicz's conjecture [69]). 2. The path integral separation in Appendix A may orient a new gravitational energy localisation scheme. Finally, we distance ourselves from the previous analyses in GEFC/[1; 2; 3; 4; 5; 6; 7; 8; 9] to emphasise that our 'steel-man' approach precludes GEFC effects _in degree but not in kind_. The nonlinear regime of gravity is very real, and doubtless still hides many unknown and exotic phenomena. Questions of utility and astrophysical realisation aside, a principled and considered correspondence between general relativity, nonlinear gravitoelectromagnetism and quantum chromodynamics -- should it exist -- would be a great asset in addressing the broader question of gravitational confinement. ###### Acknowledgements. We are grateful to Alexandre Deur for rapid, thorough replies and vital clarifications at several junctures. We are also grateful to Craig Mackay and Amel Durakovic for several useful discussions, and to John Donoghue and Subodh Patil for helpful correspondence. W. E. V. B. is grateful for the kind hospitality of Leiden University and the Lorentz Institute, and is supported by Girton College, Cambridge. The supplemental materials provided in [77] incorporate elements of Cyril Pitrou's code from the repository at www.github.com/xAct-contrib/examples. During the course of this work, the literature advocating for GEFC was augmented by the preprint GEFC/[9], which appears to be grounded in the same scalar gravity model addressed in the current article.
2307.04608
Learning Interpretable Heuristics for WalkSAT
Local search algorithms are well-known methods for solving large, hard instances of the satisfiability problem (SAT). The performance of these algorithms crucially depends on heuristics for setting noise parameters and scoring variables. The optimal setting for these heuristics varies for different instance distributions. In this paper, we present an approach for learning effective variable scoring functions and noise parameters by using reinforcement learning. We consider satisfiability problems from different instance distributions and learn specialized heuristics for each of them. Our experimental results show improvements with respect to both a WalkSAT baseline and another local search learned heuristic.
Yannet Interian, Sara Bernardini
2023-07-10T14:52:14Z
http://arxiv.org/abs/2307.04608v1
# Learning Interpretable Heuristics for WalkSAT ###### Abstract Local search algorithms are well-known methods for solving large, hard instances of the satisfiability problem (SAT). The performance of these algorithms crucially depends on heuristics for setting noise parameters and scoring variables. The optimal setting for these heuristics varies for different instance distributions. In this paper, we present an approach for learning effective variable scoring functions and noise parameters by using reinforcement learning. We consider satisfiability problems from different instance distributions and learn specialized heuristics for each of them. Our experimental results show improvements with respect to both a WalkSAT baseline and another local search learned heuristic. ## 1 Introduction The satisfiability problem (SAT), one of the most studied NP-complete problems in computer science, consists in determining if there exists an assignment that satisfies a given Boolean formula. SAT algorithms typically assume that formulas are expressed in conjunctive normal form (CNF). A CNF formula is a conjunction of clauses; a clause is a disjunction of literals; and a literal is a variable or its negation. SAT has a wide range of practical applications, including electronic design automation, planning, scheduling and hardware verification. Stochastic local search (SLS) algorithms are well-known methods for solving hard, large SAT instances [13]. They are incomplete solvers: they typically run with a pre-set number of iterations, after which they produce a valid assignment or return "unsolved." Algorithm 1 shows the pseudo-code of a generic SLS algorithm. Like most SLS solvers, it starts by generating a random assignment. If the formula is satisfied by this assignment, a solution is found. Otherwise, a variable is chosen by a variable selection heuristic (_pickVar_ in Algorithm 1) and that variable is flipped. The loop is repeated until a solution is found or the maximum number of iterations is reached. ``` Input: A formula \(F\) in CNF form Parameter: max_flips, max_tries Output: If found, a satisfying assignment for \(i=1\) to max_tries do \(flips=0\) Let \(X\) be a random initial assignment while \(flips<\) max_flips do if \(X\) satisfies \(F\) then return \(X\) Pick a variable \(x\) using \(pickVar\) \(X\gets flipVar(x)\) \(flips\)++ return unsolved ``` **Algorithm 1** _SLS_ algorithm WalkSAT-type algorithms also use a noise parameter \(p\) (see Algorithm 2) to control the degree of greediness in the variable selection process. This parameter has a crucial impact on the algorithms' performance [1, 1, 13]. Hoos et al. (2002) propose a dynamic noise adaptation algorithm in which high noise values are only used when the algorithms appear to not be making progress. Designing SLS algorithms requires substantial problem-specific research and a long trial-and-error process by the algorithm experts. Also, algorithms seldom exploit the fact that real-world problems of the same type are solved again and again on a regular basis, maintaining the same combinatorial structure, but differing in the data. Problems of this type include, for example, SAT encodings of AI Planning instances Robinson et al. (2008) and Bounded Model Checking instances Benedetti and Bernardini (2005). Recently, there has been increased interest in applying machine learning techniques to design algorithms to tackle combinatorial optimization problems Bello et al. (2016); Khalil et al. (2017); Bengio, Lodi, and Prouvost (2021); Zhang et al. (2020).
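To make Algorithms 1 and 2 concrete, here is a minimal illustrative Python sketch of the generic SLS loop with a WalkSAT-style \(pickVar\): with probability \(p\) pick a random variable from the chosen unsatisfied clause, otherwise pick the one with the smallest break value. It is not the authors' implementation, and the clause and assignment representations are our own assumptions.

```
# Minimal WalkSAT-style SLS sketch (illustrative; not the paper's implementation).
# A formula is a list of clauses; each clause is a list of non-zero ints
# (DIMACS-style literals), e.g. [[1, -2, 3], [-1, 2]]. An assignment maps var -> bool.
import random

def is_satisfied(clause, assign):
    return any(assign[abs(l)] == (l > 0) for l in clause)

def unsatisfied_clauses(formula, assign):
    return [c for c in formula if not is_satisfied(c, assign)]

def break_value(formula, assign, var):
    """Number of currently satisfied clauses that become unsatisfied if `var` is flipped."""
    broken = 0
    for clause in formula:
        if var in map(abs, clause) and is_satisfied(clause, assign):
            flipped = dict(assign); flipped[var] = not flipped[var]
            if not is_satisfied(clause, flipped):
                broken += 1
    return broken

def pick_var(formula, assign, clause, p):
    """WalkSAT pickVar: random walk with probability p, otherwise greedy on break value."""
    variables = [abs(l) for l in clause]
    if random.random() < p:
        return random.choice(variables)
    return min(variables, key=lambda v: break_value(formula, assign, v))

def walksat(formula, max_tries=10, max_flips=10_000, p=0.5):
    n_vars = max(abs(l) for c in formula for l in c)
    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            unsat = unsatisfied_clauses(formula, assign)
            if not unsat:
                return assign                     # satisfying assignment found
            clause = random.choice(unsat)         # pick a random unsatisfied clause
            var = pick_var(formula, assign, clause, p)
            assign[var] = not assign[var]         # flip the chosen variable
    return None                                   # "unsolved"

if __name__ == "__main__":
    f = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
    print(walksat(f))
```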
In line with this work, our paper focuses on using machine learning to design algorithms for SAT. More specifically, we investigate the use of reinforcement learning to learn both adaptive noise strategies and variable scoring functions for WalkSAT-type algorithms. We call the resulting strategy _LearnWSAT_. The main contributions of this paper are as follows: * Our technique automatically learns a scoring function and an adaptive noise strategy for WalkSAT-type algorithms. * Our scoring functions are simple and interpretable. When coded efficiently, they would have a running time per iteration similar to WalkSAT. * Our approach outperforms both a WalkSAT baseline algorithm and a previously published learned SLS-type algorithm Yolcu and Poczos (2019). * Our technique uses a "warm-up" strategy designed to substantially decrease training time. * Our algorithm, when trained on a specific distribution, generalizes well to both unseen instances and larger instances of the same distribution. We remark that our goal in this paper is to show how reinforcement learning could be leveraged to make WalkSAT-type algorithms more efficient and their design more practical; we do not aim to offer the fastest WalkSAT implementation, which we leave as future work. 1 Footnote 1: The implementation can be found here [https://github.com/yanneta/learning_heuristics_sat](https://github.com/yanneta/learning_heuristics_sat) ## 2 Related Work The literature regarding SAT is vast. We focus here only on the following two topics, which are the most pertinent to our contribution. ### Machine Learning for SAT Guo et al. (2022) give an in-depth survey of machine learning for SAT. In their classification, our work falls into the category described as "modifying local search solvers with learning modules". There are two other works Yolcu and Poczos (2019); Zhang et al. (2020) that fall into the same category. Yolcu and Poczos (2019) use reinforcement learning with graph neural networks to learn an SLS algorithm. The graph neural network takes a factor graph associated with the SAT formula and the current assignment to score each variable. Scoring each variable at every iteration incurs a large overhead, which leads the authors to run experiments only on small SAT instances. Our work is similar to Yolcu and Poczos's (2019) in that we also use a model to score variables. On the other hand, our approach differs from theirs in four ways. Our scoring model is a linear function of a small set of features, which is simple and interpretable. At every iteration, we only score variables from one unsatisfied clause, which makes our model much more scalable and practical. Our features are able to encode time dependencies (e.g. last time a variable was flipped). We learn a separate noise strategy. Zhang et al. (2020) propose a system (NLocalSAT) for guiding the assignment initialization of an SLS solver with a neural network. Their model feeds the CNF formula into a Gated Graph neural network for feature extraction. The neural network predicts an assignment for the SAT formula. The model is trained to predict a satisfying assignment. The output of the neural network is used to initialize SLS solvers. Whereas NLocalSAT modifies the initialization of the SLS algorithm, our algorithm modifies its internal loop. Those two improvements are potentially compatible. Selsam et al. (2018) trained a message-passing neural network called NeuroSAT to predict the satisfiability (SAT) or unsatisfiability (UNSAT) of problem instances.
The authors trained and evaluated NeuroSAT on random problem instances that are similar to the ones used in our paper. NeuroSAT achieved an accuracy of 85% and successfully solved 70% of the SAT problems. It is worth noting that our approach focuses on satisfiable instances and does not directly address unsatisfiability. However, our approach achieves a significantly higher solve rate on SAT instances. ### Stochastic Local Search for SAT Various strategies have been proposed for picking the variables to flip within WalkSAT. McAllester et al. (1997) analyze six strategies. In all the strategies, a random unsatisfied clause \(c\) is selected, and the variable is chosen within \(c\). With probability \(p\), a random variable is selected from \(c\); otherwise, one of the six following strategies is implemented. 1) Pick the variable that minimizes the number of unsatisfied clauses. 2) Pick the variable that minimizes the break value (Algorithm 2). 3) Same as the previous strategy, but never make a random move if one with break value 0 exists. 4) Pick the variable that minimizes the number of unsatisfied clauses, but refuse to flip any variable that has been flipped in the last \(t\) steps. 5) Sort the variables by the total number of unsatisfied clauses, then pick the one with the smallest value. Break ties in favor of the least recently flipped variable. 6) Pick a variable using a combination of least recently picked variable and number of unsatisfied clauses. ProbSAT Balint and Schoning (2012) uses a scoring function based on the values \(make(x)\) and \(break(x)\) and samples the variable to pick based on that scoring function. Given a variable \(x\) and an assignment \(X\), \(make(x)\) is the number of clauses that would become true by flipping \(x\). Note that \(make(x)-break(x)\) is the net decrease in the number of unsatisfied clauses after flipping \(x\). Balint and Schoning (2012) experiment with various types of scoring functions based on \(make\) and \(break\) and find that \(make\) values can be ignored. Hoos (2002) proposes a dynamic noise strategy that uses higher values of noise only when the algorithm is in a "stagnation" stage, which is when there is no improvement in the objective function's value over the last \(\frac{m}{6}\) search steps, where \(m\) is the number of clauses of the given problem instance. Every incremental increase in the noise value is realized as \(p\gets 0.8p+0.2\); the decrements are defined as \(p\gets 0.6p\) where \(p\) is the noise level. The work by McAllester et al. (1997) inspired our selection of features for the variable ranking, and the paper by Balint and Schoning (2012) led us to use features based on \(break(x)\) and ignore \(make(x)\). Finally, the work in Hoos (2002) inspired us to learn an automated noise strategy. ## 3 Methodology Algorithm 3 shows the pseudo-code for our \(pickVar\) module. Our objective is to learn the functions \(p_{w}\) and \(f_{\theta}\) in such a way that they minimize the number of flips needed to solve a SAT problem. We now describe these functions in detail. ### Variable Representation To score each variable, we first compute some features that represent the state of the variable at the current iteration \(t\). From our discussion of previous work in Section 2.2, we know that \(break(x)\) is an important feature in deciding the score of a variable. We also know, from previous work, that we want to avoid flipping variables back and forth. We design features encoding that information.
Let \(age_{1}(x)\) be the last iteration in which \(x\) was flipped and \(age_{2}(x)\) the last iteration in which \(x\) was flipped and selected by the algorithm using \(f_{\theta}(x)\). Let \(last_{K}(x)=1\) if \(x\) was flipped in the last \(K\) iterations by \(f_{\theta}(x)\), and \(0\) otherwise. Let \(\widetilde{break}(x)=\min(break(x),10)\) be the break value capped at \(10\). Based on this notation, we represent each variable via the following features: * \(bk(x)=\log(1+\widetilde{break}(x))\) * \(\Delta_{1}(x)=1-\frac{age_{1}(x)}{t}\) * \(\Delta_{2}(x)=1-\frac{age_{2}(x)}{t}\) * \(last_{5}(x)\) * \(last_{10}(x)\) We use the capped break value and the \(\log\) in the feature \(bk(x)\) to make the feature independent of the size of the formulas. \(bk(x)\) is also normalized to be between 0 and 1. We have selected these features based on an extensive preliminary evaluation performed on a variety of features and formulas. It would be easy to expand our technique to include additional features whenever relevant. Let \(\mathbf{f}(x)=(bk(x),\Delta_{1}(x),\Delta_{2}(x),last_{5}(x),last_{10}(x))\) be the vector representing the variable \(x\) at iteration \(t\), given a current assignment \(X\) for a formula \(F\). Note that, to compute the vector, we keep updating the variables \(age_{1},age_{2},last_{10}\), which is very cheap. Similar to WalkSAT, \(break(x)\) is only computed for variables on one clause at each iteration. ### Models for Scoring Variables and Controlling Noise Our goal is to make our algorithm interpretable and fast, so we use a linear model for scoring variables. Given a feature vector \(\mathbf{f}=\mathbf{f}(x)\) for a variable \(x\), \(f_{\theta}(x)\) is a linear model on \(\mathbf{f}\): \[f_{\theta}(x)=\theta_{0}+\sum_{i}\theta_{i}\cdot\mathbf{f}_{i}\] Inspired by the dynamic noise strategy discussed in Section 2.2, we define the stagnation parameter \(\delta\) as the number of iterations since the last improvement in the number of satisfied clauses, divided by the number of clauses. Instead of increasing or decreasing it at discrete intervals as in Hoos (2002), our noise is a continuous function of \(\delta\), defined as \[p_{w}(\delta)=0.5\cdot Sigmoid(w_{0}+w_{1}\delta+w_{2}\delta^{2})\] We use the sigmoid function to ensure \(p_{w}\) is between \(0\) and \(0.5\). Those are commonly used values for noise. Parameters \(w_{0},w_{1},w_{2}\) are learned together with parameters \(\{\theta_{i}\}_{i=0}^{5}\) by using reinforcement learning. After running our initial experiments, we noticed that the effect of the stagnation parameter \(\delta\) was almost negligible. Therefore, in most of our experiments, we use a noise parameter that is a constant learned for each instance distribution, that is, \(p_{w}=0.5\cdot Sigmoid(w_{0})\). ### Simplicity and Interpretability of Models Domingos (1999) states that one interpretation of Occam's razor in machine learning is the following: "_Given two models with the same generalization error, the simpler one should be preferred because simplicity is desirable in itself_". Following this basic principle, in our technique, we use simple functions (linear and sigmoid functions) involving a small set of input variables and show that we get better results than related algorithms that use much more complex models, e.g. that of Yolcu and Poczos (2019). Simplicity is also valuable because simple linear models are very fast to evaluate, which is crucial to practical SAT solvers. Interpretability refers to a model's capacity to be "_explained or presented in understandable terms to a human_" (Doshi-Velez and Kim 2017).
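Before discussing interpretability further, a small illustrative sketch of the feature vector \(\mathbf{f}(x)\), the linear scorer \(f_{\theta}\), the noise model \(p_{w}(\delta)\), and the softmax sampling over one clause, as defined above. This is our own reconstruction, not the authors' code, and the coefficient values are placeholders rather than learned values.

```
# Illustrative sketch of the variable features, linear scorer and noise model
# described above (our own reconstruction; coefficient values are placeholders).
import numpy as np

def features(break_x, age1, age2, t, flipped_last5, flipped_last10):
    """Build f(x) = (bk, Delta1, Delta2, last5, last10) for one candidate variable."""
    bk = np.log(1.0 + min(break_x, 10))        # capped, log-scaled break value
    delta1 = 1.0 - age1 / t                    # recency of any flip of x
    delta2 = 1.0 - age2 / t                    # recency of a flip of x chosen by f_theta
    return np.array([bk, delta1, delta2, float(flipped_last5), float(flipped_last10)])

def score(theta, f):
    """Linear scoring model f_theta(x) = theta_0 + sum_i theta_i * f_i."""
    return theta[0] + np.dot(theta[1:], f)

def noise(w, delta):
    """Noise p_w(delta) = 0.5 * sigmoid(w0 + w1*delta + w2*delta^2), kept in (0, 0.5)."""
    s = w[0] + w[1] * delta + w[2] * delta ** 2
    return 0.5 / (1.0 + np.exp(-s))

def sample_from_clause(theta, feature_rows, rng=np.random.default_rng(0)):
    """Sample which variable of an unsatisfied clause to flip via a softmax over scores."""
    scores = np.array([score(theta, f) for f in feature_rows])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return rng.choice(len(feature_rows), p=probs)

theta = np.array([0.1, -20.0, -2.0, -2.5, -1.0, -1.4])   # placeholder coefficients
w = np.array([-2.0, 0.0, 0.0])                            # placeholder noise weights
f1 = features(break_x=0, age1=90, age2=80, t=100, flipped_last5=0, flipped_last10=1)
f2 = features(break_x=3, age1=99, age2=99, t=100, flipped_last5=1, flipped_last10=1)
print(sample_from_clause(theta, [f1, f2]), noise(w, delta=0.1))
```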
Linear models that use only a few simple variables are typically considered highly interpretable. Our variable-scoring model, which has just six coefficients, is therefore highly interpretable. The interpretability of a model is useful because it allows us to identify which features are significant and important and thus make decisions about adding or subtracting features. If a feature has a coefficient close to 0, we can infer that the feature lacks statistical significance and should be eliminated. By providing insight into the impact of each model feature, interpretability can help algorithm designers simplify the process of adding, removing, and designing features. Table 1 provides an example of the scoring parameters associated with random 3-SAT formulas of various sizes. The absolute value of each coefficient in the table allows us to gauge the contribution of each variable to the model. As demonstrated by the coefficients in Table 1, the \(bk(x)\) feature has a notably negative impact on the variable score, indicating its strong influence compared to other features. Conversely, the coefficients associated with the noise function \(p_{w}(\delta)\) showed that \(\delta\) was not a crucial feature, allowing us to simplify our assumptions regarding the noise parameter. This kind of insight can be extremely valuable. ### Training with Reinforcement Learning To learn heuristics by using reinforcement learning [13], we formalize local search for SAT as a Markov Decision Process (MDP). For clarity, we describe the MDP assuming that the noise parameter is 0, that is, the algorithm always picks a variable \(x\) from a random unsatisfied clause \(c\) using features \(\mathbf{f}(x)\). For each problem distribution \(D\), we have an MDP represented as a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\) where: * \(\mathcal{S}\) is the set of possible states. The state encodes the information needed at iteration \(t\) to pick a variable to flip. In our setting, a state is a tuple \((X,c,\{\mathbf{f}(x)\}_{x\in c},t)\), where \(X\) is our current assignment, \(c\) is a clause unsatisfied by \(X\), and \(\{\mathbf{f}(x)\}_{x\in c})\) are the set of features for all variables in \(c\) and \(t\) is the current step. The formula \(F\) uniformly sampled from \(D\) is also part of the state, but it is fixed through the episode. There are also two end states: \(end_{sat}\) and \(end_{unsolved}\). * \(\mathcal{A}\) is the set of actions. Given a state \(s=(X,c,\{\mathbf{f}(x)\}_{x\in c},t)\), the set of actions corresponds to picking a variable to flip from the state's clause \(c\). * \(\mathcal{P}\) is the transition probability function, defining the probability of going from a state-action pair \((s,a)\) to the next state \(s^{\prime}\). Let \(s=(X,c,\{\mathbf{f}(x)\}_{x\in c},t)\) be our current state, we pick a variable \(x\) in \(c\) with probability \(\frac{e^{f_{\theta}(x)}}{\sum_{y\in c}e^{f_{\theta}(y)}}\), which gets us \(X^{\prime}\), the assignment obtained from \(X\) by flipping variable \(x\). If \(X^{\prime}\) satisfies the formula \(F\), we move to the \(end_{sat}\) state. If the max number of steps is reached and \(X^{\prime}\) does not satisfy \(F\), we move to \(end_{unsolved}\). Otherwise, we move to \((X^{\prime},c^{\prime},\{\mathbf{f}(x)\}_{x\in c^{\prime}},t+1)\), where \(c^{\prime}\) is a random unsatisfied clause by the new assignment \(X^{\prime}\). * \(\mathcal{R}(s)\) is the immediate reward after transitioning to state \(s\). 
\(\mathcal{R}(end_{sat})=1\) and 0 otherwise. * \(\gamma\in(0,1)\) is the discount factor, which we set to less than 1 to encourage finding solutions in fewer steps. We reformulate the problem of learning informative heuristics for SAT into the problem of finding an optimal policy \(\pi\) for the MDP described above. We use the well-known REINFORCE algorithm [10]. Our policy \(\pi(s)\) is determined by the function \(f_{\theta}(x)\) that we use to sample the variable to flip based on the feature vector of each variable. At each training iteration, we sample a batch of formulas from the distribution \(D\) and generate trajectories for each formula. We accumulate the policy gradient estimates from all trajectories and perform a single update of the parameters. Algorithm 4 shows the pseudo-code of the REINFORCE algorithm for the case of constant noise and batch size of one. \begin{table} \begin{tabular}{l l l l l l l} \hline size & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & \(\theta_{4}\) & \(\theta_{5}\) & \(\theta_{0}\) \\ \hline \multicolumn{6}{l}{\(rand_{3}\)} \\ \hline \((50,213)\) & -21.1 & -1.8 & -2.9 & -0.9 & -1.3 & 0.1 \\ \((75,320)\) & -19.0 & -1.8 & -2.3 & -0.8 & -1.1 & 0.5 \\ \((100,426)\) & -18.1 & -1.7 & -2.0 & -1.2 & -1.4 & 0.6 \\ \((200,852)\) & -19.4 & -2.4 & -2.6 & -1.0 & -1.5 & -0.2 \\ \hline \multicolumn{6}{l}{\(rand_{4}\)} \\ \hline \((30,292)\) & -20.2 & -1.2 & -3.2 & 0.9 & -2.5 & 0.28 \\ \((50,487)\) & -14.3 & -1.0 & -1.4 & 0.7 & -2.1 & -0.31 \\ \hline \end{tabular} \end{table} Table 1: Coefficients of the scoring variable model learned for \(rand_{k}(n,m)\). The first column specifies the size of the formulas \((n,m)\). \(\theta_{0}\) is the model’s bias. ### Training with a Warm-Up Strategy By performing an extensive experimental evaluation, we found that the training of our algorithm takes too long for formulas with over 50 variables when using completely random heuristics and not initially finding a satisfying assignment. Trials without satisfying assignments are not useful for training since they have a reward of zero. To cope with this problem, we design a warm-up strategy to speed up the training process. For a few epochs, we train the function \(f_{\theta}\) in such a way that the sampling mimics the _pickVar_ strategy from WalkSAT with probability \(\frac{e^{f_{\theta}(x)}}{\sum_{y\in c}e^{f_{\theta}(y)}}\). We cast this as a classification problem and use log-loss and gradient descent to train \(f_{\theta}\). Figure 1 displays the training with and without warm-up for formulas in \(rand_{3}(75,320)\), showing the benefit of our approach. ## 4 Experimental Setting ### Data We perform experiments using random formulas generated from the following problems: random 3-SAT, random 4-SAT, clique detection, graph coloring and dominating set. These distributions, except for random 4-SAT, are used in the evaluation of GnnSLS by Yolcu and Poczos (2019). To facilitate comparison, we use the same problem distributions. They also used a vertex covering problem that the CNFgen package (Lauria et al., 2017) no longer supports, so we do not include this problem in our experiments. It has been observed empirically that random \(K\)-SAT problems are hard when the problems are critically constrained, i.e. close to the SAT/UNSAT phase boundary (Mitchell et al., 1992; Selman, Mitchell, and Levesque, 1996). These problems are used as common benchmarks for SAT. The threshold for \(3\)-SAT is when problems have roughly \(4.26\) times as many clauses as variables. 
To generate hard problems for random 4-SAT, we set the number of clauses to be \(9.75\) times the number of variables (Gent and Walsh, 1994). The other three problems are NP-complete graph problems. For each of these problems, a random Erdos-Renyi graph \(G(N,p)\) is sampled. To sample from \(G(N,p)\), a graph with \(N\) nodes is generated by sampling each edge with probability \(p\). For all these problem distributions, we generate random instances and keep those that are satisfiable. We use the CNFgen package (Lauria et al., 2017) to generate all instances and Minisat (Een and Sorensson, 2003) to filter out the unsatisfiable formulas. ### Algorithms For comparison, we use the SLS algorithm learned via reinforcement learning developed by Yolcu and Poczos (2019), which we call GnnSLS, and follow the same experimental setup. We also consider one of the WalkSAT versions, as described in Selman et al. (1993). Again, we follow Yolcu and Poczos (2019) in using this particular WalkSAT version. We wrote our algorithms in Python and PyTorch, which does not make them competitive with state-of-the-art SAT solvers with respect to running time. Indeed, our goal in this paper is to explore the power of reinforcement learning for formulating effective SAT heuristics. To this aim, we offer a prototype algorithm that proves the concept. Although we do not try here to beat highly-optimized current SAT solvers, our results suggest that our technique has the potential to compete with them if written efficiently. For each problem distribution, we generate 2500 satisfiable formulas. From these, 500 are used for testing, 1900 for training and 100 for validation. As metrics, we use the median of the median number of flips, the average number of flips and the percentage of instances solved. ### Training with Reinforcement Learning We train GnnSLS as described in Yolcu and Poczos (2019)'s paper and use their code from the related GitHub repository. The paper uses curriculum learning, where training is performed on a sequence of problems of increasing difficulty. For example, to train problems for \(rand_{3}(50,213)\), the authors start by first training on \(rand_{3}(5,21)\), using the resulting model to subsequently train on \(rand_{3}(10,43)\), \(rand_{3}(25,106)\) and \(rand_{3}(50,213)\). As mentioned above, for experiments with random formulas, our models are trained using 1900 instances. The 100 validation instances are used to select the model with the best median number of steps. We train for 60 epochs using one cycle training (Smith and Topin, 2017) and AdamW (Loshchilov and Hutter, 2017) as the optimizer (a link to our GitHub repository will be provided in due course). Most of our experiments are run with a discount factor of 0.5. ### Evaluation For evaluation, we use max\_tries \(=10\) and max\_flips \(=10000\) unless otherwise specified. As said above, for randomly generated problems, we use 500 instances for testing. The noise probability for WalkSAT and GnnSLS is set to \(p=\frac{1}{2}\) as in the experiments by Yolcu and Poczos (2019). Figure 1: Comparing median flips (log-scale) over epochs on training data for LearnWSAT with and without a 5-epoch warm-up. Training with 1800 formulas of \(rand_{3}(75,320)\). ## 5 Experimental Results Comparison to GnnSLS and WalkSAT. Table 2 summarizes the performance of LearnWSAT compared to GnnSLS and WalkSAT.
We present results for five classes of problems, \(rand_{3}(50,213)\), \(rand_{4}(30,292)\), \(color_{5}(20,0.5)\), \(clique_{3}(20,0.05)\) and \(domeset_{4}(12,0.2)\) and three metrics, median number of flips (m-flips), average number of flips (a-flips), and percentage solved (solved). Table 3 indicates the number of variables and clauses in the sampled formulas and gives a sense of the size of the SAT problems we tackle. Table 2 shows that, after training, LearnWSAT requires substantially fewer steps than GnnSLS and WalkSAT to solve the respective problems. Our algorithm performs better than WalkSAT because it optimizes the variable scoring and the noise parameter to the particular distribution of SAT problems. Our technique is also better than GnnSLS because of the following two reasons. First, we speculate that GnnSLS underfits the problem. The SAT encoding and the model used by GnnSLS are more sophisticated but also much more complex than our approach. It is not possible to directly train the GnnSLS algorithm with problems that have a few variables (e.g. 50 variables). To get the GnnSLS encoding to work well, smarter training and more data are needed. Second, our approach uses time-dependent variables (the last time a variable has been flipped), which GnnSLS is unable to encode. Generalization to larger instances.In Table 4, we compare the performance of LearnWSAT trained on data sets of different sizes to assess how well the algorithm generalizes to larger instances after having been trained on smaller ones. We consider random 3-SAT instances of different sizes, \(rand_{3}(n,m)\). As in Table 2, we consider three metrics: median number of flips (m-flips), average number of flips (a-flips), and percentage solved (solved). The second column reports the performance of LearnWSAT (indicated LWSAT for brevity) on instances of different sizes when the algorithm is trained on \(rand_{3}(50,213)\) only. In the third column, for comparison, we report the performance of LearnWSAT when it is trained and evaluated on instances of the same size. The fourth column reports the performance of GnnSLS when the algorithm is trained on \(rand_{3}(50,213)\) only. Finally, the last column reports the WalkSAT (indicated WSAT) baseline. The table shows that our model evaluated on \(rand_{3}(50,213)\) performs similarly or better than models trained on larger instances. Training becomes much more expensive as a function of the size of the formula, but this result suggests that we can train on smaller formulas of the same distribution. GnnSLS trained on smaller instances can also be evaluated on larger problems of the same distribution, but the results seem to degrade as the formulas get larger. Table 5 shows results on instances that are harder than the ones shown before. In particular, Minsat is not able to solve some of the instances of \(rand_{3}(500,2130)\) and \(rand_{4}(200,1950)\) in less than ten hours. We generated 100 problems from \(rand_{3}(300,1278)\), \(rand_{3}(500,2130)\) and \(rand_{4}(200,1950)\), respectively. These instances are generated at the SAT/UNSAT threshold, therefore around 50% of them are supposed to be satisfiable. In the case of \(rand_{4}(200,1950)\), it seems that a few more are satisfiable since LearnWSAT is able to solve 68% of them. Noise parameter.In our initial experiments, we learned a noise function that depended on the stagnation parameter \(\delta\). After inspecting the function, we noticed that the effect of \(\delta\) is negligible. 
In Figure 2, we show the learned noise function as used by the algorithm at evaluation time. We plot the noise function against the iteration until the formula is solved. The stagnation parameter varies per iteration, but these curves show very little variation. We ran experiments in which we fixed \(p_{w}\) to be a constant dependent on the distribution and found that the results are similar \begin{table} \begin{tabular}{l l l l} \hline & LearnWSAT & GnnSLS & WalkSAT \\ \hline \hline \(rand_{3}(50,213)\) & & & \\ \hline m-flips & **119** & 352 & 356 \\ a-flips & **384** & 985 & 744 \\ solved & 100\% & 99.6\% & 100\% \\ \hline \hline \(color_{5}(20,0.5)\) & & & \\ \hline m-flips & **103** & 137 & 442 \\ a-flips & **225** & 497 & 787 \\ solved & 100\% & 100\% & 100\% \\ \hline \hline \(clique_{3}(20,0.05)\) & & & \\ \hline m-flips & **68** & 200 & 176 \\ a-flips & **91** & 345 & 238 \\ solved & 100\% & 100\% & 100\% \\ \hline \hline \(domeset_{4}(12,0.2)\) & & & \\ \hline m-flips & **65** & 72 & 171 \\ a-flips & **97** & 242 & 288 \\ solved & 100\% & 100\% & 100\% \\ \hline \hline \(rand_{4}(50,487)\) & & & \\ \hline m-flips & **685** & - & 2044 \\ a-flips & **1484** & - & 3302 \\ solved & **100\%** & - & 96\% \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of LearnWSAT compared to GnnSLS and WalkSAT. Three metrics are presented: median (m-flips) and average number of flips (a-flips), and percentage solved (solved). \begin{table} \begin{tabular}{l l l} \hline Distribution & \(n\) & \(m\) \\ \hline \(rand_{k}(n,m)\) & \(n\) & \(m\) \\ \(color_{5}(20,0.5)\) & 100 & 770 \\ \(clique_{3}(20,0.05)\) & 60 & 1758 \\ \(domeset_{4}(12,0.2)\) & 60 & 996 \\ \hline \hline \end{tabular} \end{table} Table 3: Size of the formula used in our evaluation. The distribution \(rand_{k}(n,m)\) has exactly \(n\) variables and \(m\) clauses. For all the other distributions, we show the maximum number of variables \(n\) and clauses \(m\) in the sampled formulas. to when the noise function depends on \(\delta\). In particular, we optimize \(p_{w}=0.5\cdot Sigmoid(w)\) by finding a single parameter \(w\) per distribution. After these initial experiments, we ran all the others (as they are reported here) with fixed constants. Note that these constants are small compared to typical values used for WalkSAT (\(p=1/2\)). This is because our \(PickVar\) algorithm (shown in Algorithm 3) injects noise by sampling instead of deterministically picking variables as in the original \(PickVar\) algorithm of WalkSAT (Algorithm 2). Impact of the discount factor.We ran experiments to understand the dependencies of our results on the value of the discount factor for reinforcement learning. Figure 3 shows the median flips as a function of the discount factor. The gray area shows the confidence intervals for each curve. We find that various discount factors give similar results. Impact of the size of training data.Figure 4 shows the median flips as a function of the size of the training data. The experiment uses formulas from \(rand_{3}(50,213)\). The plot shows that we need a training size of at least 40 to learn an algorithm that is better than WalkSAT. For optimal results, we need at least 160 formulas. To run the experiments with smaller datasets, we increased the number of warm-up steps from 5 to 50 and the amount of epochs from 60 to 200. 
## 6 Conclusions and Future Work In this paper, we present LearnWSAT, a technique that discovers effective noise parameters and scoring variable func \begin{table} \begin{tabular}{l c c c c} & \begin{tabular}{c} LWSAT \\ (50, 213) \\ \end{tabular} & \begin{tabular}{c} LWSAT \\ (50, 213) \\ \end{tabular} & \begin{tabular}{c} GnnSLS \\ (50, 213) \\ \end{tabular} & \begin{tabular}{c} WSAT \\ \end{tabular} \\ \hline \multicolumn{5}{l}{\((50,213)\)} \\ \hline m-flips & 119 & 119 & 352 & 356 \\ a-flips & 384 & 384 & 985 & 744 \\ solved & 100\% & 100\% & 99.6\% & 100\% \\ \hline \hline \multicolumn{5}{l}{\((75,320)\)} \\ \hline m-flips & 260 & 286 & 969 & 880 \\ a-flips & 904 & 948 & 2253 & 1772 \\ solved & 100\% & 100\% & 96.6\% & 98\% \\ \hline \multicolumn{5}{l}{\((100,426)\)} \\ \hline m-flips & 503 & 575 & 2264 & 1814 \\ a-flips & 1650 & 1682 & 3816 & 3132 \\ solved & 100\% & 100\% & 85.6\% & 93\% \\ \hline \hline \multicolumn{5}{l}{\((200,852)\)} \\ \hline m-flips & 4272 & 4005 & 10000 & 10000 \\ a-flips & 5329 & 5085 & 8359 & 7497 \\ solved & 96.2\% & 95.6\% & 26\% & 46.2\% \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of our algorithm evaluated on different instances of the same distribution. We consider \(rand_{3}(n,m)\) formulas of different sizes (\(n\) and \(m\) refer to the number of variables and clauses in the sampled formulas). The second column corresponds to evaluating our algorithm (indicated as LWSAT) trained on formulas from \(rand_{3}(50,213)\) only. The third column corresponds to evaluating the algorithm on instances of the same size used for training. The fourth corresponds to evaluating GnnSLS on the algorithm trained on \(rand_{3}(50,213)\). The last column reports the WalkSAT (indicated WSAT) baseline. We consider three metrics: median (m-flips) and average number of flips (a-flips), and percentage solved (solved). Figure 3: Comparing median flips as a function of the discount factor for various datasets. The lines correspond to training and evaluation on instances of the following distributions: \(rand_{3}(50,213)\), \(rand_{4}(30,292)\), \(color_{5}(20,0.5)\), \(clique_{3}(20,0.05)\) and \(domeset_{4}(12,0.2)\) \begin{table} \begin{tabular}{l c c c} & \begin{tabular}{c} LearnWSAT \\ \end{tabular} & WalkSAT \\ \hline \(rand_{3}(300,1278)\) & 48\% & 26\% \\ \(rand_{3}(500,2130)\) & 36\% & 9\% \\ \(rand_{4}(200,1950)\) & 68\% & 0\% \\ \hline \end{tabular} \end{table} Table 5: Performance of our algorithm evaluated on larger instances of \(rand_{k}(n,m)\). These instances have not been checked for satisfiability. Around 50% of them are expected to be satisfiable. The metric shown is percentage solved. The max_flip parameter is set to 50000 for both solvers. LearnWSAT was trained on \(rand_{3}(50,213)\) for the first two rows and on \(rand_{4}(50,487)\) for the last row. Figure 2: The data corresponding to each line comes from running an evaluation on a single formula for each of the five problem distributions (\(color_{5}(20,0.5)\), \(clique_{3}(20,0.05)\), \(domeset_{4}(12,0.2)\), \(rand_{3}(50,213)\), \(rand_{4}(30,292)\)) until the SAT assignment is found. The plot shows the noise parameter as a function of the iteration. tions for WalkSAT-type algorithms. Thanks to them, LearnWSAT uses substantially fewer flips than a WalkSAT baseline, as well as an existing learned SLS-type algorithm, to solve the satisfiability problem. 
Although we do not focus on optimizing the implementation of LearnWSAT in this paper, our experiments suggest that, when coded efficiently, our technique could compete with state-of-the-art solvers. Despite improving over algorithms in the literature, we note that a limitation of LearnWSAT is the need to pre-define a set of features. In addition, training is slow for formulas with 150 variables or more. The last limitation is mitigated by the fact that, as we have shown in the experiments, models trained on smaller formulas generalize well to larger ones. Overcoming these limitations is part of our future work. Finally, we remark that the ideas presented in this work are general and could be adapted to solve other hard combinatorial problems.
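As a closing illustration of the policy-gradient training described in Section 3 ("Training with Reinforcement Learning"), the following is a small sketch of a REINFORCE update for the linear scoring policy. It is our own illustrative reconstruction, not the authors' implementation: the SLS roll-out that would normally generate the episode is stubbed with synthetic feature matrices, and the reward is the terminal solved/unsolved signal with discount factor \(\gamma\).

```
# Illustrative REINFORCE update for the linear scoring policy (our own sketch;
# the SLS roll-out that would normally produce `episode` is replaced by random data).
import torch

torch.manual_seed(0)
n_features, gamma = 5, 0.5
theta = torch.nn.Linear(n_features, 1)                  # linear scorer f_theta
optimizer = torch.optim.AdamW(theta.parameters(), lr=1e-2)

# Synthetic "episode": at each step, a feature matrix for the variables of one
# unsatisfied clause (3 literals here) stands in for the state.
episode = [torch.randn(3, n_features) for _ in range(20)]
solved = True                                           # reward 1 only if SAT was reached

log_probs = []
for clause_features in episode:
    scores = theta(clause_features).squeeze(-1)         # one score per candidate variable
    dist = torch.distributions.Categorical(logits=scores)
    action = dist.sample()                              # pick which variable to flip
    log_probs.append(dist.log_prob(action))

# Terminal reward only: the return at step t is gamma^(T-1-t) * R_terminal.
T = len(episode)
reward = 1.0 if solved else 0.0
returns = torch.tensor([gamma ** (T - 1 - t) * reward for t in range(T)])

loss = -(returns * torch.stack(log_probs)).sum()        # REINFORCE objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"policy-gradient loss: {loss.item():.4f}")
```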
2301.11835
Deep Learning-Driven Nonlinear Reduced-Order Models for Predicting Wave-Structure Interaction
Long Short-Term Memory (LSTM) network-driven Non-Intrusive Reduced Order Model (NROM) for predicting the dynamics of a floating box on the water surface in a wavemaker basin is addressed in this study. The ground truth or actual data for these wave-structure interactions (WSI) problems, namely box displacements and hydrodynamic forces and moments acting on the box due to wave interaction corresponding to a particular wave profile, are computed using the Smoothed Particle Hydrodynamics (SPH). The dimensionality of the system is first reduced using the Discrete Empirical Interpolation Method (DEIM) and the LSTM is applied to the reduced system resulting in a DEIM-LSTM network for developing a surrogate for prediction. The network is further enhanced by incorporating the physics information into the loss function resulting in a physics-informed LSTM (LSTM-PINN) for predicting the rigid body dynamics of box motion. The performance of predictions for these networks is assessed for the two-dimensional wave basin WSI problem as a proof-of-concept demonstration.
Rahul Halder, Murali Damodaran, Khoo Boo Cheong
2023-01-27T16:43:43Z
http://arxiv.org/abs/2301.11835v3
# Deep Learning-Driven Nonlinear Reduced-Order Models for Predicting Wave-Structure Interaction ###### Abstract Long Short-Term Memory (LSTM) network-driven Non-Intrusive Reduced Order Model (NROM) for predicting the dynamics of a floating box on the water surface in a wavemaker basin is addressed in this study. The ground truth or actual data for these wave-structure interactions (WSI) problems, namely box displacements and hydrodynamic forces and moments acting on the box due to wave interaction corresponding to a particular wave profile, are computed using the Smoothed Particle Hydrodynamics (SPH). The dimensionality of the system is first reduced using the Discrete Empirical Interpolation Method (DEIM) and the LSTM is applied to the reduced system resulting in a DEIM-LSTM network for developing a surrogate for prediction. The network is further enhanced by incorporating the physics information into the loss function resulting in a physics-informed LSTM (LSTM-PINN) for predicting the rigid body dynamics of box motion. The performance of predictions for these networks is assessed for the two-dimensional wave basin WSI problem as a proof-of-concept demonstration. National University of Singapore, Temasek Laboratories 9 Engineering Drive 1, Singapore 117575 ## 1 Introduction During the past two decades, several Reduced Order Model (ROM) methods outlined in Carlberg et al. [1] have been instrumental in expediting the efficient computational modeling of dynamical systems, with significant benefits for flow prediction in engineered systems. The conventional Computational Fluid Dynamics (CFD) methods offer solutions to well-posed flow problems with proper initial and boundary conditions. Recent efforts in deep learning (DL), which appear to offer efficient and effective solutions for ill-posed problems such as inverse, regression, and classification problems in diverse fields as outlined in Schmidhuber et al. [2], have gained significant attention in a wide variety of fluid mechanics applications, such as the closure model for turbulence using machine learning as in Maulik and San [3] and Duraisamy et al. [4], the combined POD and LSTM network for incompressible flows with spectral proper orthogonal decomposition (SPOD) of Wang et al. [5], and the application of a dimensionality reduction method with DL networks for learning feature dynamics from noisy data sets in Lui and Wolf
2304.05046
Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion
We apply a global sensitivity method, the Hilbert-Schmidt independence criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to identify the most appropriate parameters for reparameterization. Parameter selection remains a challenge in this context as high dimensional optimizations are prone to overfitting and take a long time, but selecting too few parameters leads to poor quality force fields. We show that the HSIC correctly and quickly identifies the most sensitive parameters, and that optimizations done using a small number of sensitive parameters outperform those done using a higher dimensional reasonable-user parameter selection. Optimizations using only sensitive parameters: 1) converge faster, 2) have loss values comparable to those found with the naive selection, 3) have similar accuracy in validation tests, and 4) do not suffer from problems of overfitting. We demonstrate that an HSIC global sensitivity is a cheap optimization pre-processing step that has both qualitative and quantitative benefits which can substantially simplify and speedup ReaxFF reparameterizations.
Michael Freitas Gustavo, Matti Hellström, Toon Verstraelen
2023-04-11T08:14:19Z
http://arxiv.org/abs/2304.05046v1
# Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion ###### Abstract We apply a global sensitivity method, the Hilbert-Schmidt independence criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to identify the most appropriate parameters for reparameterization. Parameter selection remains a challenge in this context as high dimensional optimizations are prone to overfitting and take a long time, but selecting too few parameters leads to poor quality force fields. We show that the HSIC correctly and quickly identifies the most sensitive parameters, and that optimizations done using a small number of sensitive parameters outperform those done using a higher dimensional reasonable-user parameter selection. Optimizations using only sensitive parameters: 1) converge faster, 2) have loss values comparable to those found with the naive selection, 3) have similar accuracy in validation tests, and 4) do not suffer from problems of overfitting. We demonstrate that an HSIC global sensitivity is a cheap optimization pre-processing step that has both qualitative and quantitative benefits which can substantially simplify and speedup ReaxFF reparameterizations. Introduction ### ReaxFF reparameterization The ReaxFF (reactive force field) [1, 2] is a potential energy surface (PES) for modelling reactive chemistry at spatial and temporal scales typically unreachable with other, more accurate, but expensive methods. ReaxFF's speed comes at the cost of replacing accurate formalism with empirical equations that can typically contain hundreds of fitted parameters. Many of these parameters have no easy physical interpretability and only expert users may have a notion of appropriate values. This makes the task of fixing the parameters quite difficult. The procedure typically used to do this is the minimization of some cost function which measures the deviations between predictions made by ReaxFF and some training set of values the user would like to replicate [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. However, the question of which parameters should be optimized has always been a difficult one. Optimizing all potentially relevant parameters simultaneously is prone to producing overfitted results, and very high dimensional optimizations are costly and insufficiently exploratory [4]. A key conclusion from our previous work [4, 13] is that the careful conditioning of the error function (in terms of parameter selection and training set items) is crucial to obtaining reasonable results. Typically, researchers apply rudimentary sensitivity analyses and expert knowledge to guide parameter selection. However, these processes are usually inaccurate, laborious, or do not sample the space sufficiently. In this work, we aim to address this problem by introducing a systematic global sensitivity analysis technique to guide parameter selection in a more robust manner. This method has also been coded into our ParAMS package [14], making it a seamless part of any reparameterization workflow. ### Global sensitivity methods _Uncertainty_ analysis concerns itself with the propagation of uncertainty from a model's inputs to its outputs, i.e., estimating an outputs' distribution given all possible changes in the inputs [15]. _Sensitivity_ analysis attributes the uncertainty of the outputs to particular inputs. 
This is broadly done by measuring how much the unconditioned output distribution, \(\mathbb{P}(Y)\), differs from the output distribution given certain inputs, \(\mathbb{P}(Y|X_{i})\)[16, 17]. The type of statistical operator used to make this comparison divides global sensitivity methods into two types; variance-based and distribution-based. Variance-based methods assume that the variances of these distributions are sufficient to describe them [16, 18]. This approach is well-established, and the first we tried. One of the most popular variance-based techniques is known as the Sobol method. It is based on the assumption that the total variance on the outcomes of a model \(Y\) can be linearly apportioned to every input and combination of inputs [19]: \[\sum_{i}S_{i}+\sum_{i}\sum_{j>i}S_{ij}+\sum_{i}\sum_{j>i}\sum_{k>j}S_{ijk}+ \cdots+S_{123\ldots d}=1. \tag{1}\] \(S_{i}\) is the Sobol index for the first order effect of parameter \(i\) calculated as [19]: \[S_{i}=\frac{\mathbb{V}(\mathbb{E}(Y|X_{i}))}{\mathbb{V}(Y)}, \tag{2}\] where \(\mathbb{E}\) and \(\mathbb{V}\) indicate the expected value and variance respectively. Higher-order effects are calculated similarly: \[S_{ij}=\frac{\mathbb{V}(\mathbb{E}(Y|X_{i},X_{j}))}{\mathbb{V}(Y)}. \tag{3}\] A special case is known as the total effect, and is calculated as: \[S_{Ti}=1-\frac{\mathbb{V}(\mathbb{E}(Y|\mathbf{X}_{-i}))}{\mathbb{V}(Y)}, \tag{4}\] where \(\mathbf{X}_{-i}\) denotes the vector of all parameters except \(X_{i}\). A full decomposition of the variance is prohibitive for all but the smallest number of dimensions since a total of \(2^{d}-1\) indices need to be calculated. It has, however, been shown that the first effect and total effect are sufficient for identifying the most sensitive parameters [19]. The main disadvantage of this method is its cost in terms of the number of samples needed. An efficient method to calculate the first and total effect Sobol indices requires \(\eta(d+2)\) function evaluations [19]. There is no prescribed way to select \(\eta\), but it is typically on the order of several thousand [20, 21]. Since the cost of the calculation of the indices themselves is trivial in comparison to the cost of obtaining the samples, the recommended procedure in literature is to iteratively sample and calculate until the indices converge. In this work, we attempt to introduce a preprocessing step to a ReaxFF parameterization, and thus the cost of this should be significantly less than the optimization; literature sources suggest that this will not be the case [20, 21]. The intractable expense of the Sobol method is well-known, and much effort has gone into decreasing its cost [19]. One popular method which is significantly cheaper than the Sobol one, is the elementary effects method, also known as Morris screening [22]. Morris screening uses a careful sampling strategy to calculate an 'estimated effect' which has been shown to be a good proxy for the total effect Sobol index [19, 22]. This method is a popular screening method for dimensionality reduction in literature, particularly for expensive models. In theory, Morris screening should be a suitable method for our purposes, however, in our early investigations, this proved to not be the case. Both Sobol indices and Morris screening impose rules on how samples may be generated. In the case of Sobol indices, certain parameter values must be fixed while allowing others to vary. 
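To make this concrete, the indices in equations (2) and (4) are typically estimated with exactly such a pick-freeze sampling scheme: two independent sample matrices are drawn and, for each parameter, one column is swapped between them, so that \(\eta(d+2)\) model evaluations are needed in total. The sketch below is our own illustration on the standard Ishigami test function rather than the ReaxFF loss, and it uses the common Saltelli first-order and Jansen total-effect estimators.

```
# Illustrative pick-freeze (Saltelli-style) estimation of first-order and total
# Sobol indices, eqs. (2) and (4), on a toy function (our own sketch, not ParAMS).
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Standard Ishigami test function with known Sobol indices."""
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

rng = np.random.default_rng(1)
N, d = 2 ** 14, 3
A = rng.uniform(-np.pi, np.pi, size=(N, d))        # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([fA, fB]))               # total output variance V(Y)

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # vary only column i, freeze the rest
    fABi = ishigami(ABi)
    S_first = np.mean(fB * (fABi - fA)) / V        # Saltelli estimator of S_i
    S_total = 0.5 * np.mean((fA - fABi) ** 2) / V  # Jansen estimator of S_Ti
    print(f"x{i+1}: S_i = {S_first:.3f}, S_Ti = {S_total:.3f}")
```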
In the case of Morris screening, samples are gathered in trajectories where individual parameters are changed one at a time. In our particular context, these rules make obtaining an adequate number of valid samples very frustrating and time-consuming. This is because the ReaxFF loss function surface is full of undefined points corresponding to parameter values which result in crashed calculations and nonphysical results. If such points are encountered when generating a Morris trajectory, for example, then the entire trajectory must be abandoned because these methods do not allow the entry of nonfinite or undefined values. It would be simpler to find a method which allowed these values, or did not have sampling rules so that such instances could simply be discarded. Aside from the real, practical issues related to their use, the assumptions made by variance-based techniques are inappropriate in the context of identifying parameter sensitivities in the ReaxFF error function. This is because the ReaxFF error function can produce extremely large values, and has minima which are often located in narrow valleys and are unlikely to be captured during a sampling procedure [4]. Cheaper approximation methods are typically insufficiently space-filling, or rely on strong assumptions about the structure of the inputs or outputs [15, 23]. The main criticism of variance-based techniques, however, is the fact that they limit the amount of information extracted from distributions to a single metric. Distribution-based sensitivity approaches represent a newer family of methods. Unlike variance-based methods, these techniques do not reduce distributions to only their variance and can, in principle, capture arbitrary dependencies by considering the entirety of the distributions. Different statistics are used to measure the differences between distributions [16], and various methods of this type have been developed [23, 24, 18]. We are interested in the Hilbert-Schmidt Independence Criterion (HSIC) [25] for its simplicity, speed and accuracy. The HSIC was first introduced in Gretton et al. [25]. Through the use of kernels, it can identify arbitrary non-linear dependencies between the inputs and outputs [16]. It is simpler and faster to converge than many other distribution-based techniques, and does not require explicit regularization [25]. It has since been used in: feature selection [26, 27, 28, 29, 23], event detection [30, 31], machine learning [32, 33], and dimensionality reduction in the context of global optimization [17], among other applications. This final use-case is of particular interest in our context and, other than Spagnol et al. [17], we have found no other work that directly applied the HSIC as a pre-optimization step. In this work, we apply an HSIC analysis to a ReaxFF reparameterization problem. Based on the sensitivity results, we reduce the original high-dimensional optimization to a lower dimensional one. We then demonstrate that the optimization in this reduced space is able to produce ReaxFF force fields which are competitive with those produced by the higher dimensional optimization, but converge more quickly and reduce the risk of overfitting. We focus on ReaxFF because parameterizations of this force field occur regularly in literature [34, 35, 36, 10, 37].
However, the techniques we introduce here could conceivably be applied to any high-dimensional parameterization problem of the type that occurs frequently in many fields [38, 39, 40, 41, 42, 43, 44, 45, 46], because they all share the same underlying sloppy mathematical behavior [47]. In the next section we will introduce ReaxFF, the HSIC metric, the kernels used, and the reparameterization loss function. In Section 4 we introduce a test case on which we apply the sensitivity technique, the results of which are described in Section 5, and future research paths are discussed in Section 6. Section 3 details our implementation of the calculation, and conclusions are drawn in Section 7. ## 2 Mathematical derivations ### ReaxFF energy potential The elucidation of the full ReaxFF energy potential is beyond the scope of this work, however, we provide a brief introduction here for context. Interested readers can consult Chenoweth et al. [5] and Senftle et al. [2] for more details. The ReaxFF energy potential is a summation of different energy contributions [5]: \[E_{\text{sys}}= E_{\text{bond}}+E_{\text{lp}}+E_{\text{over}}+E_{\text{under}}+E_{ \text{val}}\] \[+E_{\text{pen}}+E_{\text{coa}}+E_{\text{C2}}+E_{\text{triple}}\] \[+E_{\text{tors}}+E_{\text{conj}}+E_{\text{H-bond}}+E_{\text{vdW}}\] \[+E_{\text{charges}}+\ldots \tag{5}\] The main contribution is the bonding energy, \(E_{\text{bond}}\), which calculates an energy based on bond orders, which are, in turn, calculated from atomic positions (the most fundamental inputs to any PES). The other contributions can be thought of as corrections to the bonding energy. Some corrections are general, for example \(E_{\text{over}}\) and \(E_{\text{under}}\) which penalize atoms which are over- or undercoordinated by their bonding. Some corrections are extremely specific, for example \(E_{\text{C2}}\) which corrects the energy for the C\({}_{2}\) molecule. There is also an energy term for fluctuating charges (\(E_{\text{charges}}\)). ReaxFF typically uses either the EEM [48] or ACKS2 [49] charge models. In this work we use EEM charges. Each of the energy contributions is typically a complicated function of bond orders and empirical parameters. For example [5]: \[E_{\text{bond}} =-D_{e}^{\sigma}\cdot BO_{ij}^{\sigma}\cdot\exp\left[p_{be1}\left( 1-\left(BO_{ij}^{\sigma}\right)^{p_{be2}}\right)\right]\] \[-D_{e}^{\pi}\cdot BO_{ij}^{\pi}-D_{e}^{\pi\pi}\cdot BO_{ij}^{\pi\pi} \tag{6}\] where \(D_{e}^{\sigma}\), \(D_{e}^{\pi}\), \(D_{e}^{\pi\pi}\), \(p_{be1}\) and \(p_{be2}\) are parameters and \(BO_{ij}\) are bond orders. The complexity and number of contributions quickly leads the ReaxFF PES to have a very large number of parameters, particularly because most parameters are a function of one or more atom types. For example, there is a \(D_{e}^{\sigma}\) parameter for every combination of atom types being modelled. In an effort to make the parameters easier to understand, the original authors organised them into the following groups or blocks: general, atoms, bonds, off-diagonal, angles, torsions and hydrogen bonds. These groups have no bearing on the calculation of the PES, but serve as a helpful organisational tool, and appear in the formatting rules for the inputs of most ReaxFF implementations. We will return to these groups when we introduce a new Zn/S/H force field in Section 4.1. We use the term 'force field' in this work to refer to a full set of ReaxFF parameters and their values. 
Force fields are functions of the atom types they include and are sometimes only appropriate for specific conditions. For example, ReaxFF force fields containing hydrogen and oxygen are generally classed as appropriate for aqueous or combustion chemistry. ### Loss function The reparameterization of a force field requires the construction of a training set (\(\mathbf{b}_{\text{ref}}\in\mathbb{R}^{m}\)), which is a vector of \(m\) scalar training point values of chemical properties that one would like ReaxFF to replicate. The goal of the reparameterization is the minimization of a loss function which quantifies the overall difference between the ReaxFF model and the training data in a single number. This loss function can take various forms, but the most common is the sum of square errors (SSE). We define the SSE as: \[\text{SSE}(\mathbf{x})\coloneqq\sum_{i=1}^{m}w_{i}\left(\frac{b_{\text{ref},i }-b(\mathbf{x},\mathbf{G}_{i})}{\sigma_{i}}\right)^{2}. \tag{7}\] Each item in the training set (\(b_{\text{ref},i}\)) is compared to the result produced by the ReaxFF model (\(b(\mathbf{x},\mathbf{G}_{i})\)) using a vector of parameters which we allow to be adjusted (\(\mathbf{x}\)), and a set of system inputs and fixed parameters (\(\mathbf{G}_{i}\)). We will call the parameters we have chosen to optimize _active_ parameters, and denote the set of active parameters \(\mathscr{D}\). Items in the training set can be of different types (energies, forces, angles, etc.). To make them comparable and independent of units, each term in the loss function is divided by an appropriate \(\sigma_{i}\) with the same unit as \(b_{\text{ref},i}\). The \(\sigma_{i}\) value is often understood as the acceptable error for that item. Finally, different training values can be weighted differently based on their importance via the weighting vector (\(\mathbf{w}\)). For example, it is more important to get energy minima correct than it is to replicate energies for very exaggerated geometries; one might weight the former higher than the latter in the loss function. In the case of multi-dimensional data, like forces, the \(n\times 3\) matrix is simply flattened into the vector of training data. In other words, a single element of \(\mathbf{b}_{\text{ref}}\) would not be 'the forces on molecule A' for example, but rather 'the x-component of the forces on atom 1 in molecule A'. For more details regarding the implementation itself, readers are directed to the ParAMS documentation [14]. \(\boldsymbol{\sigma}\) and \(\mathbf{w}\) perform similar functions since they are both constants which could conceivably be combined. Using one, or the other, or both is a question of preference for different practitioners. Traditionally, only \(\boldsymbol{\sigma}\) was used; however, for some users, weights provide a more intuitive value. We have chosen to use a fixed vector of values for \(\boldsymbol{\sigma}\), and adjust the weights to balance the training set. 
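For concreteness, a minimal sketch of Equation 7 in code is given below. The function names, the use of NumPy, and the `reaxff_predict` callable (standing in for whatever routine evaluates \(b(\mathbf{x},\mathbf{G}_{i})\)) are our own illustrative choices and do not reflect the actual ParAMS implementation [14].

```python
import numpy as np

def sse_loss(x, jobs, b_ref, sigma, w, reaxff_predict):
    """Weighted sum-of-square-errors loss of Equation 7.

    x              : vector of active parameter values
    jobs           : list of fixed system inputs G_i
    b_ref, sigma, w: reference values, acceptable errors, and weights
    reaxff_predict : callable returning b(x, G_i) (hypothetical stand-in)
    """
    b_model = np.array([reaxff_predict(x, G_i) for G_i in jobs])
    residuals = (b_ref - b_model) / sigma
    return float(np.sum(w * residuals**2))
```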
### Hilbert-Schmidt independence criterion This section introduces the Hilbert-Schmidt independence criterion (HSIC) from a practitioner's perspective. Our presentation is not as general as in other works [17, 25, 28], and we only strive to explain the basic idea to readers interested in parameter selection. We will assume the space of a single parameter is \(\mathcal{X}\subseteq\mathbb{R}\) and the space of loss values is \(\mathcal{Y}\subseteq\mathbb{R}\). Typically, parameters are bounded and loss values are positive, but these are not strict requirements for HSIC. Let the Hilbert space of \(\mathbb{R}\)-valued functions on \(\mathcal{X}\) be \(\mathcal{H}_{x}:\mathcal{X}\rightarrow\mathbb{R}\), for which an inner product \(\langle\cdot,\cdot\rangle_{\mathcal{H}_{x}}\) is defined. This is called a _reproducing kernel_ Hilbert space (RKHS) when a kernel \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) exists such that \(\forall x\in\mathcal{X}\) and \(\forall f\in\mathcal{H}_{x}\): \[k(x,\cdot) \in\mathcal{H}_{x}\text{ and }\] \[\langle f,k(x,\cdot)\rangle_{\mathcal{H}_{x}} =f(x).\] The second condition is known as the _reproducing_ property: a function evaluation \(f(x)\) can be reproduced by taking an inner product of \(f\) with a partially evaluated kernel. We similarly define the RKHS \(\mathcal{H}_{y}\), with kernel \(\ell\), for the loss-value space \(\mathcal{Y}\). The kernel \(k\) can be defined in terms of a feature map \(\phi:\mathcal{X}\rightarrow\mathcal{F}_{x}\), where \(\mathcal{F}_{x}\) is a new Hilbert space, often called the feature space of \(x\). This means that each value \(x\) is uniquely represented by a function \(\phi(x)\in\mathcal{F}_{x}\). Any positive definite and symmetric kernel \(k\) can always be described as an inner product in feature space: \[k(x,x^{\prime})\coloneqq\left\langle\phi(x),\phi(x^{\prime})\right\rangle_{ \mathcal{F}_{x}} \tag{8}\] In practice, direct use of \(\phi\) is avoided and the kernel \(k\) is used instead to keep calculations manageable. Kernels are generally cheap to evaluate, even when the feature map would be computationally infeasible. Similarly, a feature map \(\psi:\mathcal{Y}\rightarrow\mathcal{F}_{y}\) exists, such that \(\ell(y,y^{\prime})\coloneqq\left\langle\psi(y),\psi(y^{\prime})\right\rangle_{ \mathcal{F}_{y}}\). Now consider random variables \(X\in\mathcal{X}\) and \(Y\in\mathcal{Y}\), with a joint distribution \(\mathbb{P}_{XY}\). Variance-based sensitivity methods characterize the dependence of \(Y\) on \(X\) with \(\operatorname{cov}[X,Y]=\mathbb{E}_{XY}[XY]-\mathbb{E}_{X}[X]\mathbb{E}_{Y}[Y]\), which only picks up linear correlations. In principle, one may overcome this limitation by computing the covariance in feature space instead: \(\operatorname{cov}[\phi(X),\psi(Y)]=\mathbb{E}_{XY}[\phi(X)\psi(Y)]-\mathbb{E}_ {X}[\phi(X)]\mathbb{E}_{Y}[\psi(Y)]\). However, the latter covariance is impractical because it directly operates in feature space and because the covariance is an element of a new Hilbert space \(\mathcal{F}_{x}\otimes\mathcal{F}_{y}\), which complicates its direct interpretation. Gretton et al. [25] showed how to overcome both issues by rewriting the squared norm of the latter covariance in terms of kernels: \[\text{HSIC}(\mathcal{F}_{x},\mathcal{F}_{y},\mathbb{P}_{XY}) \coloneqq\big\|\operatorname{cov}[\phi(X),\psi(Y)]\big\|_{\mathcal{F}_{x}\otimes\mathcal{F}_{y}}^{2} \tag{9}\] \[=\mathbb{E}_{XX^{\prime}YY^{\prime}}[k(X,X^{\prime})\ell(Y,Y^{ \prime})]\] (10) \[\quad+\mathbb{E}_{XX^{\prime}}[k(X,X^{\prime})]\mathbb{E}_{YY^{ \prime}}[\ell(Y,Y^{\prime})]\] \[\quad-2\mathbb{E}_{XY}\big[\mathbb{E}_{X^{\prime}}[k(X,X^{ \prime})]\mathbb{E}_{Y^{\prime}}[\ell(Y,Y^{\prime})]\big]\] This definition of HSIC yields a non-negative real number, which increases as \(X\) and \(Y\) become more correlated. When \(k\) and \(\ell\) are linear kernels, HSIC reduces to \(\big|\operatorname{cov}[X,Y]\big|^{2}\) and there is no added value compared to variance-based sensitivity criteria. In the case of non-linear kernels, the HSIC also detects non-linear correlations between \(X\) and \(Y\). 
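As a small, self-contained illustration of this last point (a toy example of our own, not part of the ParAMS implementation), the simple biased HSIC estimate \(\operatorname{tr}(\mathbf{K}\mathbf{H}\mathbf{L}\mathbf{H})/(n-1)^{2}\) with \(\mathbf{H}=\mathbf{I}-\mathbf{1}\mathbf{1}^{\top}/n\), which appears in Gretton et al. [25], clearly flags a purely quadratic dependence that the sample covariance misses:

```python
import numpy as np

def gaussian_gram(v, sigma=0.3):
    """Gram matrix of a Gaussian kernel for a 1-D sample vector v."""
    return np.exp(-(v[:, None] - v[None, :]) ** 2 / (2.0 * sigma**2))

def hsic_biased(K, L):
    """Biased HSIC estimate tr(KHLH)/(n-1)^2 (Gretton et al. [25])."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = x**2                                                  # purely non-linear dependence
print(np.cov(x, y)[0, 1])                                 # approximately zero
print(hsic_biased(gaussian_gram(x), gaussian_gram(y)))    # clearly non-zero
```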
Ideally, \(k\) and \(\ell\) are _characteristic kernels_ [17], for which \(\text{HSIC}=0\) if and only if \(X\) and \(Y\) are independent (\(\mathbb{P}_{XY}=\mathbb{P}_{X}\mathbb{P}_{Y}\)). Of the common kernels, the Gaussian kernel is characteristic, while the linear and polynomial kernels are not. It is possible to use non-characteristic kernels for HSIC, but then one loses the guarantee that all dependence will be captured [28]. It is usually not possible to obtain \(\mathbb{P}_{XY}\) explicitly; however, the HSIC can be estimated if we sample the distribution. We define a set of samples as: \[Z\coloneqq(\mathbf{X},\mathbf{y}), \tag{11}\] where \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is a matrix of \(n\) rows of uniformly randomly sampled parameter vectors \(\{\mathbf{X}_{1},\mathbf{X}_{2},\dots\mathbf{X}_{n}\}\), and we let \(\mathbf{y}\in\mathbb{R}^{n}=\{\text{SSE}(\mathbf{X}_{1}),\text{SSE}(\mathbf{X }_{2})\dots,\text{SSE}(\mathbf{X}_{n})\}\) be the corresponding set of loss function values. To measure the sensitivity of a parameter with index \(\omega\), we take the associated column with \(n\) sample values, \(\mathbf{X}_{\cdot\omega}\), and the loss values: \[Z_{\omega}\coloneqq(\mathbf{X}_{\cdot\omega},\mathbf{y}), \tag{12}\] and we make use of the _unbiased_ estimator of the HSIC from Song et al. [28]: \[\text{HSIC}(\mathcal{F}_{x},\mathcal{F}_{y},Z_{\omega})\approx\frac{1}{n(n-3)} \left[\rho_{1}+\rho_{2}-\rho_{3}\right], \tag{13}\] where: \[\rho_{1} \coloneqq\text{tr}(\mathbf{\tilde{K}}\mathbf{\tilde{L}})\] \[\rho_{2} \coloneqq\frac{\mathbf{1}^{\top}\mathbf{\tilde{K}}\mathbf{1}\,\mathbf{1}^{\top}\mathbf{\tilde{L}}\mathbf{1}}{(n-1)(n-2)}\] \[\rho_{3} \coloneqq\frac{2}{n-2}\mathbf{1}^{\top}\mathbf{\tilde{K}}\mathbf{\tilde{L}}\mathbf{1}\] \[\mathbf{\tilde{K}}_{ij} \coloneqq(1-\delta_{ij})k(x_{i},x_{j})\quad\forall x_{i},x_{j} \in\mathbf{X}_{\cdot\omega}\] \[\mathbf{\tilde{L}}_{ij} \coloneqq(1-\delta_{ij})\ell(y_{i},y_{j})\quad\forall y_{i},y_{j} \in\mathbf{y}.\] HSIC is a convenient metric with which to rank the influence of parameters on an output; however, to make interpretation even easier we define the sensitivity of a parameter as the normalized HSIC values [17]: \[s_{\omega}=\frac{\text{HSIC}(\mathcal{F}_{x},\mathcal{F}_{y},Z_{\omega})}{ \sum_{i=1}^{d}\text{HSIC}(\mathcal{F}_{x},\mathcal{F}_{y},Z_{i})}. \tag{14}\] By construction, sensitivity values are bounded (\(0\leq s_{\omega}\leq 1\)) and \(\sum s_{\omega}=1\). This makes sensitivities a more intuitive value to interpret and understand than HSIC. The advantage of the _biased_ HSIC estimator [28] is that it is positive by construction. The _unbiased_ estimator used here, although more accurate, runs the risk of producing small negative values for low-sensitivity items. In the context of the sensitivity analysis, where a qualitative consideration of the results is arguably more important than a quantitative one, we simply clip any negative HSIC values to zero before calculating the sensitivities (Equation 14). For a more detailed description of HSIC, the interested reader is directed to Gretton et al. [25] for the original introduction, and to De Lozzo and Marrel [23], Da Veiga [24], Song et al. [28] and Spagnol et al. [17] for more recent works. 
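Putting these pieces together, a minimal sketch of Equations 13 and 14 might look as follows. The function names and the use of NumPy are ours, not the ParAMS implementation; Gaussian kernels are used on both sides here, and the loss-kernels introduced in the next subsection can be substituted on the loss side.

```python
import numpy as np

def gaussian_gram(v, sigma=0.3):
    """Gaussian kernel Gram matrix for a 1-D sample vector v."""
    return np.exp(-(v[:, None] - v[None, :]) ** 2 / (2.0 * sigma**2))

def hsic_unbiased(K, L):
    """Unbiased HSIC estimator of Equation 13 (Song et al. [28])."""
    n = K.shape[0]
    Kt = K - np.diag(np.diag(K))                 # K-tilde: zeroed diagonal
    Lt = L - np.diag(np.diag(L))                 # L-tilde: zeroed diagonal
    one = np.ones(n)
    rho1 = np.trace(Kt @ Lt)
    rho2 = (one @ Kt @ one) * (one @ Lt @ one) / ((n - 1) * (n - 2))
    rho3 = 2.0 / (n - 2) * (one @ Kt @ Lt @ one)
    return (rho1 + rho2 - rho3) / (n * (n - 3))

def sensitivities(X, y_scaled, loss_gram=gaussian_gram):
    """Normalized sensitivities of Equation 14 for an n-by-d sample matrix X."""
    L = loss_gram(y_scaled)
    h = np.array([hsic_unbiased(gaussian_gram(X[:, j]), L)
                  for j in range(X.shape[1])])
    h = np.clip(h, 0.0, None)                    # clip small negative estimates
    return h / h.sum()
```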
Note that it is also possible to consider a more fine-grained sensitivity approach and let \(\mathbf{y}\) be individual residual values (i.e., \(y_{i}=b_{\text{ref},i}-b(\mathbf{x},\mathbf{G}_{i})\)). This would allow one to determine sensitivities for particular training set items, providing further information to assist in training set design. This represents an extension of the current work, and will not be dealt with here, but is left as an avenue for further study. ### Kernels Figure 1 illustrates the kernel distributions used in this work, using different parameter values where appropriate. We select the characteristic Gaussian kernel for the parameters-kernel: \[k(x,x^{\prime})\coloneqq\exp\left[-\frac{(x-x^{\prime})^{2}}{2\sigma^{2}} \right]. \tag{15}\] This captures the dis/similarity between parameters, and allows the distribution to take an arbitrary shape. For the sake of clarity, \(\sigma\) as it is used in Equation 15 is unrelated to the same symbol in Equation 7. We test several candidates for the loss-kernel. We first consider the non-characteristic threshold kernel introduced by Spagnol et al. [17]: \[\ell(y,y^{\prime}) \coloneqq h(y)h(y^{\prime}) \tag{16}\] \[h(y) \coloneqq\begin{cases}1&\text{ if }y<q_{\alpha},\\ 0&\text{ otherwise.}\end{cases} \tag{17}\] \(q_{\alpha}\) is the value corresponding to the \(\alpha\) quantile of the data. The above kernel is constructed to focus the weight of the distribution on the smallest numerical values of \(\mathbf{y}\) since these represent good loss function values we would like the most information about. Spagnol et al. [17] showed that thresholding was extremely important in identifying sensitivities near minima, and not allowing the calculation to be swamped by very large function values. We then consider a characteristic and continuous approximation of the threshold kernel, which we call the conjunctive-Gaussian (CG) kernel: \[\ell(y,y^{\prime})\coloneqq\exp\left[-\frac{(y^{2}+y^{\prime 2})}{2\gamma^{2}} \right], \tag{18}\] where lower values of the \(\gamma\) parameter focus the distribution more heavily on the best values. For comparison, we also test a simple non-characteristic linear kernel: \[\ell(y,y^{\prime})\coloneqq yy^{\prime}, \tag{19}\] and the Gaussian kernel introduced in Equation 15. Other kernels are possible, but we leave their consideration for later work. To stabilize the numerics and handle order-of-magnitude concerns, we transform and scale the loss values before applying the loss-kernel to them: \[\tilde{y}(y)=\frac{\ln y-\min\left(\ln\mathbf{y}\right)}{\max\left(\ln\mathbf{ y}\right)-\min\left(\ln\mathbf{y}\right)}. \tag{20}\] ## 3 Software Training data was calculated using the Amsterdam Modeling Suite (AMS) 2022. The sensitivity calculation has been implemented in the Python programming language, and integrated into a development version of ParAMS 2023 [14], a reparameterization toolkit which comes bundled with AMS. It is also used to handle the force field optimizations through the GloMPO optimization management software [4]. Figure 1: Plots of the tested kernel functions. ## 4 The reparameterization problem Our main goal in this work is to demonstrate the method and utility of our new sensitivity approach. For this reason, the training set we have designed is smaller than what would typically be required to create a production-quality parameterization. Designing a large training set would require an entire publication in its own right, and distract heavily from the focus of our article. A smaller training set allows us to keep the discussions more focused and clear. In any case, training set design is always an iterative process where more items are added as validation tests on the new force fields demonstrate deficiencies in the original set [50]. This work can be considered a first iteration upon which later work can build. 
### Initial force field We aim to create a new ReaxFF force field which correctly models the adsorption of H\({}_{2}\)S on ZnS - an adsorbent which has received some attention for its favorable electronic properties [51, 52, 53, 54, 55]. H\({}_{2}\)S is a common, but toxic, gas and its adsorption behavior on ZnS has been investigated for both gas detection [55] and gas removal purposes [53]. As far as we are aware, however, no specially designed Zn/S/H ReaxFF force field exists to model this behaviour. Table 1 details the nomenclature we will use to refer to the various parameter blocks which compose the ReaxFF force field file [56]. For a discussion of the parameter blocks, see Section 2.1. Our initial force field is an amalgamation of parameter values from already published force fields: 1. The ATM:S, BND:S.S and ANG:S.S.S parameter blocks are filled with values from the Li/S force field of Islam et al. [57]; 2. ATM:H, BND:S.H and ANG:H.S.H values are taken from the C/H/O/S force field of Muller and Hartke [50]; 3. GEN, ATM:Zn, BND:H.H, BND:Zn.H, BND:Zn.Zn, BND:Zn.S, ANG:Zn.Zn.S, ANG:Zn.S.Zn, ANG:S.Zn.S, ANG:S.Zn.S and HBD:S.H.S blocks are filled with parameters published in the Zn/O/H force field of Raymond et al. [54]. This force field does not contain any sulfur compounds, so the corresponding oxygen-related parameter blocks are used. For example, BND:Zn.O parameters are filled into the BND:Zn.S block of the initial force field. The force field contains 279 parameters. From these we use some common intuition - based on the contents of the training set and descriptions of the parameters - to select 53 for initial optimization. The selection is 'greedy' in that a parameter is selected for optimization if: 1. it could conceivably affect the behaviour of a training set item, 2. the training set contains items related to the parameter, and 3. the parameter is appropriate for optimization. For example, the atomic masses of the elements are technically parameters; however, changing them would not be appropriate. Similarly, \(\pi\)-bonds are not present in the training set; thus, their related parameters are not chosen for optimization. Finally, we do not optimize any atomic or general block parameters. These are generally known to be more 'expert' level parameters that are not suitable for preliminary stage optimization [14]. \begin{table} \begin{tabular}{l l l} \hline \hline Alias & Description & Number of parameters \\ \hline GEN & General parameters & 41 \\ ATM:W & Parameters for atoms of type W & 32 per atom type \\ BND:W.X & Parameters for W-X bonds & 16 per W.X pair \\ OFD:W.X & Off-diagonal definitions for atom pairs & 6 per W.X pair \\ ANG:W.X.Y & Parameters for angles formed by W-X-Y atoms & 7 per angle group \\ TOR:W.X.Y.Z & Parameters for torsions formed by W-X-Y-Z atoms & 7 per torsion group \\ HBD:W.H.X & Hydrogen-bonding parameters for atoms W and X & 4 per W.X pair \\ \hline \hline \end{tabular} \end{table} Table 1: Aliases used to refer to different parameter blocks in the ReaxFF force field file format. The 53 parameters selected for optimization are listed in Table 2. The default parameter ranges supplied by ParAMS [14] were used, and can be found in the supplementary information. ### Training data The training set, calculated with BAND using PBEsol and DZ or DZP basis sets, consists of 16 jobs, from which 471 individual training points are extracted. AMS BAND calculates charges via the Hirshfeld, Voronoi deformation, Mulliken and CM5 methods. 
We have included Hirshfeld charges in our training set. Although these charges are not without problems [59], we believe they are the best available. The accuracy of these charges is also not overly important since we do not activate any charge related parameters in our initial parameter selection (see Section 4.1), and the few charges we include in the training set are mainly used as a sanity check on the results. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Block** & **Name** & **Eqn.** & **Description** & **Atoms** \\ \hline ANG & -p\_hb2 & 18 & Hydrogen bond/bond order & H.S.H \\ & -p\_hb3 & 18 & Hydrogen bond parameter & H.S.H \\ & p\_hb1 & 2 & Hydrogen bond energy & H.S.H \\ & p\_val1 & 13a & Valence angle parameter & H.S.H S.S.Zn S.Zn.S S.Zn.Zn Zn.S.Zn \\ & p\_val2 & 13a & Valence angle parameter & H.S.H S.S.Zn S.Zn.S S.Zn.Zn Zn.S.Zn \\ & p\_val4 & 13b & Valence angle parameter & H.S.H S.S.Zn S.Zn.S S.Zn.Zn Zn.S.Zn \\ & p\_val7 & 13c & Under-coordination & H.S.H S.S.Zn S.Zn.S S.Zn.Zn Zn.S.Zn \\ & Theta\_0,0 & 13g & 180\({}^{\circ}\)-(equilibrium angle) & H.S.H S.S.Zn S.Zn.S S.Zn.Zn Zn.S.Zn \\ HBD & r\_hb\^{}0 & 18 & Hydrogen bond equilibrium distance & S.H.S \\ BND & D\_e\^{}sigma & 6, 11a & Sigma-bond dissociation energy & H.S S.S S.Zn Zn.Zn \\ & p\_be1 & 6 & Bond energy parameter & H.S S.S S.Zn Zn.Zn \\ & p\_be2 & 6 & Bond energy parameter & H.S S.S S.Zn Zn.Zn \\ & p\_bo1 & 2 & Sigma bond order & H.S S.S S.Zn Zn.Zn \\ & p\_bo2 & 2 & Sigma bond order & H.S S.S S.Zn Zn.Zn \\ & p\_ovun1 & 11a & Over-coordination penalty & H.S S.S S.Zn Zn.Zn \\ \hline \hline \end{tabular} \end{table} Table 2: Initial naive parameter selection. Equation numbers and parameter names refer to notation used in [58]. The training set consists of: 1. energy-volume scans, charges and enthalpies of formation for the cubic zincblende, cubic rocksalt and hexagonal wurtzite structures of ZnS; 2. the optimized geometry, charges, H-S bond scan, and H-S-H angle scan for H\({}_{2}\)S; 3. the energy of formation, bond lengths, charges, and angles of a periodic zincblende surface; 4. the forces of a distorted zincblende surface; 5. the charges, bond length and adsorption energy of H\({}_{2}\)S adsorbed on zincblende. Details of the training set are included in Section F of the supplementary information. Default ParAMS values are used for \(\boldsymbol{\sigma}\). Individual energies, angles, distances, charges and torsions, and force groups are equally weighted in the initial training set. The full training set, job collection and initial parameter interface are available in the supplementary information. ## 5 Results and discussion Our workflow proceeds as follows: **reparameterization of the initial force field**: based on the naive parameter selection (done purely for comparative purposes); **running of the sensitivity calculation**: to determine a parameter ordering (entirely independent of the previous step); **rerunning the reparameterization**: using 10, 20, 33 and 43 of the least and most sensitive parameters, as determined by the sensitivity analysis (a short selection sketch is given below); **running validation tests**: using the initial parameterization and some of the best parameterizations found. Our aim is to show that: 1. the HSIC sensitivity method correctly identifies the most sensitive parameters; 2. reparameterizations within a lower dimensional space of very sensitive parameters can find force fields of similar quality in a shorter amount of time; 3. reparameterizations in higher dimension run the risk of overfitting. 
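For the reparameterization reruns in the workflow above, the parameter selections follow directly from the ranked sensitivities of Equation 14. A minimal sketch of how the complementary most- and least-sensitive sets can be built is shown below; the helper is our own illustration and is not part of ParAMS.

```python
import numpy as np

def split_by_sensitivity(s, n_most):
    """Split parameter indices into the n_most most sensitive and the rest.

    s is the vector of normalized sensitivities (Equation 14). The two
    returned index sets are complementary: e.g. the 10 most sensitive
    parameters and the remaining 43 least sensitive ones cover all 53.
    """
    order = np.argsort(s)[::-1]        # most sensitive first
    return order[:n_most], order[n_most:]

# Example: most10, least43 = split_by_sensitivity(s, 10)
```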
### Initial optimizations An initial set of sixteen reparameterizations was performed using the Covariance Matrix Adaptation - Evolutionary Strategy (CMA-ES) [60], which has been shown to work well on ReaxFF reparameterizations [4, 13]. All the optimizers were started from the initial force field values described in Section 4.1 with a wide initial sampling distribution (\(\sigma_{0}=0.5\)). All parameters were scaled between zero and one according to their bounds which are automatically suggested by ParAMS. Optimizers were stopped if: 1. within the last 2000 function evaluations their lowest value had not improved, and their explored values were very similar; or 2. any of the default internal CMA-ES convergence criteria were triggered. The optimizers were run in parallel and the entire optimization was stopped after using a cumulative total of \(300\,000\) function evaluations. These conditions are all quite strict and provided all the optimizers a very long time to find the best possible minimum. Figure 2 shows the optimizer trajectories for these reparameterizations, and shows the original loss of \(20\,662\) being improved to between \(308\) and \(410\). These results will be discussed in more detail in Section 5.3 when comparing the results to other optimizations. Detailed results for every optimizer can be found in Section A of the supplementary information. ### Sensitivity analysis A total of \(10\,000\) uniformly randomly generated samples of the original optimization space were collected for the sensitivity analysis. The sampling procedure took a total of \(142\,\mathrm{min}\) on a 64 core node (AMD EPYC 7513 @ \(2.6\,\mathrm{GHz}\)). This sampling took place once, and this same set of samples was used in all the sensitivity calculations. The distribution of sampled loss values (and their transformed values, \(\tilde{y}\)) is shown in Figure 3; the lowest value is 2651. Approximately half of the samples are actually worse than the initial loss value, and the set clearly does not include the good minima found through the original optimizations in Section 5.1. We will demonstrate that having such minima in the sample set is not a requirement for having a good estimation of parameter sensitivity. Instead of using all of the samples at once in a single HSIC calculation, we repeated it 10 times using a bootstrapping method. Each bootstrap used 2000 random sub-samples from the original \(10\,000\) sample set. Using fewer samples per calculation significantly speeds up calculation time, and the spread of the results gives us an indication of the error. Figure 2: Optimizer trajectories for the sixteen CMA-ES reparameterizations of the initial force field using the naive parameter selection. Figure 3: Distribution of loss values (top), and loss values scaled according to Equation 20 (bottom), gathered during the sampling procedure and used during the sensitivity analyses. Unless otherwise mentioned, the sensitivity values discussed below refer to the average value across the 10 repeats. Algorithm 1 provides a pseudocode of the calculation where we have abandoned notational precision for brevity and clarity. 
``` 1:SSE, \(k\), \(\ell\) 2:randomly generate \(\mathbf{X}\in\mathbb{R}^{10\,000\times 53}\) 3:\(\mathbf{y}\in\mathbb{R}^{10\,000}\leftarrow\text{SSE}(\mathbf{X})\)\(\triangleright\) Evaluate loss 4:\(\tilde{\mathbf{y}}\leftarrow\tilde{y}(\mathbf{y})\)\(\triangleright\) Scale loss values 5:for\(i=1,\ldots,10\)do\(\triangleright\) Repeat the calculation 6:\(\mathbf{X}^{\text{sub}}\leftarrow\) 2000 random rows of \(\mathbf{X}\) 7:\(\tilde{\mathbf{y}}^{\text{sub}}\leftarrow\) corresponding 2000 loss values 8:\(\tilde{\mathbf{L}}\leftarrow\ell(\tilde{\mathbf{y}}^{\text{sub}})\) 9:for\(\omega=1,\ldots,53\)do\(\triangleright\) Every parameter 10:\(\tilde{\mathbf{K}}\gets k(\mathbf{X}^{\text{sub}}_{\omega})\) 11:\(\mathbf{H}_{i\omega}\leftarrow\text{HSIC}(\tilde{\mathbf{K}},\tilde{\mathbf{ L}})\) 12:\(\mathbf{h}\leftarrow\text{Mean}(\mathbf{H}_{1},\ldots,\mathbf{H}_{10})\)\(\triangleright\) Average repeats 13:\(s_{j}\leftarrow\frac{h_{j}}{\sum_{i}h_{i}}\)\(\triangleright\) Calculate sensitivities ``` **Algorithm 1** Sensitivity calculation workflow. The following kernels were applied to the loss values: CG kernel (\(\gamma=0.3\)), Gaussian kernel (\(\sigma=0.3\)), linear kernel and threshold kernel (\(\alpha=0.4\)). In all cases a Gaussian kernel (\(\sigma=0.3\)) was applied to parameter values. Since all values are scaled between 0 and 1, a value of \(\sigma=0.3\) approximates the median distance between all samples in a uniform distribution between these values. This has repeatedly been reported as an appropriate value.[17, 61] The average time needed to perform the sensitivity calculation (regardless of loss-kernel and including the repeats) was approximately 60 s. The optimizations require several days to complete, thus the inclusion of these sampling and sensitivity steps in a reparameterization workflow would not be the limiting step and, as will be shown below, come with several advantages. We note that we also repeated the calculations using a sample pool of 30 000 samples, and 10 000 sub-samples per calculation. These results produce tighter distributions between the repeats at the cost of more time, but the average sensitivity values are essentially the same. In other words, they were not worth the effort and are not presented below. The scaling of the sensitivity calculation is \(\mathcal{O}(rdn^{2})\) where \(r\) are the number of repeats, \(d\) is the number of parameters, and \(n\) is the number of sub-samples used per calculation. Increasing the size of the matrices has the largest impact on the calculation time, so being able to get robust parameter orderings from repeated calculations using small sub-samples is very advantageous. Figure 4 compares the sensitivity values determined by each calculation for each parameter. Despite the large differences between loss-kernels, the ordering of parameters is very robust for the most sensitive parameters. Significant deviations only appear for the least sensitive parameters which are all close to one another numerically (note the logarithmic axis). This demonstrates that the HSIC approach we have taken is robust, and not overly dependent on the choice of loss-kernel or hyperparameters. The loss-kernel which struggles the most with differentiating sensitivities is the threshold kernel. We believe that this is because the kernel is discontinuous and provides very little information to the calculation because values are toggled to zero or one. Our investigation of this kernel was motivated by the work of Spagnol et al. [17]. 
They demonstrated the necessity of this kernel to focus the calculation on small loss values (near the minima of interest), and negate the effects of large order-of-magnitude differences. As discussed in Section 2.4, our implementation always takes the logarithm of the loss values and then scales the result; this appears to address the order-of-magnitude problem. Random sampling will also generally not contain very good minima (true in this case, as discussed above), making the second advantage of the threshold kernel less important. The continuity provided by the other kernels seems to make converging the sensitivity values for the parameters easier. We have also explored the effect of using different kernel parameters in Section B of the supplementary information. This has a minor effect on the sensitivity results, provided a reasonable kernel parameter value is selected so that the kernel values are not all very close to zero or one. Figure 4: Average sensitivity values (over ten runs each) determined for each parameter using different kernels applied to the loss values. Parameters are ordered by the results of the CG kernel which is used in the later reparameterizations. Black lines group the 10, 20, 33 and 43 most sensitive parameters. Given the similarities in orderings obtained by the various loss-kernels we will continue discussing only the results obtained when using the CG kernel (\(\gamma=0.3\)). This is a somewhat arbitrary choice, but it can be argued that it has the best theoretical foundation by emphasizing the sensitivities for low minima. Figure 5 shows the sensitivities obtained grouped by a) parameter group, and b) parameter name. For example, 'S.Zn' refers to all the bond parameters associated with sulphur-zinc bonds, and p_bo2 refers to all the p_bo2 bond parameters regardless of atoms involved. p_bo2 appears in the calculation of \(\sigma\)-bond orders, and is an exponential term. D_e^sigma is a linear parameter which is multiplied by the \(\sigma\)-bond orders to determine the bond energy contribution to the overall potential (see Equation 6). The identification of these \(\sigma\)-bond parameters, particularly for sulphur-zinc and zinc-zinc bonds, is quite reasonable given the composition of the training set. The degree to which the sensitivity is dominated by only these parameters, however, may be surprising. Nevertheless, the results seem intuitive. Figure 5: Grouped sensitivities as determined by the HSIC calculation using the CG kernel. Grouped by parameter group (left) and parameter name (right). ### Reparameterizations with sensitive parameters only Based on the sensitivities determined above, we ran reconfigured reparameterizations using: the most sensitive 10, 20, 33 and 43 parameters, and the least sensitive 10, 20, 33 and 43 parameters. The most-sensitive groupings are shown in Figure 4. We made these selections so that the set of 10 most sensitive parameters and the set of 43 least sensitive parameters are complementary; together they account for the original 53 parameters. The same holds for the other combinations. Each of the reparameterizations was conducted in the same way as the original described in Section 5.1. In a more practical setting we do not advocate blindly activating only the most sensitive \(n\) parameters; we do so here only to avoid introducing human decision-making into the results. 
In practice, one should use the sensitivities as a guide to better understand how the training data affects the loss function, and use some human intuition when selecting parameters. For the sake of clarity we introduce the following nomenclature to refer to the different optimizations: **O53**: reparameterizations using the original 53 active parameters; **M#**: reparameterizations using the most sensitive # parameters; **L#**: reparameterizations using the least sensitive # parameters. Figure 6 shows the running best loss value seen by any of the sixteen parallel optimizers for each of the optimization configurations (Section A of the supplementary information contains the results for individual optimizers). Table 3 shows the difference between the lowest loss value and initial loss value as a fraction of the initial loss value. In other words, the percentage of the initial loss which was 'removed' during the optimization. Unsurprisingly, O53 produces the lowest loss because it had the most degrees of freedom. Any reduction in the number of active parameters can be expected to worsen achievable loss values. Indeed, all the other optimizations find worse minima than the original. However, the difference between using the most sensitive and least sensitive parameters is marked. Figure 6: Running best value seen by any of the sixteen parallel optimizers for each of the optimization configurations. \begin{table} \begin{tabular}{c c c} \hline \hline Parameters & Most Sensitive (\%) & Least Sensitive (\%) \\ \hline 10 & 94.2 & 3.61 \\ 20 & 97.7 & 10.9 \\ 33 & 98.2 & 29.4 \\ 43 & 98.4 & 87.4 \\ 53 & 98.5 & \\ \hline \hline \end{tabular} \end{table} Table 3: Fraction of the initial loss value which was removed during optimization. In all cases using the most sensitive parameters allows us to remove \(>\)93 % of the original error. In fact, some M33 and M43 optimizers find better minima than some O53 optimizers. In contrast, using the same number of least-sensitive parameters produces very poor optimizations which are unable to meaningfully reduce the loss value. Especially noteworthy is that M10 is able to locate better minima than its complement L43. Similarly, L10 is only able to reduce the loss by 4 % compared to 94 % achieved by M10. It is also important to note that M10 and M20 are able to converge long before the 300 000 function evaluation limit, and use approximately one-third and two-thirds of the time used by O53. This represents significant time savings as O53 took approximately two days to complete, more than the time required to run the sensitivity analysis and M10 or M20. These results show that our proposed HSIC sensitivity method is able to quickly identify the most sensitive parameters for reparameterization, and that a parameter selection guided by it can produce good minima in a shorter time. ### Force field comparison and validation In order to verify the quality of the new force fields, and demonstrate the risks of overfitting, we perform several validation tests using the initial force field, and the force fields with lowest error produced by O53, M10, and M20 across the sixteen optimizations each one performed. We consider: 1. the error on the training set items; 2. an MD simulation of H\({}_{2}\)S adsorption on zincblende and wurtzite slabs; 3. an MD simulation of zincblende bulk crystal; 4. the adsorption energy of an H\({}_{2}\)S molecule on a (110) zincblende surface; and 5. the surface energies of (110) zincblende and (11\(\bar{2}\)0) wurtzite. 
We compare the results to reference DFT calculations, as well as literature values. For calculations involving crystal surfaces, we use the (110) face of zincblende and (11\(\bar{2}\)0) face of wurtzite ZnS as they have been found to be the most energetically favorable [52, 62]. Slabs were always constructed from pre-optimized lattice parameters calculated with BAND using the PBEsol/DZP level of theory. #### 5.4.1 Training set errors We start our analysis by decomposing the overall loss values for each of the force fields into their individual contributions. Figures 7 and 8 compare force field predictions to reference values for each of the training set items. Figure 7 shows training set items which can be grouped together into PES scans along some coordinate. With the exception of the H\({}_{2}\)S angle scan, the original force field performs extremely poorly for these items, and in all cases the optimized force fields produce significantly improved results. The O53 force field is the most accurate for all the scans, but M20 is generally quite competitive and has similar errors. The one exception to this is the H-S bond scan in the H\({}_{2}\)S molecule, which M20 did not replicate well. Figure 8 shows the remaining training set items where we compare reference values to predictions made by the selected force fields. Although angles and forces were improved, significant errors remained in most cases and a large fraction of the original forces error remains in the final fields. Energies, which form a large part of the overall loss value, actually appear generally well-fitted and most of the error comes from a single item; this item is the energy difference between a distorted and regular (110) zincblende surface. We can also see that charges were effectively unimproved during the reparameterization. This is because charge related parameters were not activated in our parameter selection. Interatomic distances were the most improved class of training set items. This can be explained by the fact that the original sulphur-related parameters were actually trained for oxygen, thus the initial force field bond lengths were too short to accommodate the larger sulphur atoms and had to be lengthened. In most cases the M10 errors are substantially worse than the M20 ones. On the other hand, M20 errors are often very similar to (or, in the case of angles, better than) the O53 errors. In comparison to O53, M20 performs worse for forces and energies. However, as mentioned above, the bulk of the energy error is concentrated in one item, and no force field appears to predict forces very well. From these observations, there does not appear to be a strong signal that O53 is a significantly better force field than M20. Figure 7: Training set items which form part of various PES scans. Reference values shown as black dots and loss values for each force field are shown in parentheses in the legends. Figure 8: Comparison of reference to predicted values for each of the training set items and force fields, grouped by item type. Loss values for a group and force field are included in parentheses in the figure legends. #### 5.4.2 MD simulations Adsorption to crystal surfaces. Our first validation test is an MD simulation of the adsorption of H\({}_{2}\)S molecules on ZnS slabs. We are guided by literature in constructing a realistic simulation scenario. Zhang et al. 
[62] showed computationally and experimentally that cubic ZnS with a particle size smaller than \(\sim\)7 nm is not stable at room temperature and can easily convert to the hexagonal polymorph. Experimentally, Dloczik et al. [51] were able to produce stable ZnS columns with wall thicknesses between 10 nm and 30 nm where both cubic and wurtzite phases were detected. Qi et al. [55] later studied the adsorption of H\({}_{2}\)S on ZnS surfaces at room temperature. Using these sources, we create MD simulations for both morphologies using slabs \(\sim\)10 nm thick. The temperature is maintained at 298 K, 100 H\({}_{2}\)S molecules are randomly positioned above and below the slabs, and the simulations are conducted for 300 000 time steps of 0.25 fs each. Figures 9 and 10 show the final surface geometries for the wurtzite and zincblende simulations respectively. Unsurprisingly, the initial force field performs poorly as the slabs immediately collapse. The M10 and M20 simulations, however, perform better. The slabs maintain their crystalline shapes, and adsorb the H\({}_{2}\)S molecules to the slab surfaces. The O53 simulations do not behave in the same way. These slabs struggle to maintain their surfaces. In the wurtzite case the surface layers have become almost amorphous; although the bulk crystal is relatively well-maintained, the deformation extends several layers deep. The zincblende surface is more ordered, but it is not as rigidly maintained as M10 or M20, and it appears that some H\({}_{2}\)S molecules are being chemically absorbed into the structure. As discussed above, literature suggests that these slabs should be stable rigid solids under these conditions, making the behavior of the O53 force field surprising. We will discuss the reasons for this behavior in Section 5.4.3. Figure 9: One surface in the final frames of 75 ps MD simulations of 100 H\({}_{2}\)S gas molecules above and below a 10 nm thick wurtzite slab with (11\(\bar{2}\)0) surfaces using a) the initial, b) O53, c) M10, and d) M20 force fields. Sulfur atoms in yellow, zinc in silver, hydrogen in green. Figure 10: One surface in the final frames of 75 ps MD simulations of 100 H\({}_{2}\)S gas molecules above and below a 10 nm thick zincblende slab with (110) surfaces using a) the initial, b) O53, c) M10, and d) M20 force fields. Sulfur atoms in yellow, zinc in silver, hydrogen in green. Bulk crystal. The above simulation provides a qualitative indication that the lower loss value of the O53 force field does not guarantee better performance. However, we would like to demonstrate quantitatively that M20 parameterizations produce more robust results. To do this we ran a second MD simulation of a bulk zincblende crystal (64 atoms total) at 500 K for 10 ps using a time step of 0.5 fs. Two simulations were run: one used the best force fields produced by each of the sixteen M20 optimizers, and the other used the best force fields produced by each of the sixteen O53 optimizers. In each simulation the force fields were used as a committee. In other words, each force field was applied to the same crystal geometries at each time step. The forces of these results were then averaged to update the atomic positions for the next time step. Figure 11 shows the standard deviation of the forces across the sixteen force fields. We are not interested in the results at particular times, for particular atoms, or in particular directions, because the forces always average out to zero. Thus, the histograms simply show all the standard deviations regardless of the above distinctions. 
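Concretely, the pooled quantity histogrammed in Figure 11 can be thought of as follows; the array layout and helper below are our own schematic illustration, not the actual analysis script.

```python
import numpy as np

def committee_force_spread(forces):
    """Pooled committee spread of forces, as histogrammed in Figure 11.

    forces: array of shape (n_fields, n_steps, n_atoms, 3) holding the force
    predicted by each committee member at every step of the shared trajectory.
    """
    spread = forces.std(axis=0)   # std over the 16 force fields -> (n_steps, n_atoms, 3)
    return spread.ravel()         # one pooled value per step, atom, and direction

# Example: histogram committee_force_spread(forces_m20) against
# committee_force_spread(forces_o53) to reproduce the comparison.
```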
The sixteen M20 force fields show significantly more agreement than the O53 ones. This suggests that the lower dimensional optimization is much more robust, and produces force fields with a tighter distribution of results. The higher dimensional optimization has many more minima in which optimizers might be trapped. Since these extra minima are created by the presence of insensitive parameters, they are more likely to represent overfitting than genuine attractors driven by important parameters. Interestingly, the favorable spread of results seen here for M20 could not be assumed from the spread in loss values amongst its sixteen force fields, which was 98 compared to 33 for O53. #### 5.4.3 Adsorption and surface energies Our more quantitative validation tests are: 1. the calculation of the adsorption energy (\(E_{\mathrm{ads}}\)) of an H\({}_{2}\)S molecule on the (110) surface of zincblende; and 2. surface energy (\(E_{\rm surf}\)) calculations for the (110) surface of zincblende and (11\(\bar{2}\)0) surface of wurtzite. Details of these calculations are included in Section C and Section D of the supplementary information, respectively. In this case we use the best force fields from every optimizer in the O53 and M20 sets to analyse the distribution of results. We do not consider all sixteen M10 results as they perform poorly. The results are shown in Figure 12 as a function of the force fields' associated SSE. Detailed results for individual optimizers can be found in Section E of the supplementary information. In these plots, we have also included DFT, initial force field, and best M10 force field results for comparison. The initial and M10 results are not shown with their corresponding loss values as they are too large and would make the plots illegible. In the discussions that follow, we will continue to use 'best' to refer to the force field with the lowest loss value (SSE); it does not mean the force field with the most accurate validation result. Figure 11: Standard deviation on the forces felt by all atoms, in all directions, at all time steps in bulk simulations of zincblende crystal using committee simulations of the sixteen M20 and O53 force fields. Adsorption energy. We begin our comments by considering the adsorption energy results. None of the force fields are very accurate in this regard in comparison to our DFT reference; however, the best M20 force field is more accurate than that from O53. Interestingly, the initial force field produces a better prediction than M10 and is only slightly worse than the best O53 force field. More broadly, however, all the M20 force fields predict adsorption energies in a narrow range from \(-0.89\,\mathrm{eV}\) to \(-0.81\,\mathrm{eV}\), which is much more reproducible than the O53 results which range from \(-1.14\,\mathrm{eV}\) to \(-0.43\,\mathrm{eV}\). This variability might be explained by the odd behavior seen in several O53 geometry optimizations. Figure 13 shows the final frames of the adsorbed molecule geometry optimizations using DFT and the best O53 force field. The H\({}_{2}\)S molecule is initially placed close to the zinc atom highlighted in blue, and settles into this position using DFT. However, using the O53 force field, the H\({}_{2}\)S molecule moves away from this atom towards a row of sulfur atoms which are, in turn, repelled by it. 
The geometry optimization is converged, which means that this desorbed position with a deformed surface is more energetically favorable than an adsorbed position on zinc, which is incorrect. It is possible that the strange surface effects seen in Section 5.4.2 may be associated with this. Figure 12: a) H\({}_{2}\)S adsorption energy, b) wurtzite (\(11\bar{2}0\)) surface energy, and c) zincblende (\(110\)) surface energy versus SSE for the best force fields found by each of the sixteen optimizers run in the O53 and M20 reparameterizations. Reference DFT results shown with horizontal yellow line. Figure 13: Final frame of a geometry optimization of an H\({}_{2}\)S molecule adsorbed on a (110) zincblende surface using a) DFT and b) the best O53 force field. The molecule is originally positioned near the zinc atom in blue. Surface energy. If we consider the surface energy results, the distributions of the O53 force fields are consistently larger than those of the M20 results. On average, the O53 results are similarly accurate to M20 for the wurtzite surface. For the zincblende surface, one of the M20 clusters is actually quite accurate. Overall, none of the force fields performed generally well across all the validation tests, but they are improvements over the initial force field. If more accuracy is desired, then more attention needs to be given to the composition of the training set; however, this is not the focus of this article. Correlations with loss values. The most interesting conclusions from these results come from comparing them to the force fields' associated SSE. All sixteen O53 optimizers found SSE values tightly clustered between 308 and 410, much lower and less variable than the M20 optimizers that are clustered around two values between 485 and 684. If one only considers the loss values, then one might expect that predictions made by the O53 force fields will be similarly more precise than those of M20; however, we see that this is not the case. O53 validation results are significantly more variable than those of M20 and, on average, seem to be only slightly more accurate. Most crucially, the high variability of the O53 validations is not correlated with the SSE. In other words, a better prediction of the training set items is not correlated with a more accurate prediction of the validation tests. We believe that these facts, and the unusual behavior of the O53 MD simulations in Section 5.4.2, can be attributed to overfitting. Interestingly, the M20 results seem to be _inversely_ correlated with the loss values, i.e., a higher loss leads to a more accurate validation result. This might suggest that M20 is also seeing some overfitting effects; however, there is not enough data to verify this because all optimizers fall near one of only two SSE values. Overfitting is a common occurrence during ReaxFF reparameterization [9, 13, 50] and becomes more likely as more parameters are allowed to change. These validation tests highlight the dangers of activating too many dimensions during a reparameterization as more degrees of freedom create opportunities for overfitting. We present two dangers which occur when too many parameters are used. First, it is important to emphasize the difference between a parameter's _sensitivity_ to some loss function (which is a function of a user-generated training set), and its true _importance_ in the potential function. If a training set does not include enough items to capture the importance of a parameter, then it will register as insensitive. 
If an important, but insensitive, parameter is activated during reparameterization, then the optimizer will change it quasi-randomly. This can result in force fields with very low loss values that perform very poorly in production runs; sensitive parameters have been set correctly, but other important ones have been changed incorrectly. A second danger is the fact that parameters can have a compensatory effect. If many parameters are active, then it is possible that multiple parameter sets can satisfy the training data, but the potential will perform poorly during simulation. By using too many parameters the optimizers have an opportunity to move insensitive ones to reduce the loss rather than identifying the 'correct' values for a small number of appropriate parameters. In our example, the 53-dimensional reparameterizations seem to have activated too many dimensions. Although they produced the lowest overall loss values, they did not achieve the lowest loss for every class of items, nor did they always make the most accurate or precise predictions in our validation tests. In the MD simulations we see odd behavior which, along with the precision problems discussed previously, we believe to be evidence of overfitting. Conversely, the reparameterizations using only the 10 most sensitive parameters seem to not have had enough degrees of freedom. These force fields produce significant training set errors compared to O53 and M20, and perform poorly in the validation tests. Using 20 of the most sensitive parameters produces good quality force fields which perform better than the O53 force fields in some tests, despite having marginally worse training set losses. The results of the validation tests were also consistently more precise than those of O53, and of comparable accuracy. The MD simulation aligned with expectations, and the force fields were produced in a shorter time than that required for the original optimization. ## 6 Outlook We believe that there are several areas which warrant further study. First, the training set used here could be used as a starting point to produce a better quality force field for Zn/S/H. This would require a significant expansion of the training set, and involve all of the well-known and thorny issues associated with training set design [50]. Second, a parameter's sensitivity is a strong function of the range in which it is allowed to vary. In this work, we have not needed to give this much consideration since the ParAMS package provides recommended ranges. However, it is not certain that these are always appropriate. A formalisation of appropriate ranges, or a robust mechanism to determine them, would go a long way to ensuring that sensitivities are being appropriately determined. Third, we have identified sampling as the limiting step of the sensitivity procedure. It may be interesting to investigate the extraction of samples from optimizer trajectories. However, since the HSIC requires uniformly distributed random samples, one would need to extract samples carefully from the trajectories. One possibility is the Kennard-Stone algorithm [63], but this is slow and sequential. Nevertheless, 'closing the loop' and allowing users to extract sensitivities from optimization results seems like an appealing prospect. ## 7 Conclusions As an introductory work, we have demonstrated that an HSIC sensitivity analysis applied to a ReaxFF reparameterization can successfully identify the most sensitive parameters. 
We have further shown that using only the most sensitive parameters during optimization leads to faster convergence and a reduced chance of overfitting. Even qualitatively, the use of such a sensitivity analysis can provide valuable insights for the user into the composition of the training set. Overall, we believe that the HSIC sensitivity analysis is a useful, robust, and easy to use tool which has the potential to greatly aid in the reparameterization of ReaxFF force fields. ## Author Contributions MG designed and implemented the sensitivity test, ran the test work, and wrote this article. MH designed the training set and guided some of the validation tests. TV provided supervision, revisions, guidance, and advice. ## Acknowledgement The authors thank the Flemish Supercomputing Centre (VSC), funded by Ghent University, FWO and the Flemish Government for use of their computational resources. Funding for the project was provided by the European Union's Horizon 2020 research and innovation program under grant agreement No. 814143. TV and MG are also supported by the Research Board of Ghent University (BOF) under award No. 01IT2322. ## Supporting Information Available The following items are included in the Supplementary Information: si.pdf full optimization results, calculation methodologies for adsorption and surface energies, full adsorption and surface energy results, training set composition; job_collection.yaml training set geometries and job calculation settings; engine_collection.yaml task settings for the input geometries; training_set.yaml reference values, weights, and sigma values for training data; initial_parameters.yaml original ReaxFF parameter values and ranges (O53 parameters set as active). The YAML files are human-readable and used by ParAMS [14]. This information is available free of charge via the Internet at [http://pubs.acs.org](http://pubs.acs.org).
2307.13374
Classical radiation fields for scalar, electromagnetic, and gravitational waves with spacetime-symmetry breaking
An effective field theory framework is used to investigate some Lorentz-violating effects on the generation of electromagnetic and gravitational waves, complementing previous work on propagation. Specifically we find solutions to a modified, anisotropic wave equation, sourced by charge or fluid matter. We derive the radiation fields for scalars, classical electromagnetic radiation, and partial results for gravitational radiation. For gravitational waves, the results show longitudinal and breathing polarizations proportional to coefficients for spacetime-symmetry breaking.
Quentin G. Bailey, Alexander S. Gard, Nils A. Nilsson, Rui Xu, Lijing Shao
2023-07-25T09:47:22Z
http://arxiv.org/abs/2307.13374v2
Classical radiation fields for scalar, electromagnetic, and gravitational waves with spacetime-symmetry breaking ###### Abstract An effective field theory framework is used to investigate some Lorentz-violating effects on the generation of electromagnetic and gravitational waves, complementing previous work on propagation. Specifically we find solutions to a modified, anisotropic wave equation, sourced by charged or fluid matter. We derive the radiation fields for scalars, classical electromagnetic radiation, and partial results for gravitational radiation. For gravitational waves, the results show longitudinal and trace polarizations proportional to coefficients for spacetime-symmetry breaking. ## 1 Introduction Presently, interest in tests of foundations of General Relativity (GR) is high, including both theory and experiment. Motivation for these studies includes the possibility that some aspects of the foundations of GR may be modified in a unified theory of physics that incorporates quantum gravity. In particular, suggestions that spacetime-symmetry foundations of GR, like local Lorentz symmetry, could be broken in small but potentially detectable ways [1; 2] have motivated a plethora of theoretical studies and analyses [3; 4; 5; 6; 7; 8; 9; 10]. Much theoretical work has been accomplished within effective-field theory (EFT) descriptions of spacetime-symmetry breaking, as well as with specific models. This includes extensive literature on the effects for electromagnetic waves and gravitational waves propagating in the vacuum [11; 12; 13]. Also, studies using non-EFT approaches abound in the literature [14; 15]. Accomplishments in Quantum Field Theory studies of spacetime-symmetry breaking are now prolific [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27]. Much of the latter work relies on solutions to the field equations in momentum space, which is what is needed for QFT applications [28; 29]. Relatively few works have developed classical position-space solutions for the Green functions [30; 31]; in particular, classical radiation multipole expansions in the EFT description of spacetime-symmetry breaking appear to be scant [32; 33]. The purpose of this article is to obtain general position-space solutions and study wave generation in the context of spacetime-symmetry breaking described by an EFT [2; 34; 35]. Rather than a comprehensive study, we focus on minimal terms in the EFT and use a coordinate transformation trick to find the exact Green function for a modified wave equation. Our results are then applied to scalar fields, the electromagnetic sector, and the gravitational sector with some intriguing partial results on gravitational wave polarizations. We also compare briefly to perturbative approaches. Except when we discuss some results for gravitational waves, most of this work is in flat spacetime with the metric signature \(-+++\); we use Greek letters for spacetime indices and Latin letters for spatial indices. For the notation we follow conventions of other references on spacetime-symmetry breaking [35; 36; 37]. 
## 2 Green function with modified wave operator In effective-field theory descriptions of spacetime-symmetry breaking, one encounters Lagrange densities of the schematic form \(\mathcal{L}\supset\eta^{\mu\nu}\partial_{\mu}\psi\partial_{\nu}\psi+t^{\mu\nu \lambda...}\partial_{\mu}\psi\partial_{\alpha}\partial_{\lambda}...\psi\), for some field \(\psi\), where \(t^{\mu\nu\lambda...}\) is a generic set of coefficients describing the degree of symmetry breaking for the field [35; 38]. Upon obtaining the field equations, one typically encounters wave-type equations modified from the usual D'Alembertian operator \(\Box=\partial^{\alpha}\partial_{\alpha}=\nabla^{2}-\partial_{t}^{2}\); to solve them, one can seek a Green function solution. For actions with just two derivatives, the typical problem involves finding a Green function \(G(x,x^{\prime})\) satisfying the equation \[(\tilde{g})^{\mu\nu}\partial_{\mu}\partial_{\nu}G(x,x^{\prime})=-\delta^{(4)}( x-x^{\prime}), \tag{1}\] where \(\tilde{g}^{\mu\nu}\) are constants. These constants \(\tilde{g}^{\mu\nu}\) can be chosen so that there is a well-posed hyperbolic partial differential equation for the smooth source case, (i.e., for the underlying equation we are trying to solve \((\tilde{g})^{\mu\nu}\partial_{\mu}\partial_{\nu}\psi=\rho\)) [39]. Specifically, we will assume the following generic form: \[\tilde{g}^{\mu\nu}=\eta^{\mu\nu}+k^{\mu\nu}, \tag{2}\] where \(k^{\mu\nu}\) are a set of constant coefficients assumed to have values in the chosen coordinates sufficiently less than unity, so that \(\tilde{g}^{\mu\nu}\) is guaranteed an inverse. Using Fourier transform methods, the momentum space solution of (1) is relatively trivial, while to date, no exact position space solution has been explicitly written and studied, although results can be found in certain limits [31; 29]. The solution to (1) can be obtained by changing coordinates [31] so that the equation appears with the conventional wave operator. Specifically, we change coordinates \(x^{\mu}=x^{\mu}(\overline{x}^{\nu})\), in a particular way such that under this coordinate change, \[\overline{\tilde{g}}^{\mu\nu}=\frac{\partial\overline{x}^{\mu}}{\partial x^{ \alpha}}\frac{\partial\overline{x}^{\nu}}{\partial x^{\beta}}\tilde{g}^{ \alpha\beta}=\eta^{\mu\nu}, \tag{3}\] so that \(\overline{\tilde{g}}^{\mu\nu}\) takes on the numerical values of the Minkowski metric. Such a transformation can generally be shown to exist with mild assumptions on \(k^{\mu\nu}\), for example, one can write such a transformation using a series \(\overline{x}^{\mu}=x^{\mu}-\frac{1}{2}k^{\mu}_{\ \alpha}x^{\alpha}+...\). Care is required here because the spacetime metric in the \(\overline{x}^{\mu}\) system is **not** Minkowski.1 In the new coordinate system, the equation (1) is Footnote 1: This type of procedure was carried out at leading order in the appendix of Ref. [31] to demonstrate the physical equivalence of having certain forms of Lorentz violation in the photon sector or the matter sector. \[\eta^{\mu\nu}\overline{\partial}_{\mu}\overline{\partial}_{\nu}G(\overline{x },\overline{x}^{\prime})=-\frac{1}{\sqrt{-\tilde{g}}}\delta^{(4)}(\overline{x} -\overline{x}^{\prime}), \tag{4}\] which resembles the standard wave operator Green function equation. The determinant of \(\tilde{g}^{\mu\nu}\) is denoted \(\tilde{g}\). Note that, despite appearances, one cannot generally remove \(k_{\mu\nu}\) from the framework altogether if there is a matter sector [31; 40; 41]. 
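Since the solutions below all rest on this coordinate change, a minimal numerical sketch may be useful (it is not part of the original derivation, and the coefficient values are invented). It builds the first-order transformation \(\overline{x}^{\mu}=x^{\mu}-\frac{1}{2}k^{\mu}_{\ \alpha}x^{\alpha}\) and checks that the transformed \(\tilde{g}^{\mu\nu}\) agrees with \(\eta^{\mu\nu}\) up to terms of order \(k^{2}\), as required by (3).

```python
import numpy as np

# Minkowski metric components, signature (-,+,+,+)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Illustrative (made-up) small symmetric coefficients k^{mu nu}
rng = np.random.default_rng(0)
k = 1e-3 * rng.standard_normal((4, 4))
k = 0.5 * (k + k.T)

g_tilde = eta + k                     # gtilde^{mu nu} = eta^{mu nu} + k^{mu nu}, eq. (2)

# First-order Jacobian Lambda^mu_alpha = d xbar^mu / d x^alpha
#                      = delta^mu_alpha - (1/2) k^{mu beta} eta_{beta alpha}
Lam = np.eye(4) - 0.5 * k @ eta

# Check eq. (3): Lambda gtilde Lambda^T should equal eta up to O(k^2)
residual = Lam @ g_tilde @ Lam.T - eta
print(np.max(np.abs(residual)))       # of order |k|^2 (roughly 1e-6 here)
```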
The solution to (4) is a standard one up to a scaling [42], \(G=\delta(\eta_{\mu\nu}(\overline{x}-\overline{x}^{\prime})^{\mu}(\overline{x}-\overline{x}^{\prime})^{\nu})/4\pi\sqrt{-\tilde{g}}\). One then transforms this function back to the original coordinate system: \[\begin{split} G(x,x^{\prime})&=\frac{1}{2\pi\sqrt{-\tilde{g}}}\delta\left(-(\tilde{g}^{-1})_{\mu\nu}(x-x^{\prime})^{\mu}(x-x^{\prime})^{\nu}\right),\\ &=\frac{1}{4\pi\sqrt{-\tilde{g}}}\frac{1}{\tilde{R}}\delta(t^{\prime}-\tilde{t}_{R}).\end{split} \tag{5}\] In this expression we use a modified retarded time \(\tilde{t}_{R}\) and modified distance \(\tilde{R}\): \[\begin{split}\tilde{t}_{R}&=t-\frac{\tilde{R}+(\tilde{g}^{-1})_{0i}R^{i}}{-(\tilde{g}^{-1})_{00}},\\ \tilde{R}&=\sqrt{-(\tilde{g}^{-1})_{00}(\tilde{g}^{-1})_{ij}R^{i}R^{j}+((\tilde{g}^{-1})_{0i}R^{i})^{2}},\end{split} \tag{6}\] where \(R^{i}=(x-x^{\prime})^{i}\). The first line of (5) forces an evaluation along a skewed light cone \(-(\tilde{g}^{-1})_{\mu\nu}(x-x^{\prime})^{\mu}(x-x^{\prime})^{\nu}=0\). The second line breaks up the delta function, and the choice of retarded boundary conditions is made. This result will be used for the scalar, vector and tensor examples to follow. ## 3 Scalar Example ### Exact solution We apply the results of the Green function (5) to the case of a real scalar field with generic source function. Thus we solve the equation \[(\eta^{\mu\nu}+k^{\mu\nu})\partial_{\mu}\partial_{\nu}\psi=-\rho, \tag{7}\] where \(\rho\) stands for a generic source density for the scalar. Using the general Green function results above, we obtain, \[\psi=\frac{1}{4\pi\sqrt{-\tilde{g}}}\int d^{3}r^{\prime}\frac{\rho(\tilde{t}_{R},\vec{r}^{\prime})}{\tilde{R}}. \tag{8}\] For calculations of "wave zone" results, we will use an expansion similar to that in Ref. [43], wherein the authors construct a systematic wave zone and near zone expansion. We start by assuming that the field point \(\vec{r}\) is located far outside of the source region where \(\rho\neq 0\); thus the source \(\rho\) has "compact support". If this is the case, then we may use a series expansion assuming \(r\gg r^{\prime}\). To expand the time argument of \(\rho\), we must also assume the characteristic wavelength of \(\psi\) is larger than the scale of the source, \(\lambda>r^{\prime}\). It will be useful to use the following quantities, obtained by evaluating the expressions (6) above when \(\vec{r}^{\prime}=0\): \[\tilde{r}=\sqrt{-(\tilde{g}^{-1})_{00}(\tilde{g}^{-1})_{ij}r^{i}r^{j}+((\tilde{g}^{-1})_{0i}r^{i})^{2}},\quad\tilde{t}_{r}=t-\frac{\tilde{r}+(\tilde{g}^{-1})_{0i}r^{i}}{-(\tilde{g}^{-1})_{00}}. \tag{9}\] Following parallel steps to Ref. [43] (Section 6.3), we arrive at the series: \[\psi=\frac{1}{4\pi\sqrt{-\tilde{g}}}\sum_{l=0}^{\infty}\frac{(-1)^{l}}{l!}\partial_{L}\left(\frac{1}{\tilde{r}}\int d^{3}r^{\prime}\rho(\tilde{t}_{r},\vec{r}^{\prime})r^{\prime L}\right). \tag{10}\] In this expression we use the index abbreviation \(L=i_{1}i_{2}i_{3}\ldots i_{l}\). For what follows we define a tangent vector \(N_{j}=-\partial_{j}\tilde{t}_{r}\), which reduces to \(\hat{n}^{j}=r^{j}/r\) when \(k_{\mu\nu}\to 0\), and represents the direction of wave propagation. It is useful to note some results that are leading order in the coefficients \(k^{\mu\nu}\).
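Before turning to those leading-order expressions, a small numerical sketch (ours, with invented coefficient values) of the quantities in (6) may be helpful: for \(k_{\mu\nu}\to 0\) the modified retarded time and distance collapse to the ordinary \(t-R\) and \(R\).

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
k = np.zeros((4, 4))
k[0, 0], k[1, 2], k[2, 1] = 2e-3, 1e-3, 1e-3   # illustrative, made-up coefficients

ginv = np.linalg.inv(eta + k)                   # (gtilde^{-1})_{mu nu}

def modified_retarded(t, x, xp):
    """Modified retarded time and distance of eq. (6) for field point x and source point xp."""
    R = np.asarray(x, float) - np.asarray(xp, float)
    mixed = ginv[0, 1:] @ R                     # (gtilde^{-1})_{0i} R^i
    Rt = np.sqrt(-ginv[0, 0] * (R @ ginv[1:, 1:] @ R) + mixed**2)
    tR = t - (Rt + mixed) / (-ginv[0, 0])
    return tR, Rt

print(modified_retarded(10.0, [3.0, 0.0, 4.0], [0.0, 0.0, 0.0]))
# close to (5.0, 5.0); exactly (t - R, R) = (5, 5) when k_{mu nu} = 0
```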
Using the definition (2), we have for the inverse metric, modified retarded time \(\tilde{t}_{r}\), and the tangent vector \(N_{j}\), respectively \[\begin{split}(\tilde{g}^{-1})_{\mu\nu}&=\eta_{\mu \nu}-k_{\mu\nu},\\ \tilde{t}_{r}&=t-r(1-\frac{1}{2}k_{00}-\frac{1}{2}k_ {ij}n^{i}n^{j})+k_{0i}r^{i},\\ N_{i}&=n_{i}(1-\frac{1}{2}k_{00}+\frac{1}{2}k_{jk}n ^{j}n^{k})-k_{ij}n^{j}-k_{0i}.\end{split} \tag{11}\] to first order in \(k_{\mu\nu}\). Using these approximations we obtain the first 3 terms of the series (10) in the wave zone (keeping only terms with \(1/r\) falloff): \[\begin{split}\psi&=\frac{1}{4\pi r}\Big{(}Q[1- \tfrac{1}{2}k_{00}+\tfrac{1}{2}k_{ij}n^{i}n^{j}]+\dot{P}^{i}[n_{i}(1-k_{00}+k_ {jk}n^{j}n^{k})-k_{ij}n^{j}-k_{0i}]\\ &\qquad+\frac{1}{2}\tilde{T}^{ij}[n_{i}n_{j}(1-\tfrac{3}{2}k_{00 }+\tfrac{3}{2}k_{lm}n^{l}n^{m})-2n_{i}k_{jk}n^{k}-2k_{0i}n_{j}]+...\Big{)}. \end{split} \tag{12}\] Here \(Q\) is the total "charge", \(\vec{P}\) is the dipole moment and \(I^{ij}\) is the inertia tensor associated with the source density \(\rho\). The terms on the right-hand side of (12) are to be evaluated at the modified retarded time in (11), which has a deformed dependence on the space and time coordinates of the field point. We include here plots of how to visualize the propagation of the wave from the source point (taken as the origin) to the field point \(t,x\); these can be seen in Figure 1 ### Blending with perturbative solutions It is useful to compare the methods above with other methods that involve approximate solutions. This has been carried out successfully in slow motion, weak field scenarios where wave behavior is not dealt with directly [37; 3]. When the full effects of time derivatives is included, and we are looking for complete "inhomogeneous" solutions (not just vacuum propagation), subtleties arise as we point out here. To illustrate, we focus on the scalar wave equation case in (7). The philosophy behind perturbative approaches is to seek solutions in powers of the small coefficients \(k_{\mu\nu}\). For instance we assume the solution can be written \(\psi=\psi^{(0)}+\psi^{(1)}+...\), with \((n)\) indicating order in powers of \(k_{\mu\nu}\). To zeroth and first order in \(k_{\mu\nu}\), we have the two equations to solve, \[\Box\psi^{(0)}=-\rho,\quad\Box\psi^{(1)}=-k^{\mu\nu}\partial_{\mu}\partial_{ \nu}\psi^{(0)}. \tag{13}\] The formal (particular) solutions, with the retarded time Green function, are \[\psi^{(0)}=\int d^{3}r^{\prime}\frac{\rho(t-R,\vec{r}^{\prime})}{R},\quad\psi^ {(1)}=\int d^{3}r^{\prime}\frac{k^{\mu\nu}\partial_{\mu}^{\prime}\partial_{ \nu}^{\prime}\psi^{(0)}}{R} \tag{14}\] The first solution is the conventional scalar one, and the second of these equations involves a field on the right-hand side that can be nonzero over all regions of space, and so does not have compact support. Even in GR, integration of the formal wave solution also involves source terms composed of nonzero fields far from the source [44; 43]. Such terms can be evaluated in GR and form part of the complete causal and properly behaved solution [45; 43; 46]. It is not immediately clear for equation (13) if this program works. Rather than solve (14) directly we present an alternative that more rapidly provides a match between pertubative approaches and "exact" ones. 
Applying the \(\Box\) operator to the equation for \(\psi^{(1)}\) in (13), we obtain, \[\Box^{2}\psi^{(1)}=k^{\mu\nu}\partial_{\mu}\partial_{\nu}\rho, \tag{15}\] where now the right-hand side is a source with compact support but the left-hand side is a nonlocal operator. A nonlocal Green function for the operator that solves \(\Box^{2}G=-\delta^{4}(x-x^{\prime})\) takes the form \(G_{nl}(x,x^{\prime})=-(1/16\pi)\text{sgn}(t-t^{\prime}\pm R)\), where \(\text{sgn}(x)=\pm 1\): positive if \(x>0\) and negative if \(x<0\). This result can be derived from standard sources, for example, by taking the Fourier time transform of the relevant position space Green functions in Ref. [47].2 When derivatives are applied to \(G(x,x^{\prime})\), the light cone delta function emerges. For example, \(\Box G_{nl}(x,x^{\prime})=\delta(t-t^{\prime}\pm R)/(4\pi R)\). Figure 1: Spacetime diagrams illustrating the modified propagation. Footnote 2: The static limit of this Green function, which is just proportional to the distance \(R\), is used ubiquitously in the literature for various post-Newtonian applications [37; 48; 49; 50]. Green functions for nonlocal operators have been discussed elsewhere, for instance Refs. [51; 52; 3]. The solution to (15) then takes the form \[\psi^{(1)}=-\int d^{4}x^{\prime}G_{nl}(x,x^{\prime})k^{\mu\nu}\partial^{\prime}_{\mu}\partial^{\prime}_{\nu}\rho^{\prime}+\psi^{(1)}_{H}, \tag{16}\] where \(\psi^{(1)}_{H}\) is a homogeneous solution satisfying \(\Box^{2}\psi^{(1)}_{H}=0\). Convergence of the integrals over the infinite domain in (16) depends on the asymptotic properties of the source function \(\rho\) and on the bounding surface of the four-dimensional integral. We assume \(\rho\) is localized in space, vanishing outside some finite radius. The time behavior is another matter. One can always introduce a bounding surface, for example, the volume is the spacetime between two spacelike hypersurfaces at fixed values of time \(t_{2}\) and \(t_{1}\) (see figure 5.3a in Ref. [44]). Alternatively one can introduce an artificial exponential time falloff for the density \(\rho\to\rho e^{-\epsilon|t|}\) to ensure the source vanishes as time approaches \(\pm\infty\). Assuming that such a modification is applied to (16), so that it is finite, we proceed with integration by parts with the \(\partial^{\prime}_{\mu}\partial^{\prime}_{\nu}\) derivatives. The surface terms can either be eliminated by a choice of the homogeneous solution \(\psi^{(1)}_{H}\) or they can be shown to vanish on the boundary with mild assumptions. We obtain \[\psi^{(1)}=-\int d^{4}x^{\prime}k^{\mu\nu}\partial^{\prime}_{\mu}\partial^{\prime}_{\nu}G_{nl}(x,x^{\prime})\rho^{\prime}, \tag{17}\] and now the derivatives of the Green function \(\sim\mathrm{sgn}(t-t^{\prime}\pm R)\) will always involve a delta function along the light cone. The result in (17) is best matched to the "exact" solution (8) by breaking up the summation into space and time components. After evaluating derivatives using standard step and delta function properties, and adding in the zeroth order solution, we collect the terms in a suggestive form: \[\psi^{(0)}+\psi^{(1)}=\int d^{3}r^{\prime}\frac{1}{4\pi R}\big{[}\rho^{\prime}_{r}+\dot{\rho}^{\prime}_{r}\frac{1}{2}\big{(}k_{00}R+2k_{0j}R^{j}+k_{jk}R^{j}\hat{R}^{k}\big{)}+\rho^{\prime}_{r}\frac{1}{2}k_{jk}(\delta^{jk}-\hat{R}^{j}\hat{R}^{k})\big{]}, \tag{18}\] where the subscript on \(\rho\) indicates evaluation at the retarded time \(t-R\).
The first term in (18) is the unperturbed solution for when \(k_{\mu\nu}=0\). The second term has an unconventional dependence on the distance \(R\); far from the source the potential has no \(1/r\) suppression. However, the second term can be re-interpreted as the first order term in the Taylor expansion of time argument (6): \(\rho(\tilde{t}_{R})=\rho(t_{R})+\dot{\rho}(\tilde{t}_{R}-t_{R})+...\), given that \(\tilde{t}_{R}-t_{R}=\frac{1}{2}(k_{00}R+2k_{0j}R^{j}+k_{jk}R^{j}\hat{R}^{k})+ O(k^{2})\). The third term has the usual \(1/r\) suppression outside the source region. When comparing to the exact solution (8), the third term can be understood as arising from a series expansion the modified distance \(\tilde{R}\) in (6); \(\tilde{R}=R(1+\frac{1}{2}k_{00}-\frac{1}{2}k_{jk}\hat{R}^{j}\hat{R}^{k}+O(k^{2}))\). To summarize we have shown the match of approximate and exact solutions: \[\psi^{(0)}+\psi^{(1)}=\psi+O(k^{2}), \tag{19}\] but it should be noted that care was required to interpret apparent nonlocal terms. ## 4 Photon sector application We apply the Green function formalism of section 2 to the photon sector of the EFT framework [35; 12]. The field equations from the photon sector action in the "non-birefringence" limit can be written in the form, \[\big{(}\eta^{\mu\kappa}\eta^{\lambda\nu}+\eta^{\mu\kappa}(c_{F})^{\lambda\nu} +(c_{F})^{\mu\kappa}\eta^{\lambda\nu}\big{)}\partial_{\mu}F_{\kappa\lambda}=- j^{\nu}, \tag{20}\] where \((c_{F})^{\mu\nu}\) are 9 coefficients for Lorentz violation (symmetric and assumed traceless), \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the field strength tensor, and \(j^{\nu}\) is the current source [12, 13, 53]. If we define an alternate set of coefficients \(\tilde{C}^{\mu\nu}\) using \(\tilde{C}^{\mu}_{\ \kappa}(\eta^{\kappa}_{\ \nu}+\frac{1}{2}\tilde{C}^{\kappa}_{\ \nu})= \left(c_{F}\right)^{\mu}_{\ \nu}\), then we can write (20) as \[\tilde{g}^{\mu\kappa}\tilde{g}^{\lambda\nu}\partial_{\mu}F_{\kappa\lambda}=-j^ {\nu}, \tag{21}\] where \(\tilde{g}^{\mu\nu}=\eta^{\mu\nu}+\tilde{C}^{\mu\nu}\), similar to the definition in (2). Note that to leading order in small dimensionless coefficients, \(\tilde{C}^{\mu}_{\ \kappa}\approx(c_{F})^{\mu}_{\ \kappa}\). To solve this equation, we change coordinates \(x^{\mu}=x^{\mu}(\overline{x}^{\nu})\), in the same manner as (3). In the new coordinate system the field equations take the form \[\eta^{\mu\kappa}\eta^{\lambda\nu}\overline{\partial}_{\mu}\overline{F}_{ \kappa\lambda}=-\overline{j}^{\nu}, \tag{22}\] with \(\overline{\partial}_{\mu}=\partial/\partial\overline{x}^{\mu}\) and \(\overline{F}_{\mu\nu}=\overline{\partial}_{\mu}\overline{A}_{\nu}-\overline {\partial}_{\nu}\overline{A}_{\mu}\). The field equations resemble that of conventional electrodynamics in the \(\overline{x}^{\mu}\) coordinates with a modified current \(j^{\nu}\). In particular, there remains the usual gauge symmetry of electrodynamics. We adopt the gauge choice \(\eta^{\mu\nu}\overline{\partial}_{\mu}\overline{A}_{\nu}=0\), leaving the field equations as \[\eta^{\mu\nu}\overline{\partial}_{\mu}\overline{\partial}_{\nu}\overline{A}_ {\lambda}=-\overline{j}^{\nu}\eta_{\nu\lambda}, \tag{23}\] where we have multiplied (22) by a Minkowski inverse \(\eta_{\mu\nu}\) on both sides to isolate \(\overline{A}_{\lambda}\). 
The standard wave operator appears in equation (23) and so the usual inhomogeneous solution can be used, yielding \[\overline{A}_{\lambda}=\frac{1}{2\pi}\int d^{4}\overline{x}^{\prime}\delta \big{(}-\eta_{\alpha\beta}(\overline{x}-\overline{x}^{\prime})^{\alpha}( \overline{x}-\overline{x}^{\prime})^{\beta}\big{)}\overline{j}^{\mu}\eta_{ \mu\lambda}. \tag{24}\] Now we use the coordinate transformation rule \(A_{\lambda}=(\partial\overline{x}^{\mu}/\partial x^{\lambda})\overline{A}_{\mu}\) and change the coordinates within the integral in (24). First, using equation (3), we can show the argument of the delta function in the original \(x^{\mu}\) coordinates takes the form like (5), namely \(-(\tilde{g}^{-1})_{\mu\nu}(x-x^{\prime})^{\mu}(x-x^{\prime})^{\nu}\). The remainder of the transformation follows from standard formulas. The Jacobian of the transformation can be found from (3) and can be written \(|\partial\overline{x}/\partial x|=1/\sqrt{-\tilde{g}}\). The originally sought solution is then \[A_{\lambda}=\frac{1}{2\pi\sqrt{-\tilde{g}}}\int d^{4}x^{\prime}\delta\left(-( \tilde{g}^{-1})_{\alpha\beta}(x-x^{\prime})^{\alpha}(x-x^{\prime})^{\beta} \right)j^{\mu}(\tilde{g}^{-1})_{\mu\lambda}. \tag{25}\] This result can be independently checked by using equation (1) and (5), and using the gauge condition transformed to the original coordinates, namely \(\tilde{g}^{\mu\nu}\partial_{\mu}A_{\nu}=0\), to show that (25) satisfies (21) and hence solves (20). Following steps similar to those for the scalar field in we can write the solution compactly as \[A_{\lambda}=\frac{1}{4\pi\sqrt{-\tilde{g}}}\int d^{3}x^{\prime}\frac{(\tilde{ g}^{-1})_{\mu\lambda}j^{\mu}(\tilde{t}_{R},\vec{r}^{\prime})}{\tilde{R}}, \tag{26}\] with \(\tilde{t}_{R}\) and \(\tilde{R}\) as in equations (6). We specialize (26) to a localized conserved current density \(j^{\mu}=(\rho,\vec{J})\) and expand the solution assuming the field point is far from the source and in the wave zone (\(r\gg\lambda\gg r^{\prime}\)). We follow steps similar to those leading up to (10) and we arrive at \[A_{\lambda}=\frac{(\tilde{g}^{-1})_{\mu\lambda}}{4\pi\sqrt{-\tilde{g}}}\sum_{l =0}^{\infty}\frac{(-1)^{l}}{l!}\partial_{L}\left(\frac{1}{\tilde{r}}\int d^{3}r ^{\prime}j^{\mu}(\tilde{t}_{r},\vec{r}^{\prime})r^{\prime L}\right), \tag{27}\] where \(\tilde{t}_{r}\) and \(\tilde{r}\) are defined in (11). We write out the terms up to \(L=1\), decomposing the current into charge density and current density, and keeping only terms that fall off as \(1/\tilde{r}\), to obtain \[A_{\lambda}=\frac{1}{4\pi\sqrt{-\tilde{g}}\tilde{r}}\int d^{3}r^{\prime}\Big{[} (\tilde{g}^{-1})_{\mu 0}\rho^{\prime}+(\tilde{g}^{-1})_{\mu i}J^{\prime i}+N_{i}( \tilde{g}^{-1})_{\mu 0}\dot{\rho^{\prime}}r^{\prime i}+...\Big{]}. \tag{28}\] The first term is proportional to the constant total charge \(Q\), while the second and third term can be re-expressed in terms of the electric dipole moment \(p^{j}=\int d^{3}r^{\prime}\rho^{\prime}\tau^{\prime j}\), using standard techniques [42]. The higher order terms contribute to the magnetic dipole and quadrupole terms, which we neglect here. The dominant _radiation_ four-potential terms are \[A_{\lambda}=\frac{1}{4\pi\sqrt{-\tilde{g}\tilde{r}}}\Big{[}(\tilde{g}^{-1})_{ \mu i}+(\tilde{g}^{-1})_{\mu 0}N_{i}\Big{]}\dot{p}^{i}. 
\tag{29}\] The radiation zone electric field, which is gauge independent, is found to be \[F_{i0}=-\frac{1}{4\pi\sqrt{-\tilde{g}\tilde{r}}}\Big{[}(\tilde{g}^{-1})_{0i}N_{ j}+(\tilde{g}^{-1})_{0j}N_{i}+(\tilde{g}^{-1})_{ij}+(\tilde{g}^{-1})_{00}N_{i}N_{ j}\Big{]}\ddot{p}^{j}. \tag{30}\] Adopting a leading order expansion with \((\tilde{g}^{-1})_{\mu\nu}=\eta_{\mu\nu}-(c_{F})_{\mu\nu}\) and using results above such as (11), we can write the electric field as \[F_{i0}=-\frac{1}{4\pi\sqrt{-\tilde{g}\tilde{r}}}\Big{[}P_{ij}+P_{ik}P_{jl}(c_ {F})^{kl}\Big{]}\ddot{p}^{j}, \tag{31}\] where \(P_{ij}=\delta_{ij}-n_{i}n_{j}\) is a projection operator, and the dipole moment is evaluated at the modified retarded time \(\tilde{t}_{r}\). Note that while \(F_{i0}n_{i}=0\), indicating two independent polarizations, \(n_{i}\) is not the true direction of wave propagation. There remains a projection along the true direction of propagation \(N_{i}\) that is not zero, \(F_{0i}N_{i}\neq 0\), indicating that at least one of the two independent modes is not transverse to the wave propagation direction. ## 5 Gravity sector application ### General solution For a starting point, we use the EFT gravity sector field equations for the metric fluctuations \(h_{\mu\nu}\) around a flat background. They can be obtained from a Lagrange density \(\mathcal{L}=\mathcal{L}_{GR}+\frac{1}{4\pi}\overline{s}^{\alpha\beta}h^{\mu\nu} \mathcal{G}_{\alpha\mu\beta\nu}+\mathcal{L}_{M}\), where \(\mathcal{G}_{\alpha\mu\beta\nu}\) is the double dual of the Riemann tensor and \(\kappa=8\pi G_{N}\)[54; 55; 56; 48]. We can write the field equations in the form \[\hat{K}^{\mu\nu\alpha\beta}h_{\alpha\beta}=\kappa\tau^{\mu\nu}, \tag{32}\] where \(\tau^{\mu\nu}\) includes the matter stress energy tensor as well as contributions from higher order terms in \(h\) with coefficients for Lorentz violation. Should the coefficients \(\overline{s}_{\mu\nu}\) arise dynamically, through spontaneous symmetry breaking, the dynamical terms contributing to \(\tau^{\mu\nu}\) can also be included [57]. The operator \(\hat{K}^{\mu\nu\alpha\beta}\) can be written as \[\begin{split}\hat{K}^{\mu\nu\alpha\beta}&=\frac{1}{ 2}\Big{(}\eta^{\alpha(\mu}\eta^{\nu)\beta}\eta^{\gamma\delta}-\eta^{\mu\nu}\eta ^{\alpha\beta}\eta^{\gamma\delta}+\eta^{\mu\nu}\eta^{\alpha\gamma}\eta^{ \beta\delta}+\eta^{\alpha\beta}\eta^{\mu\gamma}\eta^{\nu\delta}-\eta^{\alpha( \mu}\eta^{\nu)\gamma}\eta^{\beta\delta}-\eta^{\beta(\mu}\eta^{\nu)\gamma}\eta^ {\alpha\delta}\Big{)}\partial_{\gamma}\partial_{\delta}\\ &+\hat{K}_{s}^{\mu\nu\alpha\beta},\end{split} \tag{33}\] where \(\hat{K}_{s}^{\mu\nu\alpha\beta}\) is the operator such that \(\hat{K}_{s}^{\mu\nu\alpha\beta}h_{\alpha\beta}=\overline{s}_{\alpha\beta} \mathcal{G}^{\mu\alpha\beta\nu}\). The first line in (33) contains the terms present in standard linearized GR, namely the terms in \(G^{\mu\nu}\), while \(\hat{K}_{s}^{\mu\nu\alpha\beta}h_{\alpha\beta}\) is the leading order corrections from the \(\overline{s}^{\mu\nu}\) coefficients [55; 37]. For the purposes in this work, it is useful to re-express the operator (33) in a simpler form. We define \(\tilde{g}^{\mu\nu}=\eta^{\mu\nu}+\overline{s}^{\mu\nu}\). 
Then to first order in \(\overline{s}^{\mu\nu}\) it can be shown that \[\hat{K}^{\mu\nu\alpha\beta}=\frac{1}{2}\Big{(}\tilde{g}^{\alpha(\mu}\tilde{g}^ {\nu)\beta}\tilde{g}^{\gamma\delta}-\tilde{g}^{\mu\nu}\tilde{g}^{\alpha\beta} \tilde{g}^{\gamma\delta}+\tilde{g}^{\mu\nu}\tilde{g}^{\alpha\gamma}\tilde{g}^ {\beta\delta}+\tilde{g}^{\alpha\beta}\tilde{g}^{\mu\gamma}\tilde{g}^{\nu\delta} -\tilde{g}^{\alpha(\mu}\tilde{g}^{\nu)\gamma}\tilde{g}^{\beta\delta}-\tilde{g }^{\beta(\mu}\tilde{g}^{\nu)\gamma}\tilde{g}^{\alpha\delta}\Big{)}\partial_{ \gamma}\partial_{\delta}, \tag{34}\] which resembles the standard linearized terms in GR but with an apparent modified background metric \(\tilde{g}^{\mu\nu}\), as pointed out in [40]. We perform a general coordinate transformation as in the scalar and vector case above. We require the coordinate transformation to satisfy (3), with \(\tilde{g}^{\mu\nu}=\eta^{\mu\nu}+\overline{s}^{\mu\nu}\). Treating the quantities in (34) as tensors in a flat background, the field equations in the \(\overline{x}^{\mu}\) coordinates take the form \[\frac{1}{2}\Big{(}\eta^{\alpha(\mu}\eta^{\nu)\beta}\eta^{\gamma\delta}-\eta^{ \mu\nu}\eta^{\alpha\beta}\eta^{\gamma\delta}+\eta^{\mu\nu}\eta^{\alpha\gamma} \eta^{\beta\delta}+\eta^{\alpha\beta}\eta^{\mu\gamma}\eta^{\nu\delta}-\eta^{ \alpha(\mu}\eta^{\nu)\gamma}\eta^{\beta\delta}-\eta^{\beta(\mu}\eta^{\nu) \gamma}\eta^{\alpha\delta}\Big{)}\overline{\partial}_{\gamma}\overline{ \partial}_{\delta}\overline{h}_{\alpha\beta}=\kappa\overline{\tau}^{\mu\nu}. \tag{35}\] Thus in this coordinate system, the field equations appear as conventional linearized GR with a modified source \(\overline{\tau}^{\mu\nu}\).3 Footnote 3: The barred notation indicates the coordinate system and is not to be confused with the common trace-reversed bar notation. Next we exploit the gauge freedom in equation (35) and choose \[\eta^{\alpha\beta}\overline{\partial}_{\alpha}\overline{h}_{\beta\gamma}= \frac{1}{2}\overline{\partial}_{\gamma}(\eta^{\alpha\beta}\overline{h}_{\alpha \beta}). \tag{36}\] Note that \(\overline{\partial}_{\lambda}\eta^{\mu\nu}=0\) holds in the \(\overline{x}^{\mu}\) coordinates. With the gauge choice, the field equations become \[\frac{1}{2}\eta^{\alpha\beta}\overline{\partial}_{\alpha}\overline{\partial}_ {\beta}\Pi^{\mu\nu}=-\kappa\overline{\tau}^{\mu\nu},\quad\text{with}\quad\Pi^ {\mu\nu}=\eta^{\mu\alpha}\eta^{\nu\beta}\overline{h}_{\alpha\beta}-\frac{1}{2} \eta^{\mu\nu}(\eta^{\alpha\beta}\overline{h}_{\alpha\beta}). \tag{37}\] The standard wave operator (in \(\overline{x}^{\mu}\) coordinates) appears in (37), thus we can use the standard wave solution, \[\Pi^{\mu\nu}=\frac{\kappa}{\pi}\int d^{4}\overline{x}^{\prime}\delta\big{(}- \eta_{\alpha\beta}(\overline{x}-\overline{x}^{\prime})^{\alpha}(\overline{x}- \overline{x}^{\prime})^{\beta}\big{)}\overline{\tau}^{\mu\nu}. \tag{38}\] Using the Minkowski metric \(\eta_{\mu\nu}\) we obtain \(\overline{h}_{\mu\nu}\) from (38): \[\overline{h}_{\mu\nu}=\frac{\kappa}{\pi}\int d^{4}\overline{x}^{\prime}\delta \big{(}-\eta_{\alpha\beta}(\overline{x}-\overline{x}^{\prime})^{\alpha}( \overline{x}-\overline{x}^{\prime})^{\beta}\big{)}\left(\eta_{\gamma\mu}\eta_ {\delta\nu}\overline{\tau}^{\gamma\delta}-\frac{1}{2}\eta_{\mu\nu}\eta_{ \gamma\delta}\overline{\tau}^{\gamma\delta}\right). 
\tag{39}\] Using the coordinate transformation rule \(h_{\kappa\lambda}=(\partial\overline{x}^{\mu}/\partial x^{\kappa})(\partial \overline{x}^{\nu}/\partial x^{\lambda})\overline{h}_{\mu\nu}\), we can find the solution in the original coordinates, similar to the approach for the vector potential in the steps leading to (25). This yields \[\begin{split} h_{\mu\nu}&=\frac{\kappa}{\pi\sqrt{- \tilde{g}}}\int d^{4}x^{\prime}\delta\left(-(\tilde{g}^{-1})_{\alpha\beta}(x- x^{\prime})^{\alpha}(x-x^{\prime})^{\beta}\right)\\ &\qquad\times\left((\tilde{g}^{-1})_{\gamma\mu}(\tilde{g}^{-1})_{ \delta\nu}\tau^{\gamma\delta}-\frac{1}{2}(\tilde{g}^{-1})_{\mu\nu}(\tilde{g}^{ -1})_{\gamma\delta}\tau^{\gamma\delta}\right).\end{split} \tag{40}\] We have also directly verified that this solution (40) solves equation (32) to leading order in \(\,\overline{s}_{\mu\nu}\). Note that the gravitational wave from the source propagates along the modified light cone as in section 3, which is consistent with prior propagation studies [55, 58, 59]. What is new here is that we can calculate directly the effects of a given source on the metric fluctuations and the measured effects in a GW detector. In a leading order approximation, we have \((\tilde{g}^{-1})_{\mu\nu}=\eta_{\mu\nu}-\overline{s}_{\mu\nu}\). If we expand the delta function to integrate over \(t^{p}rime\), as done for the scalar case and vector case above, and we restrict attention to leading order in \(\,\overline{s}_{\mu\nu}\), then we obtain the result \[h_{\mu\nu}=\frac{\kappa}{2\pi\sqrt{-\tilde{g}}}\int d^{3}x^{\prime}\frac{1}{ \tilde{R}}\Big{(}\tau_{\mu\nu}-2\tau^{\alpha}_{\ (\mu}\overline{s}_{\nu)\alpha}-\frac{1}{2}\eta_{\mu\nu}(\tau^{\alpha}_{\ \alpha}-\overline{s}_{\alpha\beta}\tau^{\alpha\beta})+\frac{1}{2}\overline{s}_{ \mu\nu}\tau^{\alpha}_{\ \alpha}\Big{)}(\tilde{t}_{R},\vec{\tau}^{\prime}), \tag{41}\] where \(\tilde{t}_{R}\) and \(\tilde{R}\) are defined in (6), with \(k^{\mu\nu}\rightarrow\overline{s}^{\mu\nu}\). This solution is valid in the gauge \[(\eta^{\mu\nu}+\overline{s}^{\mu\nu})\partial_{\mu}h_{\nu\lambda}=\frac{1}{2} \partial_{\lambda}(\eta^{\mu\nu}+\overline{s}^{\mu\nu})h_{\mu\nu}, \tag{42}\] which is not the usual harmonic gauge unless \(\,\overline{s}_{\mu\nu}=0\)[60]. ### Expansion of solution At this stage we employ the far field expansion, similar to (27). First we abbreviate the terms in parenthesis inside the integral (41) as \(\Theta_{\mu\nu}\). We seek the solution for \(h_{\mu\nu}\) in the far field or wave zone. However, we must integrate over the near zone \(\mathcal{N}\) and wave zone \(\mathcal{W}\) in this case because \(\tau_{\mu\nu}\) does not have compact support and exists in both regions: \[h_{\mu\nu}=\frac{4G}{2\pi\sqrt{-\vec{g}}}\Big{(}\int_{\mathcal{N}}d^{3}x^{ \prime}\frac{\Theta_{\mu\nu}(\tilde{t}_{R},\vec{r}^{\prime})}{\tilde{R}}+\int_ {\mathcal{W}}d^{3}x^{\prime}\frac{\Theta_{\mu\nu}(\tilde{t}_{R},\vec{r}^{ \prime})}{\tilde{R}}\Big{)}. \tag{43}\] In this paper, we attempt only the first integrals, so we seek \((h_{\mathcal{N}})_{\mu\nu}\). The general solution for the \(\mathcal{N}\) zone integrals can be put into an expansion form like (10): \[(h_{\mathcal{N}})_{\mu\nu}=\frac{4G}{\sqrt{-\vec{g}}}\sum_{l=0}^{\infty}\frac{ (-1)^{l}}{l!}\partial_{L}\left(\frac{1}{\tilde{r}}\int_{\mathcal{N}}d^{3}r^{ \prime}\Theta_{\mu\nu}(\tilde{t}_{r},\vec{r}^{\prime})r^{\prime L}\right). 
\tag{44}\] We proceed to evaluate the first few terms in the series (44) in order to find the leading multipole terms up to the quadrupole. We can use the conservation law \(\partial_{\mu}\tau^{\mu\nu}=0\), to express some of the integrals in (44) in terms of \(\int d^{3}r\tau^{ij}\). This quantity can be re-expressed as a inertial tensor expression \((1/2)d^{2}/dt^{2}\int d^{3}r\tau^{00}r^{i}r^{j}=(1/2)\tilde{I}^{ij}\) up to surface terms (on the surface at radius \(\mathcal{R}\) lying between \(\mathcal{N}\) and \(\mathcal{W}\)) [7]. For the radiation fields \((h_{\mathcal{N}})_{\mu\nu}\) we find, up to surface terms at radius \(\mathcal{R}\), \[\begin{split}(h_{\mathcal{N}})_{00}&=\frac{G}{ \tilde{r}}\big{(}\delta_{jk}+N_{j}N_{k}+\overline{s}_{00}(\delta_{jk}+2N_{j}N_ {k})+2\overline{s}_{0j}N_{k}-\overline{s}_{jk}\big{)}\tilde{I}^{jk},\\ (h_{\mathcal{N}})_{0j}&=\frac{G}{\tilde{r}}\big{(}-2 \delta_{jk}N_{l}(1+\overline{s}_{00})-2\overline{s}_{0k}\delta_{jl}+\overline{ s}_{0j}\delta_{kl}\overline{s}_{0j}N_{k}N_{l}+2\overline{s}_{jk}N_{l}\big{)} \tilde{I}^{kl},\\ (h_{\mathcal{N}})_{jk}&=\frac{G}{\tilde{r}}\big{(}2 \delta_{jl}\delta_{km}-\delta_{jk}\delta_{lm}+\delta_{jk}(1+\overline{s}_{00} )N_{l}N_{m}-4\overline{s}_{l(j}\delta_{k)m}-\overline{s}_{jk}N_{l}N_{m}\\ &-4\overline{s}_{0(j}\delta_{k)m}N_{l}+2\overline{s}_{0m}\delta_ {jk}N_{l}+\delta_{jk}\overline{s}_{lm}+\delta_{lm}\overline{s}_{jk}\big{)} \tilde{I}^{lm},\end{split} \tag{45}\] Since the focus is on the radiation fields, we omit the near zone potentials which can be found in Ref. [37]. The measured curvature in a gravitational wave detector can be taken as the components \(R_{0j0k}=(1/2)(\partial_{0}\partial_{j}h_{0k}+\partial_{0}\partial_{k}h_{0j} -\partial_{j}\partial_{k}h_{00}-\partial_{0}^{2}h_{jk})\)[44]. Normally in GR, in the usual transverse traceless gauge, one can obtain the curvature directly from \(h_{jk}\) alone. The gauge choice made here does not generally allow that; however, the curvature is gauge independent, hence our focus on observable effects. We find the curvature components to be \[\begin{split} R_{0j0k}&=\frac{G}{\tilde{r}}\big{[} \tfrac{1}{2}\delta_{jk}\delta_{lm}-\delta_{l(j}\delta_{k)m}-\tfrac{1}{2}\big{(} \delta_{jk}N_{l}N_{m}+\delta_{lm}N_{j}N_{k}-4\delta_{l(j}N_{k)}N_{m}\big{)}(1+ \overline{s}_{00})\\ &-\tfrac{1}{2}N_{j}N_{k}N_{l}N_{m}(1+2\overline{s}_{00})+2 \overline{s}_{0(j}\delta_{km})N_{l}+\overline{s}_{0m}\delta_{jk}N_{l}+2 \overline{s}_{0m}\delta_{l(j}N_{k)}\\ &-\overline{s}_{0(j}N_{k)}\delta_{lm}-\overline{s}_{0m}N_{j}N_{k} N_{l}-\overline{s}_{0(j}N_{k)}N_{l}N_{m}-\tfrac{1}{2}\big{(}\delta_{jk} \overline{s}_{lm}+\delta_{lm}\overline{s}_{jk}\\ &-4\overline{s}_{l(j}\delta_{k)m}-\overline{s}_{jk}N_{l}N_{m}- \overline{s}_{lm}N_{j}N_{k}+4\overline{s}_{l(j}N_{k)}N_{m}\big{)}\big{]}( \overset{(\ref{eq:2})}{I}^{lm}\end{split} \tag{46}\] The reader is reminded that time-dependent quantities on the right hand side are evaluated at the modified retarded time \(\tilde{t}_{r}\). In general metric models of gravity beyond GR, there are up to six possible polarizations for gravitational waves [50]. In the presence of Lorentz violation in (46), five of the six polarizations show up. We can also establish the question of their independence, and the number of degrees of freedom. We will identify the polarizations by taking the trace and projections of \(R_{0i0j}\). 
Since the wave travels in a direction along \(N_{i}\) we will adopt a spatial basis \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{N}/\sqrt{\mathbf{N^{i}N_{i}}}\}\), where the basis vectors \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) span the plane perpendicular to \(N_{i}\). Note that, due to the coefficients in (11), \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) are not perpendicular to \(\hat{n}\), except at zeroth order in the coefficients. First we calculate the trace of the curvature tensor \(R_{0j0k}\delta^{jk}\). It will be convenient to introduce a traceless \((\overline{s}_{tr})_{ij}=\overline{s}_{ij}-(1/2)\delta_{ij}\overline{s}_{00}\), where we use the assumption \(\overline{s}^{\mu}_{\ \ \mu}=\overline{s}_{jj}-\overline{s}_{00}=0\). The trace can be simplified to \[R_{0j0j}=\frac{G}{\tilde{r}}\left[(\overline{s}_{tr})_{\perp ij}+\frac{1}{2}( \overline{s}_{tr})_{nn}(\delta_{ij}-n_{i}n_{j})\right](\stackrel{{ (4)}}{{I}})^{ij}, \tag{47}\] where projections of quantities along \(\hat{n}\) are denoted with the index \(n\) and \(\perp\) indicates a projection of a tensor perpendicular to \(\hat{n}\) like \((V_{\perp})_{i}=V_{i}-n_{i}V_{j}n^{j}=V_{i}-n_{i}V_{n}\). Next we find the double projection of the curvature along the wave propagation direction \(N_{i}N_{j}R_{0i0j}\). We find \[N^{i}N^{j}R_{0i0j}=0+O(\overline{s}^{2}), \tag{48}\] thus there is no leading order polarization along this projection. However, the components \(R_{0i0j}N^{i}(e_{a})^{j}\) do not vanish (where \(a=1,2\)). They are given by \[R_{0i0j}N^{i}(e_{a})^{j}=\frac{G}{\tilde{r}}\big{[}\tfrac{1}{2}\big{(}( \overline{s}_{tr})_{an}+\overline{s}_{0a}\big{)}(\delta_{ij}-n_{i}n_{j})+(e_{ a})_{j}\big{(}(\overline{s}_{tr})_{nk_{\perp}}+\overline{s}_{0k_{\perp}}\big{)} \big{]}(\stackrel{{(4)}}{{I}})^{ij}. \tag{49}\] Finally, we display projections along the transverse directions \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), the ones that normally are called "plus" and "cross". They are given by \[\begin{split} R_{0202}-R_{0101}&=\frac{G}{\tilde{r }}\big{[}(e_{1i}e_{1j}-e_{2i}e_{2j})(1-\tfrac{2}{3}\overline{s}_{00})-2(( \overline{s}_{tr})_{1i}e_{1j}-(\overline{s}_{tr})_{2i}e_{2j})\\ &-2(\overline{s}_{01}e_{1i}n_{j}-\overline{s}_{02}e_{2i}n_{j})+ \tfrac{1}{2}((\overline{s}_{tr})_{11}-(\overline{s}_{tr})_{22})(\delta_{ij}-n _{i}n_{j})\big{]}(\stackrel{{(4)}}{{I}})^{ij},\\ R_{0102}&=\frac{G}{\tilde{r}}\big{[}-(e_{1})_{i}(e _{2})_{j}(1-\tfrac{2}{3}s_{00})+(\overline{s}_{tr})_{1i}(e_{2})_{j}+(\overline {s}_{tr})_{2i}(e_{1})_{j}\\ &-\tfrac{1}{2}(\overline{s}_{tr})_{12}(\delta_{ij}-n_{i}n_{j})+ \overline{s}_{01}(e_{2})_{i}n_{j}+\overline{s}_{02}(e_{1})_{i}n_{j}\big{]}( \stackrel{{(4)}}{{I}})^{ij}\end{split} \tag{50}\] where the subscripts \(1\) and \(2\) imply projection with the corresponding unit vectors. It should be noted that the results in (50) could also receive \(\overline{s}_{\mu\nu}\) terms from the inertia tensor \(I^{ij}\) itself. Such terms could arise due to orbital effects from \(\overline{s}_{\mu\nu}\) on a binary source, for example [37]. A self-gravitating system was shown to be affected in this manner [61]. For brevity, a study of these effects is omitted here. In GR, all projections but \(R_{0202}-R_{0101}\) and \(R_{0102}\) vanish (when \(\hat{n}\) is the \(3\) direction), as can be seen by setting all \(\overline{s}_{\mu\nu}\) coefficients to zero. In the presence of the coefficients it appears \(3\) additional polarizations arise. 
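As an aside, the basis adapted to \(N_{i}\) is easy to construct explicitly; the short sketch below (ours, with invented coefficient values) builds \(N_{i}\) from (11) with \(k_{\mu\nu}\to\overline{s}_{\mu\nu}\) and checks that \(\mathbf{e}_{1}\), \(\mathbf{e}_{2}\) are orthogonal to \(N_{i}\) but pick up components along \(\hat{n}\) at order \(\overline{s}\), as remarked above.

```python
import numpy as np

# Illustrative, made-up coefficients: s00, s0i, and a symmetric s_ij with s_jj = s00
s00 = 1e-3
s0i = np.array([2e-4, 1e-4, 0.0])
sij = np.diag([s00 / 3] * 3)
sij[0, 1] = sij[1, 0] = 1e-4

nhat = np.array([0.0, 0.0, 1.0])   # unperturbed radial direction

# Tangent vector N_i of eq. (11), with k -> s_bar as in the gravity sector
N = nhat * (1 - 0.5 * s00 + 0.5 * nhat @ sij @ nhat) - sij @ nhat - s0i

def plane_basis(N):
    """Two unit vectors spanning the plane perpendicular to N (Gram-Schmidt)."""
    basis, Nhat = [], N / np.linalg.norm(N)
    for seed in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
        v = seed.copy()
        for w in [Nhat] + basis:
            v = v - (v @ w) * w
        basis.append(v / np.linalg.norm(v))
    return basis

e1, e2 = plane_basis(N)
print(e1 @ N, e2 @ N)        # zero (up to round-off) by construction
print(e1 @ nhat, e2 @ nhat)  # of order s_bar: e_1, e_2 are not perpendicular to nhat
```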
The results above indicate that the coefficients \(\overline{s}_{\mu\nu}\), in addition to showing up in weak-field gravity scenarios like solar system tests [62], and affecting the speed of gravitational waves [55], can also affect the observed polarization content in a GW detector. The additional polarizations are of order \(\overline{s}\). Given the sensitivity of the current detectors to the strength of the GW signals above noise level of a couple orders of magnitude, it seems that these additional effects could be observed if \(\overline{s}\sim 10^{-2}\). Constraints on all nine coefficients \(\overline{s}_{\mu\nu}\) already exist below parts in \(10\) billion (e.g., from lunar laser ranging [63]), so we do not expect observable effects via this method. However, we have not studied the effects of higher order terms in the action [55], and many of these coefficients are not well constrained, or not constrained at all. While we do not discuss details, the nonzero projections found are equivalent to some of the Newman-Penrose projections of the curvature tensor [64, 50]. Specifically we have \[\begin{split} R_{0j0j}=-2\Phi_{22},\qquad\qquad\qquad\qquad N^{i} N^{j}R_{0i0j}=-6\Psi_{2}=0,\\ R_{0i0j}N^{i}(e_{1})^{j}=-2\sqrt{2}Re\Psi_{3},\qquad R_{0i0j}N^{i}( e_{2})^{j}=2\sqrt{2}Im\Psi_{3},\\ R_{0202}-R_{0101}=2Re\Psi_{4},\qquad\qquad\qquad\qquad R_{0102}=Im \Psi_{4}.\end{split} \tag{51}\] The reader can refer to depictions of the effect of these modes on a sphere of test masses in Refs. [50, 65]. Finally, we comment regarding the number of independent degrees of freedom indicated by the five curvature polarizations. It can be shown that three of them, the beyond-GR projections in (47) and (49) can be written as linear combinations of the "plus" and "cross" polarizations in (50). This holds to first order in the coefficients, \(\overline{s}_{\mu\nu}\). Therefore we can say that at leading order in small Lorentz violation, only two propagating degrees of freedom remain, which is consistent with other results [66; 67]. ## 6 Summary In this letter we found the classical radiation fields for modified wave equations that occur in descriptions of spacetime-symmetry breaking. The main results of the paper include the generic Green function solution (5), which can be applied to several cases. In the presence of minimal forms of Lorentz violation, we found the general solution for retarded boundary conditions for a scalar field (8), the vector potential (26), and the metric fluctuations (41). These results were studied in a radiation zone expansion, with scalar results in (12), the modified dipolar electric field (30), and spacetime curvature from a gravitational wave source (46). We found that Lorentz violation modifies the electric field so that the two polarizations are not transverse. For the partial solution we obtained for gravitational wave generation, there are 3 extra polarizations beyond GR. Results can be further studied in various ways. For gravitational waves, one needs a complete evaluation of (41) including the contributions from the wave zone integrals \((h_{\mathcal{W}})_{\mu\nu}\). Note that we have not considered in detail the effects of the Nambu-Goldstone and massive modes that may occur from a spontaneous symmetry breaking scenario [68; 69; 70]. 
A general description of the dynamical terms for the \(\overline{s}_{\mu\nu}\) coefficients, when they arise as a vacuum expectation value of a dynamical tensor \(s_{\mu\nu}\) has been published, but not yet studied in the GW context [57]. A study of the multipole radiation expansion results in section (4) could be carried out, for example looking for new possible observables for Lorentz violation in experiments and observation complementing prior work [12]. Results can also be extended to the nonminimal terms in the EFT framework [71]. ## 7 Acknowledgments This work was supported by the National Science Foundation under grant number 2207734. We wish to thank Jay D. Tasson for valuable comments. NAN was financed by CNES and acknowledges support by PSL/Observatoire de Paris.
2306.04521
On large regular (1,1,k)-mixed graphs
An $(r,z,k)$-mixed graph $G$ has every vertex with undirected degree $r$, directed in- and out-degree $z$, and diameter $k$. In this paper, we study the case $r=z=1$, proposing some new constructions of $(1,1,k)$-mixed graphs with a large number of vertices $N$. Our study is based on computer techniques for small values of $k$ and the use of graphs on alphabets for general $k$. In the former case, the constructions are either Cayley or lift graphs. In the latter case, some infinite families of $(1,1,k)$-mixed graphs are proposed with diameter of the order of $2\log_2 N$.
C. Dalfó, G. Erskine, G. Exoo, M. A. Fiol, N. López, A. Messegué, J. Tuite
2023-06-07T15:30:46Z
http://arxiv.org/abs/2306.04521v1
# On large regular \((1,1,k)\)-mixed graphs ###### Abstract An \((r,z,k)\)-mixed graph \(G\) has every vertex with undirected degree \(r\), directed in- and out-degree \(z\), and diameter \(k\). In this paper, we study the case \(r=z=1\), proposing some new constructions of \((1,1,k)\)-mixed graphs with a large number of vertices \(N\). Our study is based on computer techniques for small values of \(k\) and the use of graphs on alphabets for general \(k\). In the former case, the constructions are either Cayley or lift graphs. In the latter case, some infinite families of \((1,1,k)\)-mixed graphs are proposed with diameter of the order of \(2\log_{2}N\). _Keywords:_ Mixed graph, Moore bound, Cayley graph, Lift graph. _Mathematics Subject Classification:_ 05C50, 05C20, 15A18, 20C30. ## 1 Introduction The relationship between vertices or nodes in interconnection networks can be undirected or directed depending on whether the communication between nodes is two-way or only one-way. Mixed graphs arise in this case and in many other practical situations, where both kinds of connections are needed. Urban street networks are perhaps the most popular ones. Thus, a _mixed graph_ \(G=(V,E,A)\) has a set \(V=V(G)=\{u_{1},u_{2},\ldots\}\) of vertices, a set \(E=E(G)\) of edges or unordered pairs of vertices \(\{u,v\}\), for \(u,v\in V\), and a set \(A=A(G)\) of arcs, directed edges, or ordered pair of vertices \(uv\equiv(u,v)\). For a given vertex \(u\), its _undirected degree_\(r(u)\) is the number of edges incident to vertex \(u\). Moreover, its _out-degree_\(z^{+}(u)\) is the number of arcs emanating from \(u\), whereas its _in-degree_\(z^{-}(u)\) is the number of arcs going to \(u\). If \(z^{+}(u)=z^{-}(u)=z\) and \(r(u)=r\), for all \(u\in V\), then \(G\) is said to be a _totally regular_\((r,z)\)-mixed graph with _whole degree_\(d=r+z\). The distance from vertex \(u\) to vertex \(v\) is denoted by \(\operatorname{dist}(u,v)\). Notice that, when the out-degree \(z\) is not zero, the distance \(\operatorname{dist}(u,v)\) is not necessarily equal to the distance \(\operatorname{dist}(v,u)\). If the mixed graph \(G\) has diameter \(k\), its _distance matrix_\(\mathbf{A}_{i}\), for \(i=0,1,\ldots,k\), has entries \((\mathbf{A}_{i})_{w}=1\) if \(\operatorname{dist}(u,v)=i\), and \((\mathbf{A}_{i})_{uv}=0\) otherwise. So, \(\mathbf{A}_{0}=\mathbf{I}\) (the identity matrix) and \(\mathbf{A}_{1}=\mathbf{A}\) (the adjacency matrix of \(G\)). Mixed graphs were first considered in the context of the degree/diameter problem by Bosak [2]. The _degree/diameter problem_ for mixed graphs reads as follows: Given three natural numbers \(r\), \(z\), and \(k\), find the largest possible number of vertices \(N(r,z,k)\) in a mixed graph \(G\) with maximum undirected degree \(r\), maximum directed out-degree \(z\), and diameter \(k\). For mixed graphs, an upper bound for \(N(r,z,k)\), known as a _Moore_(_-like_) _bound_\(M(r,z,k)\), was obtained by Buset, El Amiri, Erskine, Miller, and Perez-Roses [3] (also by Dalfo, Fiol, and Lopez [8] with an alternative computation). 
**Theorem 1.1** (Buset, El Amiri, Erskine, Miller, and Perez-Roses [3]).: _The Moore bound for an \((r,z)\)-mixed graph with diameter \(k\) is_ \[M(r,z,k)=A\frac{u_{1}^{k+1}-1}{u_{1}-1}+B\frac{u_{2}^{k+1}-1}{u_{2}-1}, \tag{1}\] _where_ \[u_{1} =\frac{z+r-1-\sqrt{v}}{2},\qquad u_{2}=\frac{z+r-1+\sqrt{v}}{2},\] \[A =\frac{\sqrt{v}-(z+r+1)}{2\sqrt{v}},\qquad B=\frac{\sqrt{v}+(z+r+ 1)}{2\sqrt{v}},\] \[v =(z+r)^{2}+2(z-r)+1.\] This bound applies whether or not \(G\) is totally regular, but it is elementary to show that a Moore mixed graph must be totally regular. Thus, a Moore \((r,z,k)\)-mixed graph is a graph with diameter \(k\), maximum undirected degree \(r\geq 1\), maximum out-degree \(z\geq 1\), and order given by \(M(r,z,k)\). An example of a Moore \((3,1,2)\)-mixed graph is the Bosak graph [2], see Figure 1. Bosak [2] gave a necessary condition for the existence of a mixed Moore graph with diameter \(k=2\). Such graphs have the property that for any ordered pair \((u,v)\) of vertices, there is a _unique walk of length at most \(2\)_ between them. In general, there are infinitely many pairs \((r,z)\) satisfying Bosak necessary condition for which the existence of a mixed Moore graph is not known yet. Nguyen, Miller, and Gimbert [24] proved the existence and unicity of some Moore mixed graphs of diameter \(2\). Lopez, Miret, and Fernandez, [20] proved that there is no Moore \((r,z,2)\)-mixed graph when the pair \((r,z)\) equals \((3,3)\), \((3,4)\), or \((7,2)\). For diameter \(k\geq 3\), it was proved that mixed Moore graphs do not exist, see Nguyen, Miller, and Gimbert [24]. In the case of total regularity, this result also follows from the improved bound in Dalfo, Fiol, and Lopez [8], where it was shown that the order \(N\) of an \((r,z)\)-regular mixed graph G with diameter \(k\geq 3\) satisfies \[N\leq M(r,z,k)-r, \tag{2}\] where \(M(r,z,k)\) is given by (1). In general, a mixed graph with maximum undirected degree \(r\), maximum directed out-degree \(z\), diameter \(k\), and order \(N=M(r,z,k)-\delta\) is said to have _defect_\(\delta\). A mixed graph with defect one is called an _almost mixed Moore graph_. Thus, the result in (2) can be rephrased by saying that \(r\) is a lower bound for the defect of the mixed graph. In the case \(r=z=1\), such a result was drastically improved by Tuite and Erskine [27] by showing that a lower bound \(\delta(k)\) for the defect of a \((1,1)\)-regular mixed graph with diameter \(k\geq 1\) satisfies the recurrence \[\delta(k+6)=\delta(k)+f_{k-1}+f_{k+4}, \tag{3}\] where the initial values of \(\delta(k)\), for \(k=1,\ldots,6\), are \(0,1,1,2,3,5\), and \(f_{k}\) are the Fibonacci numbers starting from \(f_{0}=f_{1}=1\), namely, \(1,1,2,3,5,8,13,21,\ldots\) Alternatively, starting from \(\delta(1)=0\) and \(\delta(2)=1\), we have \(\delta(k+2)=\delta(k+1)+\delta(k)\) if \(k+2\not\equiv 1,2\ (\mbox{mod}\ 6)\), and \(\delta(k+2)=\delta(k+1)+\delta(k)+1\), otherwise. For more results on degree/diameter problem for graphs, digraphs, and mixed graphs, see the comprehensive survey by Miller and Siran [23]. For more results on mixed graphs, see Buset, Lopez, and Miret [4], Dalfo [5], Dalfo, Fiol, and Lopez [7, 8, 9], Erskine [13], Jorgensen [18], Lopez, Perez-Roses, and Pujolas [21], Nguyen, Miller, and Gimbert [24], and Tuite and Erskine [26]. In this paper, we deal with \((1,1,k)\)-mixed graphs, that is, mixed graphs with undirected degree \(r=1\), directed out-degree \(z=1\), and with diameter \(k\). 
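For concreteness, the bound (1) and the defect recurrence (3) are easy to tabulate; the following sketch (ours, not part of the paper) does so for small \(k\), using the alternative form of the recurrence given after (3).

```python
from math import sqrt

def moore_mixed(r, z, k):
    """Moore-like bound M(r,z,k) of Theorem 1.1, eq. (1)."""
    v = (z + r) ** 2 + 2 * (z - r) + 1
    u1 = (z + r - 1 - sqrt(v)) / 2
    u2 = (z + r - 1 + sqrt(v)) / 2
    A = (sqrt(v) - (z + r + 1)) / (2 * sqrt(v))
    B = (sqrt(v) + (z + r + 1)) / (2 * sqrt(v))
    return A * (u1 ** (k + 1) - 1) / (u1 - 1) + B * (u2 ** (k + 1) - 1) / (u2 - 1)

def defect_lower_bound(kmax):
    """Lower bound delta(k) on the defect of a (1,1)-regular mixed graph, eq. (3)."""
    delta = [0, 0, 1, 1, 2, 3, 5]            # delta[1..6] as in the text; delta[0] unused
    for k in range(7, kmax + 1):
        extra = 1 if k % 6 in (1, 2) else 0  # the "+1" cases of the alternative recurrence
        delta.append(delta[k - 1] + delta[k - 2] + extra)
    return delta

print([round(moore_mixed(1, 1, k)) for k in range(8)])  # 1, 3, 6, 11, 19, 32, 53, 87
print(defect_lower_bound(12)[1:])
```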
Our study is based on computer techniques for small values of \(k\), and the use of graphs on alphabets for general \(k\). In the former case, the constructions are either Cayley or lift graphs. In the latter case, some infinite families of \((1,1,k)\)-mixed graphs are proposed with \(N\) vertices and diameter \(k\) of the order of \(2\log_{2}N\). Figure 1: The Bosák \((3,1)\)-graph with diameter \(k=2\) and \(N=18\) vertices. Most of the proposed constructions are closely related to line digraphs. Given a digraph \(G\), its line digraph \(LG\) has vertices representing the arcs of \(G\), and vertex \(x_{1}x_{2}\) is adjacent to vertex \(y_{1}y_{2}\) in \(LG\) if the arc \((x_{1},x_{2})\) is adjacent to the arc \((y_{1},y_{2})\) in \(G\), that is, if \(y_{1}=x_{2}\). The \(k\)-iterated line digraph is defined recursively as \(L^{k}G=L^{k-1}(LG)\). Let \(K_{d}^{+}\) be the complete symmetric digraph with \(d\) vertices with loops, and \(K_{d+1}\) the complete symmetric digraph on \(d+1\) vertices (in these complete graphs each edge is seen as a digon, or pair of opposite arcs). Then, two well-known families of iterated line digraphs are the De Bruijn digraphs \(B(d,k)=L^{k}(K_{d}^{+})\), and the Kautz digraphs \(K(d,k)=L^{k}(K_{d+1})\). Both \(B(d,k)\) and \(K(d,k)\) have diameter \(k\), but De Bruijn digraphs have \(d^{k}\) vertices, whereas Kautz digraphs have \(d^{k}+d^{k-1}\) vertices. See, for instance, Fiol, Yebra, and Alegre [15], and Miller and Siran [23]. ## 2 Some infinite families of \((1,1,k)\)-mixed graphs In this section, we propose some infinite families of \((1,1,k)\)-mixed graphs with exponential order. All of them have vertices with out-degree \(z=1\). When, moreover, all the vertices have in-degree \(1\), we refer to them as \((1,1,k)\)-regular mixed graphs. If we denote by \(f(r,z,k)\) the order of a largest \((r,z,k)\)-mixed graph, which is upper bounded by the (exponential) Moore bound \(M(r,z,k)\), all the described graphs provide exponential lower bounds for \(f(1,1,k)\). Let us first give some basic properties of \((1,1,k)\)-mixed graphs. It is readily seen that the Moore bound satisfies the Fibonacci-type recurrence \[M(1,1,k)=M(1,1,k-1)+M(1,1,k-2)+2,\] starting from \(M(1,1,0)=1\) and \(M(1,1,1)=3\). From this, or just applying (1), we obtain that the corresponding Moore bound is \[M(1,1,k)=\left(1-\frac{2}{\sqrt{5}}\right)\left(\frac{1-\sqrt{5}}{2}\right)^{k+1}+\left(1+\frac{2}{\sqrt{5}}\right)\left(\frac{1+\sqrt{5}}{2}\right)^{k+1}-2. \tag{4}\] The obtained values for \(k=2,\ldots,16\) are shown in Table 4. Then, for large values of \(k\), the Moore bound \(M(1,1,k)\) is of the order of \[M(1,1,k)\sim\left(1+\frac{2}{\sqrt{5}}\right)\left(\frac{1+\sqrt{5}}{2}\right)^{k+1}\approx 1.8944\cdot 1.6180^{k+1}.\] ### 2.1 The mixed graphs \(E(n)\) The first construction is the simplest one. Given \(n\geq 2\), the graph \(E(n)\) is defined as follows. As before, label the Fibonacci numbers so that \(f_{0}=f_{1}=1\). Consider a Moore tree of radius \(n\) with base vertex \(u_{0}\). The set of vertices at distance \(i\) from \(u_{0}\) is referred to as the vertices at _level_ \(i\). There are \(f_{i+1}\) vertices at level \(i\). We can partition these vertices into two sets: \(V_{i}\) contains the \(f_{i}\) vertices at level \(i\) incident through an arc from level \(i-1\), and \(W_{i}\) contains the \(f_{i-1}\) vertices at level \(i\) incident through an edge from level \(i-1\). To complete the graph, we must consider two cases, depending on whether \(f_{n}\) is even or odd.
If \(f_{n}\) is even, then we add a matching among the vertices of \(V_{n}\) and add an arc from each vertex in level \(n\) to \(u_{0}\). In this case, the diameter is \(2n\). Note that the maximum distance occurs from a level-1 vertex to a level-t vertex on the opposite edge (where the two edges are based on the two level-1 vertices). If \(f_{n}\) is odd, then we must modify the construction slightly. In this case, when we add a matching among the vertices of \(V_{n}\), there is one vertex \(v_{1}\) of \(V_{n}\) missed by the matching. So, we must add another vertex \(v_{2}\), join this vertex to \(v_{1}\) by an edge, and then add an arc from \(v_{2}\) to the base vertex \(u_{0}\). All other vertices at level \(n\) have arcs directly to \(u_{0}\). In this case, the diameter is \(2n+1\), where the maximum distance occurs from a level-1 vertex (in the edge _not_ containing \(v_{1}\)) to \(v_{2}\). So, the graph \(E(n)\) has diameter \(2n\) or \(2n+1\), and order \(M(1,1,n)\) or \(M(1,1,n)+1\). This bound is very weak for small diameters but at least it gives a first explicit construction that gives an exponential lower bound. In the following subsections, we show that we can do it better. ### The mixed graphs \(F(n)\) Given \(n\geq 2\), the \((1,1,k)\)-mixed graph \(F(n)\) has vertices labeled with \(a:x_{1}\ldots x_{n}\), where \(a\in\{+1,-1\}\), with \(x_{i}\in\mathbb{Z}_{3}\), and \(x_{i+1}\neq x_{i}\) for \(i=1,\ldots,n-1\). The adjacencies are as follows: * \(a:x_{1}x_{2}\ldots x_{n}\,\sim\,-a:x_{1}x_{2}\ldots x_{n}\) (edges); * \(a:x_{1}x_{2}\ldots x_{n}\,\to\,a:x_{2}x_{3}\ldots x_{n}(x_{n}+a)\) (arcs). Thus, \(F(n)\) has \(3\cdot 2^{n}\) vertices, it is an out-regular graph but not in-regular since vertices \(a:x_{1}x_{2}\ldots x_{n}\) and \(a:x_{1}^{\prime}x_{2}\ldots x_{n}\) are both adjacent to \(a:x_{2}\ldots(x_{n}+a)\). The mixed graph \(F(3)\) is shown in Figure 2. It is easily checked that the mapping \[a|x_{1}x_{2}\ldots x_{n}\quad\mapsto\quad-a|\overline{x_{1}}\,\overline{x_{2} }\ldots\overline{x_{n}}, \tag{5}\] where \(\overline{0}=1\), \(\overline{1}=0\), and \(\overline{2}=2\), is an automorphism of \(F(n)\). This is because \(\overline{x_{n}+a}=\overline{x_{n}}-a\). **Proposition 2.1**.: _The diameter of the mixed graph \(F(n)\) is \(k=2n\)._ Proof.: Let us see that there is a path of length at most \(2n\) from vertex \(\boldsymbol{x}=a|x_{1}\ldots x_{n}\) to vertex \(\boldsymbol{y}=b|y_{1}\ldots y_{n}\). Taking into account the automorphism in (5), we can assume that \(a=+1\). Notice that, with at most two steps, depending on the values of \(a\), \(x_{n}\), and \(y_{1}\) (at the beginning) or \(a\), \(y_{i}\), and \(y_{i+1}\) (in the sequel), we can add a new digit of \(\mathbf{y}\). Thus, in principle, we would need at most \(2n\) steps but, possibly, one last step to fix the first digit to the one of \(\mathbf{y}\) (for example, \(b\)). However, in what follows, we show that the first two digits \(y_{1}\) and \(y_{2}\) can be 'placed', so reaching a vertex of the form \(a^{\prime}|\ldots y_{1}y_{2}\), with at most 3 steps. * If \(y_{1}=x_{n}\), the first step is not necessary. * If \(y_{1}=x_{n}+1\), go through the arc \(+1|x_{1}\ldots x_{n}\to+1|x_{2}\ldots x_{n}y_{1}\). 
* If \(y_{1}=x_{n}-1\) and \(y_{2}=y_{1}-1\), go through the edge and two arcs \[+1|x_{1}\ldots x_{n}\sim-1|x_{1}x_{2}\ldots x_{n}\to-1|x_{2}\ldots x_{n}y_{1} \to-1|x_{3}\ldots x_{n}y_{1}y_{2}.\] * If \(y_{1}=x_{n}-1(=x_{n}+2)\) and \(y_{2}=y_{1}+1\), go through the three arcs \[+1|x_{1}\ldots x_{n}\to+1|x_{2}\ldots x_{n}x_{n}+1\to+1|x_{3} \ldots x_{n}+1x_{n}+2\] \[=+1|x_{3}\ldots x_{n}+1y_{1}\to+1|x_{4}\ldots x_{n}y_{1}y_{2}.\] Thus, to reach \(\mathbf{y}\), we need at most \(3+2(n-2)+1=2n\) steps. Finally, it is not difficult to find vertices that are at distance \(2n\). For instance, for \(n\) odd, go from \(\mathbf{x}=+1|0101\ldots 0\) to \(\mathbf{y}=-1|2020\ldots 2\); and, for \(n\) even, go from \(\mathbf{x}=+1|1010\ldots 0\) to \(\mathbf{y}=+1|2020\ldots 0\), #### 2.2.1 A numeric construction An alternative presentation \(F[n]\) of \(F(n)\) is as follows: Given \(n\geq 1\), let \(N^{\prime}=3\cdot 2^{n-1}\) so that the number of vertices of \(F[n]\) is \(2N^{\prime}\). The vertices of \(F[n]\) are labeled as Figure 2: The mixed graph \(F(3)\). \(\alpha|i\), where \(\alpha\in\{1,2\}\), and \(i\in\mathbb{Z}_{N^{\prime}}\). Let \(\overline{1}=2\) and \(\overline{2}=1\). Then, the adjacencies of \(F[n]\) defining the same mixed graph as those in \((i)\) and \((ii)\) are: \[\alpha|i \,\sim\,\overline{\alpha}|i\ (\text{edges}); \tag{6}\] \[\alpha|i \,\to\,\alpha|-2i+\alpha\ (\text{arcs}). \tag{7}\] To show that both constructions give the same mixed graph, \(F[n]\cong F(n)\), define first the mapping \(\pi\) from the two digits \(x_{1}x_{2}\) to \(\mathbb{Z}_{6}\) as follows \[\pi(01)=0,\ \pi(10)=1,\ \pi(12)=2,\ \pi(21)=3,\ \pi(20)=4,\pi(02)=5.\] Then, it is easy to check that, for \(n=2\), the mapping \(\psi\) from the vertices of \(F(2)\) to the vertices of \(F[2]\) defined as \[\psi(a|x_{1}x_{2})=\alpha(a)|\pi(x_{1}x_{2}),\] where \(\alpha(a)=\frac{a+3}{2}\), is an isomorphism from \(F(2)\) to \(F[2]\). (Note that \(\alpha(-1)=1\) and \(\alpha(+1)=2\)). From this, we can use induction. First, let us assume that \(\psi^{\prime}\) is an isomorphism from \(F(n-1)\) to \(F[n-1]\) of the form \[\psi^{\prime}(a|x_{1}x_{2}\ldots x_{n-1})=\alpha(a)|\pi^{\prime}(x_{1}x_{2} \ldots x_{n-1}),\] where the linear mapping \(\alpha\) is defined as above, and \(\pi^{\prime}\) is a mapping from the sequences \(x_{1}x_{2}\ldots x_{n-1}\) to the elements of \(\mathbb{Z}_{N^{\prime}}\), with \(N^{\prime}=3\cdot 2^{n-1}\). Then, we claim that the mapping \(\psi\) from the vertices of \(F(n)\) to the vertices of \(F[n]\) defined as \[\psi(a|x_{1}x_{2}\ldots x_{n})=\alpha(a)|-2\cdot\pi^{\prime}(x_{1}x_{2}\ldots x _{n-1})+\alpha(x_{n}-x_{n-1})\ (\text{mod}\ N), \tag{8}\] where \(N=3\cdot 2^{n}\), is an isomorphism from \(F(n)\) to \(F[n]\). Indeed, since \(\psi^{\prime}\) is an isomorphism from \(F(n-1)\) to \(F[n]\), we have that \(\psi^{\prime}\Gamma=\Gamma\psi^{\prime}\) and \(\psi^{\prime}\Gamma^{+}=\Gamma^{+}\psi^{\prime}\), where \(\Gamma\) and \(\Gamma^{+}\) denote undirected and directed adjacency, respectively. 
Thus, from \[\psi^{\prime}\Gamma(a|x1\ldots x_{n-1}) =\psi^{\prime}(-a|x_{1}\ldots x_{n-1})=\alpha(-a)|\pi^{\prime}(x_ {1}\ldots x_{n-1}),\] \[\Gamma\psi^{\prime}(a|x_{1}\ldots x_{n-1}) =\Gamma(\alpha(a)|\pi^{\prime}(x_{1}\ldots x_{n-1})=\overline{ \alpha(a)}|\pi^{\prime}(x_{1}\ldots x_{n-1}),\] and \[\psi^{\prime}\Gamma^{+}(a|x_{1}\ldots x_{n-1}) =\psi^{\prime}(a|x_{2}\ldots x_{n-1}x_{n-1}+a)=\alpha(a)|\pi^{ \prime}(x_{2}\ldots x_{n-1}x_{n-1}+a),\] \[\Gamma^{+}\psi^{\prime}(a|x_{1}\ldots x_{n-1}) =\Gamma^{+}(\alpha(a)|\pi^{\prime}(x_{1}\ldots x_{n-1}))=\alpha(a) |-2\cdot\pi^{\prime}(x_{1}\ldots x_{n-1})+\alpha(a),\] we conclude that \(\alpha(-a)=\overline{\alpha(a)}\) for every \(a\in\{+1,-1\}\) (as it is immediate to check), and \[\pi^{\prime}(x_{2}\ldots x_{n-1}(x_{n-1}+a))=-2\cdot\pi^{\prime}(x_{1}\ldots x _{n-1})+\alpha(a). \tag{9}\] Now, we can assume that \(a=+1\) (because of the automorphism (5)), and let \(a^{\prime}=x_{n}-x_{n-1}\). Then, since clearly \(\psi\Gamma=\Gamma\psi\), edges map to edges, we focus on proving that the same holds for the arcs, that is, \(\psi^{+}\Gamma=\Gamma\psi^{+}\). With this aim, we need to prove that the following two calculations, where we use (8), give the same result: \[\psi\Gamma^{+}(+1|x_{1}\ldots x_{n}) =\psi(+1|x_{2}\ldots x_{n}x_{n}+1)\] \[=2|-2\cdot\pi^{\prime}(x_{2}\ldots x_{n})+2, \tag{10}\] \[\Gamma^{+}\psi(+1|x_{1}\ldots x_{n}) =\Gamma^{+}(-2\cdot\pi^{\prime}(x_{1}\ldots x_{n-1})+\alpha(a^{ \prime})\] \[=2|4\cdot\pi^{\prime}(x_{1}\ldots x_{n-1})-2\alpha(a^{\prime})+2. \tag{11}\] The required equality follows since, from (9) with \(a^{\prime}\) instead of \(a\), we have \[-2\cdot\pi^{\prime}(x_{2}\ldots x_{n}) =2\cdot\pi^{\prime}(x_{2}\ldots x_{n-1}(x_{n-1}+a^{\prime}))=-2[- 2\cdot\pi^{\prime}(x_{1}\ldots x_{n-1})+\alpha(a^{\prime})]\] \[=4\cdot\pi^{\prime}(x_{1}\ldots x_{n-1})-2\alpha(a^{\prime}).\] In Figure 2, every vertex has been labeled according to both presentations. Using this presentation, we extend (and again prove) Proposition 2.1. **Proposition 2.2**.: _The diameter of \(F(n)\) is \(k=2n\). More precisely, there is a path of length \(n\) or \(n-1\) between any pair of edges \(\alpha|i-\overline{\alpha}|i\) and \(\alpha^{\prime}|i^{\prime}-\overline{\alpha^{\prime}}|i^{\prime}\). Moreover, there is a path of length between \(n-1\) and \(2n\) between any pair of vertices._ Proof.: Let us consider a tree rooted at a pair of vertices of an edge, \(\boldsymbol{u}_{1}=1|i\) and \(\boldsymbol{u}_{2}=2|i\), and suppose the \(n=2r+1\) is odd (the case of even \(n\) is similar). Then, * The vertices at distances \(1,2\) of \(\boldsymbol{u}_{1}\) or \(\boldsymbol{u}_{2}\) are \(\alpha|-2i+1\), \(\alpha|-2i+2\) with \(\alpha=1,2\). * The vertices at distances \(3,4\) of \(\boldsymbol{u}_{1}\) or \(\boldsymbol{u}_{2}\) are \(\alpha|4i\), \(\alpha|4i-1\), \(\alpha|4i-2\) and \(\alpha|4i-3\) with \(\alpha=1,2\). * The vertices at distances \(5,6\) of \(\boldsymbol{u}_{1}\) or \(\boldsymbol{u}_{2}\) are \(\alpha|-8i+1\), \(\alpha|-8i+2\),..., \(\alpha|-8i+8\) with \(\alpha=1,2\). \[\vdots\] * The vertices at distances \(2n-3,2n-2\) of \(\boldsymbol{u}_{1}\) or \(\boldsymbol{u}_{2}\) are \(\alpha|2^{n-1}+r\) with \(r=0,-1,\ldots,-2^{n-1}+1\) and \(\alpha=1,2\). * The vertices at distances \(2n-1,2n\) of \(\boldsymbol{u}_{1}\) or \(\boldsymbol{u}_{2}\) are \(\alpha|-2^{n}+r\) with \(r=1,2,\ldots,2^{n}\) and \(\alpha=1,2\). See Figure 3 for the case of \(F(3)\), which has 24 vertices. 
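For small \(n\), the distance layers above and the value of the diameter are easy to check by computer. The following minimal sketch builds \(F[n]\) directly from the numeric presentation (6)-(7), taking the second coordinate modulo \(N^{\prime}=3\cdot 2^{n-1}\), and computes the diameter by breadth-first search, treating each edge as a pair of opposite arcs; according to Propositions 2.1 and 2.2, the printed diameter should be \(2n\).

```python
from collections import deque

def F_numeric(n):
    """F[n]: vertices (alpha, i), alpha in {1, 2}, i in Z_{N'}, N' = 3*2^(n-1).
    Edges: (alpha, i) ~ (other alpha, i).  Arcs: (alpha, i) -> (alpha, -2i + alpha mod N')."""
    Np = 3 * 2 ** (n - 1)
    vertices = [(a, i) for a in (1, 2) for i in range(Np)]
    out = {}
    for a, i in vertices:
        out[(a, i)] = [(3 - a, i),                 # edge, traversable in both directions
                       (a, (-2 * i + a) % Np)]     # arc, traversable only in this direction
    return vertices, out

def diameter(vertices, out):
    best = 0
    for s in vertices:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in out[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        assert len(dist) == len(vertices)          # every vertex should be reachable
        best = max(best, max(dist.values()))
    return best

for n in range(2, 6):
    V, out = F_numeric(n)
    print(n, len(V), diameter(V, out))   # order 3*2^n; the diameter should equal 2n
```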
Note that, from the pair of vertices \(1|i\) and \(2|i\), the 3-rd and 4-th columns contain all the 'consecutive' vertices of \(F(3)\) from \(\alpha|4i-3\) to \(a|4i+8\), with \(\alpha=1,2\). More precisely, from vertex \(2|i\) (we can fix \(\alpha\) because of the automorphism), we reach all of such vertices with at most 6 steps, except \(2|4i+1\) (in boldface, on the top of the 4-th column), which would require the 7 adjacencies '\(-\to-\to-\to-\)'. But this vertex is reached following the path '\(\to\to\to-\to-\)' (in boldface, in the 5-th column). In general, using the notation \(f(\alpha|i)=\alpha|-2i+\alpha\) and \(g(\alpha|i)=\overline{\alpha}|i\), we have the following: Let \(N^{\prime}=3\cdot 2^{n-1}\). Then, * If \(n\) is even, then the exception vertex is \(g(fg)^{n}(2|i)(\mbox{mod }N)=1|2^{n}i\)\((2n+1\) steps) but \((gf)^{n-1}f^{2}(2|i)(\mbox{mod }N)=1|2^{n}i\)\((2n\) steps). * If \(n\) is odd, then the exception vertex is \(g(fg)^{n}(2|i)(\mbox{mod }N)=2|2^{n-1}i+1\)\((2n+1\) steps) but \((gf)^{n-1}f^{2}(2|i)(\mbox{mod }N)=2|2^{n-1}i+1\)\((2n\) steps). ### The mixed graphs \(F^{*}(n)\) A variation of the mixed graphs \(F(n)\) allows us to obtain \((1,1,k)\)-regular mixed graphs that we denote \(F^{*}(n)\). Given \(n\geq 2\), the \((1,1,k)\)-regular mixed graph \(F^{*}(n)\) has vertices labeled as those of \(F(n)\). That is, \(a|x_{1}\ldots x_{n}\), where \(a\in\{+1,-1\}\) and Figure 3: The paths of length at most 6 in \(F(3)\) from the vertices of the edge \(\{1|i,2|i\}\). \(x_{i}\in\mathbb{Z}_{3}\). Now the adjacencies are as follows: \[a|x_{1}x_{2}\ldots x_{n} \,\sim\,-a|x_{1}x_{2}\ldots x_{n}\text{ (edges)}; \tag{12}\] \[a|x_{1}x_{2}\ldots x_{n} \,\to\,a|x_{2}x_{3}\ldots x_{n}(x_{n}+a(x_{2}-x_{1}))\text{ (arcs)}, \tag{13}\] where, when computed modulo \(3\), we take \(x_{2}-x_{1}\in\{+1,-1\}\). Hence, the vertices \(a|x_{1}x_{2}\ldots x_{n}\) and \(a|x_{1}^{\prime}x_{2}\ldots x_{n}\), with \(x_{1}^{\prime}\neq x_{1}\), are adjacent to different vertices of the form \(a|x_{2}\ldots(x_{n}\pm 1)\). For example, the mixed graph \(F^{*}(3)\) is shown in Figure 4. #### 2.3.1 An alternative presentation To study some properties of \(F^{*}(n)\), it is useful to work with the following equivalent presentation: The vertices are now labeled as \(a|b:a_{1}\ldots a_{n-1}\), where \(a,a_{i}\in\{+1,-1\}\) for \(i=1,\ldots,n-1\), and \(b\in\mathbb{Z}_{3}\). Then, the adjacencies (12) and (13) become \[a|b:a_{1}a_{2}\ldots a_{n-1} \,\sim\,-a|b:a_{1}a_{2}\ldots a_{n-1}\text{ (edges)}; \tag{14}\] \[a|b:a_{1}a_{2}\ldots a_{n-1} \,\to\,a|b+a_{1}:a_{2}a_{3}\ldots a_{n-1}\,aa_{1}\text{ (arcs)}. \tag{15}\] Notice that a vertex \(a|x_{1}x_{2}\ldots x_{n}\) with the old presentation is now labeled as \(a|b:a_{1}\ldots a_{n-1}\) with \(b=x_{1}\) and \(a_{i}=x_{i+1}-x_{i}\) for \(i=1,\ldots,n-1\). From this, it is readily checked that the 'new' adjacencies are as mentioned. **Proposition 2.3**.: _The group of automorphisms of \(F^{*}(n)\) is isomorphic to the dihedral group \(D_{3}\)._ Proof.: Using the new notation, let us first show that the following mappings, \(\Phi\) and \(\Psi\), are automorphisms of \(F^{*}(n)\): \[\Phi(a|b:a_{1}a_{2}\ldots a_{n-1}) =a|\phi(b):\overline{a_{1}}\,\overline{a_{2}}\ldots\overline{a_{ n-1}}; \tag{16}\] \[\Psi(a|b:a_{1}a_{2}\ldots a_{n-1}) =a|b+1:a_{1}a_{2}\ldots a_{n-1}, \tag{17}\] Figure 4: The mixed graph \(F^{*}(3)\). where \(\phi(0)=1\), \(\phi(1)=0\), \(\phi(2)=2\), and \(\overline{a}_{i}=-a_{i}\) for \(i=1,\ldots,n-1\). 
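Before the verification by hand that follows, the claim can also be checked computationally for small \(n\). The sketch below encodes \(F^{*}(n)\) through the presentation (14)-(15) and tests that \(\Phi\) and \(\Psi\) commute with both adjacency maps (edges and arcs), which is exactly the condition for being automorphisms.

```python
from itertools import product

def F_star(n):
    """F*(n) in the presentation (14)-(15): vertices a|b:a_1...a_{n-1},
    with a, a_i in {+1, -1} and b in Z_3."""
    vertices = [(a, b, t) for a in (1, -1) for b in range(3)
                for t in product((1, -1), repeat=n - 1)]
    edge = lambda v: (-v[0], v[1], v[2])                                          # (14)
    arc = lambda v: (v[0], (v[1] + v[2][0]) % 3, v[2][1:] + (v[0] * v[2][0],))    # (15)
    return vertices, edge, arc

phi_b = {0: 1, 1: 0, 2: 2}
Phi = lambda v: (v[0], phi_b[v[1]], tuple(-x for x in v[2]))   # the mapping (16)
Psi = lambda v: (v[0], (v[1] + 1) % 3, v[2])                   # the mapping (17)

for n in range(2, 7):
    V, edge, arc = F_star(n)
    for f in (Phi, Psi):
        # f is an automorphism iff it commutes with both adjacency maps
        assert all(f(edge(v)) == edge(f(v)) and f(arc(v)) == arc(f(v)) for v in V)
    print(n, len(V), "Phi and Psi preserve all adjacencies")
```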
To prove that \(\Phi\) is an automorphism of \(F^{*}(n)\), observe that the vertex in (16) is adjacent, through an edge, to \[\overline{a}|\phi(b):\overline{a_{1}}\,\overline{a_{2}}\ldots\overline{a_{n- 1}}=\Phi(\overline{a}|b:a_{1}a_{2}\ldots a_{n-1}),\] and, through an arc, to \[a|\phi(b)+\overline{a_{1}}:\overline{a_{2}}\ldots\overline{a_{n-1}}\,a \overline{a_{1}}=\Phi(a|b+a_{1}:a_{2}a_{3}\ldots a_{n-1}\,aa_{1}),\] where the last equality holds since \(\phi(b+a_{1})=\phi(b)+\overline{a_{1}}\), and \(a\overline{a_{1}}=\overline{aa_{1}}\) for every \(b\in\mathbb{Z}_{3}\) and \(a,a_{1}\in\{+1,-1\}\). Similarly, we can prove that \(\Psi\) is also an automorphism of \(F^{*}(n)\). Clearly, \(\Phi\) is involutive, and \(\Psi\) has order three. Moreover, \((\Phi\Psi)^{2}=\mathrm{id}\) (the identity). Then, the automorphism group \(\mathrm{Aut}(F^{*}(n))\) must contain the subgroup \(\langle\Phi,\Psi\rangle=D_{3}\). It is easy to see that the graph \(F^{*}(n)\) has exactly three digons between pairs of vertices of the form \(-1:xyxy\ldots xy\) and \(-1:yxyx\ldots yx\) when \(n\) is even, or \(+1:xyxy\ldots x\) and \(+1:yxyx\ldots y\) when \(n\) is odd; see again Figure 4. Thus, any automorphism of \(F^{*}(n)\) must interchange these digons; hence, the automorphism group has at most \(3!=6\) elements. Consequently, \(\mathrm{Aut}(F^{*}(n))\cong D_{3}\cong S_{3}\), as claimed. Before giving the diameter of \(F^{*}(n)\), we show that, for every vertex \(\boldsymbol{u}\), there is only a possible vertex \(\boldsymbol{v}\) at distance \(2n+1\) from \(\boldsymbol{u}\). Suppose first that \(n\) is even (the case of odd \(n\) is similar). It is clear that, excepting possibly one case, from vertex \(\boldsymbol{u}=a|b:a_{1}a_{2}\ldots a_{n-1}\) to vertex \(\boldsymbol{v}=a^{\prime}|b^{\prime}:y_{1}y_{2}\ldots y_{n-1}\), there is a path with at most \(2n\) steps of the form \(-\to-\to\cdots-\to\), where '\(-\)' stands for '\(\sim\)' (edge) or '\(\emptyset\)' (nothing), and '\(\to\)' represents an arc. The exception occurs when all the edges of the path are necessary. 
That is: * If \(b^{\prime}=b+\Sigma+\overline{a}\) (where \(\Sigma=\sum_{i=1}^{n-1}a_{i}\), and so that \(b^{\prime}\neq b+\Sigma\)), then the first two steps are \[a|b:a_{1}a_{2}\ldots a_{n-1}\ \sim\ \overline{a}|b:a_{1}a_{2}\ldots a_{n-1}\ \to\ \overline{a}|b+a_{1}:a_{2}a_{3}\ldots a_{n-1}\,\overline{a}a_{1}.\] * If \(y_{1}=aa_{2}\), then the next two steps are \[\overline{a}|b+a_{1}:a_{2}a_{3}\ldots a_{n-1}\,\overline{a}a_{1} \sim\ a|b+a_{1}:a_{2}a_{3}\ldots a_{n-1}\,\overline{a}a_{1}\] \[\to\ a|b+a_{1}+a_{2}:a_{3}\ldots a_{n-1}\,\overline{a}a_{1}\,aa_{ 2}.\] * If \(y_{1}=\overline{a}a_{3}\), then the next two steps are \[a|b+a_{1}+a_{2}:a_{3}\ldots a_{n-1}\,\overline{a}a_{1}\,aa_{2} \sim\ \overline{a}|b+a_{1}+a_{2}:a_{3}\ldots a_{n-1}\,\overline{a}a_{1}\,aa_{2}\] \[\to\ \overline{a}|b+a_{1}+a_{2}+a_{3}:a_{4}\ldots a_{n-1}\, \overline{a}a_{1}\,aa_{2}\,\overline{a}a_{3}.\] \(\vdots\) * If \(y_{n-1}=a\overline{a}a_{1}=-a_{1}\), then the last two steps are \[\overline{a}|b+\Sigma:\overline{a}a_{1}\,aa_{2}\,\overline{a}a_{3} \ldots\overline{a}a_{n-1} \sim\ a|b+\Sigma:\overline{a}a_{1}\,aa_{2}\ldots\overline{a}a_{n-1}\] \[\to\ a|b+\Sigma+\overline{a}a_{1}:aa_{2}\ldots\overline{a}a_{n-1} \,\overline{a_{1}}\] \[=a|b^{\prime}:y_{1}y_{2}\ldots y_{n-1}.\] Thus, if \(a\neq a^{\prime}\) (\(a^{\prime}=\overline{a}\)), the vertex \[\boldsymbol{v}=\overline{a}|b+\Sigma+\overline{a}a_{1}:aa_{2}\,\overline{a}a_ {3}\ldots\overline{a}a_{n-1}\,\overline{a_{1}}\] is not reached from \(\boldsymbol{u}\) in this way. Similarly, if \(n\) is odd, the exception is the vertex \[\boldsymbol{v}=a|b+\Sigma+\overline{a}a_{1}:aa_{2}\,\overline{a}a_{3}\ldots aa _{n-1}\,\overline{a_{1}}.\] ### The mixed graphs \(F^{\prime}(n)\) If necessary, the three digons of \(F^{*}(n)\) can be removed and replaced by three new edges of the form \[+1:xyxy\ldots xy\ \sim\ +1:yxyx\ldots yx (n\ \text{even}),\] \[-1:xyxy\ldots yx\ \sim\ -1:yxyx\ldots xy (n\ \text{odd}).\] So, we obtain the new mixed graph \(F^{\prime}(n)\), with \(N=3\cdot 2^{n}-6\) vertices and diameter \(k\leq 2n\). More precisely, \(F^{\prime}(2)\) is isomorphic to the Kautz digraph \(K(2,2)\) with \(N=6\) vertices and diameter \(k=2\); and when \(n\in\{3,4\}\), the mixed graph \(F^{\prime}(n)\) has diameter \(k=2n-1\). For instance, the mixed graph \(F^{\prime}(4)\), with \(N=42\) vertices and diameter \(k=7\), is shown in Figure 5. In all the other cases, when \(n\geq 5\), computational results seem to show that the diameter of \(F^{\prime}(n)\) is always \(k=2n\). ### The mixed graphs \(G(n)\) We define a \((1,1,k)\)-regular mixed graph \(G(n)\), for \(n\geq 2\), as follows: the vertices are of the form \(x_{0}|x_{1}\ldots x_{n}\), where \(x_{i}\in\mathbb{Z}_{2}\) for \(i=0,1,\ldots,n\). More precisely, the vertices are: * For any \(n\): \(1|00\ldots 0\) and \(1|11\ldots 1\); * For odd \(n\): \(0|0101\ldots 0\) and \(0|1010\ldots 1\); * For even \(n\): \(1|0101\ldots 01\) and \(1|1010\ldots 10\); * For the other vertices, \(0|x_{1}\ldots x_{n}\) and \(1|x_{1}\ldots x_{n}\), with \(x_{i}\in\mathbb{Z}_{2}\). So, the number of vertices of \(G(n)\) is \(2^{n+1}-4\). The adjacencies (with arithmetic modulo 2) through edges are: * For any \(n\): \(1|00\ldots 0\,\sim\,1|11\ldots 1\); * For odd \(n\): \(1|0101\ldots 0\,\sim\,1|1010\ldots 1\); * For even \(n\): \(0|0101\ldots 01\,\sim\,0|1010\ldots 10\); * For the other vertices, \(x_{0}|x_{1}\ldots x_{n}\,\sim\,(x_{0}+1)|x_{1}\ldots x_{n}\). 
The adjacencies through arcs are: * \(x_{0}|x_{1}\ldots x_{n}\,\to\,x_{0}|x_{2}\ldots x_{n}(x_{1}+x_{0})\). The graph \(G(n)\) is an in- and out-regular mixed graph with \(r=z=1\). Its only nontrivial automorphism is the one that sends \(x_{0}|\boldsymbol{x}=x_{0}|x_{1}x_{2}x_{3}\ldots\) to \(x_{0}|\overline{\boldsymbol{x}}=x_{0}|\overline{x_{1}}\,\overline{x_{2}}\, \overline{x_{3}}\ldots\), where \(\overline{x_{i}}=x_{i}+1\) for \(i=1,2,3,\ldots\) In Figure 6, we show the mixed graph \(G(3)\). Looking at the results for \(n\leq 12\) obtained by computer, we are led to conjecture that the diameter of \(G(n)\) is \(k=2n-1\). At first sight, the proof of this result seems to be involved, although we managed to prove the following. **Proposition 2.4**.: _The diameter of \(G(n)\) is at most \(2n\)._ Proof.: Consider the digraph \(G^{+}(n)\) defined by considering **all**\(2^{n+1}\) vertices of the form \(0|x_{1}\ldots x_{n}\) and \(1|x_{1}\ldots x_{n}\), with \(x_{i}\in\mathbb{Z}_{2}\), with undirected adjacencies as in \((iv)\), and directed adjacencies as in \((v)\). Then, \(G^{+}(n)\) has the self-loops at vertices \(0|00\ldots 0\) and \(0|11\ldots 1\) and one digon (or two opposite arcs) between \(0|0101\ldots 01\) and \(0|1010\ldots 10\) for even \(n\), and \(0|0101\ldots 0\) and \(0|1010\ldots 1\) for odd \(n\). In fact, Figure 5: The mixed graph \(F^{\prime}(4)\) with \(42\) vertices and diameter \(7\). if every edge of \(G^{+}(n)\) is 'contracted' to a vertex, what remains is the De Bruijn digraph \(B(2,n)\), with \(2^{n}\) vertices and diameter \(n\). Moreover, notice that \(G(n)\) is obtained by removing the above four vertices and adding the edges in \((i)\), \((ii)\), and \((iii)\). By way of examples, Figure 7 shows the graph \(G^{+}(2)\), whereas Figure 8 shows the mixed graph \(G^{+}(3)\) 'hanging' from a vertex with eccentricity \(2n=6\). Consequently, since the diameter of \(G(n)\) is upper bounded by the diameter of \(G^{+}(n)\), we concentrate on proving that the diameter of \(G^{+}(n)\) is \(2n\) for \(n>1\) (\(G(1)\) has diameter \(3\)). The proof is constructive because we show a walk of length at most \(2n\) between any pair of vertices. To this end, we take the following steps: 1. There is a walk of length at most \(2n\) from vertex \(x_{0}|\mathbf{x}=x_{0}|x_{1}x_{2}\ldots x_{n}\) to vertex \((x_{n}+y_{n})|y_{1}y_{2}\ldots y_{n}\). Indeed, as \(x_{i}+x_{i}=0\) for any value of \(x_{i}\), we get \[x_{0}|x_{1}x_{2}x_{3}\ldots x_{n} \sim (x_{1}+y_{1})|x_{1}x_{2}x_{3}\ldots x_{n}\to(x_{1}+y_{1})|x_{2}x_ {3}\ldots x_{n}y_{1}\] (18) \[\sim (x_{2}+y_{2})|x_{2}x_{3}\ldots x_{n}y_{1}\to(x_{2}+y_{2})|x_{3} \ldots x_{n}y_{1}y_{2}\] \[\vdots\] \[\sim (x_{n}+y_{n})|x_{n}y_{1}y_{2}y_{3}\ldots y_{n-1}\to(x_{n}+y_{n}) |y_{1}y_{2}\ldots y_{n}.\] Thus, the initial vertex \(x_{0}|x\) and the _step pattern_\(`\sim\to\sim 2. Some of the following equalities hold: \(x_{0}=x_{1}+y_{1}\), or \(x_{i}+y_{i}=x_{i+1}+y_{i+1}\) for some \(i=1,\ldots,n-1\). In this case, some steps '\(\sim\)' are absent. More precisely, if either both equalities \(x_{i}=y_{i}\) and \(x_{i+1}=y_{i+1}\) (or both inequalities \(x_{i}\neq y_{i}\) and \(x_{i+1}\neq y_{i+1}\)) hold, then the step '\(\sim\)' through an edge leading to \((x_{i+1}+y_{i+1})|x_{i+1}\ldots x_{n}y_{1}\ldots y_{i}\ldots\) is absent. So, we save 1 step. 
Thus, if we can save some steps, one last step \((x_{n}+y_{n})|\boldsymbol{y}\sim(\overline{x_{n}+y_{n}})|\boldsymbol{y}\) assures a walk of length at most \(2n\) from \(x_{0}|\boldsymbol{x}\) to \(y_{0}|\boldsymbol{y}\) for any \(y_{0}\in\{0,1\}\). 3. In the 'worst case', the walk in (18) consists of exactly \(2n\) steps (vertices at maximum distance) if \(|\boldsymbol{x}\cap\boldsymbol{y}|=0\) and none of the equalities in step 2 holds. Assuming first that \(x_{0}=0\) (the case \(x_{0}=1\) is similar), the latter occurs when \(x_{1}+y_{1}=1\Rightarrow y_{1}=\overline{x_{1}}\), \(x_{2}+y_{2}=0\Rightarrow y_{2}=x_{2}\), \(x_{3}+y_{3}=1\Rightarrow y_{3}=\overline{x_{3}}\), and so on. Consequently, starting from \(0|\boldsymbol{x}=0|x_{1}x_{2}x_{3}\ldots x_{n}\), we only need to test the destiny vertices of the form \(1|\boldsymbol{y}=1|\overline{x_{1}}x_{2}\overline{x_{3}}\ldots x_{n}\) (\(n\) even), and \(0|\boldsymbol{z}=0|\overline{x_{1}}x_{2}\overline{x_{3}}\ldots\overline{x_{n}}\) (\(n\) odd), with the additional constraints \(|\boldsymbol{x}\cap\boldsymbol{y}|=|\boldsymbol{x}\cap\boldsymbol{z}|=0\). 4. For these cases, the strategy is to put the last digit of the destiny vertex first.
Namely, if \(n\) is even, \[0|x_{1}x_{2}x_{3}\ldots x_{n} \to 0|x_{2}x_{3}\ldots x_{n}x_{1}\] (19) \[\sim (x_{2}+\overline{x_{1}})|x_{2}x_{3}\ldots x_{n}x_{1}\to(x_{2}+ \overline{x_{1}})|x_{3}x_{4}\ldots x_{n}x_{1}\overline{x_{1}}\] \[\sim (x_{3}+x_{2})|x_{3}x_{4}\ldots x_{n}x_{1}\overline{x_{1}}\,\to(x _{3}+x_{2})|x_{4}x_{5}\ldots x_{n}x_{1}\overline{x_{1}}\,x_{2}\] \[\sim (x_{4}+\overline{x_{3}})|x_{4}x_{5}\ldots x_{n}x_{1}\overline{x _{1}}\,x_{2}\to(x_{4}+\overline{x_{3}})|x_{5}\ldots x_{n}x_{1}\overline{x_{1 }}\,x_{2}\overline{x_{3}}\] \[\vdots\] \[\sim (x_{n}+\overline{x_{n-1}})|x_{n}x_{1}\overline{x_{1}}x_{2}\ldots \overline{x_{n-1}}x_{n-2}\] \[\to (x_{n}+\overline{x_{n-1}})|x_{1}\overline{x_{1}}x_{2}\ldots x_{n -2}\overline{x_{n-1}}\] \[\sim (x_{1}+x_{n})|x_{1}\overline{x_{1}}x_{2}\ldots x_{n-2}\overline{ x_{n-1}}\to(x_{1}+x_{n})|\overline{x_{1}}x_{2}\overline{x_{3}}\ldots x_{n}\] \[\sim 1|(x_{1}+x_{n})|\overline{x_{1}}x_{2}\overline{x_{3}}\ldots x_{n}.\] This walk can have \(2n+2\) steps whenever all steps '\(\sim\)' through edges are present. This is the case when \(x_{2}+\overline{x_{1}}\neq 0\), \(x_{3}+x_{2}\neq x_{2}+\overline{x_{1}}\), \(x_{4}+\overline{x_{3}}\neq x_{3}+x_{2}\),..., \(x_{1}+x_{n}\neq x_{n}+\overline{x_{n-1}}\), and \(x_{1}+x_{n}\neq 1\). In turn, this implies the \(n+1\) equalities \[x_{1}=x_{3},\ x_{3}=x_{5},\ \ldots,\ x_{n-3}=x_{n-1},\ x_{n-1}=x_{1},\] (20) \[x_{1}=x_{2},\ x_{2}=x_{4},\ x_{4}=x_{6},\ \ldots,\ x_{n-2}=x_{n},\ x_{n}=x_{1}.\] (21) Note that these sequences of equalities form two cycles (with odd and even subscripts) rooted at \(x_{1}\). Thus, the number of inequalities, if any, must be at least 2. In this case, at least 2 steps '\(\sim\)' are absent in (19), and we have a walk of length at most \(2n\) between the vertices considered. Otherwise, if **all** the equalities (20)-(21) hold, the initial vertex must be \(0|000\stackrel{{(n)}}{{\cdot}}00\) (the first digit \(x_{1}\) can be fixed to 0 since the mixed graph has an automorphism that sends \(x_{0}|x_{1}x_{2}\ldots x_{n}\) to \(x_{0}|\overline{x_{1}}\,\overline{x_{2}}\,\ldots\,\overline{x_{n}}\)), and the destiny vertex is \(0|1010\stackrel{{(n)}}{{\cdot}}10\). The same reasoning for \(n\) odd leads that, in the worst case (walk in (19) of length \(2n+2\)), the initial vertex is \(0|000\ldots 0\) and the final vertex \(0|1010\ldots 1\). In such cases, we have a particular walk of the desired length. 5. There is a walk of length \(2n\) from \(0|000\ldots 0\) to \(1|1010\ldots 10\) (\(n\) even) or to \(0|1010\ldots 01\) (\(n\) odd) by using the following step pattern \[\sim \to \to \sim \to \sim \to \stackrel{{(2n)}}{{\cdots}}\cdots\sim \to \to \.\] For instance, for \(n=6\), we get \[0|000000 \sim 1|000000\to 1|000001\to 1|000011\] \[\sim 0|000011\to 0|000110\] \[\sim 1|000110\to 1|001101\] \[\sim 0|001101\to 0|011010\] \[\sim 1|011010\to 1|110101\to 1|101010,\] and, for \(n=7\), \[0|0000000 \,\sim\,1|0000000\to 1|0000001\to 1|0000011\] \[\,\sim\,0|0000011\to 0|0000110\] \[\,\sim\,1|0000110\to 1|0001101\] \[\,\sim\,1|0011010\to 1|0110101\] \[\,\sim\,0|0110101\to 0|110101\] \[\,\sim\,0|0110101\to 0|1101010\to 0|1010101.\] 6. The case \(x=1\) is similar, and we only mention the main facts. 
Now, the 'worst case' (\(2n\) steps) in the walk in (18) occurs when, starting from \(1|\boldsymbol{x}=1|x_{1}x_{2}x_{3}\ldots x_{n}\), we want to reach the destiny vertices of the form \(0|\boldsymbol{y}=0|x_{1}\overline{x_{2}}x_{3}\overline{x_{4}}\ldots\overline{x_{n}}\) (\(n\) even), or \(1|\boldsymbol{z}=1|x_{1}\overline{x_{2}}x_{3}\overline{x_{4}}\ldots x_{n}\) (\(n\) odd), with the additional constraints \(|\boldsymbol{x}\cap\boldsymbol{y}|=|\boldsymbol{x}\cap\boldsymbol{z}|=0\). Now, following the same strategy as in step 4 above, it turns out that for the case of \(2n+2\) steps, the following conditions must hold (assuming \(n\) odd, the even case is similar): \[x_{1}=x_{2},\ x_{2}=x_{3},\ \ldots,\ x_{n-1}=x_{n},\ x_{n}\neq x_{1}, \tag{22}\] which are clearly incompatible, so there must be at least one more inequality (the last one in (22) is forced since the final vertex has \(x_{0}=1\)). Again, at least \(2\) steps '\(\sim\)' are absent in (19), and we have a walk of length at most \(2n\) between the vertices considered. For example, for \(n=5\), and assuming that \(x_{4}\neq x_{5}\) and \(x_{1}=0\), the walk of \(10\) steps from \(1|00001\) to \(1|01011\) is: \[1|00001\,\to\,1|00011\] \[\,\sim\,0|00011\to 0|00110\] \[\,\sim\,1|00110\to 1|01101\] \[\,\sim\,0|01101\to 0|11010\] \[\,\to\,0|10101\to 0|01011\sim 1|01011.\] This completes the proof. In fact, we implicitly proved the following.

**Lemma 2.5**.: _For every \(n>1\), the mixed graph \(G^{+}(n)\) satisfies the following._ * _The vertices \(0|00\ldots 0\) and \(0|11\ldots 1\) have maximum eccentricity \(2n\)._ * _The vertices \(1|00\ldots 0\) and \(1|11\ldots 1\) have eccentricity \(2n-1\)._ * _If \(n\geq 5\), the vertices \(1|00\ldots 01\) and \(1|11\ldots 10\) have eccentricity \(2n-2\)._

Proof.: \((i)\) and \((ii)\) follow from the previous reasoning. To prove \((iii)\), we only need to check the distance from \(1|00\ldots 01\) to \(0|00\ldots 0\). A shortest path between these two vertices is \(1|00\ldots 01\sim 0|00\ldots 01\to 0|0\ldots 010\to\cdots\to 0|10\ldots 00\sim 1|10\ldots 00\to 1|00\ldots 00\sim 0|00\ldots 0\), of length \(n+3\leq 2n-2\) if \(n\geq 5\).

Let \(\Psi_{0}\) and \(\Psi_{1}\) be the functions that map a vertex \(x_{0}|\mathbf{x}\) to its adjacent vertex through an edge or an arc, respectively. That is, \[\Psi_{0}(x_{0}|x_{1}x_{2}\ldots x_{n})=\overline{x_{0}}|x_{1}x_{2}\ldots x_{n},\] \[\Psi_{1}(x_{0}|x_{1}x_{2}\ldots x_{n})=x_{0}|x_{2}x_{3}\ldots x_{n}(x_{0}+x_{1}).\] Let \(\Phi=(\phi_{1},\phi_{2},\ldots,\phi_{n})\) be the function that maps every \(x_{i}\) to either \(x_{i}\) or \(\overline{x_{i}}\), for \(i=1,2,\ldots,n\).
**Lemma 2.6**.: _For any fixed functions \(\Psi_{j}\) and \(\Phi\), and first digit \(x_{0}=0,1\), we have_ \[\Psi_{j}(x_{0}|\Phi(\mathbf{x}))=\Phi(\Psi_{j}(x_{0}|\mathbf{x})),\] _where \(\Phi\) only acts on the digits \(x_{1},x_{2},\ldots,x_{n}\)._ Proof.: \[\Psi_{0}(x_{0}|\Phi(\mathbf{x})) =\Psi_{0}(x_{0}|\phi_{1}(x_{1})\phi_{2}(x_{2})\ldots\phi_{n}(x_{n }))=\overline{x_{0}}|\phi_{1}(x_{1})\phi_{2}(x_{2})\ldots\phi_{n}(x_{n})\] \[=\Phi(\Psi_{0}(x_{0}|\mathbf{x})).\] \[\Psi_{1}(x_{0}|\Phi(\mathbf{x})) =\Psi_{1}(x_{0}|\phi_{1}(x_{1})\phi_{2}(x_{2})\ldots\phi_{n}(x_{n }))=x_{0}|\phi_{2}(x_{2})\ldots\phi_{n}(x_{n})(x_{0}\phi_{1}(x_{1}))\] \[=\Phi(\Psi_{1}(x_{0}|\mathbf{x})).\] Another property of the mixed graph \(G^{+}(n)\) for \(n>1\) is that from every pair of (not necessarily distinct) vertices \(u\) and \(v\), there is at least a walk of length \(2n\) from \(u\) to \(v\). For instance, for \(n=2\), fixing as before \(x_{1}=0\) and setting \(y=x_{0}+x_{2}\), we have the following walks of length \(4\) from \(x_{0}|0x_{2}\) to every vertex of \(G^{+}(2)\). \[x_{0}|0x_{2} \sim\ \overline{x_{0}}|0x_{2}\sim\ x_{0}|0x_{2}\to x_{0}|x_{2}x_{0}\to x _{0}|x_{0}(x_{0}+x_{2}) =x_{0}|x_{0}y\] \[\to\ x_{0}|x_{2}x_{0}\sim\ \overline{x_{0}}|x_{2}x_{0}\to \overline{x_{0}}|x_{0}(\overline{x_{0}}+x_{2})\sim\ x_{0}|x_{0}(\overline{x_{0 }}+x_{2}) =x_{0}|x_{0}\overline{y}\] \[\sim\ \overline{x_{0}}|0x_{2}\to\overline{x_{0}}|x_{2}\overline{x_ {0}}\sim\ x_{0}|x_{2}\overline{x_{0}}\to x_{0}|\overline{x_{0}}(x_{0}+x_{2}) =x_{0}|\overline{x_{0}}y\] \[\sim\ \overline{x_{0}}|0x_{2}\to\overline{x_{0}}|x_{2}\overline{x_ {0}}\to\overline{x_{0}}|\overline{x_{0}}(\overline{x_{0}}+x_{2})\sim x_{0}| \overline{x_{0}}(\overline{x_{0}}+x_{2}) =x_{0}|\overline{x_{0}}y\] \[\to\ x_{0}|x_{2}x_{0}\to x_{0}|x_{0}(x_{0}+x_{2})\to x_{0}|(x_{0}+x _{2})0\sim\overline{x_{0}}|(x_{0}+x_{2})0\quad=\overline{x_{0}}|y0\] \[\to\ x_{0}|x_{2}x_{0}\to x_{0}|x_{0}(x_{0}+x_{2})\sim\overline{x_ {0}}|(x_{0}+x_{2})\to\overline{x_{0}}|(x_{0}+x_{2})1 =\overline{x_{0}}|y1\] \[\sim\ \overline{x_{0}}|0x_{2}\to\overline{x_{0}}|x_{2}\overline{x_ {0}}\to\overline{x_{0}}|\overline{x_{0}}(\overline{x_{0}}+x_{2})\to\overline{x_ {0}}|(\overline{x_{0}}+x_{2})0 =\overline{x_{0}}|\overline{y}0\] \[\to\ x_{0}|x_{2}x_{0}\sim\overline{x_{0}}|x_{2}x_{0}\to\overline{x_ {0}}|x_{0}(\overline{x_{0}}+x_{2})0\to\overline{x_{0}}|(\overline{x_{0}}+x_{2 })1 =\overline{x_{0}}|\overline{y}1.\] Working with the adjacency matrix \(\mathbf{A}\) of \(G^{+}(2)\) (indexed according to Figure 7), the above property is apparent when we look at the power \(\mathbf{A}^{4}\). \[\mathbf{A}=\left(\begin{array}{cccccccc}1&1&0&0&0&0&0&0\\ 1&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&1&0\\ 0&0&1&0&1&0&0\\ 0&0&0&1&0&1&0&0\\ 0&1&0&0&1&0&0\\ 0&0&0&0&0&1&0&1\\ 0&0&0&0&0&1&1\end{array}\right),\quad\quad\mathbf{A}^{4}=\left(\begin{array}{ ccccc}5&3&3&1&1&1&1&1\\ 3&3&1&3&1&3&1\\ 1&1&3&1&3&3&1&3\\ 1&1&5&1&3&3&1\\ 1&3&3&1&5&1&1&1\\ 3&1&3&3&1&3&1&1\\ 1&3&1&1&3&3&5\end{array}\right).\] ### The \(n\)-line mixed graphs Let \(G=(V,A)\) be a \(2\)-regular digraph with a given \(1\)-factorization, that is, containing two arc-disjoint spanning \(1\)-regular digraphs \(H_{1}\) and \(H_{2}\). Assuming that the arcs of \(H_{1}\) have color blue and the arcs of \(H_{2}\) have color red, we can also think about a (proper) arc-coloring \(\gamma\) of \(G\). Then, if \(xy\) represents an arc of \(G\), we denote its color as \(\gamma(xy)\). 
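Returning for a moment to the matrices displayed above: the property that every entry of \(\mathbf{A}^{4}\) is positive is easy to recompute. The sketch below rebuilds the adjacency matrix of \(G^{+}(2)\) from the adjacencies \((iv)\) and \((v)\) under its own (arbitrary) vertex ordering, so the entries may appear permuted with respect to the indexing of Figure 7, and checks that \(\mathbf{A}^{4}\) is entrywise positive with all row sums equal to \(2^{4}=16\).

```python
from itertools import product
import numpy as np

verts = list(product((0, 1), repeat=3))          # vertices x0|x1x2 of G^+(2)
idx = {v: i for i, v in enumerate(verts)}

A = np.zeros((8, 8), dtype=int)
for x0, x1, x2 in verts:
    A[idx[(x0, x1, x2)], idx[(1 - x0, x1, x2)]] += 1          # edge (both directions appear)
    A[idx[(x0, x1, x2)], idx[(x0, x2, (x1 + x0) % 2)]] += 1   # arc; self-loops at 0|00 and 0|11

A4 = np.linalg.matrix_power(A, 4)
print((A4 > 0).all(), A4.sum(axis=1))   # True, and every row sums to 2^4 = 16
```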
Given an integer \(n\geq 3\), the vertices of the \(n\)-line mixed graph \(H(n)=H_{n}(G)\) are the set of \(n\)-walks in \(G\), \(x_{1}x_{2}\ldots x_{n-1}x_{n}\), with \(x_{i}\in V\) and \(x_{i}x_{i+1}\in A\), for \(i=1,\ldots,n-1\). The adjacencies of \(H(n)\) are as follows: \[x_{1}x_{2}\ldots x_{n-1}x_{n}\sim\ y_{1}x_{2}\ldots x_{n-1}x_{n}\ (\text{edges}), \tag{23}\] where \(\gamma(y_{1}x_{2})\neq\gamma(x_{1}x_{2})\); and \[x_{1}x_{2}\ldots x_{n-1}x_{n}\rightarrow\ x_{2}\ldots x_{n-1}x_{n}y_{n+1}\ ( \text{arcs}), \tag{24}\] where \(\gamma(x_{n}y_{n+1})=\)red if \(\gamma(x_{1}x_{2})=\gamma(x_{n-1}x_{n})\), and \(\gamma(x_{n}y_{n+1})=\)blue if \(\gamma(x_{1}x_{2})\neq\gamma(x_{n-1}x_{n})\). The reason for the name of \(H_{n}(G)\) is because when we contract all its edges, so identifying the vertices in (23), the resulting digraph is the \((n-1)\)-iterated line digraph \(L^{n-1}(G)\) of \(G\), see Fiol, Yebra, and Alegre [15]. Indeed, under such an operation, each pair of vertices in (23) becomes a vertex that can be represented by the sequence \(x_{2}x_{3}\ldots x_{n}\), which, according to (24), is adjacent to the two vertices \(x_{3}\ldots x_{n}y_{n+1}\) with \(y_{n+1}\in\Gamma^{+}(x_{n})\) in \(G\). In the following result, we describe other basic properties of \(H_{n}(G)\). **Proposition 2.7**.: _Let \(G=(V,A)\) be a digraph with \(r\) vertices and diameter \(s\), having a \(1\)-factorization. For a given \(n\geq 3\), the following holds._ * _The mixed graph_ \(H_{n}=H_{n}(G)\) _has_ \(N=r\cdot 2^{n-1}\) _vertices, and it is totally_ \((1,1)\)_-regular with no digons._ * _The diameter of_ \(H_{n}\) _satisfies_ \(k\leq 2(s+n)-3\)_._ Proof.: \((i)\) Every vertex \(x_{1}\ldots x_{n}\) of \(H_{n}\) corresponds to a walk of \(G\) with first vertex \(x_{1}\), which gives \(r\) possibilities and, since \(G\) is \(2\)-regular, for every other \(x_{i}\), \(i=2,\ldots,n\), we have \(2\) possible options. This provides the value of \(N\). To show total \((1,1)\) regularity, it is enough to prove that \(H_{n}\) is \(1\)-in-regular. Indeed, any vertex adjacent to \(x_{1}x_{2}\ldots x_{n}\), with \(\gamma(x_{n-1}x_{n})\)=blue (respectively, \(\gamma(x_{n-1}x_{n})=\) red) must be of the form \(yx_{1}\ldots x_{n-2}x_{n-1}\) with \(\gamma(yx_{1})\neq\gamma(x_{n-2}x_{n-1})\) (respectively, with \(\gamma(yx_{1})=\gamma(x_{n-2}x_{n-1})\)). But, in both cases, there is only one possible choice for vertex \(y\). With respect to the absence of digons, notice that a vertex \(\boldsymbol{u}=x_{1}x_{2}\ldots x_{n-1}x_{n}\) belongs to a digon if, after two steps, we come back to \(\boldsymbol{u}\), which means that \(x_{1}x_{2}\ldots x_{n-1}x_{n}=x_{3}x_{4}\ldots x_{n}y_{n+1}y_{n+2}\) and, hence, \(x_{i}=x_{3}=\cdots\) and \(x_{2}=x_{4}=\cdots\). In other words, vertex \(\boldsymbol{u}\) must be of the form \(xyxy\cdots xy\) (\(n\) even) or \(xyxy\cdots x\) (\(n\) odd), and \(G\) itself must have a digon between vertices \(x\) and \(y\). Assuming that \(n\) is even and \(\gamma(xy)\)=blue (the other cases are similar), the digon should be \[\boldsymbol{u}=xyxy\ldots xy\quad\rightarrow\quad\boldsymbol{v}=yxyx\ldots yx \quad\rightarrow\quad\boldsymbol{u}.\] But the last adjacency is not possible since both the first and last arcs of \(\boldsymbol{v}\) would have color \(\gamma(yx)=\)red and, hence, so should be the color of \(xy\), a contradiction. 
\((ii)\) Given both vertices \(x_{1}x_{2}\ldots x_{n-1}x_{n}\) and \(y_{1}y_{2}\ldots y_{n-1}y_{n}\), let us consider a shortest path in \(G\) of length at most \(s\) from \(x_{n}\) to \(y_{2}\). Then, using both types of adjacencies, we can go from \(x_{1}x_{2}\ldots x_{n-1}x_{n}\) to a vertex of the form \(z_{1}\ldots y_{2}\). From this vertex, we now reach the vertex \(yy_{2}\ldots y_{n}\) in at most \(2(n-2)\) steps. Finally, if necessary, we can replace \(y\) by \(y_{1}\). In total, we use \(k\leq 2s+2(n-2)+1=2(s+n)-3\) steps, as claimed.

For example, if \(G\) is the complete symmetric digraph \(K_{3}\) (edges seen as digons) with vertices in \(\mathbb{Z}_{3}\), blue arcs \(i\to i+1\) and red arcs \(i\to i-1\) for \(i=0,1,2\), the adjacencies of \(H_{n}(K_{3})\), with \(3\cdot 2^{n-1}\) vertices, are \[x_{1}x_{2}\ldots x_{n-1}x_{n}\sim\ y_{1}x_{2}\ldots x_{n-1}x_{n},\quad y_{1}\neq x_{1},x_{2},\] \[x_{1}x_{2}\ldots x_{n-1}x_{n}\rightarrow\ x_{2}x_{3}\ldots x_{n}y_{n+1},\quad y_{n+1}=x_{n}-(x_{2}-x_{1})(x_{n}-x_{n-1}).\] Thus, the \((1,1)\)-regular mixed graphs \(H_{3}(K_{3})\) and \(H_{4}(K_{3})\), with diameter \(k=5\) and \(k=6\), respectively, are shown in Figure 9. In this case, when we contract all the edges of \(H_{n}(K_{3})\), we obtain the \((n-1)\)-iterated line digraph of \(K_{3}\), which, as commented in the Introduction, is isomorphic to the Kautz digraph \(K(2,n-1)\).

## 3 A first computational approach: The \((1,1,k)\)-mixed graphs with diameter at most 6

The Moore bound \(M(1,1,k)\) coincides with the number of binary words of length \(\ell\leq k\) without consecutive zeroes. In this sense, the corresponding Moore tree can be rooted at a vertex labeled with the empty word. Every vertex labeled with a word \(\omega\) (of length \(\ell\)) with the last symbol different from \(0\) is joined by an edge to a vertex labeled \(\omega 0\) (of length \(\ell+1\)), for all \(0\leq\ell\leq k-1\). Moreover, the arcs are defined by \(\omega\rightarrow\omega 1\) (see an example in Figure 10). This new description of the Moore tree is very useful for performing an exhaustive computational search of the largest mixed graphs for some small values of the diameter \(k\). Let \(a(\ell)\) be the number of vertices at distance \(\ell\) from the root in the Moore tree. Using the above-mentioned labeling, it is easy to see that \(a(\ell)\) satisfies the recurrence equation \[a(\ell)=a(\ell-1)+a(\ell-2), \tag{25}\] with initial conditions \(a(0)=1\) and \(a(1)=2\). Indeed, \(a(\ell)\) is the number of words of length \(\ell\) (whose symbols are in the alphabet \(\Sigma=\{0,1\}\)) without consecutive zeroes. The words of length \(\ell\) not ending with \(0\) are obtained from a word of length \(\ell-1\) by adding \(1\). This gives \(a(\ell-1)\). Moreover, the words of length \(\ell\) ending with \(0\) are obtained from a word of length \(\ell-2\) by adding \(10\). This gives \(a(\ell-2)=b(\ell)\), where \(b(\ell)\) is the number of vertices at distance \(\ell\) from the root joined by an edge to a vertex at distance \(\ell-1\). So \(b(\ell)\) satisfies the same recurrence relation as \(a(\ell)\) but with initial conditions \(b(0)=0\) and \(b(1)=1\). Finally, let \(c(\ell)=a(\ell)-b(\ell)=a(\ell-1)\), that is, the number of vertices at distance \(\ell\) from the root pointed to by an arc from a vertex at distance \(\ell-1\). Again, \(c(\ell)\) satisfies the same type of recurrence relation but with initial conditions \(c(0)=1\) and \(c(1)=1\). Thus, \(a(\ell)\), \(b(\ell)\), and \(c(\ell)\) are all Fibonacci-like numbers.
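These recurrences, and the identification of the Moore bound with a count of binary words, are easy to verify numerically. The sketch below checks \(a(\ell)\), \(b(\ell)\), and \(c(\ell)\) against brute-force enumeration of the words without consecutive zeroes, and recovers \(M(1,1,k)=\sum_{\ell=0}^{k}a(\ell)\), matching the Moore bounds listed later in Table 4.

```python
def words_no_00(length):
    """All binary words of the given length with no two consecutive zeroes."""
    words = ['']
    for _ in range(length):
        words = [w + s for w in words for s in '01' if not (w + s).endswith('00')]
    return words

def recurrence(kmax):
    a, b, c = [1, 2], [0, 1], [1, 1]
    for _ in range(2, kmax + 1):
        a.append(a[-1] + a[-2]); b.append(b[-1] + b[-2]); c.append(c[-1] + c[-2])
    return a, b, c

kmax = 7
a, b, c = recurrence(kmax)
for l in range(kmax + 1):
    W = words_no_00(l)
    assert a[l] == len(W)                               # a(l): all words of length l
    assert b[l] == sum(w.endswith('0') for w in W)      # b(l): words ending in 0 (edge-children)
    assert c[l] == sum(not w.endswith('0') for w in W)  # c(l): words not ending in 0
moore = [sum(a[:k + 1]) for k in range(kmax + 1)]
print(moore[2:])   # 6, 11, 19, 32, 53, 87: the Moore bounds M(1,1,k) for k = 2,...,7
```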
For instance, \(a(\ell)\) equals the following closed formula \[a(\ell)=\frac{5+3\sqrt{5}}{10}\left(\frac{1+\sqrt{5}}{2}\right)^{\ell}+\frac{5 -3\sqrt{5}}{10}\left(\frac{1-\sqrt{5}}{2}\right)^{\ell}.\] Note that the sequence obtained from \(a(\ell)\) corresponds to the Fibonacci numbers starting with \(a(0)=1\) and \(a(1)=2\) (see the sequence A000045 in [25]). Similar formulas can be obtained for \(b(\ell)\) and \(c(\ell)\). Now, we can perform an algorithmic exhaustive search to find all the largest \((1,1,k)\)-mixed graphs with order close to the Moore bound. For instance, in the case of almost mixed Moore graphs (with diameter \(k\) and order \(M(1,1,k)-1\)), the number of different cases of mixed graphs to analyze is bounded by \(\mathcal{N}(k)\), where \(\mathcal{N}(k)\) is computed next. Figure 9: The mixed graphs \(H_{3}(K_{3})\) and \(H_{4}(K_{3})\). 1. We remove a vertex in the Moore tree at distance \(k\) from the empty word. Notice there are \(a(k)\) different choices for this vertex. 2. Now, we count the number \(\mathcal{N}_{1}\) of possibilities to complete the undirected part of the mixed graph. We recall that the number of perfect matchings in a complete graph of even order \(n\) is \((n-1)!!\) This number \(\mathcal{N}_{1}\) depends on what vertex has been removed in the previous step. If the removed vertex has a label ending with \(0\), that is, it is a vertex hanging from an edge, then there are \(c(k)+1\) vertices in the graph without an incident edge. So, \(\mathcal{N}_{1}=c(k)!!\) Otherwise, there are \(c(k)-1\) vertices in the graph without an incident edge, so \(\mathcal{N}_{1}=(c(k)-2)!!\) 3. The number of possibilities to complete the directed part of the graph is upper bounded by the number of mappings from the set of words of length \(k\) without fixing points. This is precisely the number of derangements \(D_{a(k)}\). Notice that mappings, including assignations from a word of length \(k\) to its predecessor, are not valid. Putting all together, \(\mathcal{N}(k)<D_{a(k)}\left(b(k)c(k)!!+c(k)(c(k)-2)!!\right)\). Of course, \(\mathcal{N}(k)\) grows very fast with \(k\) but the number of cases to analyze for \(k\leq 4\) is reasonable (see Table 1). As a consequence, computing the diameter of the \(889980\) putative almost Moore \((1,1)\)-mixed graphs with diameter \(k=4\), we have the following result. (In fact, this calculation is easily done by using the result by Tuite and Erskhine [27] that such graphs are not totally regular.) **Proposition 3.1**.: _There is no almost \((1,1,4)\)-mixed Moore graph._ \begin{table} \begin{tabular}{|l||r|r|r|r|} \hline \(k\) & 3 & 4 & 5 & 6 \\ \hline \(N(k)\) & 396 & 889980 & 0 & \(2\cdot 10^{25}\) \\ \hline \end{tabular} \end{table} Table 1: Values of \(\mathcal{N}(k)\). Notice that \(\mathcal{N}(5)=0\) because \(c(5)\) is an even number. Figure 10: The Moore tree with parameters \(r=z=1\) and depth \(4\) labeled with binary words without consecutive zeros. A similar method can be implemented to perform an exhaustive search for orders \(M(1,1,k)-\delta\) for small \(\delta\). In these cases, the removal of \(\delta\) different vertices of the Moore tree (step 1) has many more choices but the number of operations in steps 2 and 3 sometimes is reduced. This is precisely what we do for \(n=M(1,1,4)-3=16\), where there are two cases to take into account: * The removal of three distinct words \(\omega_{1},\omega_{2},\omega_{3}\) of length 4 (corresponding to three distinct vertices at distance 4 from the root of the Moore tree). 
* Given any word \(\omega\) of length 3, the deletion of either the set of words \(\{\omega,\omega 1,\omega^{\prime}\}\) (when \(\omega\) ends in 0) or \(\{\omega,\omega 0,\omega 1\}\) (when \(\omega\) ends in 1), where \(\omega^{\prime}\neq\omega 1\) is any word of length 4. It remains to add the corresponding edges and arcs in the pruned Moore tree. The computational exhaustive search shows there is no \((1,1)\)-mixed Moore graph of diameter 4 and order 16. Now the maximum order becomes \(n=14\) for a mixed graph with parameters \(r=z=1\) and \(k=4\). There are many more possibilities to prune the Moore tree, so we decide to implement a direct method to perform an exhaustive search in this case: taking the perfect matching with a set of vertices \(V=\{0,1,\ldots,13\}\) and where \(i\sim i+1\) for all even \(i\), we add the three arcs \((0,2),(1,5)\) and \((5,7)\). Looking at vertex 0 as the root of the Moore tree, the existence of these three arcs in the mixed graph is given because \(\delta=5\) in this case. Now we proceed with the exhaustive search by adding the remaining arcs in the graph. There are 11! possibilities but excluding avoided permutations (those permutations with elements of order at most 2 or including edges of the perfect matching) significantly reduces the number of cases to analyze. After computing the diameter of all these mixed graphs and keeping those non-isomorphic mixed graphs with diameter \(k=4\), we have the following result. **Proposition 3.2**.: _The maximum order for a \((1,1)\)-mixed regular graph of diameter \(k=4\) is \(14\). There are \(27\) of such mixed graphs (see Table 2), and only one of them is a Cayley graph. Namely, that of the dihedral group \(D_{7}\) with generators \(r\) and \(s\), and presentation \(\langle r,s\,|\,r^{7}\!=\!s^{2}\!=\!(rs)^{2}\!=\!1\rangle\), also obtained as the line digraph of \(C_{7}\), see the mixed graph at the top left in Figure 11._ The spectra of all 27 mixed graphs with the largest order can be described with the help of the (complex) roots \(\alpha_{ij}\) of the irreducible polynomials \(p_{i}(x)\in\mathbb{Q}(x)\) given below: \[\begin{array}{l}p_{1}(x)=x^{4}+x^{3}-2x^{2}-x+2,\\ p_{2}(x)=x^{3}+x^{2}-2x-1,\\ p_{3}(x)=x^{3}-x+1,\\ p_{4}(x)=x^{3}+2x^{2}-x-3,\\ p_{5}(x)=x^{4}+x^{3}-x^{2}-x+1,\\ p_{6}(x)=x^{6}+x^{5}-3x^{4}-x^{3}+5x^{2}-4,\\ p_{7}(x)=x^{9}+3x^{8}-6x^{6}+2x^{5}+11x^{4}-3x^{3}-9x^{2}+3x+3,\\ p_{8}(x)=x^{6}+x^{5}-3x^{4}-2x^{3}+5x^{2}+2x-3,\\ p_{9}(x)=x^{6}+x^{5}-x^{4}+3x^{2}-1.\end{array}\] \begin{table} \begin{tabular}{|c|l|l|} \hline Number of graphs & Spectra \\ \hline 9 & \(\{2^{1},0^{6},\alpha^{1}_{1j},\alpha^{1}_{2s}\}\) for \(j=1,2,3,4\) and \(s=1,2,3\) \\ 6 & \(\{2^{1},-1^{1},1^{1},0^{5},\alpha^{1}_{3j},\alpha^{1}_{4j}\}\) for \(j=1,2,3\) \\ 5 & \(\{2^{1},0^{7},\alpha^{2}_{2j}\}\) for \(j=1,2,3\) \\ 4 & \(\{2^{1},0^{3},\alpha^{1}_{5j},\alpha^{1}_{6k}\}\) for \(j=1,2,3,4\) and \(k=1,\ldots,6\) \\ 2 & \(\{2^{1},1^{1},0^{3},\alpha^{1}_{7j}\}\) for \(j=1,\ldots,9\) \\ 1 & \(\{2^{1},0^{1},\alpha^{1}_{8j},\alpha^{1}_{9j}\}\) for \(j=1,\ldots,6\) \\ \hline \end{tabular} \end{table} Table 3: Classification of the largest mixed graphs for \(r=z=1\) and diameter \(k=4\) according to their spectra. The third row gives the spectrum of the 5 cospectral graphs in Figure 11. 
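The single Cayley graph in Proposition 3.2 is easy to reproduce and check. The sketch below builds the mixed Cayley graph of \(D_{7}=\langle r,s\,|\,r^{7}=s^{2}=(rs)^{2}=1\rangle\), taking the involution \(s\) as the undirected generator and \(r\) as the directed one, and verifies by breadth-first search that it has \(14\) vertices and diameter \(4\).

```python
from collections import deque
from itertools import product

# Elements of D_7 written as (a, e) = r^a s^e, with a in Z_7 and e in Z_2.
def mult(x, y):
    (a, e), (b, f) = x, y
    # s r^b = r^{-b} s, so (r^a s^e)(r^b s^f) = r^{a + (-1)^e b} s^{e+f}
    return ((a + (b if e == 0 else -b)) % 7, (e + f) % 2)

vertices = list(product(range(7), range(2)))                   # the 14 group elements
r, s = (1, 0), (0, 1)
adj = {g: [mult(g, r), mult(g, s)] for g in vertices}          # one out-arc (by r), one edge (by s)

diam = 0
for src in vertices:
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    diam = max(diam, max(dist.values()))
print(len(vertices), diam)    # expected: 14 vertices, diameter 4
```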
The results obtained by computer search are shown in Table 4 in the next section. In what follows, we comment upon some of the cases. Notice that for diameter \(k=2,3,4\), the known \((1,1,k)\)-mixed graphs have the maximum possible order. The mixed graph of diameter \(k=2\) is the Kautz digraph \(K(2,2)\). The graph with \(k=3\) is isomorphic to the line digraph of the cycle \(C_{5}\). Some of the maximal graphs with diameter \(k=4\) were already shown in Figure 11. Two maximal graphs of diameter \(k=5\) are shown in Figure 13. The graph of order 72 listed in the table for \(k=8\) is a lift graph using the dihedral group \(D_{18}\) of order 18. This group consists of the 18 symmetries of the nonagon. To describe our graph, we consider a regular nonagon whose vertices are labeled 0 to 8 in clockwise order. Label the elements of \(D_{18}\) as follows. There are nine counter-clockwise rotations, each through an angle \(2\pi k/9\) and denoted Rot(k), for \(0\leq k<9\). Finally, there are the nine reflections Ref(k) about the line through vertex \(k\) and the midpoint of the opposite side. This notation is used to specify the voltages on the edges and arcs of the base graph shown in Figure 14. The graph of order 544 for \(k=13\) is a lift of the base graph shown in Figure 15 with voltages in the group \(\mathbb{Z}_{17}:\mathbb{Z}_{8}\). The remaining graphs are partially identified as notes following Table 4. Where a graph is identified as a lift using a voltage group of order half the order of the graph, the base graph is an undirected edge together with a directed loop at each vertex. A complete description of such larger graphs, especially for unfamiliar groups, would take a lot of pages. The interested reader can contact the third author to request more information.

Figure 11: Some cospectral largest mixed graphs with diameter \(k=4\) and degrees \(r=z=1\).

Figure 12: A base graph with voltages on the symmetric group \(S_{3}\), and its lift graph.

Figure 13: Two \((1,1,5)\)-mixed graphs with defect \(8\) and order \(24\).

## 5 Table of large \((1,1,k)\)-mixed graphs

A summary of the results for \((1,1)\)-regular mixed graphs with diameter \(k\) at most \(16\) is shown in Table 4, where the lower bounds come from the mentioned constructions. Moreover, the upper bounds follow by Proposition 3.2 (\(k=4\)), a computer exploration (\(k=5\)), and the numbers \(M(1,1,k)-\delta(k)\) with \(\delta(k)\) given in (3) and adjusted to even parity (since \(r=1\), the graph contains a perfect matching and, so, it must have even order), see Tuite and Erskine [27]. 1. Cayley graph on SmallGroup(54,6): \(\mathbb{Z}_{9}:\mathbb{Z}_{6}\). 2. Lift group is the dihedral group of order \(18\). 3.
Lift group is \(AGL(1,8)=(\boldsymbol{Z}_{2}^{3}):\boldsymbol{Z}_{7}\). 4. Cayley graph on SmallGroup(144,182). 5. Lift group is \(A_{5}\times\mathbb{Z}_{2}\). 6. Cayley graph on \(PSL(2,7):\mathbb{Z}_{2}\). 7. Lift group is \(\mathbb{Z}_{17}:\mathbb{Z}_{8}\). 8. Cayley graph on SmallGroup(800,1191). 9. Lift group is SmallGroup(512,1727). 10. Lift group is SmallGroup(800,1191). Figure 14: The base graph of the \((1,1,8)\)-mixed graph of order 72. Figure 15: The base graph of the \((1,1,13)\)-mixed graph of order 544. ## 6 Statements & Declarations ### Funding The research of C. Dalfo, M. A. Fiol, N. Lopez, and A. Messegue has been supported by AGAUR from the Catalan Government under project 2021SGR00434 and MICINN from the Spanish Government under project PID2020-115442RB-I00. The research of M. A. Fiol was also supported by a grant from the Universitat Politecnica de Catalunya with references AGRUPS-2022 and AGRUPS-2023. J. Tuite was supported by EPSRC grant EP/W522338/1. ### Competing Interests The authors have no relevant financial or non-financial interests to disclose. ### Author Contributions All authors contributed to the study's conception and design. Material preparation, data collection, and analysis were performed by all the authors, after much work was done. All authors contributed to the first draft of the manuscript, which was improved by all of them. All authors read and approved the final manuscript. \begin{table} \begin{tabular}{|r||r|r|r|r|} \hline \(k\) & Lower bound & Upper bound & Moore \(M(1,1,k)\) & Notes \\ \hline \hline 2 & 6 & 6 & 6 & \\ \hline 3 & 10 & 10 & 11 & \\ \hline 4 & 14 & 14 & 19 & \\ \hline 5 & 24 & 26 & 32 & \\ \hline 6 & 34 & 48 & 53 & \\ \hline 7 & 54 & 78 & 87 & Cayley\({}^{1}\) \\ \hline 8 & 72 & 126 & 142 & Lift\({}^{2}\) \\ \hline 9 & 112 & 206 & 231 & Lift\({}^{3}\) \\ \hline 10 & 144 & 336 & 375 & Cayley\({}^{4}\) \\ \hline 11 & 240 & 544 & 608 & Lift\({}^{5}\) \\ \hline 12 & 336 & 882 & 985 & Lift\({}^{6}\) \\ \hline 13 & 544 & 1428 & 1595 & Lift\({}^{7}\) \\ \hline 14 & 800 & 2312 & 2582 & Cayley\({}^{8}\) \\ \hline 15 & 1024 & 3744 & 4179 & Lift\({}^{9}\) \\ \hline 16 & 1600 & 6058 & 6763 & Lift\({}^{10}\) \\ \hline \end{tabular} \end{table} Table 4: Bounds for mixed graphs with \((r,z,k)=(1,1,k)\). ### Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
2308.15180
Small Area Estimation with Random Forests and the LASSO
We consider random forests and LASSO methods for model-based small area estimation when the number of areas with sampled data is a small fraction of the total areas for which estimates are required. Abundant auxiliary information is available for the sampled areas, from the survey, and for all areas, from an exterior source, and the goal is to use auxiliary variables to predict the outcome of interest. We compare areal-level random forests and LASSO approaches to a frequentist forward variable selection approach and a Bayesian shrinkage method. Further, to measure the uncertainty of estimates obtained from random forests and the LASSO, we propose a modification of the split conformal procedure that relaxes the assumption of identically distributed data. This work is motivated by Ghanaian data available from the sixth Living Standard Survey (GLSS) and the 2010 Population and Housing Census. We estimate the areal mean household log consumption using both datasets. The outcome variable is measured only in the GLSS for 3\% of all the areas (136 out of 5019) and more than 170 potential covariates are available from both datasets. Among the four modelling methods considered, the Bayesian shrinkage performed the best in terms of bias, MSE and prediction interval coverages and scores, as assessed through a cross-validation study. We find substantial between-area variation, the log consumption areal point estimates showing a 1.3-fold variation across the GAMA region. The western areas are the poorest while the Accra Metropolitan Area district gathers the richest areas.
Victoire Michal, Jon Wakefield, Alexandra M. Schmidt, Alicia Cavanaugh, Brian Robinson, Jill Baumgartner
2023-08-29T10:02:10Z
http://arxiv.org/abs/2308.15180v1
# Small Area Estimation with Random Forests and the LASSO ###### Abstract We consider random forests and LASSO methods for model-based small area estimation when the number of areas with sampled data is a small fraction of the total areas for which estimates are required. Abundant auxiliary information is available for the sampled areas, from the survey, and for all areas, from an exterior source, and the goal is to use auxiliary variables to predict the outcome of interest. We compare areal-level random forests and LASSO approaches to a frequentist forward variable selection approach and a Bayesian shrinkage method. Further, to measure the uncertainty of estimates obtained from random forests and the LASSO, we propose a modification of the split conformal procedure that relaxes the assumption of identically distributed data. This work is motivated by Ghanaian data available from the sixth Living Standard Survey (GLSS) and the 2010 Population and Housing Census. We estimate the areal mean household log consumption using both datasets. The outcome variable is measured only in the GLSS for 3% of all the areas (136 out of 5019) and more than 170 potential covariates are available from both datasets. Among the four modelling methods considered, the Bayesian shrinkage performed the best in terms of bias, MSE and prediction interval coverages and scores, as assessed through a cross-validation study. We find substantial between-area variation, the log consumption areal point estimates showing a 1.3-fold variation across the GAMA region. The western areas are the poorest while the Accra Metropolitan Area district gathers the richest areas. **Keywords**: Model-based inference, Model selection, High-dimensional auxiliary information, Prediction intervals, Split conformal inference. ## 1 Motivation In 2015, the United Nations (UN) released their 2030 agenda for sustainable development goals (SDGs) consisting of 17 goals, the first of which was to end poverty worldwide (Resolution, General Assembly and others, 2015). For their first SDG, the UN made seven guidelines explicit, including the implementation of "poverty eradication policies" at a disaggregated level. To that end, producing reliable and fine-grained pictures of socioeconomic status and income inequality is fundamental to help decision makers prioritise and target certain areas. These detailed maps help local communities understand their situation compared to their neighbours, which also helps when planning interventions (Bedi et al., 2007). In Ghana, household surveys are collected every few years to measure the living conditions of households across Ghanaian regions and districts and to monitor poverty. To keep track of the Ghanaian population wealth, the equivalised consumption is recorded for the sampled households. Although the household income is not directly measured, the equivalised expenditure is an alternative that allows decision makers to assess a household's standard of living (Johnson et al., 2005). This measure corresponds to the household consumption scaled by a weight based on the number of members in the household. We aim to estimate the equivalised consumption at a disaggregated level, to help policymakers better understand the distribution of the households' living standard in Ghana, in order to prioritise certain areas when implementing poverty eradication policies. The sixth Ghana Living Standards Survey (GLSS), conducted in 2012-2013, was the last household survey carried out prior to the new UN SDGs agenda. 
The fifth GLSS had shown that inequalities had increased since 2006. In particular, although the overall poverty decreased nation-wide, the wealthiest decile of the population consumed 6.8 times more than the poorest (Cooke et al., 2016). A downside of these household surveys is that the sampling design is stratified two-stage, which only allows for reliable survey sampling estimates at the district level, at best. Ghana is divided into 10 regions, which are formed by 170 districts or, at a finer level, around 38,000 enumeration areas (EAs). Producing reliable estimates at the EA level would further help the authorities in their policy decisions. We analyse data from the sixth GLSS for the Greater Accra Metropolitan Area (GAMA), which consists of 8 districts. The GLSS used a stratified two-stage cluster sample in which strata are defined by an urban or rural indicator. Then, the clusters, which correspond to the EAs, were sampled following proportional to size sampling. Within the sampled EAs, 15 households were systematically sampled. For each sampled household, we have detailed assessment of consumption and their level of education, employment, assets, with, in total, 174 auxiliary variables. This gives a sample of 136 EAs out of the 5019, in this Ghanaian region. The sampled EAs are anonymised, which means it is unknown which 136 EAs of the 5019 EAs are represented in the survey. Additionally, we have data available from the 2010 Ghanaian census for all EAs in the GAMA. Among others, the same 174 variables are measured in this census and in the sixth GLSS. The aim of this work is to produce estimates with uncertainty of the mean log household consumption at the EA level in the GAMA. In this paper, to deal with the higher number of auxiliary variables compared to the number of sampled EAs, we assess the performance of random forests and the LASSO (which performs variable selection) to estimate the mean household log consumption at the EA level in the GAMA. For the sake of comparison, we also consider a forward variable selection approach in the frequentist framework and a Bayesian shrinkage method. For all four approaches, we adopt EA-level models. Further, we propose a modification of the split conformal procedure to compute prediction intervals for the random forest and LASSO predictions while relaxing the assumption of identically distributed responses, which is necessary due to the complex sampling design. This paper is organised as follows. Section 1.1 briefly reviews the literature on small area estimation (SAE) and variable selection in the frequentist, Bayesian and machine learning frameworks. Section 2 describes the four methods that will be compared and the proposed procedure to produce prediction intervals for estimates obtained through random forests and the LASSO. Section 3 shows the results from two simulation studies. First, Section 3.1 presents a comparison between the proposed modified split conformal and the original split conformal procedures. Then, Section 3.2 provides a comparison between the four methods that perform variable selection. Section 4 discusses the results from the four methods applied to the Ghanaian datasets. Finally, Section 5 concludes the paper with a discussion. ### Literature review SAE concerns estimation of area-level summaries when data is sparse or non-existent in the areas (Rao and Molina, 2015). This area of research in survey sampling has greatly evolved in the last 50 years (Pfeffermann, 2002, 2013; Rao and Molina, 2015; Ghosh, 2020). 
Tzavidis et al. (2018) points out that the use of SAE by national statistical institutes (NSIs) and other organisations to produce official statistics exhibits this increasing popularity; e.g., the povmap software developed by the World Bank (Elbers et al., 2003; World Bank, 2015) and the Small Area Income and Poverty Estimates project carried out by the US Census Bureau (Census Bureau, 2018). In survey sampling, the design-based framework may be distinguished from the model-based framework. Design-based methods, also called randomisation methods, assume the variable of interest to be fixed in the finite population while the randomness comes from the sampling process. Direct (weighted) estimators have favourable design-based properties in large samples and rely only on the sampling weights and the recorded responses within each sampled area to produce areal estimates. Hence, estimates for non-sampled areas are missing. Additionally, data sparsity yields imprecise direct estimates at the areal level. Similarly, data sparsity within areas may lead to imprecise model-assisted estimates. These latter approaches also fall under the umbrella of design-based inference. Model-assisted methods are design-based approaches which model the responses to gain precision but are still design consistent (Sarndal et al., 2003). An alternative is to use model-based approaches. In this setting, the responses are no longer assumed fixed but treated as random variables which are modelled using auxiliary information and/or random effects. In model-based methods for SAE, it is common to use exterior sources of information to augment the auxiliary information from the sample to the entire finite population; for example, using information obtained from censuses. Tzavidis et al. (2018) describe a two-step approach to produce model-based small area estimates. First, a model is fitted using the survey responses and survey auxiliary variables. Then, the outcome is predicted for the entire finite population according to the estimated model parameters and finite population auxiliary information. Abundant auxiliary information may be measured in the sample, for the sampled areas, and through exterior sources, for all the areas of the region of interest. It may therefore be necessary to select a subset of covariates to model the response variable, in the presence of high-dimensional auxiliary information. In this way, precision can be increased as unnecessary auxiliary variables are not included. The inference procedure for model-based approaches can be performed under the frequentist or Bayesian frameworks, or with flexible parametric models via machine learning techniques. Machine learning methods are becoming more and more popular in the survey sampling community; see for example, Wang et al. (2014) and Breidt and Opsomer (2017). However, it is not straightforward to perform inference and assess the estimates' uncertainty under these approaches. For example, the bootstrap does not work for non-smooth targets (Dezeure et al., 2015). Among machine learning methods, random forests (Breiman, 2001) can be fitted to unit-level or area-level data for a flexible approach. Random forests are a collection of regression trees that recursively partition the responses into increasingly homogeneous subgroups (nodes), based on covariate splits. Random forests, which present the benefit of accommodating non-linear relationships and complex interactions, naturally select variables through these covariate splits. 
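For concreteness, an area-level random forest of this kind can be fitted in a couple of lines in R with the ranger package used later in this paper; the data frame and column names below are illustrative assumptions, not the paper's own code.

```r
library(ranger)

# area_data: one row per sampled area, with the areal mean outcome `ybar`
# and the areal covariate means as remaining columns
rf_fit <- ranger(ybar ~ ., data = area_data, num.trees = 1000)

# Predict areal means for all areas from their census covariate means
rf_pred <- predict(rf_fit, data = census_area_means)$predictions
```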
Each individual regression tree is fitted on a bootstrap sample of the original dataset. There is a growing literature on methods to measure the uncertainty of random forest point estimates. For instance, different Jackknife approaches have been proposed (Steinberger and Leeb, 2016; Wager et al., 2014; Wager and Athey, 2018). However, these procedures present drawbacks, such as their computational overheads. Additionally, it is unclear how they apply to survey data. Recently, Zhang et al. (2019) proposed the so-called out-of-bag (OOB) prediction intervals, which are computed based on quantiles of the random forest out-of-bag prediction errors. These denote the difference between a data point's outcome and its point estimate, obtained from a random forest grown without said data point. In simulation studies, Zhang et al. (2019) show that their proposed method performs similarly to the split conformal (SC) approach proposed by Lei et al. (2018). The SC approach may be used to compute prediction intervals for any modelling method (e.g., linear models or random forests). To compute prediction intervals for random forest estimates through the SC method, the original dataset is first split into two datasets. A random forest is trained on one subsample, and point estimates and their associated prediction errors are obtained for the other subsample. Then, the intervals are computed based on the empirical quantiles of the prediction errors from the second subsample. Note that while the OOB method proposed by Zhang et al. (2019) only estimates prediction intervals for random forest forests, the SC method can be applied to any modelling procedure used to obtain point estimates. A common feature of all these prediction interval methods is that the data are assumed to be independently and identically distributed. This is a strong assumption and is not usually true for data gathered from a complex survey design. Inference procedures for model-based approaches can also follow the frequentist or the Bayesian paradigms. In these frameworks, variable selection is an important yet contentious research topic. In the frequentist framework, two-step procedures are common. Variables are first iteratively selected (forward selection) or removed (backward elimination) to model the outcome, based on the optimisation of some criterion (e.g., AIC, BIC, \(R^{2}\)). Then, a final model that includes only the selected covariates is fitted to the data. In SAE, it is common to select variables by comparing models through some criterion (for example, AIC or BIC, or survey sampling adjusted versions); see e.g., Han (2013); Rao and Molina (2015) and Lahiri and Suntornchost (2015). In the frequentist framework, regularisation methods have also been proposed in the literature, such as ridge regression and the LASSO (Tibshirani, 1996, 2011; McConville et al., 2017). These methods apply constraints to the regression parameters. However, in the case of the LASSO, these constraints yield estimates of the model parameters whose uncertainty estimation is difficult, especially in a survey setting. In a simulation study, Lei et al. (2018) show that their proposed SC method performs well in computing prediction intervals for predictions obtained through the LASSO, when the data are independently and identically distributed. In the Bayesian framework, variable selection is conducted by imposing informative priors on the model parameters. 
Multiple shrinkage priors have been proposed in the literature, for example, Bayesian ridge regression and the Bayesian LASSO (Hans, 2010). In the former, a Gaussian prior is assigned to the regression parameters, while a double-exponential distribution is used for the latter. It can be shown that, under the respective priors, computing the maximum _a posteriori_ estimates of the parameters results exactly in ridge-type and LASSO-type estimators (Reich and Ghosh, 2019). A more recent popular approach (Carvalho et al., 2010) is the use of the horseshoe prior, which imposes _a priori_ a heavier weight towards \(0\) than a normal or double-exponential distribution.

## 2 Methods

Let a region be divided into \(M\) non-overlapping areas, \(A_{c},\ c=1,\ldots,M\). Denote by \(N_{c}\) the number of units in \(A_{c}\), with outcomes \(y_{ck},\ k=1,\ldots,N_{c}\). The main goal is to estimate the areal mean \(\overline{y}_{c}=(1/N_{c})\sum_{k=1}^{N_{c}}y_{ck}\) for all areas \(c=1,\ldots,M\), using a sample of \(n_{c}\) units taken from \(c=1,\ldots,m\) areas. Denote by \(s\) the set of area and household indices included in the sample and denote by \(s_{c},\ c=1,\ldots,M\), the set of sampled units in the \(c\)-th area. Let \(f_{c}=n_{c}/N_{c}\) be the sampling fraction within each area. For any variable \(a\), let \(\overline{a}_{c}=(1/N_{c})\sum_{k=1}^{N_{c}}a_{ck},\ c=1,\ldots,M\), be the population areal mean, and \(\overline{a}_{c}^{(s)}=(1/n_{c})\sum_{k\in s_{c}}a_{ck},\ c=1,\ldots,m\), and \(\overline{a}_{c}^{(ns)}=(1/(N_{c}-n_{c}))\sum_{k\notin s_{c}}a_{ck}=\left(\overline{a}_{c}-f_{c}\overline{a}_{c}^{(s)}\right)/(1-f_{c}),\ c=1,\ldots,M\), the areal means for the sampled (superscript \((s)\)) and non-sampled (superscript \((ns)\)) units, respectively. For all \(M\) areas, the estimation target may be decomposed as follows \[\overline{y}_{c}=\frac{1}{N_{c}}\left(\sum_{k\in s_{c}}y_{ck}+\sum_{k\notin s_{c}}y_{ck}\right)=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\overline{y}_{c}^{(ns)},\ c=1,\ldots,M. \tag{1}\] To estimate \(\overline{y}_{c}\), the non-sampled mean, \(\overline{y}_{c}^{(ns)}\), remains to be estimated for all \(M\) areas. Let \(\widehat{\overline{Y}}_{c}^{(ns)},\ c=1,\ldots,M\), be the estimator of \(\overline{y}_{c}^{(ns)},\ c=1,\ldots,M\). The prediction approach estimator (Lohr, 2021) for the target of inference is \[\widehat{\overline{Y}}_{c}=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\widehat{\overline{Y}}_{c}^{(ns)},\ c=1,\ldots,M. \tag{2}\] The uncertainty of \(\widehat{\overline{Y}}_{c}\) may be measured using prediction intervals of level \((1-\alpha)\%\), \(\text{PI}_{(1-\alpha)\%}\), of the form \[\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c}\right]=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c}^{(ns)}\right],\ c=1,\ldots,M. \tag{3}\] Note that for a non-sampled area \(c^{\prime}\), \(f_{c^{\prime}}=0\) and the estimator reduces to \(\widehat{\overline{Y}}_{c^{\prime}}=\widehat{\overline{Y}}_{c^{\prime}}^{(ns)}\), with prediction interval \(\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c^{\prime}}\right]=\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c^{\prime}}^{(ns)}\right]\). Random forests and the LASSO are considered to compute \(\widehat{\overline{Y}}_{c}^{(ns)}\) in the model-based framework. For the sake of comparison, we also consider a forward variable selection approach in the frequentist paradigm and a Bayesian shrinkage method.
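To make the prediction approach concrete, the following is a minimal R sketch of estimator (2); all object names are illustrative assumptions (the sampling fractions `f_c`, the observed sampled areal means `ybar_s`, and model predictions `yhat_ns` of the non-sampled areal means).

```r
# Prediction approach estimator (2): combine the observed sampled part with
# the model-predicted non-sampled part of each areal mean.
prediction_estimator <- function(f_c, ybar_s, yhat_ns) {
  # For non-sampled areas, f_c = 0 and the sampled mean does not contribute.
  ybar_s[f_c == 0] <- 0
  f_c * ybar_s + (1 - f_c) * yhat_ns
}

# Example: a sampled area with f_c = 0.5 and a non-sampled area with f_c = 0.
prediction_estimator(f_c = c(0.5, 0), ybar_s = c(10.2, NA), yhat_ns = c(10.0, 9.4))
```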
In this model-based framework, the finite population response values \(\left\{y_{ck},\ c=1,\ldots,M,\ k=1,\ldots,N_{c}\right\}\) are assumed to be a realisation of super population independent random variables \(Y_{ck}\) that follow the model \[\mathbb{E}(Y_{ck})=f(\boldsymbol{x}_{ck},\boldsymbol{\beta})\quad\text{and}\quad\mathbb{V}(Y_{ck})=\sigma^{2}, \tag{4}\] where \(\boldsymbol{x}_{ck}\) is a \(p\)-dimensional vector of covariates. Consequently, it is assumed that the non-sampled units follow the same model as the sampled units. The four modelling approaches assume there are covariates available from the sample, \(\left\{\boldsymbol{x}_{ck},\ c,k\in s\right\}\), as well as areal covariate means, \(\overline{\boldsymbol{x}}_{c},\ c=1,\ldots,M\), which are known for all the areas of the finite population. Such information may be obtained from a census. Inference is carried out at the areal level in all four methods. The super population model (4) implies the following sampled and non-sampled moments \[\mathbb{E}\left(\overline{Y}_{c}^{(s)}\right)=f(\overline{\boldsymbol{x}}_{c}^{(s)},\boldsymbol{\beta}),\qquad\mathbb{V}\left(\overline{Y}_{c}^{(s)}\right)=\sigma^{2}/n_{c},\] \[\mathbb{E}\left(\overline{Y}_{c}^{(ns)}\right)=f(\overline{\boldsymbol{x}}_{c}^{(ns)},\boldsymbol{\beta}),\qquad\mathbb{V}\left(\overline{Y}_{c}^{(ns)}\right)=\sigma^{2}/(N_{c}-n_{c}). \tag{5}\] Therefore, inference is conducted using \(\left\{\left(\overline{y}_{c}^{(s)},\overline{\boldsymbol{x}}_{c}^{(s)}\right),\ c=1,\ldots,m\right\}\) and the non-sampled mean predictions, \(\widehat{\overline{Y}}_{c}^{(ns)},\ c=1,\ldots,M\), are computed using the available covariates' non-sampled means, \(\overline{\mathbf{x}}_{c}^{(ns)},\ c=1,\ldots,M\).

### Machine learning approach

First, we consider a random forest prediction approach. This non-parametric method makes no further assumptions beyond Model (4). The corresponding moments of the sampled and non-sampled outcome means are as in (5). Following Breiman (2001), random forest point estimates are the average over \(B\) point estimates obtained from training \(B\) independent regression trees on \(B\) bootstrap versions of the original sample. Each regression tree partitions the bootstrap response values based on splitting rules applied to covariates. A random forest algorithm is described in appendix B. To measure the uncertainty associated with the random forest predictions, we propose a modification of the SC prediction intervals of Lei et al. (2018) which relaxes the assumption of identically distributed sampled and non-sampled data points. The original SC procedure assumes \(\overline{Y}_{c}^{(s)}\) and \(\overline{Y}_{c}^{(ns)}\) to be independently and identically distributed (i.i.d.). However, as shown in (5), \(\overline{Y}_{c}^{(s)}\) and \(\overline{Y}_{c}^{(ns)}\) are not identically distributed. Hence, in the proposed modified SC procedure, we assume the mean structures to be similar and allow the variances to be scaled differently, as is the case in (5). Specifically, in this context of a complex sampling design, we assume the variance is independent of the sample strata. The unit-level variance, \(\sigma^{2}\), is assumed fixed across the strata, and the sampled and non-sampled areal-level variances only vary with the number of sampled and non-sampled units, \(n_{c}\) and \(N_{c}-n_{c}\), respectively. We propose to scale the residuals computed in the original SC procedure before computing the empirical quantile necessary to the prediction intervals.
Said quantile is then scaled when computing the prediction intervals. The proposed scaled SC procedure can be described through the following steps: 1. Randomly split \(\left\{\left(\overline{y}_{c}^{(s)},\overline{\mathbf{x}}_{c}^{(s)}\right),\ c=1, \ldots,m\right\}\) into two equal sized datasets. Denote by \(S_{1}\) and \(S_{2}\) the resulting two sets of area indices; 2. Train a random forest on \(\left\{\left(\overline{y}_{c}^{(s)},\overline{\mathbf{x}}_{c}^{(s)}\right),\ c\in S_{1}\right\}\) and predict \(\left\{\widehat{\overline{Y}}_{c}^{(S_{2})},\ c\in S_{2}\right\}\); 3. Compute the scaled absolute residuals \(R_{c}=\sqrt{n_{c}}\times\left|\overline{y}_{c}^{(s)}-\widehat{\overline{Y}}_{c }^{(S_{2})}\right|,\ c\in S_{2}\); 4. Find \(d_{\alpha}\), the \(k_{\alpha}\)th smallest residual \(R\), for \(k_{\alpha}=\lceil(m/2+1)(1-\alpha)\rceil\); 5. Let the prediction interval be \(\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c}^{(ns)}\right]= \widehat{\overline{Y}}_{c}^{(ns)}\pm d_{\alpha}/\sqrt{N_{c}-n_{c}},\ c=1, \ldots,M\). Hence, with a random forest procedure, the estimator and its uncertainty (2) and (3) become \[\widehat{\overline{Y}}_{c}=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\left(\sum_{ c^{\prime}=1}^{m}w_{c^{\prime}}(\overline{\mathbf{x}}_{c}^{(ns)})\overline{y}_{c^{ \prime}}^{(s)}\right),\quad\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{ Y}}_{c}\right]=\widehat{\overline{Y}}_{c}\pm(1-f_{c})\frac{d_{\alpha}}{\sqrt{N_{c}-n_{c} }},\] where the weights \(w_{c^{\prime}}(\cdot)\) result from the random forest procedure shown in appendix B. ### Frequentist approach: the LASSO Second, we consider the LASSO to predict the areal non-sampled means while performing variable selection. The LASSO estimates \(\widehat{\mathbf{\beta}}^{\text{LASSO}}\) by solving \(\min_{\mathbf{\beta}\in\mathbb{R}^{p}}\left\{\left\|\overline{\mathbf{y}}^{(s)}- \overline{\mathbf{x}}^{(s)}\mathbf{\beta}\right\|_{2}^{2}/(2m)+\lambda\left\|\mathbf{ \beta}\right\|_{1}\right\},\)\(\lambda\geq 0\), where \(\overline{\mathbf{y}}^{(s)}=\left[\overline{y}_{1}^{(s)},\ldots,\overline{y}_{m}^{(s)} \right]^{\top}\) and \(\overline{\mathbf{x}}^{(s)}=\left[\overline{\mathbf{x}}_{1}^{(s)^{\top}},\ldots, \overline{\mathbf{x}}_{m}^{(s)^{\top}}\right]^{\top}\). Note that the shrinkage penalty parameter \(\lambda\) is fixed after a 10-fold cross-validation, seeking the smallest test MSE. To measure the uncertainty associated with these LASSO predictions, the proposed scaled SC approach described above is applied to obtain \(d_{\alpha}^{\text{LASSO}}\), for an interval of level \((1-\alpha)\%\). Note that in the second step of the proposed approach, the LASSO is fitted to the first subsample \(S_{1}\), instead of a random forest. Therefore, the estimator and its uncertainty (2) and (3) become \[\widehat{\overline{Y}}_{c}=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\left[\overline {\mathbf{x}}_{c}^{(ns)^{\top}}\widehat{\mathbf{\beta}}^{\text{LASSO}}\right],\quad \text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c}\right]=\widehat{ \overline{Y}}_{c}\pm(1-f_{c})\frac{d_{\alpha}^{\text{LASSO}}}{\sqrt{N_{c}-n_{c }}}.\] ### Frequentist approach: forward variable selection As a comparison, we consider a frequentist method with the commonly used forward approach with AIC as a variable selection criterion. Model (4) is completed by assuming the errors are normally distributed. To predict \(\widehat{\overline{Y}}_{c}^{(ns)}\), the forward approach is a two-step procedure. 
First, a subset of \(K\) covariates \(\mathbf{z}\) is selected among the available \(\mathbf{x}\)'s. To that end, linear models are iteratively fitted, adding one covariate at a time based on the resulting AIC value. Then, using the selected covariates, a linear model is fitted: \(\overline{y}_{c}^{(s)}\sim\mathcal{N}\left(\overline{\mathbf{z}}_{c}^{(s)^{\top}}\mathbf{\eta},\sigma^{2}/n_{c}\right),\ c=1,\ldots,m\), to estimate \(\widehat{\mathbf{\eta}}\), \(\widehat{\mathbb{V}}\left(\widehat{\mathbf{\eta}}\right)\) and \(\widehat{\sigma}\). The steps required to run this forward approach are detailed in Appendix A. The estimator and uncertainty (2) and (3) become \[\widehat{\overline{Y}}_{c}=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\left[\overline{\mathbf{z}}_{c}^{(ns)^{\top}}\widehat{\mathbf{\eta}}\right],\] \[\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c}\right]=\widehat{\overline{Y}}_{c}\pm q_{\alpha}(1-f_{c})\sqrt{\overline{\mathbf{z}}_{c}^{(ns)^{\top}}\widehat{\mathbb{V}}\left(\widehat{\mathbf{\eta}}\right)\overline{\mathbf{z}}_{c}^{(ns)}+\frac{\widehat{\sigma}^{2}}{N_{c}-n_{c}}},\] where \(q_{\alpha}\) denotes the \(\alpha\)-level quantile from a \(\mathcal{N}(0,1)\) distribution. Note that uncertainty in the covariates selected is not accounted for.

### Bayesian approach

Finally, a Bayesian approach is considered, where all the available covariates, \(\mathbf{x}\), are used in a single step to model the outcome while applying the horseshoe prior (Carvalho et al., 2010) to the regression parameters. Similar to the forward approach, a normal distribution is further assumed for Model (4). The observed sampled means are modelled through \(\overline{y}_{c}^{(s)}\sim\mathcal{N}(\overline{\mathbf{x}}_{c}^{(s)\top}\mathbf{\beta},\sigma^{2}/n_{c}),\ c=1,\ldots,m\), with priors \(\beta_{j}\sim\mathcal{N}(0,\lambda_{j}^{2}\tau^{2}),\ j=1,\ldots,p\), and \(\tau,\ \lambda_{1},\ldots,\lambda_{p}\sim\mathcal{HC}(0,1)\), where \(\mathcal{HC}\) denotes the half-Cauchy distribution. In this prior, \(\tau\) corresponds to the global shrinkage and \(\lambda_{j}\), to the local shrinkage. Then, inference is conducted through the posterior distributions, which are approximated through a Markov chain Monte Carlo (MCMC) procedure. The estimator and its uncertainty (2) and (3) become \[\widehat{\overline{Y}}_{c}=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\left(\frac{1}{L}\sum_{\ell=1}^{L}\widehat{\overline{Y}}_{c}^{(ns)(\ell)}\right),\] \[\text{PI}_{(1-\alpha)\%}\left[\widehat{\overline{Y}}_{c}\right]=f_{c}\overline{y}_{c}^{(s)}+(1-f_{c})\left[\widehat{\overline{Y}}_{c,\text{lower }\alpha}^{(ns)},\widehat{\overline{Y}}_{c,\text{upper }\alpha}^{(ns)}\right],\] where \(\widehat{\overline{Y}}_{c}^{(ns)(\ell)}\sim\mathcal{N}\left(\overline{\mathbf{x}}_{c}^{(ns)^{\top}}\mathbf{\beta}^{(\ell)},{\sigma^{(\ell)}}^{2}/(N_{c}-n_{c})\right),\ \ell=1,\ldots,L,\ c=1,\ldots,M\), is the \(\ell\)th element of the MCMC posterior predictive sample, with \(\mathbf{\beta}^{(\ell)}\) and \(\sigma^{(\ell)}\) the \(\ell\)th elements in the MCMC samples. The \(\alpha\)-level empirical quantiles from the posterior predictive sample are denoted \(\widehat{\overline{Y}}_{c,\text{lower }\alpha}^{(ns)}\) and \(\widehat{\overline{Y}}_{c,\text{upper }\alpha}^{(ns)}\).

## 3 Simulation study

This section presents two simulation studies to assess the performance of the proposed scaled SC procedure and to compare the four modelling methods.
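As a concrete illustration of the two-step forward variable selection approach of Section 2, the sketch below uses the stats::step function in R with the AIC (the default penalty, k = 2). The data frame `area_means`, its outcome column `ybar` and the census data `census_area_means` are illustrative assumptions, and the weighting implied by the \(\sigma^{2}/n_{c}\) variance structure is omitted for brevity.

```r
# Step 1: forward selection by AIC, starting from the intercept-only model.
null_fit <- lm(ybar ~ 1, data = area_means)
upper_formula <- reformulate(setdiff(names(area_means), "ybar"), response = "ybar")
fwd_fit <- step(null_fit, scope = upper_formula, direction = "forward", trace = 0)

# Step 2: the returned object is the linear model refitted with the selected
# covariates; it is used to predict the non-sampled areal means.
yhat_ns <- predict(fwd_fit, newdata = census_area_means)
```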
Section 3.1 focuses on the proposed scaled SC method that computes prediction intervals while relaxing the assumption of i.i.d. data points. In Section 3.2, different generating models and sampling designs are studied to compare the four model selection methods described in Section 2. Inference is performed in R. The random forests of \(B=1000\) trees are trained using the ranger package (Wright et al., 2022). For each simulation scenario, the random forest hyperparameters are fixed after a cross-validation study of different values. The code to conduct the proposed scaled SC procedure for random forest estimates is available in appendix C. The LASSO method is conducted through the glmnet package, using the cv.glmnet function to define the optimal shrinkage penalty parameter. The Bayesian inference is performed with the NIMBLE package (de Valpine et al., 2017). Convergence of the MCMC chains is assessed through trace plots, effective sample sizes and the \(\widehat{R}\) statistic (Gelman and Rubin, 1992).

### Simulation study: scaled split conformal procedure

To assess the performance of the proposed scaled SC procedure, five simulation scenarios are considered. One finite population is created, and the different simulation scenarios correspond to the various sampling designs applied to that finite population. Consider a finite population of \(M=500\) areas of sizes \(N_{c},\ c=1,\ldots,M,\) with \(\min_{c}(N_{c})=50\) and \(\max_{c}(N_{c})=500\). For \(c=1,\ldots,M,\) and \(k=1,\ldots,N_{c}\), the response variable has distribution \[y_{ck}\sim\mathcal{N}(9.5+x_{1,ck}-x_{2,ck}+2x_{3,ck}-x_{4,ck}+2x_{5,ck}+x_{6,ck},1),\] with 6 unit-level covariates, \(x_{1},\ldots,x_{6}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N}(0,1)\). From the finite population, \(R=500\) samples are drawn according to five sampling designs, which constitute the simulation scenarios:

1. (Stratified) Select all \(m=M=500\) areas and within each area, sample \(n_{c}=0.5N_{c},\ c=1,\ldots,m\) units;
2. (Stratified) Select all \(m=M=500\) areas and within each area, sample \(n_{c}=0.7N_{c},\ c=1,\ldots,m\) units;
3. (One-stage) Sample \(m=M/2\) areas and within each area, select all \(n_{c}=N_{c},\ c=1,\ldots,m\) units;
4. (Two-stage) Sample \(m=M/2\) areas and within each area, sample \(n_{c}=0.5N_{c},\ c=1,\ldots,m\) units;
5. (Two-stage) Sample \(m=M/2\) areas and within each area, sample \(n_{c}=0.7N_{c},\ c=1,\ldots,m\) units.

The proportion of sampled areas is higher in the stratified sampling designs, as all areas are selected. Hence, the areal-level inference is conducted on more data points in the first two scenarios than in the remaining three, and we expect any modelling method to perform better in these two scenarios. The one-stage and two-stage designs all yield \(m=M/2=250\) areal-level responses. The difference between these last three scenarios is in the sampling fraction within areas. Out of the five simulation scenarios considered, the fourth one is the closest to the Ghanaian data analysed in Section 4. For each simulation scenario and in each sample, the estimates described in equation (2) are computed using four methods: a linear model that includes the correct six covariates, a linear model that omits \(x_{4}\), \(x_{5}\) and \(x_{6}\), a random forest method that considers all six covariates to grow the trees, and a linear LASSO model. The random forest hyperparameters are set after a cross-validation study as \(mtry=2\) and \(nodesize=5\).
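For reference, a minimal R sketch of these four fits on the areal-level data (one row per sampled area, with outcome `ybar` and covariate means `x1`, ..., `x6`; all object names such as `dat` and `new_x` are illustrative assumptions):

```r
library(ranger)
library(glmnet)

fit_correct <- lm(ybar ~ x1 + x2 + x3 + x4 + x5 + x6, data = dat)  # correct covariates
fit_missing <- lm(ybar ~ x1 + x2 + x3, data = dat)                  # omits x4, x5, x6
fit_rf <- ranger(ybar ~ ., data = dat, num.trees = 1000,
                 mtry = 2, min.node.size = 5)                       # tuned values from the text
x_mat <- as.matrix(dat[, paste0("x", 1:6)])
fit_lasso <- cv.glmnet(x_mat, dat$ybar, alpha = 1)                  # lambda chosen by 10-fold CV

# Predictions of the areal means from new covariate means `new_x`
pred_rf <- predict(fit_rf, data = new_x)$predictions
pred_lasso <- predict(fit_lasso, newx = as.matrix(new_x[, paste0("x", 1:6)]), s = "lambda.min")
```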
For each scenario, in each sample and for each modelling method, 50%, 80% and 95% prediction intervals (3) are computed following the SC procedure and the proposed scaled SC procedure. These non-parametric methods may be applied to any modelling approach: a linear regression method as well as a machine learning method. The objective of this simulation study is to assess whether the proposed scaled SC method yields valid coverage rates of the prediction intervals. All results are shown in Figure 1. The observed coverages for each scenario, method and interval level are shown in the left panel and the interval widths, in the right panel. The simulation scenarios are identified by their 1-5 number, as described above. Further, we differentiate the results for the sampled and non-sampled areas (Yes/No, respectively). Scenarios 1 and 2 only present results for sampled areas, as all areas are sampled in a stratified sample. Scenario 3 only shows the results for non-sampled areas because the target is exactly estimated in the sampled areas since all units are selected in a one-stage sample. Therefore, in the third scenario, we measure the predictions' uncertainty only in the non-sampled areas. In the first scenario and for the sampled areas in the fourth scenario, \(n_{c}\) and \(N_{c}-n_{c}\) are equal. Therefore, the data points are i.i.d. and the SC and proposed scaled SC approaches are the same. In these scenarios, both methods yield the right coverages of the prediction intervals, regardless of the modelling method. In terms of interval widths, the linear model with incorrect set of covariates leads to the widest intervals under both SC procedures. The linear model with correct covariates and the LASSO method yields the narrowest intervals regardless of the SC method. For all other sampling schemes, \(n_{c}\neq N_{c}-n_{c}\). Therefore, the sampled and non-sampled means, \(\overline{Y}_{c}^{(s)}\) and \(\overline{Y}_{c}^{(sn)}\), are not identically distributed, with differently scaled variances, \(\sigma^{2}/n_{c}\) and \(\sigma^{2}/(N_{c}-n_{c})\), respectively. In these cases, the original SC intervals obtained for all four modelling methods do not attain the right coverages. The original SC leads to under-coverage of the prediction intervals. On the other hand, the proposed scaled SC procedure produces prediction intervals with the right coverages, regardless of the interval level and modelling method. In particular, when fitting a linear model, with the LASSO constraint or with and without the right mean structure, the scaled SC intervals have exactly the right coverages. When modelling with a non-parametric random forest approach, the scaled SC prediction intervals show a slight error in the coverage rate. Specifically, in scenarios 2 and 5, the random forest shows slight under-coverage. In terms of interval width, the proposed scaled SC intervals tend to be a little wider than the original SC ones, for all interval levels. Regardless of the simulation scenario, the SC intervals and proposed scaled SC intervals obtained for the random forest estimates tend to be narrower than the ones obtained for the linear estimates using the incorrect set of covariates. 
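A compact R sketch of the scaled SC interval of Section 2, applicable to predictions from any of the fitted models above; the inputs are vectors over the areas of the second subsample \(S_{2}\) and over the areas to be predicted, and all names are illustrative assumptions.

```r
scaled_sc_interval <- function(ybar_s2, pred_s2, n_s2, pred_new, n_ns_new, alpha = 0.05) {
  # Scaled absolute residuals on the second subsample (step 3)
  r <- sqrt(n_s2) * abs(ybar_s2 - pred_s2)
  # k_alpha-th smallest residual (step 4), with |S2| residuals available
  k <- ceiling((length(r) + 1) * (1 - alpha))
  d_alpha <- sort(r)[min(k, length(r))]
  # Prediction interval for the non-sampled means (step 5); n_ns_new = N_c - n_c
  cbind(lower = pred_new - d_alpha / sqrt(n_ns_new),
        upper = pred_new + d_alpha / sqrt(n_ns_new))
}
```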
Figure 1: Coverages and widths of the prediction intervals (PI) obtained from the proposed scaled and original split conformal (SC) procedures for the four modelling methods and across the five simulation scenarios (1-5). Yes: coverages and widths across the sampled areas; No: coverages and widths across the non-sampled areas.

### Simulation study: prediction methods comparison

To compare the performance of the random forest and the LASSO methods to the frequentist forward variable selection and the Bayesian shrinkage approaches, as described in Section 2, a simulation study is conducted considering three generating models for the outcome and five sampling designs. Three finite populations of \(M=1000\) areas of sizes \(N_{c}\) with \(\min_{c}(N_{c})=50\) and \(\max_{c}(N_{c})=500\) are created as follows, for \(c=1,\ldots,M\) and \(k=1,\ldots,N_{c}\):

* (A) \(y_{ck}\sim\mathcal{N}\left(20+\mathbf{x}_{ck}^{\top}\mathbf{\beta},0.5^{2}\right),\) where the covariates are such that \(\mathbf{x}_{ck}\sim\mathcal{N}_{100}(\mathbf{0},\mathbf{I})\) and with coefficients \(\mathbf{\beta}^{\top}=(1,-1,2,-1,2,1,2,1,-1,1,0,\ldots,0)\);
* (B) \(y_{ck}\sim\mathcal{N}\left(20+\mathbf{x}_{ck}^{\top}\mathbf{\beta},0.5^{2}\right),\) where the covariates are such that \(\mathbf{x}_{ck}\sim\mathcal{N}_{100}(\mathbf{0},\Sigma_{x})\), with \(\Sigma_{x}=\begin{bmatrix}1&0.5&\ldots&0.5\\ 0.5&1&\ldots&0.5\\ \vdots&\vdots&\ddots&\vdots\\ 0.5&0.5&\ldots&1\end{bmatrix},\) and \(\mathbf{\beta}^{\top}=(1,-1,2,-1,2,1,2,1,-1,1,0,\ldots,0)/10\);
* (C) \(y_{ck}\sim\mathcal{N}\left(x_{1,ck}^{2}+\exp\left(x_{2,ck}^{2}\right),0.3\right),\) with covariates \(x_{j,ck}\sim\mathcal{U}(-1,1)\), \(j=1,\ldots,100\).

Populations A and B assume a linear relationship between the outcome and the first 10 covariates. In scenario B, however, the strength of the association is weak and the covariates are correlated. Population C is inspired by Scornet (2017) and assumes a non-linear relationship between the outcome and covariates. Throughout this simulation study, areas are referred to interchangeably as "areas" or "EAs". From each finite population, \(R=100\) samples are drawn following the two sampling schemes:

1. (Stratified) Select all \(m=M=500\) areas and within each area, sample \(n_{c}=15,\ c=1,\ldots,m\) units;
2. (Two-stage) Sample \(m=M/2\) areas and within each area, sample \(n_{c}=15,\ c=1,\ldots,m\) units.

These simulation scenarios were motivated by the Ghanaian data analysed in Section 4, where only 15 households were sampled within the selected areas. Additional sampling designs are considered in Appendix D, with a higher number of sampled units within the selected areas. For each scenario, the estimates and their prediction intervals are computed as described in Section 2. Further, for each scenario, the estimates and their uncertainty are also computed assuming anonymised EAs. In this context, the modelling methods are trained on the sample and predictions are obtained ignoring which areas have been sampled, that is, assuming \(f_{c}=0,\ c=1,\ldots,M,\) at the prediction stage. Once again, this study with anonymised EAs is run because the available Ghanaian data that is analysed in Section 4 does not identify the sampled EAs. The random forest hyperparameters are set following a cross-validation study conducted for each simulation scenario. A random forest with hyperparameters set to \((mtry,nodesize)=(10,5)\) is found to perform the best for both sampling schemes in populations A and B.
In population C, we set \((mtry,nodesize)=(70,200)\) and \((mtry,nodesize)=(70,9)\), for the stratified samples and the two-stage samples, respectively. In all scenarios, the Bayesian approach runs through a MCMC procedure with two chains of 5,000 iterations, which include a burn-in period of 2,500 iterations. We find out that with these values, practical convergence was attained, as assessed by the trace plots, effective sample sizes and \(\widehat{R}\) statistics (Gelman and Rubin, 1992). For a particular finite population and sampling scheme, running the random forest over 100 replicates takes 55 minutes, while the LASSO takes 4 minutes, the forward method takes 7 minutes and the Bayesian approach, 2.5 hours. The methods' performances are compared through the mean absolute bias, mean squared error (MSE), coverages of 50%, 80% and 95% prediction intervals and their proper interval score (Gneiting and Raftery, 2007). Let \([L,U]\) be the \((1-\alpha)\%\) prediction interval for a quantity \(\theta\) to estimate. The proper score is defined as \((U-L)+2/\alpha\left[(L-\theta)\mathds{1}_{L>\theta}+(\theta-U)\mathds{1}_{U< \theta}\right]\). Smaller values of the interval proper scores are preferred, indicating narrow intervals and average close to the nominal. Additionally, we extract which covariates have been selected from each method. Note that in the Bayesian framework, a covariate is said to be selected when its coefficient's posterior 95% credible interval does not include 0. For the random forest approach, when the \(p\)-value related to a variable's importance (Altmann et al., 2010) is smaller than 0.05, said variable is deemed selected. The variable importance is computed based on results from random forests fitted with permutations of the set of covariates. Figure 2 shows the selected covariates by each method for each finite population and sampling design. When the association is linear between the covariates and the outcome (A and B), regardless of the sampling design, the forward approach tends to adequately select the true auxiliary information. However, it also tends to select irrelevant variables. Each unimportant covariate is selected about 25% of the time by the forward method. In population A, the LASSO and Bayesian approaches also select the right covariates 100% of the time, while almost never including redundant covariates. When the association is weak, the LASSO and Bayesian methods tend to miss the right covariates 20%-50% of the time, depending on the sampling design. In scenarios A and B, the random forest method misses the right covariates 10%-50% of the time, while it always captures the correct set when the association is non-linear. The LASSO selects 1 out of 2 correct covariates about 80% of the time in this third population. Both the forward and Bayesian approaches miss the correct set of covariates in scenario C almost 100% of the time and include irrelevant variables. Figure 3 shows the absolute biases multiplied by 100, MSEs, prediction intervals' coverages and proper scores for all methods, generating models (A-C) and sampling schemes. The results for the sampled and non-sampled areas are differentiated through the red and black symbols, respectively. Only results for the sampled EAs are produced for the stratified sampling design, as all areas are sampled. The results assuming the anonymised EAs are distinguished from the ones in which we know which areas have been sampled by the circle and cross symbols, respectively. 
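The proper interval score defined above can be written directly as a small R function (a one-to-one transcription of the formula; argument names are illustrative):

```r
interval_score <- function(lower, upper, theta, alpha) {
  (upper - lower) +
    (2 / alpha) * ((lower - theta) * (lower > theta) + (theta - upper) * (theta > upper))
}

# Example: a 95% interval [9.1, 10.3] that misses a true value of 10.6
interval_score(lower = 9.1, upper = 10.3, theta = 10.6, alpha = 0.05)
```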
For all performance measures, the four modelling approaches yield similar results whether or not it is known which areas have been sampled. For example, in population C with a two-stage sampling design and regardless of the modelling method, the MSE results over the anonymised sampled EAs are not worse than the results over the non-sampled EAs. This result is reassuring for the analysis of the Ghanaian data, where the sampled EAs are anonymised. In terms of bias, all methods are virtually unbiased with mean absolute biases between 0 and 0.8, regardless of the population and sampling design. The random forest tends to yield slightly higher mean absolute biases, compared to the forward, LASSO and Bayesian methods. As expected, the mean absolute biases are higher for all modelling methods in the non-linear scenario (C). Interestingly, the random forest, which always selects the right covariates in scenario C (see Figure 2), yields larger biases than the other three methods that miss the right set of covariates almost 100% of the time. In terms of MSE, there does not seem to be a difference between the LASSO, forward and Bayesian methods, for all populations and sampling schemes. These three methods, which fit linear models, yield slightly smaller MSEs than the random forest approach when the association between the covariates and the outcome is strongly linear (A).

Figure 2: Covariate selection frequency for each method across the 6 simulation scenarios. Left of the vertical dashed line: true covariates used in the generating models.

For scenario C, however, the random forest produces smaller MSEs than the three linear modelling approaches. The random forest method divides the other three modelling methods' MSEs by a factor of 3 in population C, regardless of the sampling scheme. The prediction intervals computed for all four methods in each sampling scheme for populations A and B yield the right coverages. These intervals are wider for the random forest method in population A, for both sampling designs, as deduced from the proper interval scores. When the relationship between the outcome and the covariates is non-linear (C), we observe that all four modelling methods yield under-coverage in both sampling designs. The random forest method, which accommodates a non-linear relationship, leads to prediction intervals with slightly higher coverage rates than the other three methods, but still misses the right rates by about 30%. Note, however, that the random forest approach produces prediction intervals with smaller proper scores than the other three modelling methods.

## 4 Areal log consumption prediction in the Greater Accra Metropolitan Area

In this section, the four modelling methods described in Section 2 are applied to the data for the Greater Accra Metropolitan Area (GAMA) in Ghana. Using the sixth GLSS and the 2010 Ghanaian census, a complete map of the mean equivalised consumption (in the log scale) is produced across the \(M=5019\) enumeration areas (EAs), for each method. Note that in the household survey, only \(m=136\) EAs have been sampled. To provide estimates in the missing areas, the response values are modelled using the \(p=174\) auxiliary variables which are measured in both available datasets. For the random forest approach, a cross-validation study on the survey data was run to set the hyperparameters to \(B=1000\) trees grown with \(mtry=25\) and \(nodesize=3\).
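A small sketch of how such a hyperparameter search can be carried out in R; here the out-of-bag error reported by ranger is used to compare a small grid of values, which is an assumption (the paper does not spell out the exact search), and `gama_survey` is an illustrative name for the areal-level survey data.

```r
library(ranger)

grid <- expand.grid(mtry = c(5, 10, 25, 50), min.node.size = c(3, 5, 10))
grid$oob_mse <- apply(grid, 1, function(g) {
  ranger(ybar ~ ., data = gama_survey, num.trees = 1000,
         mtry = g["mtry"], min.node.size = g["min.node.size"])$prediction.error
})
grid[which.min(grid$oob_mse), ]  # retained (mtry, min.node.size) combination
```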
Figure 3: Mean absolute bias, MSE, coverages and proper scores of the prediction intervals, obtained for each method across the 6 simulation scenarios. RF: Random forest approach.

The Bayesian approach required two MCMC chains of 100,000 iterations, including a burn-in period of 50,000, and a thinning factor of 15. Convergence was attained as assessed by the trace plots, effective sample sizes and \(\widehat{R}\) statistics. Wakefield et al. (2020) point out the importance of including the design variables in model-based small area estimation methods. To that end, the urban indicator, which corresponds to the sample strata, is added to all four modelling methods. In the forward selection approach, this inclusion means that the urban indicator is added to the vector of selected covariates, even if it was not selected in the first step. In the Bayesian shrinkage and LASSO methods, it means there is no shrinkage applied to the regression coefficient that corresponds to the urban indicator. Finally, in the random forest approach, it means that the urban indicator is part of the variables considered for each covariate split. Figure 4 presents the covariates that were selected by each method. Despite all methods including the urban indicator, only the random forest finds it relevant, with a \(p\)-value for its variable importance smaller than 0.05. Additionally, Figure 4 shows that the horseshoe prior leads to only one variable whose coefficient's posterior 95% credible interval does not include 0. The LASSO approach selects about 6% of the available covariates (11 variables), while the forward method and random forest methods select more than 12% of the variables (21 and 22, respectively). The variable indicating whether a household's floor is made of cement or concrete is selected by all four methods. Figure 5 shows the mean log consumption areal estimates and their 95% prediction intervals' widths for each of the four methods. Among the four methods, the random forest approach yields the most homogeneous point estimates across the EAs. This can further be seen in Figure 6, which compares the predictions obtained using each method for each EA. The prediction interval widths are shown across the EAs in Figure 5 and compared between the modelling methods in pairwise scatter plots in Figure 7. The prediction intervals computed for the linear approach with forward variable selection are the narrowest. As expected, the widths of the intervals obtained through the proposed scaled SC approach for the random forest and LASSO predictions behave similarly. The widths are of the form \((1-f_{c})\times 2\times d_{\alpha}/\sqrt{N_{c}-n_{c}}\), where \(d_{\alpha}\) is the only quantity that differs between the LASSO and random forest approaches. Note that in this analysis, the scaled SC procedure divides the dataset into two halves, consequently computing the necessary residuals and quantile based on only \(m/2=68\) data points. Finally, to determine which method performs the best in this particular data application, an 8-fold cross-validation study is conducted. The 136 sampled EAs are divided into 8 rural EAs and 128 urban EAs. Hence, in this 8-fold cross-validation study, 17 EAs are removed from the sample at a time (1 rural and 16 urban EAs), the four methods are fitted on the remaining 119 EAs and predictions are obtained for the 17 removed ones. The four methods are compared in terms of mean absolute bias, MSE, coverages and proper scores of the 50%, 80% and 95% prediction intervals in Table 1.
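A minimal sketch of this cross-validation loop, shown here with the random forest as the fitted method (any of the four methods can be substituted); the data frame `gama_sample` with outcome `ybar` and a 0/1 `urban` indicator is an illustrative assumption.

```r
library(ranger)

# Assign the 8 rural and 128 urban EAs to 8 folds (1 rural + 16 urban per fold)
folds <- integer(nrow(gama_sample))
folds[gama_sample$urban == 0] <- sample(rep(1:8, length.out = sum(gama_sample$urban == 0)))
folds[gama_sample$urban == 1] <- sample(rep(1:8, length.out = sum(gama_sample$urban == 1)))

cv_pred <- rep(NA_real_, nrow(gama_sample))
for (k in 1:8) {
  fit <- ranger(ybar ~ ., data = gama_sample[folds != k, ], num.trees = 1000)
  cv_pred[folds == k] <- predict(fit, data = gama_sample[folds == k, ])$predictions
}

mean(abs(cv_pred - gama_sample$ybar))  # mean absolute error over the held-out EAs
mean((cv_pred - gama_sample$ybar)^2)   # MSE over the held-out EAs
```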
The Bayesian shrinkage approach performs the best among the four methods we consider, yielding the smallest bias, MSE and interval scores and reaching the right coverage rates of the prediction intervals. On the other hand, the prediction intervals obtained for the forward selection approach lead to significant undercoverage. To further compare the performance of the four modelling approaches, the empirical cumulative distribution functions (CDFs) of the point estimates obtained by each method are shown in Figure 8, alongside the empirical CDF of the observed Ghanaian sampled means.

Figure 4: Selected covariates for each method when modelling the log equivalised consumption in GAMA.

Figure 5: Estimated mean log equivalised consumption in the GAMA EAs (Left) and widths of the corresponding 95% prediction intervals (Right) obtained from each modelling method. RF: Random forest.

Figure 8 shows that the forward selection method performs the best when estimating the empirical distribution of the mean log equivalised consumption. The Bayesian shrinkage method is the second best in this respect: it is not able to estimate the empirical distribution of the mean log equivalised consumption as closely as the forward selection approach. A cross-validation study was also conducted where the four methods were fitted without forcing the inclusion of the urban indicator. The results are not shown in this paper as the performance of the four methods in terms of bias, MSE, coverage and proper score of the prediction intervals was similar to the one shown in Table 1 and Figure 8, obtained including the urban indicator, for each modelling method. The Bayesian shrinkage method considered in this paper consists in applying the horseshoe prior to the regression coefficients. Other priors could have been considered, such as a Bayesian ridge prior. A cross-validation study was conducted with the Bayesian ridge approach for the GAMA sample. Because the results were similar to the ones obtained with the horseshoe prior, in terms of bias, MSE, coverage and proper score of the prediction intervals, they are not presented in this paper.

Figure 6: Pairwise comparison of the areal estimates obtained from each of the four methods: forward selection, LASSO, Bayesian shrinkage and random forest.

Figure 7: Pairwise comparison of the areal prediction interval widths obtained from each of the four methods: forward selection, LASSO, Bayesian shrinkage and random forest.

In the original scale, on average, we find that the Bayesian shrinkage consumption estimates among the richest 10% are 2.3 times bigger than the ones among the poorest 10%. We also find that the urban EAs (92% of all EAs) are not uniformly distributed across the estimated consumption deciles: there are only 79% urban EAs among the poorest 10%, versus 91% among the richest 10%.
Following Dong and Wakefield (2021), to identify the EAs where interventions should be prioritized, we rank the EAs from poorest to richest, based on the Bayesian shrinkage point estimates.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline
 & Absolute & & \multicolumn{3}{c}{PI Coverage} & \multicolumn{3}{c}{Proper interval score} \\
 & Bias & MSE & 95\% & 80\% & 50\% & 95\% & 80\% & 50\% \\
\hline
Bayesian shrinkage & 0.244 & 0.086 & 94.1 & 80.9 & 48.5 & 1.33 & 1.91 & 3.86 \\
Forward selection & 0.975 & 0.168 & 72.1 & 52.2 & 28.7 & 3.55 & 5.35 & 8.03 \\
LASSO & 0.965 & 0.133 & 91.9 & 76.5 & 49.3 & 1.75 & 2.48 & 4.65 \\
Random forest & 0.516 & 0.097 & 91.9 & 79.4 & 50.0 & 1.61 & 2.08 & 4.16 \\
\hline
\end{tabular}
\end{table}
Table 1: Mean absolute bias, MSE, coverages and proper scores of the 50%, 80% and 95% prediction intervals, obtained for each method in the 8-fold cross-validation study on the GAMA sample.

Figure 8: Empirical CDFs obtained for each method in the 8-fold cross-validation study on the GAMA sample. Black: empirical CDF for the observed sampled means.

In particular, in this Bayesian framework, we obtain each EA's posterior ranking distribution by ranking the point estimates at each MCMC iteration. Figure 9 shows the posterior ranking distributions for 5 of the 10% poorest EAs and 5 of the 10% richest EAs. Additionally, the right-hand side of Figure 9 maps the 10% poorest and richest EAs. We find that the Greater Accra South district, which corresponds to the western EAs in Figure 9, gathers most of the poorest EAs, while the Accra Metropolitan Area district, which corresponds to the southern EAs in Figure 9, is the richest. Figure 9 further shows that the 500 poorest EAs' ranking distributions overlap, which seems to indicate that there is a need to intervene in the poorest 500 EAs.

## 5 Discussion

In this paper, we compare four methods that perform variable selection to estimate area-level means of a variable of interest. Throughout, the areas correspond to the sampling clusters. The methods are area-level model-based small area prediction procedures used to obtain areal estimates and their uncertainties. First, a random forest approach models the outcome values. By construction, auxiliary variables are selected when partitioning the response values through covariate splits. Second, in the frequentist framework, a LASSO method selects covariates by shrinking irrelevant regression coefficients towards 0. Then, also in the frequentist framework, a forward variable selection approach with the AIC as the optimisation criterion is considered. Finally, in the Bayesian framework, the horseshoe prior, which shrinks the coefficients towards zero _a priori_, is assumed for the regression coefficients. Further, a modification of the split conformal (SC) procedure to compute prediction intervals is proposed. The SC algorithm (Lei et al., 2018) estimates prediction intervals with no specific distribution assumption for the data. However, the data are assumed to be independently and identically distributed (i.i.d.). The proposed scaled SC procedure relaxes the assumption that the data are identically distributed. Specifically, the proposed algorithm allows the data points to have variances of different scales. This proposed scaled SC procedure allows inference to be conducted for the random forest and the LASSO estimates. A first simulation study assesses the performance of the proposed scaled SC method compared to the original SC procedure.
It is found that when the data points are i.i.d., both procedures perform similarly, regardless of the modelling method. In the simulation scenarios where the number of sampled units is not equal to the number of non-sampled units (\(n_{c}\neq N_{c}-n_{c}\)), the variances are scaled differently, \(\sigma^{2}/n_{c}\neq\sigma^{2}/(N_{c}-n_{c})\). Hence, the SC procedure does not yield the appropriate coverage rates for the prediction intervals in these scenarios. The proposed scaled SC method corrects the under-coverage in all the simulation scenarios that were considered. The four variable selection methods under study are compared in an additional simulation study. When data are generated from a linear model, the methods that assume normality yield smaller biases and MSEs than the random forest approach. All modelling methods, however, lead to adequate prediction interval coverages. The random forest method performs better in terms of MSE when the data are generated from a non-linear model. All methods yield under-coverage when few units are selected within the sampled areas in this complex population. In the sixth Ghana Living Standards Survey, from 2012-2013, the log equivalised consumption is measured at the household level in a small fraction of the areas (EAs) within the Greater Accra Metropolitan Area (GAMA), alongside 174 auxiliary variables. The same auxiliary information is recorded for all the GAMA EAs in the 2010 Ghanaian census. Using both datasets and the four EA-level model-based approaches, areal estimates of the mean log equivalised consumption are computed for all EAs in the GAMA. Additionally, prediction intervals are computed for all EA estimates to measure their uncertainties. The LASSO and forward variable selection methods select more than 10% of the auxiliary variables, while the Bayesian horseshoe model yields posterior credible intervals that do not include 0 for only one coefficient. The random forest procedure estimates a smoother map of the mean log consumption than the other three approaches. A cross-validation study conducted on the sample data shows that the Bayesian shrinkage method performs the best, among the four methods considered, on this particular dataset. Finally, in this paper, before fitting random forests to the different datasets, cross-validation studies were run to help set the hyperparameters. These hyperparameters are the number of regression trees included in the forest, the number of variables considered at each split when growing the trees, and the final node sizes. This step could be improved, as other hyperparameter values could have led to better-performing random forests; for further discussion on the selection of random forest hyperparameters, see e.g., McConville and Toth (2019) and Dagdoug et al. (2023). On the other hand, the proposed scaled SC procedure used to compute prediction intervals for the random forest and LASSO estimates relies on an equal split of the data points to grow a forest and compute prediction errors. In the data application of this paper, this implies that the prediction interval limits are based on 68 data points. This partition, suggested by Lei et al. (2018) for the original SC algorithm, could be revisited to attempt to narrow down the resulting intervals.
2302.06819
L4 Pointer: An efficient pointer extension for spatial memory safety support without hardware extension
Since buffer overflow has long been a frequently occurring, high-risk vulnerability, various methods have been developed to support spatial memory safety and prevent buffer overflow. However, every proposed method, although effective in part, has its limitations. Due to expensive bounds checking or large memory consumption for metadata, software-only support for spatial memory safety inherently entails runtime overhead. Contrastingly, hardware-assisted methods are not available without specific hardware support. To mitigate such limitations, we herein propose L4 Pointer, a 128-bit pointer extended from a normal 64-bit virtual address. By using the extra bits and widespread SIMD operations, L4 Pointer shows less slow-down and higher performance than existing methods, without hardware extension.
Seong-Kyun Mok, Eun-Sun Cho
2023-02-14T04:12:41Z
http://arxiv.org/abs/2302.06819v1
L4 Pointer: An efficient pointer extension for spatial memory safety support without hardware extension

###### Abstract

Since buffer overflow has long been a frequently occurring, high-risk vulnerability, various methods have been developed to support spatial memory safety and prevent buffer overflow. However, every proposed method, although effective in part, has its limitations. Due to expensive bounds checking or large memory consumption for metadata, software-only support for spatial memory safety inherently entails runtime overhead. Contrastingly, hardware-assisted methods are not available without specific hardware support. To mitigate such limitations, we herein propose L4 Pointer, a 128-bit pointer extended from a normal 64-bit virtual address. By using the extra bits and widespread SIMD operations, L4 Pointer shows less slow-down and higher performance than existing methods, without hardware extension.

## 1. Introduction

Buffer overflows have long been the most dangerous vulnerability and still rank among the most dangerous software weaknesses in the CWE Top 25 (Kang et al., 2018; Chen et al., 2019). Attackers exploit buffer overflows to gain control of the execution flow or to leak sensitive data. Buffer overflows even occur in widely used open-source software. For instance, the Heartbleed bug is the most famous vulnerability categorized as a buffer overflow, found in OpenSSL (Krizhevsky et al., 2014; Krizhevsky et al., 2015). In addition, as the use of IoT devices increases, so does the frequency of buffer overflows (Krizhevsky et al., 2015), because, due to the characteristics of embedded devices, C/C++ has to be used and thus buffer overflows occur more frequently. Bounds checking is one of the oldest and most common defenses used to prevent buffer overflow (Krizhevsky et al., 2014; Krizhevsky et al., 2015). By saving the bounds information of a memory object with each pointer, defenses can insert runtime bounds checking to verify that the pointer still points to the valid range. Since some programming languages like Java support bounds checking, it can be performed without a special process. However, other programming languages like C/C++ do not support bounds checking. As such, supporting bounds checking in C/C++ has been extensively studied. Since bounds checking is inserted into the program, it incurs runtime and memory overhead, so the metadata and its handling must be designed appropriately to decrease this overhead. Bounds information is transformed and stored in a form called metadata. First, the metadata must be of an appropriate size. If the metadata is too small, it cannot hold the full bounds information. Some approaches keep the metadata small and therefore support only upper bounds (Krizhevsky et al., 2015) or suffer from false negatives (Krizhevsky et al., 2015). On the contrary, metadata that is too large incurs memory overhead (Krizhevsky et al., 2015).
Second, it is necessary to design the manipulation method of the metadata properly. Bounds checking already adds runtime overhead to a process, so the cost of manipulating the metadata must also be considered. In addition, the manipulation of a pointer and its metadata should be atomic; otherwise, it leads to false positives or negatives in multi-threaded programs. While Intel MPX has excellent performance, it was deprecated because it does not provide multi-thread safety (Krizhevsky et al., 2015; Krizhevsky et al., 2015).

Recent works have proposed 'fat pointers', which enrich pointers with metadata for bounds checking. However, fat pointers implemented by software-only methods suffer from high runtime overhead when applied to real-world cases (Krizhevsky et al., 2015; Krizhevsky et al., 2015). To overcome this, hardware-assisted solutions against buffer overflow have been suggested to lower the runtime overhead. Unfortunately, traditional hardware-assisted fat pointers do not support multi-thread safety in most cases, so as not to conflict with other functionality of the hardware. Recently, advanced hardware-assisted fat pointers have been implemented on special hardware. Although this approach achieves low runtime overhead and multi-thread safety, such fat pointers are not available without specific hardware support.

This paper proposes the L4 Pointer, which incurs low overhead and provides atomicity without the need for special hardware. Based on 128-bit pointers, the L4 Pointer reserves sufficient space to store metadata as well as the pointer itself. Relying only on the vector operations and 128-bit registers supported by most hardware architectures (Bauer et al., 2015; Bauer et al., 2015), the L4 Pointer is well suited to real-world usage. The contributions of this paper include the following.

* This paper presents the design and implementation of a prototype of the L4 Pointer, which provides sufficient metadata storage space to support both upper- and lower-bound checks against buffer overflow.
* The proposed L4 Pointer provides atomic operations on the pointer and its metadata.
* The proposed L4 Pointer works on commonly used architectures such as Intel processors and ARM processors. The target architecture of the prototype of the L4 Pointer is Intel x86 (Bauer et al., 2015; Bauer et al., 2015).

## 2. Related Works

To prevent buffer overflow, metadata must be tracked and saved. This paper groups existing approaches, according to the method of metadata manipulation, into (A) per-pointer, (B) per-object, (C) fat pointer, and (D) tagged pointer methods. In addition, there are (E) hardware-assisted approaches. Further details are provided in the following subsections, and related works are summarized in Table 2. As shown in Table 1, some approaches mix methods to compensate for their limitations. For example, ALEXIA and Shakti-MS mix the per-object and per-pointer methods for bounds checks of sub-objects (Sandhi et al., 2016; Sandhi et al., 2016). Further details are provided in later sections.

### Per-pointer

The per-pointer method maintains metadata for each pointer (Sandhi et al., 2016; Sandhi et al., 2016; Sandhi et al., 2016). Even if different pointers point to the same memory object, pointers stored at different memory addresses have different metadata. For example, consider two pointer variables, p and q, that point into the same memory object but not to the same location; in the per-pointer method, the metadata of p and q are kept separately, as illustrated in the sketch below.
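As a concrete illustration, the following minimal C sketch (with hypothetical type and field names, not code from the paper) shows two pointers into the same allocation: p spans the whole object, while q is derived to point at the embedded array buf. A per-pointer scheme records separate bounds metadata for each of them, which is also what makes sub-object bounds checking possible.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical object with an embedded sub-object "buf". */
struct packet {
    int  id;
    char buf[16];   /* sub-object that needs its own bounds check */
    int  checksum;
};

/* Hypothetical per-pointer metadata: every pointer variable carries its own
 * lower/upper bound, independent of other pointers into the same object. */
typedef struct {
    void     *ptr;
    uintptr_t lower;
    uintptr_t upper;
} bounded_ptr;

int main(void) {
    struct packet *obj = malloc(sizeof *obj);

    /* p spans the whole object ... */
    bounded_ptr p = { obj, (uintptr_t)obj, (uintptr_t)obj + sizeof *obj };

    /* ... while q, derived from p, is narrowed to the sub-object buf, so its
     * metadata differs from p's although both point into the same object. */
    bounded_ptr q = { obj->buf, (uintptr_t)obj->buf,
                      (uintptr_t)obj->buf + sizeof obj->buf };

    (void)p; (void)q;
    free(obj);
    return 0;
}
```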
In this method, whenever pointer arithmetic is performed, the metadata must also be updated. Precision is therefore higher than in the per-object method, but compatibility is lower. The per-pointer method can prevent sub-object overflow. Assume a C structure that contains an embedded array, such as the buf member in the sketch above: when such a structure is allocated in memory, a bounds check for the memory object itself as well as a bounds check for the sub-object "buf" inside it are required, and the per-pointer method is capable of performing such boundary checks on sub-objects. However, the runtime overhead is high (Sandhi et al., 2016). To address this limitation, CCured determines which pointers are safe, i.e., pointers for which a buffer overflow cannot occur, and reduces overhead by omitting checks for them (Sandhi et al., 2016).

### Per-object

The per-object method keeps bounds metadata for each memory object (Bauer et al., 2015; Sandhi et al., 2016; Sandhi et al., 2016; Sandhi et al., 2016). That is, if different pointers point to the same memory object, they have the same metadata. For example, in the same situation as in subsection 2.1, the metadata of p and q are identical. The per-object method does not perform boundary checking on sub-objects: since only accesses to the memory object as a whole are validated, boundaries inside the object cannot be checked. In addition, for performance reasons, some studies allocate more memory than the originally requested size, which lowers precision further; this trades precision for lower runtime overhead. Unlike the per-pointer method, metadata does not have to be updated on every pointer operation. Therefore, while this method has lower runtime overhead and better compatibility than the per-pointer method, it is less precise.

### Fat pointer

The fat pointer method uses a pointer larger than the general 64-bit pointer to store metadata (Bauer et al., 2015; Sandhi et al., 2016). This method can be expressed with C syntax as in the code below.

struct FatPointer{ void* metadata; void* pointer; };

In traditional fat pointers, the metadata itself is saved in a bounds table; in the code above, the metadata field therefore does not store the bounds directly but rather the address at which they are stored. As this method cannot guarantee thread safety, researchers have begun investigating hardware implementations to compensate (Sandhi et al., 2016; Sandhi et al., 2016; Sandhi et al., 2016; Sandhi et al., 2016). However, some of these studies are not applicable to existing processors because they are implemented only as processor emulators. Since the size of the pointer differs from that of a general pointer, compatibility problems and memory overhead also occur.

### Tagged pointer

The tagged pointer is a method of storing metadata in unused pointer bits. While the general pointer size is 64 bits, not all 64 bits are used, due to physical limitations in addressable memory; the number of bits actually used differs between architectures and operating systems. The advantage of tagged pointers is that the size of the pointer does not change, so compatibility is high and calling conventions do not need to be changed. However, because the number of available bits restricts the amount of storable metadata, some approaches sacrifice precision (Han et al., 2011) or compatibility (Krause et al., 2015).
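The following minimal sketch (hypothetical constants and helper names, not taken from any of the cited systems) illustrates the general tagged-pointer idea of packing a small amount of metadata into the otherwise unused high bits of a 64-bit pointer; because the pointer stays 64 bits wide, calling conventions and data layout are unaffected, but only a few bits are available for metadata.

```c
#include <assert.h>
#include <stdint.h>

/* Assume (hypothetically) that only the low 48 bits of a user-space virtual
 * address are significant, leaving 16 high bits free for a metadata tag. */
#define ADDR_BITS 48
#define ADDR_MASK ((UINT64_C(1) << ADDR_BITS) - 1)

static inline uint64_t tag_pack(void *addr, uint16_t tag) {
    return ((uint64_t)tag << ADDR_BITS) | ((uintptr_t)addr & ADDR_MASK);
}

static inline void *tag_addr(uint64_t tagged) {
    /* The tag must be stripped before the pointer is dereferenced. */
    return (void *)(uintptr_t)(tagged & ADDR_MASK);
}

static inline uint16_t tag_value(uint64_t tagged) {
    return (uint16_t)(tagged >> ADDR_BITS);
}

int main(void) {
    int x = 7;
    uint64_t t = tag_pack(&x, 0x00A5);
    assert(tag_addr(t) == (void *)&x);
    assert(tag_value(t) == 0x00A5);
    return 0;
}
```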
The delta pointer, for example, uses the lower 32 bits as the virtual address and the upper 32 bits to store metadata. Because only 32 bits are used as the virtual address, compatibility is poor. Further, as the 32 bits of metadata only store information about the upper bound, the delta pointer cannot detect underflow. In addition, FRAMER stores the frame of a memory object in the upper 16 bits; a buffer overflow that stays within the 'frame' recorded in the tag cannot be detected by FRAMER (Krause et al., 2015; Han et al., 2011).

### Hardware-assisted

Buffer overflow can also be prevented with hardware support. Intel MPX and ARM MTE provide memory safety through extended instruction sets (Han et al., 2011; Delfon et al., 2012). Although an extended instruction set makes it possible to provide faster performance than existing methods, these approaches also have disadvantages. Intel MPX is a per-pointer bounds-tracking method, but its pointer arithmetic and metadata operations are not performed as atomic operations. Additionally, it is no longer supported by the compiler because it is not superior to other methods in terms of speed. ARM MTE tags each memory area, records the tag in the pointer, and checks memory safety by matching the tags. Because it is provided as an instruction-set extension, it achieves high speed. In addition, some architectures have been extended to implement the fat pointer mentioned above; recent studies have implemented fat pointers with such extended hardware (Han et al., 2011; Delfon et al., 2012; Delfon et al., 2012), since tools on existing architectures cannot guarantee both speed and sufficient metadata storage space. However, this approach cannot be applied to existing architectures such as Intel and ARM.

### Limitations of previous works

Table 2 presents a comparison of related works and their limitations. The operation method of the L4 Pointer is derived from the delta pointer. However, the delta pointer does not cover buffer underflow, which occurs frequently (Delfon et al., 2012; Delfon et al., 2012); since some algorithms, such as string-matching algorithms, use decrementing pointers to access an array, underflow must also be covered (Han et al., 2011). Intel MPX does not ensure multi-thread safety; its slow-down is only 1.5x, but both false positives and false negatives may occur. While FRAMER ensures multi-thread safety, false negatives may still occur, and its overhead is 3.23x. The hardware-extension methods typically used to overcome these shortcomings cannot be applied to existing architectures. The L4 Pointer is available on widely used architectures that support SIMD, detects both buffer overflows and underflows, and supports multi-thread safety. Also, the L4 Pointer incurs lower runtime overhead than software-only approaches.

## 3. L4 Pointer

To form an L4 Pointer, a normal 64-bit pointer is expanded to a consecutive 128 bits. Metadata is stored in the upper 64 bits, which are split evenly between the upper and lower bounds. The design is similar to the delta pointer (Krause et al., 2015), except that the metadata of a delta pointer is 32 bits long, which is too short to hold both the upper and lower bounds. To handle 128-bit pointers and manipulate metadata and pointers atomically, the L4 Pointer uses vector operations and vector registers (Blei et al., 2015; Han et al., 2011). Fortunately, the most commonly used architectures such as x86 and ARM provide these facilities. More details are provided below.
### SIMD (Single Instruction Multiple Data)

SIMD (Single Instruction Multiple Data) is designed for parallel processing and is a way of computing multiple data elements with one instruction. The commonly used Intel and ARM architectures support such operations with special registers and instruction-set extensions. The method proposed in this study needs to update metadata and pointers simultaneously, and thus SIMD operations must be used. Intel's AVX (Advanced Vector eXtensions) supports SIMD. As the memory-operation instructions of AVX used by the L4 Pointer are carried out atomically, the L4 Pointer supports multi-thread safety on Intel architectures. However, on the ARM architecture, certain conditions must be satisfied to ensure that a SIMD instruction is atomic: ARM only provides SIMD atomicity when each SIMD element is 32 bits or smaller and aligned. Since the prototype of the proposed L4 Pointer targets the Intel architecture, it has not been implemented for ARM, but ARM can support it as well.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
 & & Approach & Runtime overhead & Multithread safety \\
\hline
\multirow{2}{*}{Tracking metadata} & Per-pointer & Each pointer has its own metadata & High & Varying \\
\cline{2-5}
 & Per-object & Each memory object has its own metadata & Low & O \\
\hline
\multirow{2}{*}{Size of metadata} & Fat pointer & Extending the size of the pointer & High & X \\
\cline{2-5}
 & Tagged pointer & Saving metadata in unused bits & Low & O \\
\hline
\multicolumn{2}{|c|}{Hardware-assisted} & Machine-dependent approaches & Low & O \\
\hline
\end{tabular}
\end{table}
Table 1. Comparisons of approaches

### Layout of L4 Pointer and initialization

The layout of the L4 Pointer is illustrated in Fig. 1. As previously mentioned, the upper 64 bits of the L4 Pointer are used to store metadata, whereas the lower 64 bits are used for the virtual address. The upper 64 bits are halved: the upper 32 bits store the upper bound, and the lower 32 bits store the lower bound. In each of these 32-bit fields, bound information is stored in only 31 bits, and the most significant bit is used as a flag.

Expressions (1)-(12) describe the L4 Pointer as bit vectors [(5)], using the bit-vector notation put forward by C. W. Barrett et al. [(5)]. The syntax of the bit-vector arithmetic is given below. The subscript [constant] denotes the size of the bit vector; for example, the subscript "[64]" means that the size of the bit vector is 64. The slice "[m:n]" denotes the bits of the vector from the m\({}^{th}\) to the n\({}^{th}\) position, where m and n are constants. The operator \(\circ\) denotes concatenation of bit vectors.

\[\begin{split} stmt&\to expr=expr\\ expr&\to expr\ op\ expr\ \mid\ IDENT_{[constant]}\ \mid\ expr[m:n]\\ op&\to +\ \mid\ -\ \mid\ \circ\ \mid\ <<\ \mid\ >>\ \mid\ AND\ \mid\ OR\end{split}\]

\[L4_{[128]}=pointer_{[64]}\circ upper_{[32]}\circ lower_{[32]} \tag{1}\]
\[pointer_{[64]}=L4_{[128]}[0:63] \tag{2}\]
\[upper_{[32]}=L4_{[128]}[64:95] \tag{3}\]
\[lower_{[32]}=L4_{[128]}[96:127] \tag{4}\]

In formula (1), L4 is the 128-bit L4 Pointer and consists of the concatenation of the real pointer and the upper and lower bounds, denoted pointer, upper, and lower, respectively. Conversely, pointer, upper, and lower are expressed as slices of L4 in (2), (3), and (4), respectively.
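A minimal C sketch of this layout (hypothetical helper names; Intel SSE2 intrinsics, which compile to 128-bit vector moves and the paddq instruction mentioned later in the paper) shows how an L4 value can be built and how the pointer and both bound fields can be updated together with a single vector addition, anticipating the initialization and arithmetic rules given in (5)-(8) below.

```c
#include <emmintrin.h>   /* SSE2: __m128i, _mm_set_epi64x, _mm_add_epi64 */
#include <stdint.h>

/* Build an L4 value: low 64-bit lane = virtual address, high 64-bit lane =
 * metadata (upper bound in its high 32 bits, lower bound in its low 32 bits).
 * The actual implementation operates on an LLVM <2 x i64> vector type. */
static inline __m128i l4_make(uint64_t addr, uint32_t size) {
    uint64_t upper = (UINT64_C(1) << 32) - size;  /* 2^32 - size */
    uint64_t meta  = (upper << 32) | 0u;          /* lower bound starts at 0 */
    return _mm_set_epi64x((long long)meta, (long long)addr);
}

/* p += off: the offset is added to the address lane and, replicated into both
 * 32-bit bound fields, to the metadata lane, so one paddq updates the pointer
 * and its metadata at the same time. */
static inline __m128i l4_add(__m128i l4, uint64_t off) {
    uint64_t meta_off = ((uint64_t)(uint32_t)off << 32) | (uint32_t)off;
    __m128i  delta    = _mm_set_epi64x((long long)meta_off, (long long)off);
    return _mm_add_epi64(l4, delta);
}
```

If an offset pushes either bound field past its most significant bit, the flag bits extracted at dereference time (see the bounds check described below) make the final address non-canonical, so the access faults.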
When a pointer is allocated, the initialization is formulated in expressions (5)-(7) below. In (5), the pointer field records the return value of malloc(). The upper bound is initialized as in formula (6); the reason for this definition is that its most significant bit becomes one as soon as an offset exceeding size is added. The lower bound is set to zero as in (7); similarly, when a negative offset is added to this initial value, its most significant bit becomes one. Fig. 2 illustrates the initialization process of (5)-(7).

\[pointer_{[64]}=returnOfMalloc_{[64]} \tag{5}\]
\[upper_{[32]}=(1<<32)_{[32]}-size_{[32]} \tag{6}\]
\[lower_{[32]}=0 \tag{7}\]

### Pointer Arithmetic

The expressions below describe the process of pointer arithmetic. When the pointer is calculated, the offset is added to the base pointer as shown in formula (8); the offset is added not only to the pointer but also to the upper and lower bounds, respectively. Fig. 3 illustrates the arithmetic of the L4 Pointer. Although it looks as if each field is added separately and the results are then combined in an expression, the addition is actually performed by a single instruction using a SIMD operation; the details are given in a later section. In summary, the formula is as follows.

\[pointer+offset=L4_{[128]}+(offset_{[64]}\circ offset_{[32]}\circ offset_{[32]}) \tag{8}\]

\begin{table}
\begin{tabular}{|l||c|c|c|c|c|}
\hline
 & Subject of tracking metadata & Subject of saving metadata & Compatibility & Runtime overhead & Note \\
\hline
Intel MPX & Per-pointer & Bounds table & Intel & 1.5x & Multithread safety X \\
\hline
ARM MTE & Per-object & Tagged pointer + shadow memory & ARM & - & \\
\hline
Delta pointer & Per-pointer & Tagged pointer & All architectures & 1.35x & Only upper bounds check \\
\hline
FRAMER & Per-object & Tagged pointer & All architectures & 3.23x & \\
\hline
CHERI & Per-pointer & Fat pointer & The proposed architectures & - & \\
\hline
Address sanitizer & Memory & Shadow memory & All architectures & 2x higher & \\
\hline
In-Fat pointer & Per-object & Fat pointer & The proposed architectures & 1.24x & \\
\hline
ALEXIA & Per-object + per-pointer & Tagged pointer + fat object & The proposed architectures & 1.14x & \\
\hline
Shakti-MS & Per-object + per-pointer & Fat pointer & The proposed architectures & 1.13x & \\
\hline
EffectiveSan & Per-object & Shadow memory & All architectures & 2.15x & Only sub-object \\
\hline
L4 Pointer & Per-pointer & Fat pointer & Architectures supporting SIMD & 1.44x & \\
\hline
\end{tabular}
\end{table}
Table 2. Comparisons of Related Works

Figure 1. The layout of L4 Pointer.
Figure 2. The initialization of the L4 Pointer as "p = malloc(size)".

First, expressions (9) and (10) are performed so that only the most significant bits of the lower and upper bounds are left and all other bits are set to zero. Next, if the most significant bit of the lower or upper bound is one, a bit vector whose most significant bit is set to one, called the signal, is created as in (11). Then, an OR with the pointer is performed as shown in (12). The result is used as the operand of the load or store.
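A scalar C sketch of this dereference preparation (hypothetical helper name; the real implementation uses the vector form of these operations, cf. expressions (9)-(12) in the next subsection):

```c
#include <stdint.h>

/* Keep only the most significant bit of each 32-bit bound, fold the result
 * into the high half of a 64-bit "signal", and OR it into the address that
 * will be used by the load or store.  If neither bound has overflowed, the
 * signal is zero and the address is unchanged; otherwise bit 63 is set and
 * the resulting pointer is non-canonical, so the access faults. */
static inline uint64_t l4_access_address(uint64_t addr, uint32_t upper, uint32_t lower) {
    uint32_t msb_upper = upper & UINT32_C(0x80000000);
    uint32_t msb_lower = lower & UINT32_C(0x80000000);
    uint64_t signal    = (uint64_t)(msb_upper | msb_lower) << 32;
    return addr | signal;
}
```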
### Bounds check

The method used in this paper utilizes the exception mechanism of the memory management unit (MMU), without inserting a conditional jump for the bounds check. Most existing studies insert conditional jumps ("if statements") for bounds checks; a bounds check without a conditional jump is also used by the delta pointer (Muller, 2016). Pointers must be in canonical form on Intel architectures. If a pointer is not in canonical form, the MMU of Intel architectures raises an exception, and the program can be terminated using this exception. When a buffer overflow or underflow occurs, setting the most significant bit to one breaks the canonical form. This method can also be used on the ARM architecture, where the most significant bits (Shi et al., 2017; Shi et al., 2017) must be all zeros or all ones; otherwise, an exception is raised. Therefore, if the most significant bit of a pointer is set to one, the MMU of both Intel and ARM architectures raises an exception. When the pointer exceeds its bounds, the most significant bit is set to one. Most architectures do not utilize the full \(2^{64}\) address space (the exact usable width differs between architectures), so if the most significant bit is set to one, an exception is raised by the CPU's MMU and the program is terminated.

\[msb\_upper_{[32]}=upper_{[32]}\ AND\ (1<<31)_{[32]} \tag{9}\]
\[msb\_lower_{[32]}=lower_{[32]}\ AND\ (1<<31)_{[32]} \tag{10}\]
\[msb\_bits_{[64]}=(0_{[32]}\circ(msb\_lower_{[32]}\ OR\ msb\_upper_{[32]}))\ OR\ (1<<63)_{[64]} \tag{11}\]
\[msb\_bits_{[64]}=pointer_{[64]}\ OR\ msb\_bits_{[64]} \tag{12}\]

### Example of manipulations of L4 Pointer

Fig. 4 shows an example of pointer arithmetic and pointer dereferencing. For the L4 Pointer arithmetic, the offset is first converted into L4 form: the pointer, upper-bound, and lower-bound positions are all set to the offset. To leave only the most significant bits of the upper and lower bounds, an 'AND' operation with 0x80000000 is performed on each bound, and the two results are combined. An 'OR' operation is then performed on this result and the virtual address, and the outcome is used to dereference the pointer. In Fig. 4, a buffer overflow occurs, so the most significant bit is set to one, the MMU (Memory Management Unit) raises an exception, and the program is terminated.

## 4. Transformations and Implementations

### Type conversions

The L4 Pointer is implemented as code instrumentation using an LLVM pass (Lill, 2016). All pointer-type variables in the program are converted to the L4 Pointer. As in the code below, the pointer type is converted to the L4 type; the L4 type wraps a pointer type, and unwrapping the L4 returns the pointer type.

\[int*ptr;\to L4_{<int*>}ptr; \tag{1}\]

A pointer array is likewise converted to an array of L4 type, as shown below. Since the size of the pointer type is 64 bits and L4 is 128 bits, the size of the array increases.

Figure 3. The arithmetic of the L4 Pointer for "p += 0x18".
Figure 4. Example of the operation of the L4 Pointer. The top is pointer arithmetic, the bottom left is the tag clearing for dereferencing pointers, and the bottom right is the construction of a virtual address for dereferencing pointers.

\[int*ptr[5];\to L4_{<int*>}ptr[5]; \tag{2}\]

If the type of a structure member is a pointer, it is also converted to L4. Therefore, the argument of allocation functions such as malloc must be changed according to the size of the converted structure.

### The way to protect stack and global objects

The L4 Pointer can check the bounds of stack and global objects as well. In C programs, these objects are arrays. To support arrays, array accesses are converted to indirect accesses: if an array is declared, an additional L4-type variable is declared, and the address and metadata information of the array are stored in this L4 variable.
Then, any access to the array is replaced by an indirect access through this L4 variable, as shown in (3) and (4).

\[int\ ptr[10]; \tag{3}\]
\[L4_{<int*>}\ indirectPtr=makeL4(\&ptr,10); \tag{4}\]

### Examples

Table 3 shows an example of the source code of a target program instrumented by the L4 Pointer. The white lines are the original code, and the gray lines are the instrumented code. A structure called L4 is shown for readability, but the actual implementation uses vector operations. When 'malloc()' is called, the code in lines 21 to 24 is executed to build an L4 pointer. If an argument of a function is a pointer, the function is also transformed so that the L4 Pointer is passed.

The pointer arithmetic operations use special registers. The prototype of the L4 Pointer currently runs on Intel x86 (Castro et al., 2016). The XMM registers of Intel x86 are 128 bits long and are used for vector or floating-point calculations. Fig. 5 shows the pointer operation process in x86 instructions: the instructions 'movaps' and 'paddq' are used instead of 'mov' and 'add' and operate on the XMM registers. The computed value is first set in the XMM0 register, and the next operation is performed using the 'paddq' instruction; the result of the operation is stored in the XMM1 register and in memory.

### Implementations

The L4 Pointer is implemented in the form of an LLVM pass (Castro et al., 2016). We used WLLVM to transform the code (Mikolov et al., 2007). WLLVM merges multiple source files into one bitcode file, which enables the transformation to be applied to a single file. The reason for producing a single bitcode file is to handle structures consistently: as mentioned earlier, pointers inside structures are also converted to L4 pointers, and external library calls are made by converting these back to general pointers. To reduce such external calls and keep L4 pointers intact, the sources are combined into one file and then transformed. The 128-bit pointer type is implemented as a vector type. The target of the prototype is x86, where it is implemented as the type \(\mathtt{<2\ x\ i64>}\); if ARM is targeted, the type of the L4 pointer becomes \(\mathtt{<4\ x\ i32>}\) so that atomicity can be guaranteed.

## 5. Evaluation

This paper evaluates the L4 Pointer in terms of performance, as follows:

* Runtime overhead
* Memory overhead
* Conflicts between L4 operations and programs that themselves use SIMD instructions

To evaluate runtime and memory overhead, we ran benchmark programs, namely Olden, coremark, and two programs of mibench. To evaluate conflicts, we ran programs using SIMD from the llvm-test-suite.

Figure 5. Example assembly code on the Intel x86 architecture using the L4 Pointer.

\begin{table}
\begin{tabular}{l}
1. struct L4\{ \\
2. uint64\_t tag; \\
3. uint64\_t ptr; \\
4. \}; \\
5. void foo(char* ptr) \\
6. void foo(L4 ptr)\{ \\
7. ptr[3] = 'a'; \\
8. uint128\_t tmp = (uint128\_t) ptr; \\
9. tmp += (index << 96) | (index << 64) | (index); \\
10. L4 temp\_ptr = (L4) tmp; \\
11. uint64\_t upper\_tag = temp\_ptr.tag \& 0x8000000000000000; \\
12. uint64\_t lower\_tag = temp\_ptr.tag \& 0x80000000; \\
13. uint64\_t tag = upper\_tag | lower\_tag; \\
14. char* accessed\_ptr = temp\_ptr.ptr | tag; \\
15. accessed\_ptr[3] = 'a'; \\
16. \} \\
17. int main()\{ \\
18. char* ptr; \\
19. L4 L4\_ptr; \\
20. ptr = malloc(100); \\
21. address = malloc(100); \\
22. L4\_tag = ((1 << 31) - 100) << 32; \\
23. L4\_ptr.tag = L4\_tag; \\
24. L4\_ptr.ptr = address; \\
25. foo(ptr); \\
26. foo(L4\_ptr); \\
27. \} \\
\end{tabular}
\end{table}
Table 3. The example code of the L4 Pointer in C-like form

### Runtime overhead

To test the runtime overhead, we used three benchmarks, namely olden, coremark, and mibench, as well as test cases from the llvm-test-suite (Becker et al., 2015; Dosov et al., 2017; Dosov et al., 2018). Six programs of olden were executed, one of coremark, and two of mibench, viz. basicmath and susan. From the llvm-test-suite, isamax, steppft, and expandfft were executed; as explained in the next subsection, these test cases were used to check whether there is a conflict with SIMD programs.

Fig. 6 depicts the runtime overhead of the L4 Pointer; the slow-down of the L4 Pointer averaged 1.44x. This average overhead is lower than the value reported for FRAMER (3.23x). It is slower than the delta pointer, which incurs 1.3x, but the delta pointer only checks the upper bound. The bounds check of EffectiveSan incurs an overhead of about 2.15x, but it only supports sub-objects. The L4 Pointer is slower than the hardware-assisted approaches: Intel MPX, In-Fat, ALEXIA, and Shakti-MS incur 1.5x, 1.24x, 1.14x, and 1.13x, respectively. However, the L4 Pointer has been implemented without any architectural extensions. The highest overhead is 1.8x, for mst in the olden benchmark. Mst uses a kind of linked list with multiple nonspatial pointers in each vertex, all of which are converted to L4 Pointers, causing the slow-down.

Figure 6. The runtime overhead of L4 Pointer.
Figure 7. The runtime overhead of mst based on the program's input.
Figure 9. The memory overhead of mst based on the program's input.
Since the highest overhead occurs for mst in the olden benchmark, experiments were conducted with various inputs of mst. In the graph of Fig. 7, the input of mst determines the number of vertices, i.e., the size of the linked-list structure. We confirmed that the larger the input, the closer the slow-down gets to 2x. As explained below, the runtime overhead increases up to about two times, whereas the memory overhead increases up to three times. With the bounds-checking code added for each vertex, instructions are executed linearly along the linked list, yielding a runtime overhead proportional to the number of vertices. On the other hand, the memory overhead grows faster than the runtime overhead with the size of the linked list. We conjecture that this is because each vertex has more than one pointer field, so the extra memory for all the vertices collectively forms a kind of tree rather than a linked list.

### Memory overhead

We measure memory overhead using the same programs mentioned in the subsection above. Fig. 8 shows the memory overhead of the L4 Pointer, which averaged 1.7x. Address Sanitizer and Memory Sanitizer report memory overheads of 3.37x and 3.32x in their papers. The memory overhead of FRAMER averaged 1.23x, smaller than ours because the L4 Pointer is twice the size of a normal pointer. The In-Fat pointer reports a memory overhead of 1.21x in its paper. Mst and coremark show higher memory overhead than the other programs because they use linked lists; as mentioned above, a linked list also incurs memory overhead. As with the runtime overhead, mst recorded a high memory overhead. Similarly, when the input was changed, the memory overhead increased as the number of vertices increased; Fig. 9 shows the result. Unlike the runtime overhead, which converges to 2x, the memory overhead increased from 2x to 3x. We confirmed that the memory overhead increases as the number of linked-list nodes increases. The "Vertex" structure type in mst has two pointers as members, and since an L4 pointer is twice the size of a normal pointer, a structure with many pointer members incurs a large memory overhead. In addition, the other structure types used also have many pointer-type members.

### The programs using SIMD

In Fig. 6 and Fig. 8, isamax, steppft, and expandfft are programs that use SIMD and that were instrumented and executed with the L4 Pointer. They worked without conflict, and their average runtime and memory overheads are 1.34x and 1.46x, respectively. Intel's AVX provides the XMM0-XMM15 registers, and the L4 Pointer only uses "movaps" and "paddq" [(1)]. Therefore, the L4 Pointer does not conflict with SIMD programs.

## 6. Discussion

As previously mentioned, Table 2 shows a comparison between related works and the L4 Pointer, and Fig. 10 is a graph that compares their performance. In the graph, the L4 Pointer has a lower runtime overhead than FRAMER; FRAMER supports full bounds checking but has the limitation of being software-only. The delta pointer incurs a lower runtime overhead than the L4 Pointer, but it only supports the upper bound. EffectiveSan only performs sub-object bounds checks. MPX does not support multi-thread safety. ASan is reported to incur about twice the runtime overhead in its paper, but it can incur up to about four times. Hardware-assisted approaches have a lower runtime overhead than the L4 Pointer; however, they are not available on existing architectures.
That is, they need extended hardware to support multi-thread safety as well as lower-overhead bounds checking via instruction-set extensions. We could not compare the L4 Pointer with other studies in terms of memory overhead because most of them do not report it; only the In-Fat pointer reports a memory overhead of 1.21x, which is lower than that of the L4 Pointer. Experiments show that the more pointer fields a structure object has, the more overhead occurs. For this reason, many studies do not insert bounds checks for pointers embedded in a structure object, in order to minimize the number of bounds checks and improve performance [(6; 32; 22)]. However, we insert bounds checks for all kinds of pointers so as not to miss complicated cases of buffer overflow. In future work, we will study how to improve performance while ensuring safety for structures with multiple pointer fields.

## 7. Conclusions

This paper proposes a new bounds-checking scheme named the L4 Pointer. The L4 Pointer is applicable to any architecture that provides SIMD operations. By using SIMD operations, multi-thread safety is ensured and sufficient metadata storage space is provided. The L4 Pointer incurs a 1.44x slow-down without hardware extensions, and its memory overhead is 1.7x.

Figure 10. Comparisons between related works and the L4 Pointer.

## 8. Acknowledgments

We would like to thank Editage (www.editage.co.kr) for English language editing. This work was supported by an Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-01200, Training Key Talents in Industrial Convergence Security).
2307.15705
Integral Field Spectroscopy of 13 Tidal Disruption Event Hosts from the ZTF Survey
The host galaxies of tidal disruption events (TDEs) have been shown to possess peculiar properties, including high central light concentrations, unusual star-formation histories, and ``green'' colors. The ubiquity of these large-scale galaxy characteristics among TDE host populations suggests they may serve to boost the TDE rate in such galaxies by influencing the nuclear stellar dynamics. We present the first population study of integral field spectroscopy for thirteen TDE host galaxies across all spectral classes and X-ray brightnesses with the purpose of investigating their large-scale properties. We derive the black hole masses via stellar kinematics (i.e., the $M-\sigma$ relation) and find masses in the range $5.0 \lesssim \log(M_{\rm BH}/M_\odot) \lesssim 8.0$, with a distribution dominated by black holes with $M_{\rm BH} \sim 10^6 M_\odot$. We find one object with $M_{\rm BH} \gtrsim 10^8 M_\odot$, above the ``Hills mass'', which if the disrupted star was of solar type, allows a lower limit of $a \gtrsim 0.16$ to be placed on its spin, lending further support to the proposed connection between featureless TDEs and jetted TDEs. We also explore the level of rotational support in the TDE hosts, quantified by $(V/\sigma)_e$, a parameter which has been shown to correlate with stellar age and may explain the peculiar host galaxy preferences of TDEs. We find that the TDE hosts exhibit a broad range in $(V/\sigma)_e$ following a similar distribution as E+A galaxies, which have been shown to be overrepresented among TDE host populations.
Erica Hammerstein, S. Bradley Cenko, Suvi Gezari, Sylvain Veilleux, Brendan O'Connor, Sjoert van Velzen, Charlotte Ward, Yuhan Yao, Matthew Graham
2023-07-28T17:52:49Z
http://arxiv.org/abs/2307.15705v2
# Integral Field Spectroscopy of 13 Tidal Disruption Event Hosts from the ZTF Survey ###### Abstract The host galaxies of tidal disruption events (TDEs) have been shown to possess peculiar properties, including high central light concentrations, unusual star-formation histories, and "green" colors. The ubiquity of these large-scale galaxy characteristics among TDE host populations suggests they may serve to boost the TDE rate in such galaxies by influencing the nuclear stellar dynamics. We present the first population study of integral field spectroscopy for thirteen TDE host galaxies across all spectral classes and X-ray brightnesses with the purpose of investigating their large-scale properties. We derive the black hole masses via stellar kinematics (i.e., the \(M-\sigma\) relation) and find masses in the range \(5.0\lesssim\log(M_{\rm BH}/M_{\odot})\lesssim 8.0\), with a distribution dominated by black holes with \(M_{\rm BH}\sim 10^{6}M_{\odot}\). We find one object with \(M_{\rm BH}\gtrsim 10^{8}M_{\odot}\), above the "Hills mass", which if the disrupted star was of solar type, allows a lower limit of \(a\gtrsim 0.16\) to be placed on its spin, lending further support to the proposed connection between featureless TDEs and jetted TDEs. We also explore the level of rotational support in the TDE hosts, quantified by \((V/\sigma)_{e}\), a parameter which has been shown to correlate with stellar age and may explain the peculiar host galaxy preferences of TDEs. We find that the TDE hosts exhibit a broad range in \((V/\sigma)_{e}\) following a similar distribution as E+A galaxies, which have been shown to be overrepresented among TDE host populations. + Footnote †: journal: ApJ 0000-0001-8000 A TDE occurs when a star passes sufficiently close (i.e., within the tidal radius) to a SMBH such that the tidal forces felt by the star are stronger than its own self-gravity, resulting in the star being torn apart and roughly half of that stellar debris being eventually accreted by the black hole, creating a luminous flare of radiation potentially visible from Earth (Rees, 1988; Evans and Kochanek, 1989; Ulmer, 1999). TDEs were only a theoretical prediction just \(\sim\)50 yrs ago (Hills, 1975; Lidskii and Ozernoi, 1979), and we now have observational evidence of these events from the radio to X-rays, with the largest samples of TDEs discovered in the optical using surveys such as iPTF (Blagorodnova et al., 2017, 2019; Hung et al., 2017), ASAS-SN (Holoien et al., 2014, 2016, 2019; Wevers et al., 2019; Hinkle et al., 2021), Pan-STARRS (Gezari et al., 2012; Chornock et al., 2014; Holoien et al., 2019; Nicholl et al., 2019), SDSS (van Velzen et al., 2011), and ZTF (van Velzen et al., 2019, 2021; Hammerstein et al., 2023; Yao et al., 2023). While the light curves and spectra of TDEs offer important clues to the formation of the accretion disk, winds, and jets, the host galaxies of these transients provide insights into SMBH-galaxy co-evolution, galaxy evolution and mergers, and the dynamics of galaxy nuclei. Understanding the environments that are most likely to host TDEs will even lead to more efficient discovery and follow-up during the era of the Vera Rubin Observatory, which is predicted to observe hundreds to even thousands of new TDEs a year (van Velzen et al., 2011). 
TDEs have also been shown to be observed preferentially in E+A or post-starburst galaxies (Arcavi et al., 2014; French et al., 2016; Law-Smith et al., 2017; Hammerstein et al., 2021), whose optical spectra are characterized by little to no H\(\alpha\) or [O ii] emission and strong Balmer absorption, indicating the presence of stars formed within the past Gyr but no current star formation activity. Typical E+A overrepresentation (i.e., the ratio between the fraction of TDE hosts that are E+As to the fraction of all galaxies that are E+As) ranges widely depending on the study, with some population studies finding an overrepresentation of over \(100\times\)(Law-Smith et al., 2017) and others finding an overrepresentation of just 22\(\times\)(Hammerstein et al., 2021). E+A galaxies are also known to have large bulge-to-light ratios, high Sersic indices, and high concentration indices (Yang et al., 2008), all of which have been shown to greatly enhance the TDE rate in these galaxies by making more stars available in the nuclear region to be tidally disrupted (Stone and van Velzen, 2016; Stone and Metzger, 2016; French et al., 2020). Several previous studies have aimed to characterize the environments that are most likely to host TDEs and have shown that certain large-scale galaxy properties are indeed linked with higher TDE rates. Graur et al. (2018) found that TDE host galaxies have higher stellar mass surface density and lower velocity dispersions as compared to a sample of galaxies not known to host recent TDEs. Law-Smith et al. (2017) examined a sample of TDE host galaxies in comparison to the local galaxy population and found that all of the TDE hosts in their sample reside below the star formation main sequence, have bluer bulge colors, high Sersic indices and high bulge-to-light ratios compared to galaxies of similar masses. Hammerstein et al. (2021) found that 61% of TDE host galaxies in their sample were in the green valley between the star-forming "blue cloud" and the passive "red sequence" of galaxies, compared to only 13% of SDSS galaxies. They also found that while most green valley galaxies have Sersic indices comparable to blue cloud galaxies, the TDE hosts had higher Sersic indices most similar to red, passive galaxies. All of these properties are indicative of systems which have undergone a merger that produce concentrated central stellar distributions and can indeed enhance the TDE rate (Stone and van Velzen, 2016; Stone and Metzger, 2016; French et al., 2020). In this paper, we present integral field spectroscopy (IFS) of a sample of 13 TDE host galaxies from the Zwicky Transient Facility (ZTF) survey in order to obtain their black hole masses and understand their large-scale kinematics and stellar populations, the latter of which we compare to several other galaxy populations, including E+A galaxies. Integral field spectroscopy provides spatially resolved spectra which gives a study such as this one an edge over long-slit spectroscopy when attempting to probe various size scales of the TDE host galaxies. In Section 2, we describe the observations of the 13 TDEs in our sample as well as the subsequent data reduction and analysis methods. We present the results of the kinematic and stellar population analysis and discuss these results in Section 3. We discuss the results pertaining to the black hole mass in Section 4 and those pertaining to the stellar kinematics and populations in Section 5. We close with our conclusions in Section 6. 
## 2 Observations & Data Analysis

We selected our host galaxy sample from the ZTF-I TDEs published in van Velzen et al. (2021) and Hammerstein et al. (2023), with the intention of constructing a sample which includes multiple TDE spectral classes and X-ray brightnesses. We point to van Velzen et al. (2021) for a full description of the ZTF TDE search, although we note that the method for discovering TDEs is agnostic to host galaxy type apart from filtering out known AGN. While this search is thus agnostic to host galaxy type, we do note that our selection of TDE hosts from the ZTF sample, designed to include TDEs from all classifications, will not follow the true observed rate of each type of TDE. However, this is likely not relevant for the study presented here as we do not make conclusions by comparing the TDE types. We show SDSS and Pan-STARRS images of each of the host galaxies in Figure 1. Our sample of thirteen TDEs includes all four TDE spectral classes (for a description of all classes, see Hammerstein et al., 2023), with 2 TDE-H, 8 TDE-H+He, 2 TDE-He, and 1 TDE-featureless, 6 of which are also X-ray detected TDEs. The hosts span redshifts in the range \(0.015\leq z\leq 0.345\) and have stellar masses in the range \(9.56\leq\log(M_{\rm gal}/M_{\odot})\leq 11.23\), both of which we take from the published values of van Velzen et al. (2021) and Hammerstein et al. (2023). In Figure 2, we show the redshift distribution of the TDE hosts. In Sections 4 and 5, we separate and discuss our results based on resolution. In Figure 3 we show the rest-frame, extinction corrected \(u-r\) color from Hammerstein et al. (2023) derived from fitting the host SED for the TDE host galaxies as a function of host galaxy stellar mass. We also include a background sample of 955 galaxies from the SAMI Galaxy Survey DR3 (Croom et al., 2021), which provides spatially resolved stellar kinematic and population information, discussed further in Section 5. The galaxies in the SAMI sample were selected to span the plane of mass and environments, with the redshifts spanning \(0.004\leq z\leq 0.095\), masses between \(10^{7}-10^{12}M_{\odot}\), magnitudes with \(r_{\rm pet}<19.4\), and environments from isolated galaxies to groups and clusters (Bryant et al., 2015). \(\sim\)54% of the TDE hosts are in the green valley compared to just \(\sim\)20% of the background galaxies, in line with previous findings (e.g., Hammerstein et al., 2021; Sazonov et al., 2021; Hammerstein et al., 2023; Yao et al., 2023). We summarize the properties of the host galaxies and include references to the first TDE classification in Table 1.

### Large Monolithic Imager and GALFIT

We obtained optical imaging of the host galaxies in our sample using the Large Monolithic Imager (LMI) mounted on the 4.3-m Lowell Discovery Telescope (LDT) in Happy Jack, AZ. Data were obtained on 2022-10-30, 2022-11-30, and 2023-02-13 (PIs: Hammerstein, O'Connor) under clear skies and good observing conditions (seeing \(\sim\)1''). The targets were observed in the SDSS \(r\)-band filter with varying exposure times depending on the galaxy brightness, e.g., from 50 s for \(r\approx 14\) AB mag to 200 s for \(r\approx 19.5\) AB mag. The chosen exposure times lead to a high signal-to-noise ratio (SNR) for each galaxy, which when combined with the spatial resolution of LMI allow for an improved morphological analysis when compared to available archival data (e.g., SDSS). We were able to observe all thirteen host galaxies through this program.
We reduced the LMI data using a custom python pipeline (see Toy et al., 2016; O'Connor et al., 2022) to perform bias subtraction, flat-fielding, and cosmic ray rejection. The observations for each galaxy, including observation date, exposure time, and seeing during each observation are described in Table 2. Given that the LMI observations were obtained several years after peak for all objects, we do not expect that the transient will contribute any appreciable flux to the photometry that may affect the fitting performed here. We use GALFIT(Peng et al., 2002) to perform 2D fits to the host galaxy photometry and obtain morphological parameters such as the effective radius, ellipticity, and position angle of the host galaxies. Because we are interested in exploring galaxy properties at several different scales, we perform two fits with two different models. The first model includes a Sersic component and exponential disk component which is used to obtain a bulge effective radius (\(R_{\rm e,bulge}\)). This radius is used to mask a region in the IFU data for obtaining the bulge velocity dispersion and subsequently the black hole mass. The second fit includes a single Sersic component, used to obtain the effective radius of the entire galaxy light profile (\(R_{\rm e,gal}\)). We use this radius to mask the region for general kinematic and stellar population analysis. We fit all galaxies using these two models with the exception of AT2019qiz. The prominent bar in AT2019qiz required the addition of another component in order to isolate the bulge of the galaxy. Instead, we used a model which includes an exponential disk and two Sersic components, one for the bulge and one for the bar, which was sufficient to isolate the bulge and obtain the bulge effective radius. Some galaxies required additional components to mask out nearby stars or faint galaxies in the fitting window, which we included when necessary. We present the results of this fitting, namely the galaxy and bulge effective radii, in Table 4 and show an example fit and residuals in Figure 4. ### Keck Cosmic Web Imager and GIST We present Keck Cosmic Web Imager (KCWI; Morrissey et al., 2018) observations of thirteen TDE host galaxies selected from the ZTF-I sample of TDEs. Integral field spectra were obtained on the night of 2021-12-25 under clear weather conditions (seeing \(\sim 0.8\arcsec\)) as part of program ID N096 (PI: Gezari). Observations for each object, described in Table 3, were obtained using the small (\(8\farcs 4\times 20\farcs 4\)) slicer and 'BM' grating, which gives a nominal resolution of \(R_{0}=8000\) and an average bandpass of 861 A. In Table 3, we provide the instrumental resolution, \(\sigma_{\rm instr}\), for each object measured from the FWHM of the arc spectrum at the observed wavelength of the Ca ii H and K lines. We also provide the days since peak for each observation as well as the average seeing between coadded exposures in Table 3. Three different central wavelengths were used to ensure that important host galaxy stellar absorption lines were observed for each galaxy. The final configurations are as follows: 1. _C1_: Small slicer, 'BM' grating, central wavelength of 4200 A. 2. _C2_: Small slicer, 'BM' grating, central wavelength of 4800 A. 3. _C3_: Small slicer, 'BM' grating, central wavelength of 5200 A. \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline \multicolumn{1}{c}{ ID} & Name & RA & Dec. 
& First TDE Classification & Spectral Class & Redshift & \(\log(M_{\rm gal}/M_{\odot})\) & \(m_{r}\) & \(\sigma_{\rm instr}\) \\ \hline 1 & **AT2018zr** & 07:56:54.55 & +34:15:43.6 & Tucker et al. (2018) & TDE-H & 0.071 & \(10.01^{+0.08}_{-0.14}\) & 18.02 & 18.3 \\ 2 & AT2018bsi & 08:15:26.63 & +45:35:32.0 & Gezari et al. (2018) & TDE-H+He & 0.051 & \(10.62^{+0.09}_{-0.09}\) & 15.50 & 18.8 \\ 3 & **AT2018hyz** & 10:06:50.88 & +01:41:33.9 & Dong et al. (2018) & TDE-H+He & 0.046 & \(9.96^{+0.07}_{-0.16}\) & 16.96 & 16.3 \\ 4 & AT2018lni & 04:09:37.65 & +73:53:41.7 & van Velzen et al. (2021) & TDE-H+He & 0.138 & \(10.10^{+0.10}_{-0.13}\) & 19.46 & 15.4 \\ 5 & AT2018lna & 07:03:18.65 & +23:01:44.7 & van Velzen et al. (2019c) & TDE-H+He & 0.091 & \(9.56^{+0.11}_{-0.14}\) & 19.51 & 17.1 \\ 6 & **AT2019azh** & 08:13:16.95 & +22:38:53.9 & van Velzen et al. (2019a)1 & TDE-H+He & 0.022 & \(9.74^{+0.08}_{-0.05}\) & 14.39 & 22.1 \\ 7 & **AT2019ehz** & 14:09:41.91 & +55:29:27.8 & Gezari et al. (2019) & TDE-H & 0.074 & \(9.81^{+0.09}_{-0.12}\) & 18.72 & 19.8 \\ 8 & AT2019qiz & 04:46:37.88 & \(-10\):13:34.9 & Siebert et al. (2019) & TDE-H+He & 0.015 & \(10.01^{+0.10}_{-0.12}\) & 14.17 & 18.6 \\ 9 & **AT2020ddv** & 09:58:33.42 & +46:54:40.4 & Gezari et al. (2020b) & TDE-He & 0.160 & \(10.30^{+0.13}_{-0.16}\) & 19.37 & 14.9 \\ 10 & **AT2020ocn** & 13:53:53.80 & +53:59:49.7 & Gezari et al. (2020a) & TDE-He & 0.070 & \(10.28^{+0.13}_{-0.17}\) & 17.57 & 18.3 \\ 11 & AT2020qhs & 02:17:53.95 & \(-09\):36:50.9 & Hammerstein et al. (2023) & TDE-featureless & 0.345 & \(11.23^{+0.07}_{-0.07}\) & 19.40 & 13.0 \\ 12 & AT2020wey & 09:05:25.91 & +61:48:09.1 & Arcavi et al. (2020) & TDE-H+He & 0.027 & \(9.63^{+0.09}_{-0.22}\) & 16.61 & 22.1 \\ 13 & AT2020zso & 22:22:17.13 & \(-07\):15:58.9 & Ihance et al. (2020) & TDE-H+He & 0.057 & \(10.05^{+0.09}_{-0.12}\) & 17.03 & 21.4 \\ \hline \end{tabular} Note. – Labels used in figures, RA and Dec, TDE classification references, spectral classes, redshifts, host galaxy stellar masses, and host galaxy apparent \(r\)-band magnitudes for the thirteen objects in our sample. All spectral classifications, redshifts, and host galaxy stellar masses are based on those provided in van Velzen et al. (2021) and Hammerstein et al. (2023). Host magnitudes are derived from Pan-STARRS. X-ray detected events are bolded. We also provide the instrumental resolution, \(\sigma_{\rm instr}\), measured from the FWHM of the arc spectrum at the observed wavelength of the Ca ii H and K lines for each object. \end{table} Table 1: Sample of TDE Host Galaxies Figure 1: SDSS and Pan-STARRS _gri_ images of the thirteen TDE host galaxies, with the yellow rectangle representing the positioning of the KCWI field of view. All images are \(34\arcsec\times 34\arcsec\) and the KCWI field of view is \(8\farcs 4\times 20\farcs 4\). In Figure 1, we overplot the KCWI pointing for each observed galaxy. Three host galaxies, AT2018bsi, AT2019azh, and AT2019qiz, have angular sizes larger than the KCWI field-of-view. For each of these galaxies we obtained sky exposures offset from the host galaxy in order to perform sky subtraction. The observations were reduced using the standard procedure of the KCWI data reduction pipeline (Neill et al., 2023) which includes bias subtraction, flat fielding, cosmic ray removal, sky subtraction, wavelength calibration, heliocentric correction, and flux calibration. 
We used CWITools(O'Sullivan and Chen, 2020) to apply a WCS correction to the KCWI data in'src_fit' mode, which fits 1D profiles to the spatial data to find the peak of the source and then applies a correction to the WCS such that the peak aligns with the input coordinates. We use the Galaxy IFU Spectroscopy Tool (GIST; Bittner et al., 2019) modified to work with KCWI data to obtain the stellar kinematic and population information. The GIST pipeline performs all necessary steps to analyze the KCWI IFU spectra with ppxf(Cappellari, 2022), including spatial masking and binning, SNR determination and masking, stellar kinematic analysis, \begin{table} \begin{tabular}{l r r r} \hline \hline \multicolumn{1}{c}{ Name} & \multicolumn{1}{c}{Obs. Date} & \multicolumn{1}{c}{Exp. Time} & \multicolumn{1}{c}{Seeing} \\ & & \multicolumn{1}{c}{(s)} & \multicolumn{1}{c}{\({}^{\prime\prime}\)} \\ \hline AT2018zr & 2022 Oct. 31 & 150 & 1.0 \\ AT2018bsi & 2022 Dec. 01 & 55 & 1.0 \\ AT2018hyz & 2022 Dec. 01 & 80 & 1.1 \\ AT2018lni & 2022 Dec. 01 & 200 & 1.1 \\ AT2018lna & 2022 Oct. 31 & 200 & 1.1 \\ AT2019azh & 2022 Oct. 31 & 70 & 1.0 \\ AT2019ehz & 2023 Feb. 13 & 120 & 1.9 \\ AT2019qiz & 2022 Dec. 01 & 50 & 1.1 \\ AT2020ddv & 2022 Oct. 31 & 200 & 1.3 \\ AT2020ocn & 2022 Dec. 01 & 100 & 1.1 \\ AT2020qbs & 2022 Dec. 01 & 200 & 1.0 \\ AT2020wey & 2022 Oct. 31 & 80 & 1.4 \\ AT2020zso & 2022 Dec. 01 & 60 & 1.2 \\ \hline \end{tabular} Note. – Summary of observations obtained with LMI, including the observation date, exposure time, and seeing measured from the PSF of the observation. All observations were performed using the SDSS \(r\)-band filter. \end{table} Table 2: Summary of LMI observations Figure 3: The rest-frame, extinction corrected \(u-r\) color as a function of host galaxy mass for the TDE host galaxies and a sample of 955 galaxies from the SAMI survey. The dashed green lines indicate the location of the green valley, the location of which we take from Hammerstein et al. (2023). The colors and shapes of the points indicate the spectral class of TDE for each event. IDs are listed in Table 1. The TDE hosts are typically less massive than the background sample and more often reside in the green valley compared to the background galaxies (\(\sim\)54% vs. \(\sim\)20%). Figure 2: The distribution of redshifts for the TDE host galaxies in our sample. The distribution peaks below \(z\sim 0.1\), with the highest redshift object, AT2020qhs, at \(z=0.345\). Values are taken from van Velzen et al. (2021) and Hammerstein et al. (2023). and stellar population analysis. The X-shooter library of simple stellar population models (XSL; Verro et al., 2022) offers the best spectral resolution (\(\sigma\sim\) 13 km s\({}^{-1}\), \(R\sim 10000\)) and wavelength coverage (3500 A- 24800 A) which matches our KCWI observations (\(\lambda_{\rm obs,min}=3768\) A in configuration C1 and \(\lambda_{\rm obs,max}=5624\) A in configuration C3), meaning we can fit the entire spectral range for each host galaxy. The XSL provides several options for initial mass functions (IMF) and isochrones. We choose the set of models that utilizes the Salpeter IMF (Salpeter, 1955) and PARSEC/COLIBRI isochrones (Bressan et al., 2012; Marigo et al., 2013), which includes stellar populations with ages above 50 Myr and metallicities in the range \(-2.2<\rm{[Fe/H]}<+0.2\), normalized to obtain mass-weighted stellar population results. 
We run the GIST pipeline three times for each host galaxy, each time using different binning and masking criteria, and using 1000 Monte-Carlo simulations to extract the uncertainties on the stellar kinematics. We spatially mask and bin the spaxels for the three different fits as follows: 1. _Bulge \(\sigma\) fit_: Mask all spaxels outside of \(R_{\rm e,bulge}\) obtained from GALFIT; combine remaining spaxels into one bin to obtain \(\sigma\), the bulge velocity dispersion. 2. _Galaxy \((V/\sigma)_{e}\) fits_: Mask all spaxels outside of \(R_{\rm e,gal}\) obtained from GALFIT; apply no binning to obtain the spatially resolved galaxy line-of-sight velocities (\(V\)) and velocity dispersions (\(\sigma\)), with \((V/\sigma)_{e}\) being the ratio of random to ordered motion within the galaxy effective radius. 3. _Stellar population fit_: Mask all spaxels outside of \(R_{\rm e,gal}\) obtained from GALFIT; combine remaining spaxels into one bin. We are motivated to perform three different fits for several reasons. The first is so that our black hole masses are determined only from the bulge velocity dispersions, with the bulge effective radius determined from the two component GALFIT fit. The second is so that our determination of the large-scale kinematics and stellar population properties follows most closely the methods of van de Sande et al. (2018), who perform two fits within an ellipse that encloses half of the projected total galaxy light: one which is similar to our galaxy \((V/\sigma)_{e}\) fit and one which is similar to our stellar population fit. There are four cases in which the bulge effective radius is smaller than the seeing of the KCWI observations: AT2018lni, AT2020ddv, AT2020ocn, and AT2020qhs. For these objects, instead of simply using the bulge effective radius given by GALFIT to perform the bulge \(\sigma\) fit, we use the sum in quadrature of the bulge effective radius and the seeing given in Table 3. The galaxy effective radius for AT2018lni is also smaller than the seeing, and in this case, we use the sum in quadrature of the galaxy effective radius and the seeing to perform the galaxy \((V/\sigma)_{e}\) fits and the stellar population fit. We present and discuss the results of this analysis in the next sections. ## 3 Results Figure 4: A 29\({}^{\prime\prime}\times\)29\({}^{\prime\prime}\) cutout of the LMI observations of the host galaxy of AT2018bsi, shown with the GALFIT model and residuals. All images are on the same scale. GALFIT is able to model the host galaxy reasonably well with the residuals showing potential dust lane or spiral arm features which are not as straightforward to model with GALFIT and for the purposes of the study presented here, are unimportant. In the left panel we show two ellipses representing the fitted bulge effective radius (\(R_{\rm e,bulge}\), cyan) and the disk effective radius (where the relationship between the effective radius and the scale length of the disk is \(R_{\rm e,disk}=1.678R_{\rm s,disk}\), white). We present the results of our kinematic and stellar population analysis on the KCWI spectra of the 13 TDE host galaxies. We summarize our main results in Table 4. In Figure 5, we show a white light image of the host galaxy of AT2019azh and example output maps from GIST, including the line-of-sight velocity and velocity dispersion as well as the stellar population age and metallicity. In Figure 6, we show the bins constructed by GIST, as well as two example spectra and ppxf fits from different bins. 
The output we show in Figures 5 and 6 involves no spatial masking like that described in Section 2.2, but instead masks spaxels below the isophote level which has a mean SNR of 2.2. This particular fit is not used for any analysis and is for illustrative purposes only. One important comparison to make for all results is that of the differing angular resolutions resulting from the range of redshifts for the TDE hosts. As such, we investigate whether angular resolution may influence the results we discuss in Sections 4 and 5. We split our sample into three different angular resolution bins: 1. \(\sim\)0.5 kpc/\({}^{\prime\prime}\): AT2019azh, AT2019qiz, AT2020wey; 2. \(\sim\)1.0 kpc/\({}^{\prime\prime}\): AT2018bsi, AT2018hyz, AT2020zso; 3. \(\gtrsim\)1.3 kpc/\({}^{\prime\prime}\): AT2018zr, AT2018lni, AT2018lna, AT2019ehz, AT2020ddv, AT2020ocn, AT2020qhs. We perform an Anderson-Darling test to compare these three subsamples and find that we cannot reject the null hypothesis that they are drawn from the same distribution of host galaxy stellar mass, velocity dispersion, black hole mass, or \((\mathrm{V}/\sigma)_{e}\) (\(p\)-value \(\geq 0.25\) for all tests). However, the sample sizes compared are small and may not provide a true measure of how angular resolution affects studies such as the one presented here. In the following sections, we discuss our results on obtaining the black hole masses and characterizing the host galaxy stellar kinematics and populations. ## 4 Black hole masses We derive the black hole masses through the \(M_{\mathrm{BH}}-\sigma\) relation of Gultekin et al. (2009), assuming that this relation holds for all galaxies in this sample: \[\log(M_{\mathrm{BH}}/M_{\odot})=8.12+4.24\log\left(\frac{\sigma}{200\ \mathrm{km\ s^{-1}}}\right) \tag{1}\] We propagate the uncertainties on the velocity dispersion through this relation and add them linearly with the intrinsic scatter on the relation to obtain the uncertainty on the black hole mass. In Figure 7, we show the distribution of black hole masses for the entire sample in addition to the subsamples of X-ray bright and X-ray faint events. We find that the distribution peaks at \(\log(M_{\mathrm{BH}}/M_{\odot})=6.05\) with a range of masses \(4.98\leq\log(M_{\mathrm{BH}}/M_{\odot})\leq 8.01\), which is consistent with previous studies performing a similar analysis (e.g., Wevers et al., 2017, 2019; Yao et al., 2023). We examine whether the populations of X-ray bright and X-ray faint events show any significant difference in their black hole mass distributions by performing an Anderson-Darling test and find that we cannot reject the null hypothesis that the X-ray bright and X-ray faint samples are drawn from the same distribution in black hole mass (\(p\)-value \(\geq 0.25\)). This is consistent with several previous studies (e.g., Wevers et al., 2019; French et al., 2020; Hammerstein et al., 2023) which largely found no significant difference in the black hole, host galaxy, or even light curve properties between X-ray bright and X-ray faint TDEs. This lack of difference between X-ray bright and X-ray faint populations may be explained by the unifying theory of Dai et al. (2018), which posits that whether or not X-rays are observed in a particular TDE is a matter of viewing angle effects. Figure 8 shows the black hole mass as a function of the velocity dispersion along with several derived relations from the literature, including Gultekin et al. (2009), Xiao et al. (2011), and Kormendy & Ho (2013).
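As a worked illustration of Eq. (1), the sketch below converts a velocity dispersion and its uncertainty into a black hole mass, propagating the dispersion error through the relation and adding it linearly to an intrinsic-scatter term as described above. The 0.44 dex scatter is an assumed placeholder for the Gultekin et al. (2009) relation, not a value restated from this work.

```python
import numpy as np

def log_mbh_from_sigma(sigma, sigma_err, intrinsic_scatter=0.44):
    """Eq. (1): log(M_BH/M_sun) = 8.12 + 4.24 log(sigma / 200 km/s).

    The dispersion error is propagated through the relation and then added
    linearly to an assumed intrinsic scatter (in dex), following the text.
    """
    log_mbh = 8.12 + 4.24 * np.log10(sigma / 200.0)
    err = 4.24 * sigma_err / (sigma * np.log(10.0)) + intrinsic_scatter
    return log_mbh, err

# Example with AT2019azh-like values from Table 4 (sigma = 68.01 +/- 2.02 km/s);
# the central value comes out near log(M_BH/M_sun) ~ 6.13.
print(log_mbh_from_sigma(68.01, 2.02))
```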
While values derived from the Kormendy & Ho (2013) relation would generally be higher than those derived from the Gultekin et al. (2009) relation, the Xiao et al. (2011) \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{ Name} & Config. & Exp. Time & \(\Delta t_{\mathrm{obs-peak}}\) & Seeing \\ & & (s) & (days) & \({}^{\prime\prime}\) \\ \hline AT2018zr & C1 & \(2\times 900\) & 1372 & 0.72 \\ AT2018bsi & C1 & \(2\times 150\) & 1362 & 0.65 \\ AT2018hyz & C1 & \(2\times 600\) & 1150 & 0.61 \\ AT2018lni & C2 & \(2\times 1800\) & 1097 & 0.69 \\ AT2018lna & C1 & \(2\times 1500\) & 1067 & 0.82 \\ AT2019azh & C1 & \(2\times 100\) & 1008 & 0.68 \\ AT2019ehz & C1 & \(2\times 1000\) & 960 & 0.65 \\ AT2019qiz & C1 & \(2\times 500\) & 807 & 0.95 \\ AT2020ddv & C2 & \(2\times 1500\) & 655 & 0.71 \\ AT2020ocn & C1 & \(2\times 600\) & 585 & 0.52 \\ AT2020qhs & C3 & 1350, 500 & 511 & 0.75 \\ AT2020wey & C1 & \(2\times 200\) & 418 & 0.74 \\ AT2020zso & C1 & 300, 600 & 386 & 0.66 \\ \hline \end{tabular} Note. – Summary of observations obtained with KCWI, including the instrument configuration, exposure times, days post-peak from the tidal disruption flare, and the average seeing for the coadded observations. \(t_{\mathrm{peak}}\) is taken from Hammerstein et al. (2023). The configuration notation is described in Section 2.2. \end{table} Table 3: Summary of KCWI observations relation is flatter, with higher velocity dispersion values yielding lower black hole masses and lower velocity dispersion values yielding higher black hole masses. We discuss further implications of our choice of \(M_{\rm BH}-\sigma\) relation used to derive black hole masses in Sections 4.2 and 4.3. In Figure 9, we show the derived black hole masses as a function of host galaxy stellar mass along with several empirical relations from the literature. Reines and Volonteri (2015) derived the relations for AGN and inactive galaxies, while Greene et al. (2020) derived the relations for late, early, and all galaxy types. Importantly, Greene et al. (2020) used upper limits in their calculations which are crucial for including low-mass systems, such as the ones that host TDEs, in the relation. We also show the fitted relation from Yao et al. (2023), which was derived by fitting a linear relation between \(M_{\rm gal}\) and \(M_{\rm BH}\) for the TDE hosts in their sample. Rather interestingly, the TDE hosts most closely follow the relation for late-type galaxies, despite very few being classified as such. This could be explained by the very few low-mass early-type galaxies used in deriving the relations for early-type galaxies and all galaxy types. Alternatively, this may be caused by our choice in \(M_{\rm BH}-\sigma\) relation, although each scaling will have its own resulting offset. ### Comparisons to previous measurements All objects in our sample have previously measured black hole masses through a variety of methods, although only three have previously measured velocity dispersions. We compare our estimate of the black hole mass derived from the bulge velocity dispersion and \(M_{\rm BH}-\sigma\) relation with previous estimates using the same method. _AT2019azh_: Yao et al. (2023) derived the black hole mass for AT2019azh by fitting the optical ESI spectrum using ppxf. They found \(\sigma_{\star}=67.99\pm 2.03\) km s\({}^{-1}\), corresponding to a black hole mass of \(\log(M_{\rm BH}/M_{\odot})=6.44\pm 0.33\) using the \(M_{\rm BH}-\sigma\) relation of Kormendy and Ho (2013). 
Our value of \(\sigma_{\star}=68.01\pm 2.02\) km s\({}^{-1}\) is consistent with that of Yao et al. (2023). _AT2020wey_: Yao et al. (2023) also measured the velocity dispersion of the host galaxy of AT2020wey in the same manner as AT2019azh, finding \(\sigma_{\star}=39.36\pm 2.79\) km s\({}^{-1}\). We find a significantly higher value for the velocity dispersion of \(\sigma_{\star}=53.54\pm 4.75\) km s\({}^{-1}\). It is possible that with the small effective radius of AT2020wey (see Table 4), the long-slit spectra used to derive the velocity dispersion in Yao et al. (2023) are inclusive of stars much farther from the bulge effective radius and thus have lower velocity dispersions. This may explain \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & kpc/\({}^{\prime\prime}\) & \(R_{\rm e,gal}\) & \(R_{\rm e,bulge}\) & \(\sigma_{\star}\) (km s\({}^{-1}\)) & \(\log(M_{\rm BH}/M_{\odot})\) & (V/\(\sigma\))\({}_{e}\) & Age (Gyr) \\ \hline AT2018zr & 1.35 & 1.87 & 0.89 & 49.79\(\pm\)4.93 & 5.56\(\pm\) 0.76 & 0.52\(\pm\) 0.20 & 2.65 \\ AT2018bsi & 1.00 & 6.15 & 1.84 & 117.54\(\pm\)8.12 & 7.14\(\pm\) 0.62 & 0.93\(\pm\) 0.15 & 0.57 \\ AT2018hyz & 0.90 & 1.34 & 0.69 & 66.62\(\pm\)3.12 & 6.10\(\pm\) 0.67 & 0.12\(\pm\) 0.05 & 6.95 \\ AT2018lni & 2.44 & 0.56 (0.88) & 0.34 (0.78) & 59.47\(\pm\)3.78 & 5.89\(\pm\) 0.70 & 0.26\(\pm\) 0.09 & 8.65 \\ AT2018lna & 1.70 & 1.15 & 0.92 & 36.43\(\pm\)4.52 & 4.98\(\pm\) 0.83 & 0.78\(\pm\) 0.38 & 3.23 \\ AT2019azh & 0.45 & 9.75 & 2.52 & 68.01\(\pm\)2.02 & 6.13\(\pm\) 0.66 & 0.88\(\pm\) 0.11 & 8.68 \\ AT2019ehz & 1.41 & 1.76 & 1.15 & 46.65\(\pm\)11.83 & 5.44\(\pm\) 0.98 & 0.37\(\pm\) 0.20 & 6.03 \\ AT2019qiz & 0.31 & 8.85 & 2.27 & 71.85\(\pm\)1.93 & 6.23\(\pm\) 0.65 & 0.71\(\pm\) 0.08 & 2.15 \\ AT2020ddv & 2.76 & 0.88 & 0.47 (0.85) & 73.44\(\pm\)10.06 & 6.28\(\pm\) 0.78 & 0.09\(\pm\) 1.11 & 6.12 \\ AT2020ocn & 1.34 & 1.40 & 0.28 (0.59) & 90.15\(\pm\)4.46 & 6.65\(\pm\) 0.63 & 0.36\(\pm\) 0.14 & 8.09 \\ AT2020qhs & 4.89 & 2.05 & 0.72 (1.04) & 188.69\(\pm\)37.86 & 8.01\(\pm\) 0.82 & 0.53\(\pm\) 0.15 & 1.98 \\ AT2020wey & 0.55 & 2.49 & 0.87 & 53.54\(\pm\)4.75 & 5.69\(\pm\) 0.74 & 0.40\(\pm\) 0.32 & 8.43 \\ AT2020zso & 1.10 & 2.57 & 1.08 & 61.80\(\pm\)4.93 & 5.96\(\pm\) 0.71 & 1.08\(\pm\) 0.27 & 6.32 \\ \hline \end{tabular} Note. – The results from our photometric and kinematic analysis of the LMI and KCWI data, including the galaxy and bulge half light radii measured from GALFIT, the bulge velocity dispersion and derived black hole mass, the ratio of ordered rotation to random stellar motion (V/\(\sigma\))\({}_{e}\), and the stellar population age within the galaxy effective radius. For AT2018lni, AT2020ddv, AT2020ocn, and AT2020qhs, the values in parentheses are the values obtained from adding the GALFIT values and the KCWI seeing in quadrature, and are the values used to extract the bulge \(\sigma\) fits, and in the case of AT2018lni, the galaxy kinematics and stellar population fits. \end{table} Table 4: Results from photometric and kinematic analysis the discrepancy we see here. Indeed, a fit to the entire host galaxy of AT2020wey reveals that regions away from the nucleus have much lower velocity dispersions (\(\sim 24\) km s\({}^{-1}\)) which may influence the resulting black hole mass derived from stellar kinematics. _AT2019qiz_: Nicholl et al. (2020) fit the late time X-shooter spectrum of AT2019qiz using ppxf and found \(\sigma_{\star}=69.7\pm 2.3\) km s\({}^{-1}\). 
Our value for the velocity dispersion is marginally higher, \(\sigma_{\star}=71.85\pm 1.93\) km s\({}^{-1}\), but still consistent within the mutual uncertainties of the two measurements. All objects in our sample also have at least one estimate of the black hole mass obtained from fitting the TDE light curve with the MOSFiT (Guillochon et al., 2018) TDE model (Mockler et al., 2019). The TDE model fits each TDE by generating bolometric light curves via hydrodynamical simulations and passing them through viscosity and reprocessing transformation functions to create the single-band, observed light curves. MOSFiT then uses the single-band light curves to fit the multi-band input data to estimate the light-curve properties and information on the disrupted star in addition to the mass of the SMBH. Hammerstein et al. (2023) used MOSFiT to fit the light curves of every object in our sample, but found no significant correlation with the host galaxy mass. We now reexamine any potential correlation using the derived black hole mass instead. In Figure 10, we show the MOSFiT black hole mass as a function of the black hole mass we have derived here. The gray dashed line indicates a one-to-one relationship. While we do find a weak positive correlation between the MOSFiT masses Figure 5: An example output from GIST of the host galaxy of AT2019azh. The left panel shows an unbinned white light image of the KCWI observation. The panels on the right depict the output maps from GIST, which show the ppxf-derived line-of-sight velocity, velocity dispersion, and stellar population ages and metallicities. The bins in this figure are constructed using the Voronoi binning method (Cappellari and Copin, 2003) to reach a threshold SNR for each bin, in this case SNR \(\sim 10\). We note that Voronoi binning is not performed for the fits used in the analysis. and the masses we derive here using a Kendall's tau test (\(\tau=0.05\)), it is not significant (\(p\)-value \(=0.9\)). As our \(M_{\rm BH}-\sigma\) derived black hole masses are so well correlated with the host galaxy masses from Hammerstein et al. (2023), it is not surprising that we do not find a significant correlation between the MOSFiT masses and our masses. Given that the MOSFiT masses are typically orders of magnitude larger than those inferred through the \(M_{\rm BH}-\sigma\) relation, it is possible that an underestimation of the black hole mass due to uncertainties of the relation at such low velocity dispersions is causing the discrepancy. Additional updates to the MOSFiT TDE model, which will be presented in Mockler & Nicholl et al. (2023, in prep), may also help to address the discrepancies. Hammerstein et al. (2023) also estimated the black hole mass using the TDEmass code (Ryu et al., 2020), which assumes that circularization happens slowly, and that the UV/optical emission arises from shocks in the intersecting debris streams instead of in an outflow or wind. Again, they found no significant correlation between the SMBH mass and the host galaxy mass. We show the TDEmass SMBH mass as a function of the SMBH mass derived from stellar kinematics in Figure 10, with the gray dashed line indicating a one-to-one relationship. We note that the mass for AT2020qhs (ID 11) was not able to be determined with TDEmass. We find Figure 6: Example ppxf fits to the host galaxy of AT2019azh output from GIST. The left panel shows the bins constructed with GIST where the color represents the bin to which each spaxel belongs.
Bins are constructed using the Voronoi binning method (Cappellari & Copin 2003) to reach a threshold SNR for each bin, in this case SNR \(\sim 10\). We note that Voronoi binning is not performed for the fits used in the analysis. The two panels on the right show the spectra (purple and teal lines) and ppxf fits (black lines) from the outlined bins on the left. We show the uncertainties on the spectra in gray. Figure 7: Distribution of black hole masses for the host galaxies in our sample. We show the entire sample in black, with the divisions on X-ray bright vs. X-ray faint in purple and orange, respectively. The distribution peaks at \(\log(M_{\rm BH}/M_{\odot})=6.05\), consistent with previous results for similar analyses. We find no significant difference in black hole masses between the X-ray bright (6 total) and X-ray faint (7 total) events. no significant correlation between the TDEmass values for the black hole mass and the ones we derive here (\(p\)-value = 0.4). While it is not surprising that the MOSFiT and TDEmass values do not agree, as they derive the black hole mass using differing assumptions on the origin of the UV/optical emission, the lack of any correlation with host galaxy properties is puzzling. Previous studies (e.g., Ramsden et al., 2022; Mockler et al., 2019) which derive the black hole mass from MOSFiT have found weak correlations between the SMBH mass and properties such as the bulge mass and host galaxy stellar mass, but parameters such as the bulge mass can be difficult to determine for TDE host galaxies without sensitive imaging given their masses and redshifts. On the other hand, studies like Wevers et al. (2019) have confirmed a disparity between SMBH masses measured using MOSFiT and those from host scaling relations such as \(M_{\rm BH}-\sigma\). The lack of correlation is not entirely discouraging, as there is indeed some correlation between light curve properties such as rise and fade timescale and the black hole mass (van Velzen et al., 2021; Nicholl et al., 2022; Hammerstein et al., 2023; Yao et al., 2023), and perhaps indicates a need to revisit the exact ways in which the properties of the black hole are imprinted onto the observed TDE light curves. ### Correlations with TDE light curve properties Many previous studies have found significant correlations between the light curve properties of TDEs and the black hole mass or, more often, the host galaxy mass. van Velzen et al. (2021) found a correlation between the decay timescale and host galaxy stellar mass, which Hammerstein et al. (2023) further confirmed with a larger sample. This is consistent with many previous results in the literature (e.g., Blagorodnova et al., 2017; Wevers et al., 2017). Hammerstein et al. (2023) additionally found a weak correlation between the rise timescale and the host galaxy stellar mass as well as between the peak luminosity and the host galaxy stellar mass. We now reexamine the correlations with host galaxy stellar mass presented in Hammerstein et al. (2023). Between the SMBH mass and the decay rate for the 13 TDEs, we find a weak positive correlation with a Kendall's tau test, but the \(\tau=0.26\) correlation is not Figure 8: The black hole mass as a function of the velocity dispersion, along with several derived relations from the literature. We employ the relation of Gülekin et al. (2009) (Gütlekin+09) to derive the black hole masses presented here. 
Black hole masses derived from Kormendy & Ho (2013) (K&H13) would generally be higher than those derived from Gultekin et al. (2009), while the Xiao et al. (2011) relation (Xiao+11) would yield lower masses at the higher velocity dispersion end of the relation and higher masses at the lower velocity dispersion end of the relation. Labels for each TDE are in Table 1. Figure 9: The black hole mass as a function of the host galaxy stellar mass. We show several derived \(M_{\rm BH}-M_{\rm gal}\) relations. Black dashed and long-dashed lines show the relations from Reines & Volonteri (2015) derived from AGN host galaxies and inactive galaxies, respectively. The blue dashed, dotted, and dot-dashed lines show the relations from Greene et al. (2020) derived from late-type galaxies, early-type galaxies, and all galaxy types, respectively. We also show the fitted relation from Yao et al. (2023), which was fit only for TDE hosts. Labels for each TDE are in Table 1. significant with \(p\)-value \(=0.25\). The Kendall's tau test between the SMBH mass and the rise timescale results in \(\tau=0.41\), but again is not significant with a \(p\)-value \(=0.06\). We no longer find a correlation between the black hole mass and the peak blackbody luminosity. While we generally find the same trends as previous works, our smaller sample size weakens our ability to make significant conclusions and the disappearance of significant correlations here should be interpreted with caution. The black hole mass now makes it possible to compare the peak blackbody luminosity of the TDE light curves with the Eddington luminosity implied by the black hole mass. We define the Eddington luminosity as \(L_{\rm Edd}\equiv 1.25\times 10^{38}\,(M_{\rm BH}/M_{\odot})\) erg s\({}^{-1}\) and take values for the peak blackbody luminosity from Hammerstein et al. (2023) measured using the peak UV/optical SED. In Figure 11, we show the peak blackbody luminosity as a function of the Eddington luminosity, with solid, dashed, and dotted curves representing lines of constant Eddington ratio. All of our events are consistent with being at or below the Eddington luminosity (solid line), apart from AT2018lna (ID 5), with its blackbody luminosity significantly super-Eddington even at the maximum extent of its uncertainties. We note that this is also the lowest mass object in our sample with \(\log(M_{\rm BH}/M_{\odot})=4.98\pm 0.83\). The apparent significantly super-Eddington luminosity may be due to the large uncertainty on the calibration of the \(M_{\rm BH}-\sigma\) relation at such low velocity dispersions, although without larger samples of dynamically measured masses for intermediate mass black holes, this problem is hard to constrain (for a review on such measurements, see Greene et al. 2020). If we instead obtain the mass for AT2018lna using the relation from Xiao et al. (2011), derived from active galaxies with low black hole masses, we find that the resulting black hole mass is higher, \(\log(M_{\rm BH}/M_{\odot})=5.22\), although the peak luminosity is still super-Eddington. The mass for AT2018lna should thus be interpreted with caution. Super-Eddington mass fallback rates are not unexpected for black holes with such low masses, with the duration of \(\dot{M}/\dot{M}_{\rm Edd}>1\) being longer for smaller black holes (De Colle et al. 2012).
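For reference, the bookkeeping behind Figure 11 reduces to the short sketch below; the peak blackbody luminosity used in the example is an arbitrary illustrative value rather than one of the Hammerstein et al. (2023) measurements.

```python
def eddington_luminosity(log_mbh):
    """L_Edd = 1.25e38 (M_BH / M_sun) erg/s, as defined in the text."""
    return 1.25e38 * 10.0 ** log_mbh

def eddington_ratio(peak_lbb, log_mbh):
    """lambda_Edd = L_bb / L_Edd for a peak blackbody luminosity in erg/s."""
    return peak_lbb / eddington_luminosity(log_mbh)

# Illustrative example: a 10^43.9 erg/s flare around a log(M_BH/M_sun) = 6.05 black hole
# (prints lambda_Edd ~ 0.57, i.e., sub-Eddington).
print(f"lambda_Edd = {eddington_ratio(10**43.9, 6.05):.2f}")
```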
AT2018lna, the lowest mass black hole and the one with the largest Eddington ratio, does indeed follow the expected behavior of longer super-Eddington durations for lower-mass black holes, its bolometric luminosity staying above Eddington for much longer than the other objects in this sample when examining the light curve fits of Hammerstein et al. (2023). AT2020qhs is an outlier in black hole mass, but not necessarily an outlier in its Eddington ratio. Wevers et al. (2019) found that the TDE candidate ASASSN-15lh possessed similar qualities and that the observed emission is consistent with the peak Eddington ratio and luminosity of a maximally spinning Kerr black hole. As we discuss in Section 4.3, a non-negligible spin may explain the properties of AT2020qhs. Yao et al. (2023) found a correlation between the Eddington ratio (\(\lambda_{\rm Edd}\equiv L_{\rm bb}/L_{\rm Edd}\)) and the black hole mass which was inconsistent with the expected ratio Figure 10: _Top panel_: The black hole mass derived from MOSFiT as a function of the black hole mass we derive from host kinematics. The gray dashed line indicates a one-to-one relationship. We do not find a significant correlation between the two measurements. _Bottom panel_: The black hole mass derived from TDEmass as a function of the black hole mass we derive from host kinematics. The gray dashed line indicates a one-to-one relationship. We note that the mass for AT2020qhs (ID 11) was not able to be determined with TDEmass. We do not find a significant correlation between the two measurements. Labels for each TDE are in Table 1. between the peak fallback rate and Eddington accretion rate. Instead, they found a much shallower relation between \(\dot{M}_{\rm fb}/\dot{M}_{\rm Edd}\) and the black hole mass, which they attribute to either Eddington-limited accretion or that the UV/optical luminosity only captures a fraction of the total bolometric luminosity. We report similar findings here, with a moderate negative correlation between \(\lambda_{\rm Edd}\) and \(M_{\rm BH}\) resulting from a Kendall's tau test (\(\tau=-0.46\), \(p\)-value = 0.03). In Figure 12, we show \(\log(\lambda_{\rm Edd})\) as a function of \(M_{\rm BH}\), along with the fitted relations from Yao et al. (2023) (solid line, fitted for all 33 TDEs in their sample: \(\dot{M}_{\rm fb}/\dot{M}_{\rm Edd}\propto M_{\rm BH}^{-0.49}\); dashed line, correcting for selection bias by only fitting objects with \(z<0.24\): \(\dot{M}_{\rm fb}/\dot{M}_{\rm Edd}\propto M_{\rm BH}^{-0.72}\)) and the expected relation \(\dot{M}_{\rm fb}/\dot{M}_{\rm Edd}\propto M_{\rm BH}^{-3/2}\). Visual inspection shows that the relation for our sample may be steeper than that found by Yao et al. (2023). ### AT2020qhs and the TDE-featureless class We now turn our attention specifically to AT2020qhs (ID 11), which is a notable event for several reasons. AT2020qhs is a member of the new class of featureless TDEs put forth by Hammerstein et al. (2023). These events are characterized by optical spectra showing a strong blue continuum but with no broad Balmer or He ii emission typical of the optical spectra of TDEs. The peak flare luminosities of these events are several orders of magnitude larger than those of broad-line TDEs, but the rise and fade timescales are similar to the other spectral classes. The host galaxies of TDE-featureless events are typically more massive than those of broad-line TDEs, suggestive of a higher central black hole mass. Indeed, we find that AT2020qhs possesses the highest black hole mass in our sample, with \(\log(M_{\rm BH}/M_{\odot})=8.01\pm 0.82\).
We caution, however, that AT2020qhs is also the highest redshift event in our sample, and as such has the lowest spatial resolution of any event in our sample (4.89 kpc/\(\arcsec\)). Additionally, the choice of \(M_{\rm BH}-\sigma\) relation can affect the derived black hole mass, which may have implications for the resulting conclusions made here. Yao et al. (2023) measured the velocity dispersions for two additional TDE-featureless events, AT2020acka (Hammerstein et al., 2021; Yao et al., 2023) and AT2021ehb (Gezari et al., 2021; Yao et al., 2022), and found corresponding black hole masses of \(\log(M_{\rm BH}/M_{\odot})=8.23\pm 0.40\) and \(\log(M_{\rm BH}/M_{\odot})=7.16\pm 0.32\), respectively. If we use the Greene et al. Figure 11: The peak blackbody luminosity as a function of the Eddington luminosity implied by the black hole mass. The solid, dashed, and dotted lines indicate constant Eddington ratios. We find that nearly all TDEs in our sample are consistent with being at or below the Eddington limit, with the exception of AT2018lna. This object has the lowest velocity dispersion in our sample and the black hole mass should be interpreted with caution. Labels for each TDE are in Table 1. Figure 12: The Eddington ratio as a function of black hole mass. The dotted line is the expected Eddington ratio for the peak fallback accretion rate and the solid and dashed lines are the fitted relations from Yao et al. (2023) where \(\dot{M}_{\rm fb}/\dot{M}_{\rm Edd}\propto M_{\rm BH}^{-0.49}\) and \(\dot{M}_{\rm fb}/\dot{M}_{\rm Edd}\propto M_{\rm BH}^{-0.79}\), respectively. We find a moderate negative correlation between \(\lambda_{\rm Edd}\) and the black hole mass, with the relation shallower than the expected \(\lambda_{\rm Edd}\propto M_{\rm BH}^{-3/2}\), but likely steeper than that obtained by Yao et al. (2023). Labels for each TDE are in Table 1. (2020) \(M_{\rm BH}-M_{\rm gal}\) relation for late-type galaxies to estimate the black hole masses for the remaining three featureless events in the Hammerstein et al. (2023) sample, AT2018jbv, AT2020riz, and AT2020ysg, we obtain masses within the range \(\log(M_{\rm BH}/M_{\odot})=6.48\) - 7.70, which are still among the highest masses of those obtained here. The dependence of the tidal radius and the Schwarzschild radius on the black hole mass is such that above \(\sim 10^{8}M_{\odot}\) (sometimes called the "Hills mass"; Hills 1975), a solar-type star will typically pass beyond the black hole's event horizon undisturbed, producing no visible flare. While the black hole mass for AT2020qhs is above this limit, it is still possible to produce an observable TDE around a SMBH of this size. The Hills mass may be exceeded through the disruption of giant stars, although the long timescales and lower luminosities of these events makes it less likely that they will be detected and noted by traditional TDE search methods (Syer and Ulmer, 1999; MacLeod et al., 2012). This explanation for such a high black hole mass seems unlikely, as the TDE-featureless class is shown to have the highest luminosities of any TDE class while the timescales for these events are comparable to other classes of TDEs (Hammerstein et al., 2023). A more favorable explanation is that the SMBH of AT2020qhs possesses a non-negligible spin which serves to increase the Hills mass (Kesden, 2012), as was similarly suggested for the TDE candidate ASASSN-15lh (Leloudas et al., 2016). 
It has been shown, however, that such SMBHs will contribute only marginally to the overall TDE rate (Stone and Metzger, 2016). The low predicted rates of spinning SMBHs amongst TDEs may not be a large concern, as Hammerstein et al. (2023) noted that most of the TDE-featureless events occur at high redshifts, implying that a larger volume is needed to observe them and hinting at their rarity. Following the work of Kesden (2012) and under the assumption that the disrupted star was of solar type, we can place a lower limit on the spin of the AT2020qhs black hole of \(a\gtrsim 0.16\). However, if we instead derive the black hole mass for AT2020qhs using the relation from Xiao et al. (2011), the black hole mass becomes \(\log(M_{\rm BH}/M_{\odot})=7.60\), which requires no spin for the disruption of a solar type star. We note that the disruption of a higher mass star can also potentially explain the black hole mass of AT2020qhs. Leloudas et al. (2015) also addressed this for ASASSN-15lh, finding that only star masses greater than \(\sim 3M_{\odot}\) can be disrupted by a non-rotating Schwarzschild black hole. These events are also rare (Stone and Metzger, 2016; Kochanek, 2016), but may be a plausible explanation for AT2020qhs flare. Mockler et al. (2022) used measurements of the N iii to C iii ratio in UV spectra to infer the masses of the disrupted stars, finding that the observed ratios necessitate the disruption of more massive stars in the post-starburst hosts they targeted. Larger samples of UV spectra for all TDE types and black hole masses are needed to further investigate whether this is the case for TDE-featureless events such as AT2020qhs. Spin has been invoked to explain other phenomena observed in TDEs, such as the launching of relativistic jets. Recently, Andreoni et al. (2022) reported the discovery of a jetted TDE in the ZTF survey, concluding that a high spin is likely required to produce such jets. They put a lower limit on the spin parameter of \(a\gtrsim 0.3\). Andreoni et al. (2022) also noted the similarities between AT2022cmc and the TDE-featureless class, with the comparable peak flare luminosities and similar lack of broad emission lines in spectra suggesting a connection between the two classes of events. They propose that TDE-featureless events may be jetted TDEs observed off-axis, but further multi-wavelength follow-up of these events is needed to confirm this hypothesis. Nonetheless, the black hole masses AT2020qhs and AT2020acka imply SMBHs with rapid spins and further bolster the possible connection between jetted TDEs and the TDE-featureless class. ## 5 Galaxy Kinematics and Stellar Populations We now investigate the kinematic properties on the scale of the effective radius of the entire galaxy light profile (\(R_{\rm e,gal}\)). Our fits using ppxf yield velocities and velocity dispersions, which can be used to estimate the level of rotational support the TDE hosts possess, quantified by the ratio of ordered to random stellar motion \((V/\sigma)_{e}\), where lower values of \((V/\sigma)_{e}\) indicate a higher degree of random stellar motions. We adopt the formula of Cappellari et al. 
(2007), defined for integral field data: \[\left(\frac{V}{\sigma}\right)^{2}_{e}\equiv\frac{\langle V^{2}\rangle}{\langle\sigma^{2}\rangle}=\frac{\Sigma_{n=1}^{N}F_{n}V_{n}^{2}}{\Sigma_{n=1}^{N}F_{n}\sigma_{n}^{2}}, \tag{2}\] where \(F_{n}\) is the flux contained within the \(n\)th bin, while \(V_{n}\) and \(\sigma_{n}\) are the mean measured velocity and velocity dispersion within that bin. In Figure 13 we show the \((V/\sigma)_{e}\) for the thirteen TDE host galaxies as a function of stellar population age. We also show the same comparison sample of galaxies as in Figure 3. The top and side panels of Figure 13 show the distribution of galaxies in the red sequence, which hosts largely quiescent, elliptical galaxies, the blue cloud, which hosts primarily star-forming galaxies, and the green valley, which hosts recently quenched galaxies, defined from Figure 3, E+A galaxies, and the TDE hosts. E+A galaxies from the SAMI survey were selected using the H\(\alpha\) equivalent width and Lick H\(\delta_{\rm A}\) absorption index using values presented in the MPA+JHU catalogs (Brinchmann et al., 2004). We note that only a third of the galaxies in the SAMI survey have a counterpart in the MPA+JHU catalog. The H\(\alpha\) equivalent width is limited to \(<4.0\) A and the H\(\delta_{A}\) index is limited to H\(\delta_{\rm A}-\sigma(\)H\(\delta_{\rm A})>4.0\) A to isolate post-starburst galaxies. van de Sande et al. (2018) found a strong correlation between the ratio of ordered rotation to random stellar motion and the stellar population age of a galaxy, such that younger stellar populations are predominantly rotationally supported as in late-type galaxies while older stellar populations are pressure supported by random stellar motions as in early-type galaxies. They also found that \((V/\sigma)_{e}\) is linked to the observed shape (quantified by the ellipticity \(\epsilon\)). These correlations link a galaxy's star formation history with its merger history, as mergers will enhance the formation of bulges which in turn lowers a galaxy's \((V/\sigma)_{e}\) and ellipticity. We find that the TDE host galaxies largely follow this same relation between \((V/\sigma)_{e}\) and stellar population age, apart from two outliers, AT2019azh and AT2020zso (IDs 6 and 13, respectively). AT2019azh is a known E+A galaxy; E+A galaxies have been shown to have varied central stellar population ages and young stellar populations not necessarily confined to the nucleus (Norton et al., 2001; Pracy et al., 2009). This may affect the measurement of the host galaxy stellar population age in the central regions in unforeseen ways. The close link between the merger history, stellar population age, and stellar kinematics is very likely a driving factor behind post-starburst color (used as a proxy for stellar population age) and morphology, and may help explain the TDE preference for such environments. Even before van de Sande et al. (2018) noted the connection between stellar kinematics and stellar population age, Schawinski et al. (2010) found that low-mass morphologically early type galaxies in the green valley, which is thought to contain more recently quenched galaxy populations, are linked to mergers which rapidly ushered their migration from the star-forming blue cloud to the green valley and which changed their shape from disk to spheroidal. Schawinski et al. (2014) subsequently found that these systems have classic post-starburst populations.
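A minimal numerical sketch of the flux-weighted estimator in Eq. (2) is given below; the per-spaxel fluxes, velocities, and dispersions are invented placeholders standing in for the ppxf measurements inside the galaxy effective radius.

```python
import numpy as np

def v_over_sigma_e(flux, velocity, dispersion):
    """Flux-weighted (V/sigma)_e of Eq. (2) from per-bin (or per-spaxel) values."""
    flux, velocity, dispersion = map(np.asarray, (flux, velocity, dispersion))
    ratio_sq = np.sum(flux * velocity**2) / np.sum(flux * dispersion**2)
    return np.sqrt(ratio_sq)

# Placeholder values for a handful of spaxels inside R_e,gal (prints ~0.52):
flux = [1.0, 0.8, 0.6, 0.9]
velocity = [40.0, -35.0, 20.0, -25.0]      # km/s, relative to systemic
dispersion = [60.0, 65.0, 58.0, 62.0]      # km/s
print(f"(V/sigma)_e = {v_over_sigma_e(flux, velocity, dispersion):.2f}")
```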
In contrast to these rapidly quenched, merger-driven systems, the majority of galaxies migrate into the green valley through a slow decline in star formation rate, likely as a result of the gas supply shutting off, and hence retain their disk shape. The population of green, spiral-like galaxies is noted in Hammerstein et al. (2021), who compared 19 TDE hosts to red sequence, green valley, and blue cloud galaxies, finding that the TDE hosts are inconsistent with the majority of green valley galaxies which maintained their disk-like morphology inferred through the Sersic index. Given the rate enhancement of TDEs in green valley (and E+A) galaxies, one could expect that TDE host galaxies also cluster in a specific region of \((V/\sigma)_{e}\). However, we observe a relatively large spread in \((V/\sigma)_{e}\). The TDE hosts are more evenly distributed in \((V/\sigma)_{e}\) with a median value of 0.52. We compare the distribution of the TDE host galaxies in \((V/\sigma)_{e}\) and mass to the red sequence, green valley, and blue cloud galaxies. We find that the TDE hosts, while predominantly green, are generally less massive than the majority of green valley galaxies. This is in agreement with the findings of Hammerstein et al. (2021) for a larger sample of 19 TDE host galaxies from ZTF. The green valley and red sequence distributions in \((V/\sigma)_{e}\) peak around \(\sim 0.2\), indicating that these galaxies are dominated by random stellar motions. In general, we expect a negligible contribution to the TDE rate from stars on circular orbits. This could lead one to conclude that at a fixed stellar mass, a low \((V/\sigma)_{e}\) might imply a higher TDE rate. However, we should note that the stars within the SMBH sphere of influence (radius \(\sim 1\) parsec) contribute only a tiny fraction to the stellar light within the effective radius. Hence the large spread in the \((V/\sigma)_{e}\) that we observe for the TDE host galaxies cannot directly be translated into a spread in the TDE rate. We thus arrive at the somewhat puzzling observation that the TDE rate appears to be correlated more strongly with the global colors of the host galaxy than the \((V/\sigma)_{e}\) at its effective radius. Galaxies that are most certainly dominated by random stellar motions and have stellar populations older than 10 Gyr (i.e., early-type galaxies), have a mean \((V/\sigma)_{e}=0.22\). Although three TDE hosts have values around or below this level, they have stellar population ages younger than 10 Gyr at \(\sim 7.1\) Gyr. The older, more massive galaxies which are dominated by random stellar motions may also host black holes which exceed the Hills mass, which could explain why the TDE hosts with \((V/\sigma)_{e}\) at or below 0.22 have younger stellar populations than galaxies with similar kinematics. The difference in age between galaxies dominated by random stellar motions and the TDE hosts of similar \((V/\sigma)_{e}\) implies that the TDE rate likely declines as a galaxy ages despite the increase in the degree of random motion, although the precise reason, whether it be black hole growth beyond the Hills mass or otherwise, and the connection this has with nuclear dynamics is not yet clear given the indirect relationship that these global properties have with factors influencing the TDE rate in the nucleus. The E+A distribution in \((V/\sigma)_{e}\) has a mean value of 0.49, similar to the TDE hosts' median value of 0.52.
The E+A mass distribution also peaks at \(\log(M_{\rm gal}/M_{\odot})=10.07\), while the median TDE host galaxy mass is \(\log(M_{\rm gal}/M_{\odot})=10.09\). It is clear that the TDE host galaxies are likely consistent with the same population of galaxies as post-starburst galaxies, which has been suggested previously (e.g., Law-Smith et al., 2017; Hammerstein et al., 2021). We can also rule out that the TDE hosts come from the same population as red sequence galaxies. An Anderson-Darling test comparing the \((V/\sigma)_{e}\) of red sequence galaxies to the TDE hosts reveals that the null hypothesis that the two are drawn from the same parent population can be rejected (\(p\)-value = 0.02). The same cannot be said, however, when comparing green valley galaxies and blue cloud galaxies to the TDE hosts. The TDE host galaxies also differ in age when compared to the E+A galaxies, with the former having a median stellar population age of 6.12 Gyr, while the E+A galaxies have a mean stellar population age of 2.82 Gyr. One possible conclusion from this is that the TDE host galaxies are post-merger, similar to E+As, but the younger stellar populations produced in the merger-induced starburst having subsided meaning the ages of the stellar populations are older but the other factors which enhance the TDE rate in E+A galaxies (e.g., nuclear star clusters, high central stellar concentrations) remain. Future observations which search for merger signatures, such as in French et al. (2020), for larger samples of TDEs will be able to confirm the prevalence of post-merger galaxies among TDE host populations. The GALFIT residuals for several galaxies from the LMI data presented here do show remaining features, although differentiating normal dust lane features from true merger signatures like tidal features is difficult. Figure 13: The ratio between stellar ordered rotation and random orbital motion of the TDE host galaxies, defined as \((V/\sigma)_{e}\), as a function of galaxy stellar mass, with the color of the points/pixels corresponding to the stellar population age. The median uncertainty on the TDE host galaxy values is shown in the top left. Galaxies from the SAMI galaxy survey are shown in the background, with the mean stellar population age of galaxies within a pixel used to determine the pixel color. White contours represent the number density of background galaxies. The top and side panels show the distribution of the TDE hosts and the red sequence, green valley, blue cloud, and E+A galaxies in the background sample obtained by kernel density estimation. We find that the TDE hosts are generally lower mass than most of the background sample, with a larger spread in \((V/\sigma)_{e}\) than green valley or red sequence galaxies but a distribution similar to E+A galaxies. Labels for each TDE are in Table 1. Stone et al. (2018) examined factors which enhance TDE rates in post-starburst galaxies, such as SMBH binaries, nuclear stellar overdensities, radial orbit anisotropies, and delay between the initial starburst and the enhancement of the TDE rate due to these factors. This delay time between the initial post-merger starburst and the enhancement of the TDE rate could help to explain why the TDE hosts show older ages but similar global stellar dynamics to the younger post-starburst galaxies. 
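The two-sample comparisons quoted in this section can be reproduced with a k-sample Anderson-Darling test; a minimal sketch with invented values is shown below. If the test is run with scipy (the authors' implementation is not specified here), the reported significance level is clipped to the tabulated 0.1%-25% range, which is consistent with results quoted as \(p\)-value \(\geq 0.25\) elsewhere in the text.

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(2)
# Invented (V/sigma)_e values standing in for the TDE hosts and a comparison sample.
tde_hosts = rng.normal(0.52, 0.25, 13)
red_sequence = rng.normal(0.22, 0.15, 200)

result = anderson_ksamp([tde_hosts, red_sequence])
print(f"A^2 = {result.statistic:.2f}, significance level = {result.significance_level:.3f}")
```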
## 6 Conclusions We have presented the first sample study of IFU observations of thirteen TDE host galaxies from the ZTF survey in order to investigate their kinematic properties and infer their black hole masses. Our main conclusions are as follows: * The black hole mass distribution peaks at \(\log(M_{\rm BH}/M_{\odot})=6.05\), consistent with theoretical predictions that TDE populations are dominated by lower mass SMBHs and past observational findings. * There is no significant statistical difference between the X-ray bright and X-ray faint population of TDEs in our sample, which further supports the unifying theory of Dai et al. (2018) that proposes viewing angle effects as the factor which determines X-ray brightness in a TDE. * We find no significant correlation between the black hole masses derived from \(M_{\rm BH}-\sigma\) and the black hole masses derived from MOSFiT or TDEmass. This may indicate a need to revisit the way that the black hole mass is imprinted on the light curves of TDEs. * The Eddington ratio is moderately correlated with the black hole mass, although the correlation is likely shallower than the expected relation between the peak fallback accretion rate and the black hole mass, similar to the findings of Yao et al. (2023). * We find that the event AT2020qhs, a member of the TDE-featureless class, has the highest black hole mass of the sample: \(\log(M_{\rm BH}/M_{\odot})=8.01\pm 0.82\), above the Hills mass for the disruption of a solar type star. We suggest that the SMBH at the center of this event is rapidly spinning and, assuming that the disrupted star was of solar type, put a lower limit on the spin of \(a\gtrsim 0.16\). This further supports the proposed connection between jetted TDEs and the TDE-featureless class put forth by Andreoni et al. (2022). * We investigate the large-scale kinematics of the TDE host galaxies, particularly the ratio of ordered rotation to random stellar motions \((V/\sigma)_{e}\), and find that the TDE hosts show similar distributions in \((V/\sigma)_{e}\) to E+A galaxies but older stellar populations. This may indicate that TDE host galaxies, like E+A galaxies, are post-merger galaxies with the younger stellar populations produced in the merger-induced starburst having subsided, leaving only the older stellar populations. The delay time between post-merger starburst and TDE rate enhancement may also explain the discrepancy in age (e.g., Stone et al., 2018) We thank the anonymous referee for their helpful comments towards improving this paper. EH acknowledges support by NASA under award number 80GSFC21M0002. These results made use of the Lowell Discovery Telescope (LDT) at Lowell Observatory. Lowell is a private, non-profit institution dedicated to astrophysical research and public appreciation of astronomy and operates the LDT in partnership with Boston University, the University of Maryland, the University of Toledo, Northern Arizona University and Yale University. The Large Monolithic Imager was built by Lowell Observatory using funds provided by the National Science Foundation (AST-1005313). The data presented here were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. 
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory. The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST ATLAS Survey. The SAMI Galaxy Survey is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions. The SAMI Galaxy Survey website is [http://sami-survey.org](http://sami-survey.org). The data analysis in this paper was performed on the Yorp and Astra clusters administered by the Center for Theory and Computation, part of the Department of Astronomy at the University of Maryland. LDT (LMI), Keck:II (KCWI) GALFIT, KCWI-DRP, CWITools, GIST, ppxf
2301.00938
Measuring Physical and Electrical Parameters in Free-Living Subjects: Motivating an Instrument to Characterize Analytes of Clinical Importance in Blood Samples
Significance: A path is described to increase the sensitivity and accuracy of body-worn devices used to monitor patient health. This path supports improved health management. A wavelength-choice algorithm developed at Mayo demonstrates that critical biochemical analytes can be assessed using accurate optical absorption curves over a wide range of wavelengths. Aim: Combine the requirements for monitoring cardio/electrical, movement, activity, gait, tremor, and critical biochemical analytes including hemoglobin makeup in the context of body-worn sensors. Use the data needed to characterize clinically important analytes in blood samples to drive instrument requirements. Approach: Using data and knowledge gained over previously separate research threads, some providing currently usable results from more than eighty years back, determine analyte characteristics needed to design sensitive and accurate multiuse measurement and recording units. Results: Strategies for wavelength selection are detailed. Fine-grained, broad-spectrum measurement of multiple analytes transmission, absorption, and anisotropic scattering are needed. Post-Beer-Lambert, using the propagation of error from small variations, and utility functions that include costs and systemic error sources, improved measurements can be performed. Conclusions: The Mayo Double-Integrating Sphere Spectrophotometer (referred hereafter as MDISS), as described in the companion report arXiv:2212.08763, produces the data necessary for optimal component choice. These data can provide for robust enhancement of the sensitivity, cost, and accuracy of body-worn medical sensors. Keywords: Bio-Analyte, Spectrophotometry, Body-worn monitor, Propagation of error, Double-Integrating Sphere, Mt. Everest medical measurements, O2SAT Please see also arXiv:2212.08763
Barry K. Gilbert, Clifton R. Haider, Daniel J. Schwab, Gary S. Delp
2023-01-03T03:20:13Z
http://arxiv.org/abs/2301.00938v2
Measuring Physical and Electrical Parameters in Free-Living Subjects: Motivating an Instrument to Characterize Analytes of Clinical Importance in Blood Samples ###### Abstract **Significance:** A path is described to increase the sensitivity and accuracy of body-worn devices used to monitor patient health. This path supports improved health management. A wavelength-choice algorithm developed at Mayo demonstrates that critical biochemical analytes can be assessed using accurate optical absorption curves over a wide range of wavelengths. **Aim:** Combine the requirements for monitoring cardio/electrical, movement, activity, gait, tremor, and critical biochemical analytes including hemoglobin makeup in the context of body-worn sensors. Use the data needed to characterize clinically important analytes in blood samples to drive instrument requirements. **Approach:** Using data and knowledge gained over previously separate research threads, some providing currently usable results from more than eighty years back, determine analyte characteristics needed to design sensitive and accurate multiuse measurement and recording units. **Results:** Strategies for wavelength selection are detailed. Fine-grained, broad-spectrum measurement of multiple analytes' transmission, absorption, and anisotropic scattering are needed. Post-Beer-Lambert, using the propagation of error from small variations, and utility functions that include costs and systemic error sources, improved measurements can be performed. **Conclusions:** The Mayo Double-Integrating Sphere Spectrophotometer (referred hereafter as MDISS), as described in the companion report [1], produces the data necessary for optimal component choice. These data can provide for robust enhancement of the sensitivity, cost, and accuracy of body-worn medical sensors. Bio-Analyte, Spectrophotometry, Body-worn monitor, Propagation of error, Double-Integrating Sphere, Mt. Everest medical measurements, O2\({}_{\mathrm{SAT}}\) ## 1 Introduction This paper describes a bio-analyte characterization process and the associated instrumentation that was developed to support that characterization. These instruments are used to provide the parameters to monitor clinically relevant medical data with body-worn devices. Improving body-worn clinical-grade health monitoring units has been a major end goal of our lab since the early 2000s. We began by implementing features on these units such as ECG and physical activity, but always with the goal of incorporating additional measurement parameters into the units over time, e.g., blood oxygen saturation and carbon monoxide measurements, detection of methemoglobins, and other physiological parameters as feasible. This monitoring awaits appropriate reference data becoming available from laboratory-grade measurements. Discussion of measuring these additional analytes appears in [1]. Our intent in this manuscript is to highlight several separate research areas that coalesced in our laboratory over decades. These related threads led to our recognition of the need for a new state-of-art spectrophotometer that would yield the previously unavailable baseline data in support of the design of analyte-measuring untethered body-worn units. Although Section 3 of this paper may appear in part to be an historical review, that is not our primary intent; dedicated reviews are in the published literature. 
Rather, we wish to illustrate the way in which eight decades of prior work, much of it conducted in the authors' home institution, resulted in our most recent efforts, as described below. These fields have been continually active for decades; there are hundreds of references, some of which we cite herein. The initial commercial development in the early 2000s of consumer-grade, non-analyte, body-worn units, and our development in the 2010s of clinical-grade, non-analyte, body-worn units, are described, followed by a brief review of Mayo Clinic's optically based analyte measurements originally made in the 1940s. Thereafter, our development of a high-performance research spectrophotometer system to collect data to support the design of battery-powered body-worn analyte measurement units is introduced. The companion report, [1], describes further spectrophotometer engineering details and presents example analyte measurement results from the MDISS. ## 2 Initial Development of Clinical-Quality-Grade, Body-Worn, Physiological Measurement Units In the early 2000s, several consumer products companies began to market devices catering to the burgeoning field of self-help health-and wellbeing techniques, in particular, small, battery-operated monitoring devices, _e.g._, a generic class of "step counters" and physical activity monitors [2-10]. These consumer units were not intended to be used in monitored clinical settings. However, we and our clinical colleagues at the Mayo Clinic needed similar units to measure the health and progress of patients, whose collected data would be of clinical grade. The outcome of these requirements was a multi-year project to develop high-quality, ruggedized, wearable sensing and recording devices, which could perform and record long-term motion tracking [11-18], as well as high fidelity monitoring of a free-living individual's electrocardiogram [19, 20]. Using miniaturized electronic components and microprocessors, and small high-energy-density batteries that became available in the first half of the 2000s, we also created ruggedized versions of these units for extreme-activity athletes and mountaineers [21] with several-week run times (24/7), as depicted in Figure 1. These studies were conducted under full written, informed consent according to Mayo Clinic Institutional Review Board (IRB) study IDs 11-006747, 12-001512 and 14-001445. Before embarking on the development for clinical-quality wearable units, and to broaden our knowledge base, we began by purchasing, reviewing, and documenting the characteristics of several consumer self-help devices that were available in the open market [19] (as was also done by [2, 3]). 
Guided by these initial reviews of consumer-grade devices, and to ensure clinical-quality data, we incorporated features into our design such as: 1) Very high sampling rates of the measured analog signals, initially up to 400 samples/second, and later, up to 1000 samples/second, 8 or 12 bits/sample; 2) ECG waveform monitoring; 3) rigorous static and dynamic calibration of the accelerometers in every unit to NIST-traceable standards; 4) accelerometer slope and offset correction measurements on NIST traceable platforms, with data values stored to allow for post processed compensation for minor manufacturing differences (including a unique serial number and manufacturing date for each device for complete traceability); 5) autonomous multi-day operation without battery change or charging or any other intervention by the wearer; and 6) a stable time-of-day clock allowing the synchronization of data from multiple units worn by a single subject. Body-worn units designed, fabricated, tested, Figure 1: The small battery-powered body-worn units developed for continuous measurement of motion and ECG, worn by the mountain climbers on a Mayo-sponsored expedition to Mount Everest in 2012 [20]. The unit was 38.9 mm wide, 70.2 mm long, 8.9 mm thick; 3 3-axis accelerometers; 2 or 3-electrode, 400 samples/second ECG at 12-bit sampling resolution; 2-week run time. Our goal was to incorporate the analyte measurement capabilities into a physical form factor like these units (this figure also appears in U.S. Patent 8,849,387). \((43333)^{2}\) and deployed in this manner resulted in a reliable physiological measurement capability, in a small form factor, representing a set of potentially useful clinical tools for eventual monitoring of patients in their free-living environments. We also demonstrated the ability to monitor patients, via short- and long-haul wireless and wired connections, from their home environments back to a medical center, where the collected raw data could be analyzed in near real-time [20]. ## 3 Evolution to Clinical-Quality-grade, Body-Worn Biological Analyte Measurement Units The next request from our clinical colleagues was for an ability to measure and record, noninvasively and over a duration of days to a few weeks, the blood oxygen saturation levels in free-living patients (rather than in a hospital or clinic setting); we were also asked if it would be possible to measure the concentrations of other naturally occurring analytes as well. The tiny, long-lasting body-worn units represented the target form factors into which our clinical collaborators asked us to incorporate these additional measurement modalities. Regarding the measurement of blood oxygen saturation, we relied upon prior research in this field as a starting point. We began by reviewing the significant body of work, beginning in 1935 and progressing steadily thereafter, on techniques for measuring blood oxygen saturation noninvasively. This capability, using optical techniques based on two wavelengths of light, was first demonstrated in 1935 by Matthes [22], then extended by Milliken [23] and by Goldie [24] in the early 1940s. Also, in the early 1940s, significant contributions to this field were made by Wood and colleagues [25-35]. However, the Wood team was unable to publish their results until 1947 and thereafter [36] because of wartime restrictions on the release of "sensitive" information since this work was conducted under the auspices of the U.S. 
Army Air Corps [though funded by Mayo Clinic as a contribution to the WW2 scientific efforts]. As with the work described in [22-24], the Wood earpiece oximeter employed two optical wavelengths, but it also incorporated a pressure-activated plunger to expel blood from the upper edge of the pinna (i.e., the upper portion of the outer ear) to achieve a hemoglobin-free tissue baseline that could be incorporated into the blood O2 calculations (Figure 2). By present standards, the units were heavy and bulky, and had to be taped to the subject's head to provide mechanical support (Figure 3). The Mayo-developed units were used in studies of G-induced loss of consciousness (G-LOC) in human subjects during World War II on a full-sized human centrifuge (partially visible in Figure 3) installed at the Mayo Clinic. The earpiece oximeters continued in use without major changes until the early 1960s, in centrifuge studies of the Project Mercury astronaut couches. The on-body oximetry technology continued to evolve. The next significant advance, referred to as pulse oximetry (a variant on the original continuous oximetry) was first described in 1972 by Aoyagi and Kishi [37-39], with additional refinements in the subsequent decades. The pulse oximetry approach effectively supplanted the original non-pulsatile approach in clinical implementations (see below). Over the decades, commercial industry extended the implementation of pulse oximetry through hardware refinements using newer components and with algorithmic and software extensions to improve the usefulness and accuracy of the collected data, e.g., [40]. In 2000 Masimo introduced a technology approach referred to as Signal Extraction Technology (SET) [41], in which five proprietary algorithms were developed to remove the extraneous variability in the arterial oxygenation waveform caused by changes in the venous circulation, thereby providing a more accurate arterial oxygenation signal. Figure 3: Volunteer in the cockpit of Mayo Clinic’s human centrifuge, wearing earpiece oximeter on right ear, as indicated by white arrow, ca. 1962 (Photo courtesy of Don Hagland). (1954)3 Figure 2: Earpiece oximeter developed at Mayo Clinic, illustrating light and plunger fully depressed against photodetector, ca. 1943. (42175) Others have continued to document and refine the understanding of the physiological and optical processes underlying blood oximetry; see Severinghaus's excellent review of the early years of oximetry [39], and an exposition by Mannheimer of the optical physics and hemodynamics of the process [42]. By 2010, "finger-tip" oximeters, cable-powered by a desktop unit at the patient's bedside, were coming into use in hospital settings. These wired, clip-on devices are placed on the patient's index finger, and use "transmission oximetry", i.e., where light from two or more light-emitting diodes (LEDs) of different wavelengths passes through the finger, with the residual light then detected by one or more solid-state photodiodes. To incorporate noninvasive analyte detection and measurement into our planned free-standing body-worn units we needed to identify optical components that would be compatible with the physical form factor constraints of the ECG/motion units. Our motion-and-ECG units were powered by small batteries. We viewed the analyte detection capability as an addition to the original baseline functionality. Thus, we needed to remain within the size and power constraints of those units, with the battery limitation being the most important. 
The consumer-grade LEDs used in the wired units required more power than we could support with small batteries. LEDs also have relatively wide emission bandwidths (20-70 nm full-width half-max [FWHM]) and uncertain center frequencies, which, as we later demonstrated, degrade the quality of data generated from them (discussed below). We turned to a family of small solid-state lasers, referred to as vertical cavity surface emitting lasers (VCSELs), which, in addition to their narrow-banded emission characteristics (FWHM optical bandwidths of 2-5 nm), were available over a wide range of optical center frequencies. The first practical room-temperature non-pulsatile (i.e., continuous-wave or CW) VCSEL was reported by Koyama et al. in 1988 [43]. Development of these tiny optical sources accelerated in the late 1990s and early 2000s, driven by DARPA funding [44]. Our intent was to pair a small number of frequency-selected VCSELs with equally tiny solid-state photodetectors, either avalanche photodiodes (APDs) or P-I-N photodiodes (PIN diodes). APDs exhibit more signal gain than PIN diodes, but also have more intrinsic noise, so we concentrated on PIN diodes for our application. PIN diodes can be selected to cover a range of optical wavelengths. By combining narrow-spectrum VCSELs with PIN diodes having wide wavelength sensitivity, we could allow the light from VCSELs of different wavelengths to impinge on a single PIN diode. If, in addition, the light output from each VCSEL was modulated with a unique on-off keyed (OOK) pseudo-random pulsatile sequence (code-division separation of the channels, with an attendant signal-to-noise gain), and the PIN diode were integrated with an on-unit multi-channel correlator (i.e., in a small microcontroller or a custom correlator chip), accurate through-the-skin measurement of several analytes of clinical importance could be accomplished simultaneously, in a small form factor, unlike in conventional pulse oximetry, where the measurements from the different LEDs must be made sequentially (thus introducing time skew between the two measured values in each pair). The VCSELs' narrow emission lines indicated that considerable analyte measurement specificity could be achieved, far better than the LED-based wire-tethered fingertip pulse oximeters, based on Aoyagi's implementation [37-39], that entered hospital use in the early 2000s. However, to select the appropriate VCSEL wavelengths, we needed more frequency-accurate data than was identifiable in the published literature. It was this need for improved wavelength resolution, a larger wavelength measurement span, and additional wavelength measurement parameters that led to our decision to design and fabricate a spectrophotometer with extended functionality, as discussed below and in [1]. With this type of higher-resolution and more comprehensive data available, it appeared feasible to design a wearable analyte sensor with considerably extended in vivo measurement capabilities compared to the then-commercial offerings. We undertook that effort. Achieving an extended-capability autonomous on-body analyte measurement unit, underpinned by the historical evolution of such a capability as described above, represents the end goal of the entire sequence of projects described here and in [1]. Because we did not wish to be bound to through-the-finger transmission measurements, we investigated an alternate approach, referred to as "reflection oximetry", in which light is reflected from underlying bone, e.g., at the forehead, back to the photodetectors. 
The requirement for underlying bone also constrained placement options for the body-worn system. Therefore, we investigated a third approach, "scatter" oximetry, in which photons from the optical sources are directed into the skin and are sensed by one or more photodetectors placed several cm from the sources. Scattered photons entering the inputs of the photodetectors acquire and carry with them the optical information required to calculate the concentrations of the analytes of interest. Using scatter oximetry, the battery-powered measurement device can be placed on a wrist, on an arm, or on the torso, i.e., less intrusively than on a finger.

## 4 Approaches for Selecting the Optimum Measurement Wavelengths for on-Body Oximetry

Next, with this conceptual optical measurement chain sketched out, we needed a method to select the optimum wavelengths to measure analyte concentrations in vivo, so that the correct VCSELs could be incorporated into body-worn units. To address this problem, one of us (CRH) developed a robust algorithm [45, 46] for the selection of optimal excitation wavelengths. Using the wavelengths selected by the algorithm allowed for the measurement of relative and absolute concentrations of a set of analytes in a sample. The _Beer-Lambert_ molar extinction coefficients of homogeneous materials can be measured for selected frequencies. Measurements in "the wild," however, need to incorporate many more factors, e.g., diffusion; heterogeneous paths; reflection; and the propagation of potential error from the inputs, through the measurement system, and continuing to a consideration of the variations and non-linearities of receivers. This system-level approach required more accurate and higher-resolution measurements of analyte characteristics, including diffusion, reflection, mean-free-path variation, fluorescence, phosphorescence, and various anisotropies. Our view of the importance of this information was informed by the guidance provided by [45, 46]. However, we did not have this data. Thus, we instituted a literature search for absorption curves for various forms of hemoglobin over a wide range of wavelengths. Such curves were first published in 1987 by Barker and Tremper [47], and again in 1989 by Tremper and Barker [48] (the authors credit Susan Manson, Biox/Ohmeda, as their source of this data, as do many subsequent authors, and the data are widely accepted in the field); they are reproduced in both the left and right panels of Figure 4, and Figure 7 is from the same published sources. (Note: these curves might or might not be "correct" in some absolute sense, but they were the data available to us; "errors" in the shapes of these curves would of course have deleterious effects on the results that we present below but were unknowable.) Next, we searched the published literature for a selection of wavelengths that would yield the most accurate measures of the concentrations of four forms of hemoglobin (COHb, MetHb, OxyHb, and DeoxyHb). Lamego _et al._ [40] describe _eight_ "optimum" wavelengths (610, 620, 630, 655, 700, 720, 800 and 905 nm). The wavelengths in the rightmost panel of Figure 4, illustrated as vertical lines, are similar though not identical to those documented in Lamego _et al._ The left panel of that figure depicts four wavelengths calculated with the algorithm of [45, 46]. That algorithm also provides a means of computing a quality factor, referred to as the "condition number", for a selected set of wavelengths, in which a lower number is "better". 
A condition number for the eight wavelengths in the right panel of Figure 4, taken from [40] as the outcome of a literature search, was calculated as 31.2, whereas the condition number for the four wavelengths in the left panel of that figure was calculated to be 19.8, i.e., a better condition number for the use of fewer wavelengths. The algorithm of [45, 46] teaches the following conditions, all of which must be satisfied simultaneously as best possible: 1) each selected measurement wavelength should be maximally separated from other wavelengths; 2) the measured curves should be maximally separated in the vertical, or extinction coefficient, axis; 3) wavelengths should not be selected in regions where an extinction curve is "steep", since slight differences in the shapes of the curves and/or in their left-to-right or vertical positions due to component variations or other differences can introduce errors in the final resulting measurements (this constraint is often difficult to obey). In the rightmost panel, the wavelengths 610, 620, and 630 nm are "too close together". The wavelengths at 660, 730, 805 and 905 nm are "good" choices, according to our algorithm, although the overall condition number is degraded by the wavelengths at 610, 620, and 630 nm. The four wavelengths selected by our algorithm, in the left panel of Figure 4, are in partial violation of the ground rules described above; the wavelengths are constrained to some extent by the shapes of the curves themselves. In addition, there is a fifth curve, labeled "Penalty", which we incorporated to account for the fact that some VCSELs are more difficult and costly to manufacture than others; the shape of this curve changes over time as manufacturing processes evolve [courtesy of M. Hibbs-Brenner, Vixar Corporation, _ca._ 2010]. Were the penalty curve not incorporated into the calculations, the selection of wavelengths would have been somewhat different, and might yield results which obey the above-described guidelines more closely. Finally, in Figure 4, note the addition of small, bell-shaped curves at the bottom of the rightmost panel, which presents the wavelengths published by a commercial vendor [40]. These bell-shaped curves illustrate the wavelength spread generated by LEDs on either side of their center frequencies, which, as commented above, can be 20-70 nm (FWHM) wide. The LEDs thus generate wavelength components which overlap one another, thereby degrading the measured signal-to-noise ratios at the photodetector circuitry, because each light source will be carrying "information" that optimally would be out-of-band for those light sources. VCSELs generating FWHM wavelengths only a few nm in width will not create or will at least minimize these artifactual components, thereby motivating their use in place of LEDs. As is clear from this chart, the optical wavelengths at which meaningful hemoglobin measurements can be performed are in the range of approximately 450 nm to 1000 nm. As will be noted below, several other analytes can also be measured using a similar implementation, but the wavelength ranges needed to be extended down to 200 nm and up to 2000 nm. 
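To make the selection criterion concrete, the following is a minimal numerical sketch (in Python/NumPy) of how a condition number can be used to compare candidate wavelength sets, and how the corresponding concentrations are recovered by least squares. The extinction curves in this sketch are smooth synthetic placeholders rather than the published hemoglobin data, so the printed condition numbers are purely illustrative; only the comparison logic mirrors the procedure described above.

```python
# Minimal sketch: comparing candidate wavelength sets by the condition number
# of the extinction-coefficient matrix, then recovering concentrations by
# least squares.  The extinction curves below are synthetic placeholders
# (sums of Gaussian bumps), NOT the published hemoglobin data.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(450, 1001, 5)          # nm, candidate grid

def synthetic_extinction(centers, widths, heights):
    """Build one smooth placeholder extinction curve as a sum of Gaussians."""
    lam = wavelengths[:, None]
    return (heights * np.exp(-0.5 * ((lam - centers) / widths) ** 2)).sum(axis=1)

# Four placeholder "species" (stand-ins for OxyHb, DeoxyHb, COHb, MetHb).
species = [
    synthetic_extinction(np.array([500, 900]), np.array([60, 120]), np.array([1.0, 0.4])),
    synthetic_extinction(np.array([550, 760]), np.array([50, 40]),  np.array([0.8, 0.6])),
    synthetic_extinction(np.array([540, 620]), np.array([40, 80]),  np.array([0.9, 0.5])),
    synthetic_extinction(np.array([630, 850]), np.array([70, 90]),  np.array([0.7, 0.3])),
]
E_full = np.stack(species, axis=1)              # rows: wavelengths, cols: species

def condition_number(selected_nm):
    rows = [np.argmin(np.abs(wavelengths - nm)) for nm in selected_nm]
    return np.linalg.cond(E_full[rows, :])

set_a = [660, 730, 805, 905]                       # four well-separated wavelengths
set_b = [610, 620, 630, 655, 700, 720, 800, 905]   # eight, three of them closely spaced
print("cond, 4-wavelength set:", round(condition_number(set_a), 1))
print("cond, 8-wavelength set:", round(condition_number(set_b), 1))

# Forward Beer-Lambert model A = E c at the selected wavelengths, plus noise,
# then least-squares recovery of the four concentrations.
rows = [np.argmin(np.abs(wavelengths - nm)) for nm in set_a]
E_sel = E_full[rows, :]
c_true = np.array([0.7, 0.2, 0.05, 0.05])
absorbance = E_sel @ c_true + 1e-3 * rng.standard_normal(len(rows))
c_est, *_ = np.linalg.lstsq(E_sel, absorbance, rcond=None)
print("true concentrations:", c_true, " estimated:", np.round(c_est, 3))
```

In a Beer-Lambert setting, the condition number of the extinction matrix bounds how strongly input errors (component tolerances, uncertainties in the curve shapes) are amplified into errors in the recovered concentrations, which is why a lower value is preferred.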
## 5 The Need for, and Our Approach to Obtaining, an Improved Spectrophotometer

The results described above demonstrated that with good continuous optical absorption data over a broad wavelength span, the algorithm described here could select the optimum wavelengths, and hence the correct VCSELs, to detect one or more naturally occurring analytes present in sufficient concentration in the blood of an individual as measured by an autonomous battery-operated (i.e., untethered!) body-worn unit. We also recognized that the more optical parameters from laboratory samples that we had, the more specifically we could select the optimum VCSEL wavelengths. To gather the requisite data, we needed a spectrophotometer capable of measuring many optical parameters simultaneously, in a wide variety of physiological samples (e.g., whole blood, plasma, lymph, etc.).

Figure 4: Oxyhemoglobin, Deoxyhemoglobin, Carboxyhemoglobin, and Methemoglobin extinction coefficients as a function of wavelength (four algorithmically selected maximally linearly independent wavelengths in the left panel, versus eight wavelengths selected by a commercial vendor in the right panel). Narrow-band VCSEL sources are assumed, though commercial vendors typically employ LEDs [40], whose wider bandwidths and overlaps are illustrated by the bell-shaped curves at the bottom of the rightmost panel. Condition numbers: left panel: 19.8; right panel: 31.2. (42003)

A review of available commercial spectrophotometers confirmed that such a machine did not exist. The available models for purchase measured only two parameters, _i.e._, transmission and backscatter, and were lacking most of the desired operational features. We also conducted a literature search against the possibility that other research groups had developed very capable spectrophotometers, from which they might have published high-accuracy absorption curves of one or more naturally occurring analytes. We discovered developments, both theoretical and, in some cases, implemented in actual hardware [49-52], that addressed some of the parameters that appeared to us to be important, but none that were sensitive to as many parameters as our studies indicated might be physically implementable and clinically relevant. In several of those previously published papers, descriptions of the characteristics of those machines were limited. Therefore, we elected to design and fabricate an optical instrument capable of measuring more of these parameters, over a wide spectral range, in very short durations. Our intent was to create a set of baseline data that would enable the next step, that of designing the desired autonomous body-worn units. Following a brief introduction to the spectrophotometer developed here, we will return to a discussion of the value of the extended measurement parameters that we believed to be important.

### The Mayo Double-Integrating Sphere Spectrophotometer (MDISS): Lessons-Learned Regarding the Selection of Optical Wavelengths for Body-Worn Analyte Measurement Units

Only a summary of the features of the MDISS is presented here; a complete description appears in [1]. 
From a functional perspective, we consider the MDISS to be a multi-generational successor to the Beckman DU spectrophotometer [53-55]; also, we wished to extend the capabilities of the experimental machines previously referenced with a combination of quantitative functionality features including: a broad wavelength measurement span (190-2750 nm); high wavelength accuracy and high wavelength precision (FWHM on the order of 1 nm); fine wavelength resolution (0.1-5 nm wavelength step sizes); high amplitude sensitivity; a rapid-throughput automated measurement capability; simultaneous acquisition of diffuse reflected (DR), diffuse transmitted (DT), and unscattered transmitted (UT) energy; input energy monitoring for high-accuracy energy values on a pulse-by-pulse basis (with options for fluorescent and opto-acoustic acquisition); and the flexibility to accommodate a wide variety of sample holder types and volumes, including high-pressure (up to 30 atmospheres [450 psi]) cells for measurements of blood oxygen saturation characteristics in hyperbaric environments. We also attempted to mitigate potential sources of measurement error in these parameters, as will be noted below. Figure 5 is a photo of an early version of the unit.

Figure 5: Double-integrating sphere spectrophotometer system (narrow-linewidth pump laser coupled to a tunable optical parametric oscillator), allowing precise optical characterization of biological analytes over a 192-2750 nm wavelength span. (44414)

Figure 6: A set of spectroscopic transmission curves measured with the MDISS at each of six oxygen saturation levels, at wavelength separations of 2 nm, over a range of 800 nm. The desaturation gas in this example was carbon monoxide (note the isosbestic point at 650 nm, rather than at 800 nm if the desaturation gas had been carbon dioxide). (46584)

### MDISS Design Tradeoffs

The design of the MDISS system required a recognition of the various cost tradeoffs in terms of money, time (development time and actual test time), and specific performance parameters. A single example of a specific performance parameter trade-off was prioritizing wavelength span rather than absolute energy sensitivity of the detectors. This selection was accepted to address the numerous unknowns regarding the analyte characteristics, since we had no prior knowledge of those subregions in the entire wavelength span in which a given analyte would display maximum amplitude differences. This requirement for sensors with wide optical bandwidths justified the use of pyroelectric sensors, which have a broad wavelength sensitivity spanning 190-12,000 nm, and the use of an optical parametric oscillator (OPO) which produced energy from 192 to 2750 nm. If it had been determined that, even with the broadband capability, there was a need for more sensitivity, then alternate detectors could have been implemented, albeit at the expense of reduced bandwidth. As was discovered during the blood hemoglobin and oxygenation characterizations with different desaturation gases, most of the variations that we needed to detect occurred in the 600-1200 nm span. In a mutually reinforcing fashion, the underlying theory was subsequently corroborated by the measured results from the MDISS when it became operational. 
Without the machine, which was designed and constructed under the assumption that the guidance gleaned from [45, 46] was correct, many if not most of the results that the machine demonstrated to be valid, and that are necessary for the design of accurate body-worn analyte measurement units, would have been unavailable. Noninvasive optical sensing can be used with other clinically important physiological and biochemical variables besides hemoglobin species. As one example, Figure 7 displays absorption curves for blood glucose, blood protein, and blood lipids in the near-IR range of 1350 nm to 1850 nm. As in Figure 4, these waveforms were identified and employed following a search of the open literature. An absorption curve for water is provided for comparison. With the appropriate sensing wavelengths as determined by the algorithm and indicated by the vertical lines in this figure, detection and measurements of glucose, lipids, proteins, and even a measure of either total body water or central blood volume could be made.

Figure 7: Water, protein, and glucose extinction coefficients as a function of wavelength, assuming ideal 1-nm bandwidth light sources using maximally independent wavelengths. Condition number: 10.9. (41964)

## 6 The Long-Term Goal of this Effort

The long-term goal of this multiple-step project was the development of a sufficient technical base so that small, battery-powered, microcontroller-enhanced body-worn units could be designed to measure and record, with high accuracy and in real time, blood oxygen saturation, blood levels of carbon monoxide, and concentrations of other clinically relevant analytes. The first developmental stage was the MDISS instrument, which was to yield the type of clinically actionable data described above. A next step, described in [55], was the prototyping of the battery-powered electrical and optical components to conduct these measurements continually, in a sufficiently small form factor that they could be incorporated into versions of the body-worn units depicted in Figure 1 and fielded into clinical practice. In addition, although we did not pursue this path, commercial versions of the MDISS system could be used as research instruments; or, reduced-wavelength-span and/or lower-cost versions targeted for a subset of analytes could be used for higher-throughput routine clinical diagnostic purposes.

## 7 Summary

We have described the evolution of consumer-grade body-worn physiological measurement units. We then introduced an evolutionary thread from early work in the development of _research-grade_ body-worn blood oxygen saturation units conducted in the first half of the 1940s. Next, we reviewed our recognition in the 2010s of the requirements for higher-quality laboratory measurements of the optical characteristics of medically relevant blood analytes (_e.g._, the oxygen saturation-versus-wavelength behavior of critical blood analytes). We extended that line of investigation with the design of a spectrophotometer, the MDISS, with the needed sensitivity and specificity to help us gather new optically based blood characterization data [1]. We also initiated efforts to use the results reported here and in [1] to develop a new-technology body-worn unit that operates on a somewhat different principle from classical pulse oximetry. Although body-worn units were our major end goal, we have also underscored the benefits of using optical analyte data over a wide wavelength range with high wavelength resolution. 
An instrument with capabilities beyond those commercially available is required to obtain such data; these new data, in turn, support a future effort to simplify and optimize body-worn monitors.

### Disclosures

The authors declare no conflicts of interest. Although several patents cover aspects of this work, none of these patents are licensed for gain or profit.

### Acknowledgements

We wish to acknowledge the years-long contributions of the following individuals to this project: Charles Burfield, Anthony Ebert, Theresa Funk, Nicholas Klitzke, Steven Polzer, Jason Prairie, and Steven Schuster; and Drs. Franklyn Cockerill, Kendall Cradic, Graham Cameron, E. Rolland Dickson, Stefan Grebe, Ravinder Singh, and Nathan Harff. The clinical materials used in the study were produced by the Mayo Clinic Department of Laboratory Medicine and Pathology and processed by them for us using their standard clinical laboratory tools.
2307.02279
From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks
The connection between Residual Neural Networks (ResNets) and continuous-time control systems (known as NeurODEs) has led to a mathematical analysis of neural networks which has provided interesting results of both theoretical and practical significance. However, by construction, NeurODEs have been limited to describing constant-width layers, making them unsuitable for modeling deep learning architectures with layers of variable width. In this paper, we propose a continuous-time Autoencoder, which we call AutoencODE, based on a modification of the controlled field that drives the dynamics. This adaptation enables the extension of the mean-field control framework originally devised for conventional NeurODEs. In this setting, we tackle the case of low Tikhonov regularization, resulting in potentially non-convex cost landscapes. While the global results obtained for high Tikhonov regularization may not hold globally, we show that many of them can be recovered in regions where the loss function is locally convex. Inspired by our theoretical findings, we develop a training method tailored to this specific type of Autoencoders with residual connections, and we validate our approach through numerical experiments conducted on various examples.
Cristina Cipriani, Massimo Fornasier, Alessandro Scagliotti
2023-07-05T13:26:17Z
http://arxiv.org/abs/2307.02279v2
# From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks

###### Abstract

The connection between Residual Neural Networks (ResNets) and continuous-time control systems (known as NeurODEs) has led to a mathematical analysis of neural networks which has provided interesting results of both theoretical and practical significance. However, by construction, NeurODEs have been limited to describing constant-width layers, making them unsuitable for modeling deep learning architectures with layers of variable width. In this paper, we propose a continuous-time Autoencoder, which we call AutoencODE, based on a modification of the controlled field that drives the dynamics. This adaptation enables the extension of the mean-field control framework originally devised for conventional NeurODEs. In this setting, we tackle the case of low Tikhonov regularization, resulting in potentially non-convex cost landscapes. While the global results obtained for high Tikhonov regularization may not hold globally, we show that many of them can be recovered in regions where the loss function is locally convex. Inspired by our theoretical findings, we develop a training method tailored to this specific type of Autoencoders with residual connections, and we validate our approach through numerical experiments conducted on various examples.

_Keywords--_ Machine Learning, Optimal Control, Gradient Flow, Minimising Movement Scheme, Autoencoders

## 1 Introduction

In recent years, the field of artificial intelligence has witnessed remarkable progress across diverse domains, including computer vision and natural language processing. In particular, neural networks have emerged as a prominent tool, revolutionizing numerous machine learning tasks. Consequently, there is an urgent demand for a robust mathematical framework to analyze their intricate characteristics. A deep neural network can be seen as a map \(\phi:\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}^{d_{\mathrm{out}}}\), obtained as the composition of \(L\gg 1\) applications \(\phi=\phi_{L}\circ\ldots\circ\phi_{1}\), where, for every \(n=1,\ldots,L\), the function \(\phi_{n}:\mathbb{R}^{d_{n}}\to\mathbb{R}^{d_{n+1}}\) (also referred to as _the \(n\)-th layer_ of the network) depends on a _trainable_ parameter \(\theta_{n}\in\mathbb{R}^{m_{n}}\). The crucial process of choosing the values of the parameters \(\theta_{1},\ldots,\theta_{L}\) is known as the _training of the network_. For a complete survey on the topic, we recommend the textbook [24]. Recent advancements have explored the link between dynamical systems, optimal control, and deep learning, proposing a compelling perspective. In the groundbreaking work [29], it was highlighted how the problem of training very deep networks can be alleviated by the introduction of a new type of layer called "Residual Block". This consists in using the identity map as a skip connection, together with after-addition activations. In other words, every layer has the following form: \[X_{n+1}=\phi_{n}(X_{n})=X_{n}+\mathcal{F}(X_{n},\theta_{n}), \tag{1.1}\] where \(X_{n+1}\) and \(X_{n}\) are, respectively, the output and the input of the \(n\)-th layer. This kind of architecture is called _Residual Neural Network_ (or ResNet). It is important to observe that, in order to give sense to the sum in (1.1), in each layer the dimension of the input should coincide with the dimension of the output. 
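As a small illustration of the residual update (1.1), the following sketch (in Python/NumPy) stacks a few residual blocks with \(\mathcal{F}(x,\theta)=\tanh(Wx+b)\) and randomly chosen placeholder weights; the only structural requirement it exercises is that every layer maps \(\mathbb{R}^{d}\) to itself, so that the sum \(X_{n}+\mathcal{F}(X_{n},\theta_{n})\) is well defined.

```python
# Minimal sketch of the residual update (1.1): X_{n+1} = X_n + F(X_n, theta_n),
# with F(x, theta) = tanh(W x + b) as an illustrative choice of layer.
# Weights are random placeholders; the structural point is that every layer
# maps R^d -> R^d, so that the sum X_n + F(X_n, theta_n) is well defined.
import numpy as np

rng = np.random.default_rng(1)
d, L = 4, 10                                   # state dimension and depth

def F(x, theta):
    W, b = theta
    return np.tanh(W @ x + b)

thetas = [(0.1 * rng.standard_normal((d, d)), 0.1 * rng.standard_normal(d))
          for _ in range(L)]

x = rng.standard_normal(d)                     # input X_0
for theta in thetas:                           # forward pass through L residual blocks
    x = x + F(x, theta)
print("network output:", np.round(x, 3))
```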
In the practice of Deep Learning, this novel kind of layer has turned out to be highly beneficial, since it is effective in avoiding the "vanishing of the gradients" during the training [5], or the saturation of the network's accuracy [28]. Indeed, before [29], these two phenomena had limited for long time the large-scale application of deep architectures. Despite the original arguments in support of residual blocks being based on empirical considerations, their introduction revealed nevertheless a more mathematical and rigorous bridge between residual deep networks and controlled dynamical systems. Indeed, what makes Residual Neural Networks particularly intriguing is that they can be viewed as discretized versions of continuous-time dynamical systems. This dynamical approach was proposed independently in [18] and [26], and it was greatly popularized in the machine learning community under the name of NeurODEs by [13]. This connection with dynamical systems relies on reinterpreting the iteration (1.1) as a step of the forward-Euler approximation of the following dynamical system: \[\dot{X}(t)=\mathcal{F}(X(t),\theta(t)), \tag{1.2}\] where \(t\mapsto\theta(t)\) is the map that, instant by instant, specifies the value of the parameter \(\theta\). Moreover, the training of these neural networks, typically formulated as empirical risk minimization, can be reinterpreted as an optimal control problem. Given a labelled dataset \(\{(X_{0}^{i},Y_{0}^{i})\}_{i=1}^{N}\) of size \(N\geq 1\), the depth of the time-continuous neural network (1.2) is denoted by \(T>0\). Then, training this network amounts to learning the control signals \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) in such a way that the terminal output \(X_{T}^{i}\) of (1.2) is close to it corresponding label \(Y_{0}^{i}\) for all \(i=1,\ldots,N\), with respect to some distortion measure \(\ell(\cdot,\cdot)\in C^{1}\). A typical choice is \(\ell(x,y):=|x-y|^{2}\), which is often referred as the _squared loss function_ in the machine learning literature. Therefore, it is possible to formulate the following optimal control problem \[\inf_{\theta\in L^{2}([0,T];\mathbb{R}^{m})}J^{N}(\theta):=\begin{cases} \frac{1}{N}\sum_{i=1}^{N}\ell\big{(}X^{i}(T),Y^{i}(T)\big{)}+\lambda \int_{0}^{T}|\theta(t)|^{2}\,dt,\\ \text{s.t.}\ \begin{cases}\dot{X}^{i}(t)=\mathcal{F}(t,X^{i}(t),\theta(t)), \hskip 28.452756pt\dot{Y}^{i}(t)=0,\\ \big{(}X^{i}(t),Y^{i}(t)\big{)}\big{|}_{t=0}=(X_{0}^{i},Y_{0}^{i}),\end{cases}&i \in\{1,\ldots,N\},\end{cases}\] where, differently from (1.2), we admit here the explicit dependence of the dynamics on the time variable. Notice that the objective function also comprises of Tikhonov regularization, tuned by the parameter \(\lambda\), which plays a crucial role in the analysis of this control problem. The benefit of interpreting the training process in this manner results in the possibility of exploiting established results from the branch of mathematical control theory, to better understand this process. A key component of optimal control theory is a set of necessary conditions, known as Pontryagin Maximum Principle (PMP), that must be satisfied by any (local) minimizer \(\theta\). These conditions were introduced in [37] and have served as inspiration for the development of innovative algorithms [33] and network structures [12] within the machine learning community. This work specifically addresses a variant of the optimal control problem presented above, in which the focus is on the case of an infinitely large dataset. 
This formulation gives rise to what is commonly known as a _mean-field optimal control problem_, where the term "mean-field" emphasizes the description of a multiparticle system through its averaged effect. In this context, the focus is on capturing the collective behavior of the system rather than individual particle-level dynamics, by considering the population as a whole. As a consequence, the parameter \(\theta\) is shared by the entire population of input-target pairs, and the optimal control is required to depend on the initial distribution \(\mu_{0}(x,y)\in\mathcal{P}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) of the input-target pairs. Therefore, the optimal control problem needs to be defined over spaces of probability measures, and it is formulated as follows: \[\inf_{\theta\in L^{2}([0,T];\mathbb{R}^{m})}J(\theta):=\begin{cases}\int_{ \mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}(x,y)+\lambda\int_{0}^{T}|\theta(t)|^{2} \,dt,\\ \text{s.t.}\ \begin{cases}\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot( \mathcal{F}(t,x,\theta)\mu_{t}(x,y))=0&t\in[0,T],\\ \mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y),\end{cases}\end{cases}\] This area of study has gained attention in recent years, and researchers have derived the corresponding Pontryagin Maximum Principle in various works, such as [19] and [7]. It is worth mentioning that there are other types of mean-field analyses of neural networks, such as the well-known work [36], which focus on mean-field at the parameter level, where the number of parameters is assumed to be infinitely large. However, our approach in this work takes a different viewpoint, specifically focusing on the control perspective in the case of an infinitely large dataset. One of the contributions of this paper is providing a more accessible derivation of the necessary conditions for optimality, such as the well-known Pontryagin Maximum Principle. Namely, we characterize the stationary points of the cost functional, and we are able to recover the PMP that was deduced in [7] under the assumption of large values of the regularization parameter \(\lambda\), and whose proof relied on an infinite-dimensional version of the Lagrange multiplier rule. This alternative perspective offers a clearer and more intuitive understanding of the PMP, making it easier to grasp and apply it in practical scenarios. In addition, we aim at generalizing the applicability of the results presented in [7] by considering a possibly non-convex regime, corresponding to small values of the parameter \(\lambda>0\). As mentioned earlier, the regularization coefficient \(\lambda\) plays a crucial role in determining the nature of the cost function. Indeed, when \(\lambda\) is sufficiently large, the cost function is convex on the sub-level sets, and it is possible to prove the existence and uniqueness of the solution of the optimal control problem that arises from training NeurODEs. Additionally, in this highly-regularized scenario, desirable properties of the solution, such as its continuous dependence on the initial data and a bound on the generalization capabilities of the networks, have been derived in [7]. However, in practical applications, a large regularization parameter may cause a poor performance of the trained NeurODE on the task. In other words, in the highly-regularized case, the cost functional is unbalanced towards the \(L^{2}\)-penalization, at the expenses of the term that promotes that each datum \(X_{0}^{i}\) is driven as close as possible to the corresponding target \(Y_{0}^{i}\). 
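For intuition, the following sketch (in Python/NumPy) evaluates the finite-sample cost \(J^{N}(\theta)\) introduced above: the dynamics are discretized by forward Euler with a piecewise-constant control, the squared loss is averaged over the \(N\) input-target pairs at the final time, and the Tikhonov term \(\lambda\int_{0}^{T}|\theta(t)|^{2}\,dt\) is approximated by a Riemann sum. The controlled field, the control values, and the toy dataset are arbitrary placeholders; in the mean-field cost \(J(\theta)\), the empirical average is simply replaced by an integral against the terminal measure \(\mu_{T}\).

```python
# Minimal sketch: evaluating the finite-sample cost J^N(theta) by a forward-Euler
# discretization of the dynamics, with squared loss and Tikhonov regularization.
# The field F, the piecewise-constant control theta, and the data are toy
# placeholders; the mean-field cost J replaces the average over the N samples
# by an integral against the measure mu_T.
import numpy as np

rng = np.random.default_rng(2)
d, N, K, T, lam = 2, 50, 20, 1.0, 0.1   # dim, samples, time steps, horizon, Tikhonov weight
dt = T / K

def F(x, theta):
    """Toy controlled field: theta = (W, b), applied through a componentwise tanh."""
    W, b = theta
    return np.tanh(x @ W.T + b)

# Piecewise-constant control on the K sub-intervals (random placeholder values).
controls = [(0.2 * rng.standard_normal((d, d)), 0.2 * rng.standard_normal(d))
            for _ in range(K)]

# Toy dataset: inputs X0 and targets Y0 (here, a fixed linear transformation of X0).
X = rng.standard_normal((N, d))
Y = X @ np.array([[0.0, 1.0], [-1.0, 0.0]])

for theta in controls:                  # forward Euler for all samples at once
    X = X + dt * F(X, theta)

terminal_loss = np.mean(np.sum((X - Y) ** 2, axis=1))       # (1/N) sum_i |X^i(T) - Y^i|^2
reg = lam * dt * sum(np.sum(W ** 2) + np.sum(b ** 2) for W, b in controls)  # ~ lam * int |theta|^2 dt
print("J^N(theta) =", round(terminal_loss + reg, 4))
```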
This motivated us to investigate the case of low Tikhonov regularization. While we cannot globally recover the same results as in the highly-regularized regime, we find interesting results concerning local minimizers. Moreover, we also show that the (mean field) optimal control problem related to the training of the NeurODE induces a gradient flow in the space of admissible controls. The perspective of the gradient flow leads us to consider the well-known minimizing movement scheme, and to introduce a proximal stabilization term to the cost function in numerical experiments. This approach effectively addresses the well-known instability issues (see [14]) that arise when solving numerically optimal control problems (or when training NeurODEs) with iterative methods based on the PMP. It is important to note that our stabilization technique differs from previous methods, such as the one introduced in [33]. From NeurODEs to AutoencODEs.Despite their huge success, it should be noted that NeurODEs (as well as ResNets, their discrete-time counterparts) in their original form face a limitation in capturing one of the key aspects of modern machine learning architectures, namely the discrepancy in dimensionality between consecutive layers. As observed above, the use of skip connections with identity mappings requires a "rectangular" shape of the network, where the width of the layers are all identical and constant with respect to the input's dimension. This restriction poses a challenge when dealing with architectures that involve layers with varying dimensions, which are common in many state-of-the-art models. Indeed, the inclusion of layers with different widths can enhance the network's capacity to represent complex functions and to capture intricate patterns within the data. In this framework, Autoencoders have emerged as a fundamental class of models specifically designed to learn efficient representations of input data by capturing meaningful features through an encoder-decoder framework. More precisely, the encoder compresses the input data into a lower-dimensional latent space, while the decoder reconstructs the original input from the compressed representation. The concept of Autoencoders was first introduced in the 1980s in [39], and since then, it has been studied extensively in various works, such as [30], among many others. Nowadays, Autoencoders have found numerous applications, including data compression, dimensionality reduction, anomaly detection, and generative modeling. Their ability to extract salient features and capture underlying patterns in an unsupervised manner makes them valuable tools in scenarios where labeled training data is limited or unavailable. Despite their huge success in practice, there is currently a lack of established theory regarding the performance guarantees of these models. Prior works, such as [20], have extended the control-theoretic analysis of NeurODEs to more general width-varying neural networks. Their model is based on an integro-differential equation that was first suggested in [34] in order to study the continuum limit of neural networks with respect to width and depth. In such an equation the state variable has a dependency on both time and space since the changing dimension over time is viewed as an additional spatial variable. 
In [20, Section 6] the continuous space-time analog of residual neural networks proposed in [34] has been considered and discretized in order to model variable width ResNets of various types, including convolutional neural networks. The authors assume a simple time-dependent grid, and use forward difference discretization for the time derivative and Newton-Cotes for discretizing the integral term, but refer to more sophisticated moving grids in order to possibly propose new types of architectures. In this setting, they are also able to derive some stability estimates and generalization properties in the overparametrized regime, making use of turnpike theory in optimal control [22]. In principle, there could be several different ways to model width-varying neural networks with dynamical systems, e.g., forcing some structure on the control variables, or formulating a viability problem. In this last case, a possibility could be to require admissible trajectories to visit some lower-dimensional subsets during the evolution. For an introduction to viability theory, we recommend the monograph [4], while we refer to [8, 9] for recent results on viability theory for differential inclusions in Wasserstein spaces. In contrast, our work proposes a simpler extension of the control-theoretical analysis. It is based on a novel design of the vector field that drives the dynamics, allowing us to develop a continuous-time model capable of accommodating various types of width-varying neural networks. This approach has the advantage of leveraging insights and results obtained from our previous work [7]. Moreover, the simplicity of our model facilitates the implementation of residual networks with variable width and allows us to test their performance in machine learning tasks. In order to capture width-varying neural networks, we need to extend the previous control-theoretical framework to a more general scenario, in particular we need to relax some of the assumptions of [7]. This is done in Subsection 2.2, where we introduce a discontinuous-in-time dynamics that can describe a wider range of neural network architectures. By doing so, we enable the study of Autoencoders (and, potentially, of other width-varying architectures) from a control-theoretic point of view, with the perspective of getting valuable insights into their behavior. Furthermore, we also generalize the types of activation functions that can be employed in the network. The previous work [7] primarily focused on sigmoid functions, which do not cover the full range of activations commonly employed in practice. Our objective is to allow for unbounded activation functions, which are often necessary for effectively solving certain tasks. By considering a broader set of activation functions, we aim at enhancing the versatility and applicability of our model. Furthermore, in contrast to [7], we introduce a stabilization method to allow the numerical resolution of the optimal control problem in the low-regularized regime, as previously discussed. This stabilization technique provides the means to test the architecture with our training approach on various tasks: from low-dimensional experiments, which serve to demonstrate the effectiveness of our method, to more sophisticated and high-dimensional tasks such as image reconstruction. In Section 5, we present all the experiments and highlight noteworthy behaviors that we observe. An in-depth exploration of the underlying reasons for these behaviors is postponed to future works. 
The structure of the paper is the following: Section 2 discusses the dynamical model of NeurODEs and extends it to the case of width-varying neural networks, including Autoencoders, which we refer to as AutoencODEs. In Section 3, we present our mean-field analysis, focusing on the scenario of an infinitely large dataset. We formulate the mean-field optimal control problem, we derive a set of necessary optimality conditions, and we provide a convergence result for the finite-particles approximation. At the end of this section, we compare our findings with the ones previously obtained in [7]. Section 4 covers the implementation and the description of the training procedure, and we compare it with other methods for NeurODEs existing in the literature. Finally, in Section 5, we present the results of our numerical experiments, highlighting interesting properties of the AutoencODEs that we observe. ### Measure-theoretic preliminaries Given a metric space \((X,d_{X})\), we denote by \(\mathcal{M}(X)\) the space of signed Borel measures in \(X\) with finite total variation, and by \(\mathcal{P}(X)\) the space of probability measures, while \(\mathcal{P}_{c}(X)\subset\mathcal{P}(X)\) represents the set of probability measures with compact support. Furthermore, \(\mathcal{P}_{c}^{N}(X)\subset\mathcal{P}_{c}(X)\) denotes the subset of empirical or atomic probability measures. Given \(\mu\in\mathcal{P}(X)\) and \(f:X\to Y\), with \(f\)\(\mu-\)measurable, we denote with \(f_{\#}\mu\in\mathcal{P}(Y)\) the push-forward measure defined by \(f_{\#}\mu(B)=\mu(f^{-1}(B))\) for any Borel set \(B\subset Y\). Moreover, we recall the change-of-variables formula \[\int_{Y}g(y)\,d\big{(}f_{\#}\mu\big{)}(y)=\int_{X}g\circ f(x)\,d\mu(x) \tag{1.3}\] whenever either one of the integrals makes sense. We now focus on the case \(X=\mathbb{R}^{d}\) and briefly recall the definition of the Wasserstein metrics of optimal transport in the following definition, and refer to [2, Chapter 7] for more details. **Definition 1.1**.: _Let \(1\leq p<\infty\) and \(\mathcal{P}_{p}(\mathbb{R}^{d})\) be the space of Borel probability measures on \(\mathbb{R}^{d}\) with finite \(p\)-moment. In the sequel, we endow the latter with the \(p\)-Wasserstein metric_ \[W_{p}^{p}(\mu,\nu):=\inf\left\{\int_{\mathbb{R}^{2d}}|z-\hat{z}|^{p}\ d\pi(z, \hat{z})\ \big{|}\ \pi\in\Pi(\mu,\nu)\right\},\] _where \(\Pi(\mu,\nu)\) denotes the set of transport plan between \(\mu\) and \(\nu\), that is the collection of all Borel probability measures on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with marginals \(\mu\) and \(\nu\) in the first and second component respectively._ It is a well-known result in optimal transport theory that when \(p=1\) and \(\mu,\nu\in\mathcal{P}_{c}(\mathbb{R}^{d})\), then the following alternative representation holds for the Wasserstein distance \[W_{1}(\mu,\nu)=\sup\left\{\int_{\mathbb{R}^{d}}\varphi(x)\,d\big{(}\mu-\nu \big{)}(x)\,\big{|}\,\varphi\in\mathrm{Lip}(\mathbb{R}^{d}),\ \mathrm{Lip}(\varphi)\leq 1 \right\}\,, \tag{1.4}\] by Kantorovich's duality [2, Chapter 6]. 
Here, \(\mathrm{Lip}(\mathbb{R}^{d})\) stands for the space of real-valued Lipschitz continuous functions on \(\mathbb{R}^{d}\), and \(\mathrm{Lip}(\varphi)\) is the Lipschitz constant of a mapping \(\varphi\) defined ad \[Lip(\varphi):=\sup_{x,y\in\mathbb{R}^{d},x\neq y}\frac{\|\varphi(x)-\varphi(y )\|}{\|x-y\|}\] ## 2 Dynamical Model of NeurODEs ### Notation and basic facts In this paper, we consider controlled dynamical systems in \(\mathbb{R}^{d}\), where the velocity field is prescribed by a function \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) that satisfies these basic assumptions. **Assumption 1**.: _The vector field \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfies the following:_ 1. _For every_ \(x\in\mathbb{R}^{d}\) _and every_ \(\theta\in\mathbb{R}^{m}\)_, the map_ \(t\mapsto\mathcal{F}(t,x,\theta)\) _is measurable in_ \(t\)_._ 2. _For every_ \(R>0\) _there exists a constant_ \(L_{R}>0\) _such that, for every_ \(\theta\in\mathbb{R}^{m}\)_, it holds_ \[|\mathcal{F}(t,x_{1},\theta)-\mathcal{F}(t,x_{2},\theta)|\leq L_{R}(1+|\theta |)|x_{1}-x_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x_{1},x_{2}\in B_{R}(0),\] _from which it follows that_ \(|\mathcal{F}(t,x,\theta)|\leq L_{R}(1+|x|)(1+|\theta|)\) _for a.e._ \(t\in[0,T]\)_._ 3. _For every_ \(R>0\) _there exists a constant_ \(L_{R}>0\) _such that, for every_ \(\theta_{1},\theta_{2}\in\mathbb{R}^{m}\)_, it holds_ \[|\mathcal{F}(t,x,\theta_{1})-\mathcal{F}(t,x,\theta_{2})|\leq L_{R}(1+|\theta _{1}|+|\theta_{2}|)|\theta_{1}-\theta_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x\in B_{R}(0).\] The control system that we are going to study is \[\begin{cases}\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t)),\ \ \text{ a.e. in }[0,T],\\ x(0)=x_{0},\end{cases} \tag{2.1}\] where \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) is the control that drives the dynamics. Owing to Assumption 1, the classical Caratheodory Theorem (see [27, Theorem 5.3]) guarantees that, for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) and for every \(x_{0}\in\mathbb{R}^{d}\), the Cauchy problem (2.1) has a unique solution \(x:[0,T]\to\mathbb{R}^{d}\). Hence, for every \((t,\theta)\in[0,T]\times L^{2}([0,T],\mathbb{R}^{m})\), we introduce the flow map \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) defined as \[\Phi^{\theta}_{(0,t)}(x_{0}):=x(t), \tag{2.2}\] where \(t\mapsto x(t)\) is the absolutely continuous curve that solves (2.1), with Cauchy datum \(x(0)=x_{0}\) and corresponding to the admissible control \(t\mapsto\theta(t)\). Similarly, given \(0\leq s<t\leq T\), we write \(\Phi^{\theta}_{(s,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) to denote the flow map obtained by prescribing the Cauchy datum at the more general instant \(s\geq 0\). We now present the properties of the flow map defined in (2.2) that describes the evolution of the system: we show that is well-posed, and we report some classical properties. **Proposition 2.1**.: _For every \(t\in[0,T]\) and for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), let \(\mathcal{F}\) satisfy Assumption 1. 
Then, the flow \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is well-defined for any \(x_{0}\in\mathbb{R}^{d}\) and it satisfies the following properties._ * _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists a constant_ \(\bar{R}>0\) _such that_ \[|\Phi^{\theta}_{(0,t)}(x)|\leq\bar{R}\] _for every_ \(x\in B_{R}(0)\) _and every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(||\theta||_{L^{2}}\leq\rho\)_._ * _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists a constant_ \(\bar{L}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds_ \[|\Phi^{\theta}_{(0,t)}(x_{1})-\Phi^{\theta}_{(0,t)}(x_{2})|\leq\bar{L}|x_{1}-x _{2}|\] _for every_ \(x_{1},x_{2}\in B_{R}(0)\) _and every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(||\theta||_{L^{2}}\leq\rho\)_._ Even though the framework introduced in Assumption 1 is rather general, in this paper we specifically have in mind the case where the mapping \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) represents the feed-forward dynamics associated to residual neural networks. In this scenario, the parameter \(\theta\in\mathbb{R}^{m}\) encodes the _weights_ and _shifts_ of the network, i.e., \(\theta=(W,b)\), where \(W\in\mathbb{R}^{d\times d}\) and \(b\in\mathbb{R}^{d}\). Moreover, the mapping \(\mathcal{F}\) has the form: \[\mathcal{F}(t,x,\theta)=\sigma(Wx+b),\] where \(\sigma:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a nonlinear function acting component-wise, often called in literature _activation function_. In this work, we consider sigmoidal-type activation functions, such as the hyperbolic tangent function: \[\sigma(x)=\tanh(x),\] as well as smooth approximations of the Rectified Linear Unit (ReLU) function, which is defined as: \[\sigma(x)=\max\{0,x\}. \tag{2.3}\] We emphasize the need to consider smoothed versions of the ReLU function due to additional differentiability requirements on \(\mathcal{F}\), which will be further clarified in Assumption 2. Another useful activation function covered by Assumption 2 is the Leaky Rectified Linear Unit (Leaky ReLU) function: \[\sigma(x)=\max\{0,x\}-\max\{-\alpha x,0\} \tag{2.4}\] where \(\alpha\in[0,1]\) is a predetermined parameter that allows the output of the function to have negative values. The smooth approximations of (2.3) and (2.4) that we consider will be presented in Section 4. ### From NeurODEs to AutoencODEs As explained in the Introduction, NeurODEs and ResNets -their discrete-time counterparts- face the limitation of a "rectangular" shape of the network because of formulas (1.2) and (1.1), respectively. To overcome this fact, we aim at designing a continuous-time model capable of describing width-varying neural networks, with a particular focus on Autoencoders, as they represent the prototype of neural networks whose layers operate between spaces of different dimensions. Indeed, Autoencoders consist of an _encoding phase_, where the layers' dimensions progressively decrease until reaching the "latent dimension" of the network. Subsequently, in the _decoding phase_, the layers' widths are increased until the same dimensionality as the input data is restored. For this reason, Autoencoders are prominent examples of width-varying neural networks, since the changes in layers' dimensions lie at the core of their functioning. Sketches of encoders and Autoencoders are presented in Figure 1. Finally, we insist on the fact that our model can encompass as well other types of architectures. 
In this regard, in Remark 2.2 we discuss how our approach can be extended to U-nets. EncoderOur goal is to first model the case of a network which sequentially reduces the dimensionality of the layers' outputs. For this purpose, we artificially force some of the components not to evolve anymore, while we let the others be active part of the dynamics. More precisely, given an input variable \(x_{0}\in\mathbb{R}^{d}\), we denote with \((\mathcal{I}_{j})_{j=0,\dots,r}\) an increasing filtration, where each element \(\mathcal{I}_{j}\) contains the sets of indices whose corresponding components are _inactive_, i.e., they are constant and do not contribute to the dynamics. Clearly, since the layers' width will decrease sequentially, the filtration of inactive components \(\mathcal{I}_{j}\) will increase, i.e. \[\emptyset=:\mathcal{I}_{0}\subsetneq\mathcal{I}_{1}\subsetneq...\subsetneq \mathcal{I}_{r}\subsetneq\{1,\dots,d\},\quad r<d,\quad j=0,\dots,r.\] Figure 1: Left: network with an encoder structure. Right: Autoencoder. On the other hand, the sets of indices of _active_ components define an decreasing filtration \(\mathcal{A}_{j}:=\{1,\ldots,d\}\setminus\mathcal{I}_{j}\) for \(j=0,\ldots,r\). As opposed to before, the sets of active components \((\mathcal{A}_{j})_{j=0,\ldots,r}\) satisfy \[\{1,\ldots,d\}=:\mathcal{A}_{0}\supseteq\mathcal{A}_{1}\supseteq... \supseteq\mathcal{A}_{r}\supseteq\emptyset,\quad r<d,\quad j=0,\ldots,r.\] We observe that, for every \(j=0,\ldots,r\), the sets \(\mathcal{A}_{j}\) and \(\mathcal{I}_{j}\) provide a partition of \(\{1,\ldots,d\}\). A visual representation of this model for encoders is presented on the left side of Figure 2. Now, in the time interval \([0,T]\), let us consider \(r+1\) nodes \(0=t_{0}<t_{1}<...<t_{r}<t_{r+1}=T\). For \(j=0,\ldots,r\), we denote with \([t_{j},t_{j+1}]\) the sub-interval and, for every \(x\in\mathbb{R}^{d}\), we use the notation \(x_{\mathcal{I}_{j}}:=(x_{i})_{i\in\mathcal{I}_{j}}\) and \(x_{\mathcal{A}_{j}}:=(x_{i})_{i\in\mathcal{A}_{j}}\) to access the components of \(x\) belonging to \(\mathcal{I}_{j}\) and \(\mathcal{A}_{j}\), respectively. Hence, the controlled dynamics for any \(t\in[t_{j},t_{j+1}]\) can be described by \[\begin{cases}\dot{x}_{\mathcal{I}_{j}}(t)=0,\\ \dot{x}_{\mathcal{A}_{j}}(t)=\mathcal{G}_{j}(t,x_{\mathcal{A}_{j}}(t),\theta( t)),\end{cases} \tag{2.5}\] where \(\mathcal{G}_{j}:[t_{j},t_{j+1}]\times\mathbb{R}^{|\mathcal{A}_{j}|}\times \mathbb{R}^{m}\rightarrow\mathbb{R}^{|\mathcal{A}_{j}|}\), for \(j=0,\ldots,r\), and \(x(0)=x_{\mathcal{A}_{0}}(0)=x_{0}\). Furthermore, the dynamical system describing the encoding part is \[\begin{cases}\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t)),\quad\text{a.e }t\in[0,T],\\ x(0)=x_{0}\end{cases}\] where, for \(t\in[t_{j},t_{j+1}]\), we define the discontinuous vector field as follows \[\left(\mathcal{F}(t,x,\theta)\right)_{k}=\begin{cases}\left(\mathcal{G}(t,x_{ \mathcal{A}_{j}},\theta)\right)_{k},&\text{if }k\in\mathcal{A}_{j},\\ 0,&\text{if }k\in\mathcal{I}_{j}.\end{cases}\] **Remark 2.1**.: _Notice that \(\theta(t)\in\mathbb{R}^{m}\) for every \(t\in[0,T]\), according to the model that we have just described. 
However, it is natural to expect that, since \(x\) has varying active components, in a similar way the controlled dynamics \(\mathcal{F}(t,x,\theta)\) shall not explicitly depend at every \(t\in[0,T]\) on every component of \(\theta\)._ **Autoencoder.** We now extend the previous model to the case of networks which not only decrease the dimensionality of the layers, but are also able to increase the layers' width in order to restore the original dimension of the input data. Here we denote by \(z_{0}\in\mathbb{R}^{\tilde{d}}\) the input variable, and we fictitiously augment the input's dimension, so that we consider the initial datum \(x_{0}=(z_{0},\underline{0})\in\mathbb{R}^{d}=\mathbb{R}^{\tilde{d}}\times\mathbb{R}^{\tilde{d}}\), where \(\underline{0}\in\mathbb{R}^{\tilde{d}}\). We make use of the following notation for every \(x\in\mathbb{R}^{d}\): \[x=\left((z_{i})_{i=1,\ldots,\tilde{d}},(z^{H}_{i})_{i=1,\ldots,\tilde{d}}\right)\] where \(z^{H}\) is the augmented (or _shadow_) part of the vector \(x\). In this model, the time horizon \([0,T]\) is split using the following time-nodes: \[0=t_{0}\leq t_{1}\leq...\leq t_{r}\leq...\leq t_{2r}\leq t_{2r+1}:=T\] where \(t_{r}\), which was the end of the encoder in the previous model, is now the instant corresponding to the bottleneck of the autoencoder. Similarly as before, we introduce two families of partitions of \(\{1,\ldots,\tilde{d}\}\), modeling the active and non-active components of, respectively, \(z\) and \(z^{H}\). The first filtrations are relative to the encoding phase and they involve the components of \(z\): \[\begin{cases}\mathcal{I}_{j-1}\subsetneq\mathcal{I}_{j}&\text{if }1\leq j\leq r,\\ \mathcal{I}_{j}=\mathcal{I}_{j-1}&\text{if }j>r,\end{cases}\qquad\qquad\qquad\qquad\begin{cases}\mathcal{A}_{j-1}\supsetneq\mathcal{A}_{j}&\text{if }1\leq j\leq r,\\ \mathcal{A}_{j}=\mathcal{A}_{j-1}&\text{if }j>r,\end{cases}\] where \(\mathcal{I}_{0}:=\emptyset\), \(\mathcal{I}_{r}\subsetneq\{1,\ldots,\tilde{d}\}\) and \(\mathcal{A}_{0}=\{1,\ldots,\tilde{d}\}\), \(\mathcal{A}_{r}\supsetneq\emptyset\). The second filtrations, that aim at modeling the decoder, act on the shadow part of \(x\), i.e., they involve the components of \(z^{H}\): \[\begin{cases}\mathcal{I}_{j-1}^{H}=\{1,\ldots,\tilde{d}\}&\text{if }1\leq j\leq r,\\ \mathcal{I}_{j}^{H}\subsetneq\mathcal{I}_{j-1}^{H}&\text{if }r<j\leq 2r,\end{cases}\qquad\qquad\qquad\begin{cases}\mathcal{A}_{j-1}^{H}=\emptyset&\text{if }1\leq j\leq r,\\ \mathcal{A}_{j}^{H}\supsetneq\mathcal{A}_{j-1}^{H}&\text{if }r<j\leq 2r.\end{cases}\] While the encoder structure acting on the input data \(z_{0}\) is the same as before, in the decoding phase we aim at activating the components that have been previously turned off during the encoding. However, since the information contained in the original input \(z_{0}\) should be first compressed and then decompressed, we should not make use of the values of the components that we have turned off in the encoding and hence, we cannot re-activate them. Therefore, in our model the dimension is restored by activating components of \(z^{H}\), the shadow part of \(x\), which we recall was initialized equal to \(\underline{0}\in\mathbb{R}^{\tilde{d}}\). This is the reason why we introduce sets of active and inactive components also for the shadow part of the state variable. A sketch of this type of model is presented on the right of Figure 2.

Figure 2: Left: Embedding of an encoder into a dynamical system. Right: model for an Autoencoder.
Moreover, in order to be consistent with the classical structure of an autoencoder, the following identities must be satisfied:

1. \(\mathcal{A}_{j}\cap\mathcal{A}_{j}^{H}=\emptyset\) for every \(j=1,\ldots,2r\),
2. \(\mathcal{A}_{2r}\cup\mathcal{A}_{2r}^{H}=\{1,\ldots,\bar{d}\}\).

The first identity formalizes the constraint that the active components of \(z\) and those of \(z^{H}\) cannot overlap and must be distinct, while the second identity imposes that, at the end of the evolution, the active components of \(z\) and \(z^{H}\) should together cover the whole set \(\{1,\ldots,\bar{d}\}\). Furthermore, from the first identity we derive that \(\mathcal{A}_{j}\subseteq(\mathcal{A}_{j}^{H})^{C}=\mathcal{I}_{j}^{H}\) and, similarly, \(\mathcal{A}_{j}^{H}\subseteq\mathcal{I}_{j}\) for every \(j=1,\ldots,2r\). Moreover, \(\mathcal{A}_{r}\) satisfies the inclusion \(\mathcal{A}_{r}\subseteq\mathcal{A}_{j}\) for every \(j=1,\ldots,2r\), which is consistent with the fact that the layer with the smallest width is located in the bottleneck, i.e., in the interval \([t_{r},t_{r+1}]\). In addition, from the first and the second identity, we obtain that \(\mathcal{A}_{2r}^{H}=\mathcal{I}_{2r}\), i.e., the final active components of \(z^{H}\) coincide with the inactive components of \(z\), and, similarly, \(\mathcal{I}_{2r}^{H}=\mathcal{A}_{2r}\). Finally, to access the active components of \(x=(z,z^{H})\), we make use of the following notation: \[x_{\mathcal{A}_{j}}=(z_{k})_{k\in\mathcal{A}_{j}},\quad x_{\mathcal{A}_{j}^{H}}=(z_{k}^{H})_{k\in\mathcal{A}_{j}^{H}}\quad\text{and}\quad x_{\mathcal{A}_{j},\mathcal{A}_{j}^{H}}=(z_{\mathcal{A}_{j}},z_{\mathcal{A}_{j}^{H}}^{H}),\] and we do the same for the inactive components: \[x_{\mathcal{I}_{j}}=(z_{k})_{k\in\mathcal{I}_{j}},\quad x_{\mathcal{I}_{j}^{H}}=(z_{k}^{H})_{k\in\mathcal{I}_{j}^{H}}\quad\text{and}\quad x_{\mathcal{I}_{j},\mathcal{I}_{j}^{H}}=(z_{\mathcal{I}_{j}},z_{\mathcal{I}_{j}^{H}}^{H}).\] We are now in a position to write the controlled dynamics in the interval \(t_{j}\leq t\leq t_{j+1}\): \[\begin{cases}\dot{x}_{\mathcal{I}_{j},\mathcal{I}_{j}^{H}}(t)=0,\\ \dot{x}_{\mathcal{A}_{j},\mathcal{A}_{j}^{H}}(t)=\mathcal{G}_{j}(t,x_{\mathcal{A}_{j},\mathcal{A}_{j}^{H}}(t),\theta(t)),\end{cases} \tag{2.6}\] where \(\mathcal{G}_{j}:[t_{j},t_{j+1}]\times\mathbb{R}^{|\mathcal{A}_{j}|+|\mathcal{A}_{j}^{H}|}\times\mathbb{R}^{m}\to\mathbb{R}^{|\mathcal{A}_{j}|+|\mathcal{A}_{j}^{H}|}\), for \(j=0,\ldots,2r\), and \(x_{\mathcal{I}_{0}^{H}}(0)=\underline{0}\), \(x_{\mathcal{A}_{0}}(0)=x_{0}\). As before, we define the discontinuous vector field \(\mathcal{F}\) for \(t\in[t_{j},t_{j+1}]\) as follows: \[\big{(}\mathcal{F}(t,x,\theta)\big{)}_{k}=\begin{cases}\big{(}\mathcal{G}_{j}(t,x_{\mathcal{A}_{j},\mathcal{A}_{j}^{H}},\theta)\big{)}_{k},&\text{if }k\in\mathcal{A}_{j}\cup\mathcal{A}_{j}^{H},\\ 0,&\text{if }k\in\mathcal{I}_{j}\cup\mathcal{I}_{j}^{H}.\end{cases}\] Hence, we are now able to describe any type of width-varying neural network through a continuous-time model depicted by the following dynamical system: \[\begin{cases}\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t))\quad\text{a.e. in }[0,T],\\ x(0)=x_{0}.\end{cases}\] It is essential to highlight the key difference between the previous NeurODE model and the current one in (2.6): the vector field \(\mathcal{F}\) now explicitly depends on the time variable \(t\) to account for sudden dimensionality drops, where certain components are forced to remain constant.
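For illustration only, the sketch below discretizes the dynamics (2.5)-(2.6) with an explicit Euler scheme: on each sub-interval only the currently active components evolve, while all the others stay frozen. The layer map \(\mathcal{G}_{j}(x)=\tanh(Wx+b)\), the piecewise-constant control, and all names are assumptions made for this example and are not part of the model.

```python
# Illustrative sketch (assumed layer map G_j(x) = tanh(W x + b), explicit Euler,
# piecewise-constant controls); it is not the implementation used in the paper.
import numpy as np

def euler_autoencode(x0, active_sets, thetas, T=1.0, steps_per_phase=10):
    """active_sets[j] lists the indices in A_j (and A_j^H) that evolve on [t_j, t_{j+1}];
    thetas[j] = (W, b) parametrizes the layer map on that sub-interval."""
    x = x0.astype(float).copy()
    n_phases = len(active_sets)
    dt = T / (n_phases * steps_per_phase)
    for j, act in enumerate(active_sets):
        W, b = thetas[j]
        idx = np.array(sorted(act))
        for _ in range(steps_per_phase):
            g = np.tanh(W @ x[idx] + b)    # G_j acts only on the active block
            x[idx] = x[idx] + dt * g       # inactive components: dot{x} = 0
    return x

# Toy example: d_bar = 3 (hence d = 6 after augmentation), bottleneck of width 2.
rng = np.random.default_rng(0)
active_sets = [{0, 1, 2},   # j = 0: z fully active, shadow part off
               {0, 1},      # j = 1: bottleneck
               {0, 1, 5}]   # j = 2: shadow component (index 3 + 2) is activated
thetas = [(0.1 * rng.standard_normal((len(a), len(a))), np.zeros(len(a)))
          for a in active_sets]
x0 = np.concatenate([rng.standard_normal(3), np.zeros(3)])  # augmented datum (z_0, 0)
print(euler_autoencode(x0, active_sets, thetas))
```

Freezing is implemented simply by updating only the indexed block of the state, which mirrors the piecewise definition of \(\mathcal{F}\) above.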
As a matter of fact, the resulting dynamics are highly discontinuous in the variable \(t\). To the best of our knowledge, this is the first attempt to consider such discontinuous dynamics in NeurODEs. Previous works, such as [26, 18], typically do not include an explicit dependence on the time variable in the right-hand side of NeurODEs, or they assume a continuous dependency on time, as in [7]. Furthermore, it is worth noting that the vector field \(\mathcal{F}\) introduced to model autoencoders satisfies the general assumptions outlined in Assumption 1 at the beginning of this section.

**Remark 2.2**.: _The presented model, initially designed for Autoencoders, can be easily extended to accommodate various types of width-varying neural networks, including architectures with long skip-connections such as U-nets [38]. While we do not discuss U-nets in detail, their general structure is outlined in Figure 3. U-nets consist of two main components: the contracting path (encoder) and the expansive path (decoder). These paths are symmetric, with skip connections between corresponding layers in each part. Within each path, the input passes through a series of convolutional layers, followed by a non-linear activation function (often ReLU), and other operations (e.g., max pooling) which are not encompassed by our model. The long skip-connections that characterize U-nets require some modifications to the autoencoder model described above. If we denote by \(\bar{d}_{i}\), for \(i=0,\ldots,r\), the dimensionality of each layer in the contracting path, we have that \(\bar{d}_{2r-i}=\bar{d}_{i}\) for every \(i=0,\ldots,r\). Then, given an initial condition \(z_{0}\in\mathbb{R}^{\bar{d}_{0}}\), we embed it into the augmented state variable_ \[x_{0}=(z_{0},\underline{0}),\text{ where }\underline{0}\in\mathbb{R}^{\bar{d}_{1}+\ldots+\bar{d}_{r}}.\] _As in the previous model for autoencoders, we consider time-nodes \(0=t_{0}<\ldots<t_{2r}=T\), and in each sub-interval we introduce a controlled dynamics with the scheme of active/inactive components depicted in Figure 3._

## 3 Mean-field Analysis

In this section, we extend the dynamical model introduced in Section 2 to its mean-field limit, which corresponds to the scenario of an infinitely large dataset. Within this framework, we formulate the training of NeurODEs and AutoencODEs as a mean-field optimal control problem and provide the associated necessary optimality conditions. It is worth noting that our analysis covers both the high-regularized regime, as studied in previous work [7], and the low-regularized regime, which has not been extensively addressed before. In this regard, we dedicate a subsection to a detailed comparison with the results obtained in [7]. Additionally, we investigate the finite-particles approximation and we establish a quantitative bound on the generalization capabilities of these networks.

### Mean-field dynamical model

In this section, we employ the same viewpoint as in [7], and we consider the case of a dataset with an infinite number of observations. In our framework, each datum is modeled as a point \(x_{0}\in\mathbb{R}^{d}\), and it comes associated with its corresponding label \(y_{0}\in\mathbb{R}^{d}\).

Figure 3: Embedding of the U-net into a higher-dimensional dynamical system.

Notice that, in principle, in Machine Learning applications the label (or _target_) datum \(y_{0}\) may have dimension different from \(d\).
However, the labels' dimension is just a matter of notation and does not represent a limitation of our model. Following [7], we consider the curve \(t\mapsto(x(t),y(t))\) which satisfies \[\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t))\qquad\text{and}\qquad\dot{y}(t)=0 \tag{3.1}\] for a.e. \(t\in[0,T]\), and \((x(0),y(0))=(x_{0},y_{0})\). We observe that the variable \(y\) corresponding to the labels does not change, nor does it affect the evolution of the variable \(x\). We recall that the flow associated with the dynamics of the variable \(x\) is denoted by \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) for every \(t\in[0,T]\), and it has been defined in (2.2). Moreover, as regards the full dynamics prescribed by (3.1), for every admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) we introduce the extended flow \(\boldsymbol{\Phi}^{\theta}_{(0,t)}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\times\mathbb{R}^{d}\), which reads \[\boldsymbol{\Phi}^{\theta}_{(0,t)}(x_{0},y_{0})=(\Phi^{\theta}_{(0,t)}(x_{0}),y_{0}) \tag{3.2}\] for every \(t\in[0,T]\) and for every \((x_{0},y_{0})\in\mathbb{R}^{d}\times\mathbb{R}^{d}\). We now consider the case of an infinite number of labeled data \((X^{i}_{0},Y^{i}_{0})_{i\in I}\), where \(I\) is an infinite set of indices. In our mathematical model, we understand this data distribution as a compactly-supported probability measure \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{d}\times\mathbb{R}^{d})\). Moreover, for every \(t\in[0,T]\), we denote by \(t\mapsto\mu_{t}\) the curve of probability measures in \(\mathcal{P}_{c}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) that models the evolution of the solutions of (3.1) corresponding to the Cauchy initial conditions \((X^{i}_{0},Y^{i}_{0})_{i\in I}\). In other words, the curve \(t\mapsto\mu_{t}\) satisfies the following continuity equation: \[\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta_{t})\mu_{t}(x,y)\big{)}=0,\qquad\mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y), \tag{3.3}\] understood in the sense of distributions, i.e., in the sense of the following definition. **Definition 3.1**.: _For any given \(T>0\) and \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), we say that \(\mu\in\mathcal{C}([0,T],\mathcal{P}_{c}(\mathbb{R}^{2d}))\) is a weak solution of (3.3) on the time interval \([0,T]\) if_ \[\int_{0}^{T}\int_{\mathbb{R}^{2d}}\big{(}\partial_{t}\psi(t,x,y)+\nabla_{x}\psi(t,x,y)\cdot\mathcal{F}(t,x,\theta_{t})\big{)}\,d\mu_{t}(x,y)\,dt=0, \tag{3.4}\] _for every test function \(\psi\in\mathcal{C}^{1}_{c}((0,T)\times\mathbb{R}^{2d})\)._ Let us now discuss the existence and the characterisation of the solution. **Proposition 3.2**.: _Under Assumption 1, for every \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) we have that (3.3) admits a unique solution \(t\mapsto\mu_{t}\) in the sense of Definition 3.1. Moreover, we have that for every \(t\in[0,T]\)_ \[\mu_{t}=\boldsymbol{\Phi}^{\theta}_{(0,t)\#}\mu_{0}. \tag{3.5}\] Proof.: Existence and uniqueness of the measure solution of (3.3) follow from [1, Proposition 2.1, Theorem 3.1 and Remark 2.1]. From the characterisation of the solution of (3.3) provided in (3.5), it follows that the curve \(t\mapsto\mu_{t}\) inherits the properties of the flow map \(\Phi^{\theta}\) described in Proposition 2.1. These facts are collected in the next result. **Proposition 3.3**.: _Let us fix \(T>0\) and \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\), and let us consider \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfying Assumption 1.
Let \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) be an admissible control, and let \(t\mapsto\mu_{t}\) be the corresponding solution of (3.3). Then, the curve \(t\mapsto\mu_{t}\) satisfies the properties listed below._

* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{R}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds that_ \[\operatorname{supp}(\mu_{t})\subset B_{\bar{R}}(0)\] _for every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta\|_{L^{2}}\leq\rho\)_, and for every_ \(\mu_{0}\) _such that_ \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{L}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds that_ \[W_{1}(\mu_{t},\nu_{t})\leq\bar{L}W_{1}(\mu_{0},\nu_{0})\] _for every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta\|_{L^{2}}\leq\rho\)_, and for all initial conditions_ \(\mu_{0},\nu_{0}\) _such that the supports satisfy_ \(\operatorname{supp}(\mu_{0}),\operatorname{supp}(\nu_{0})\subset B_{R}(0)\)_, where_ \(\mu_{t}=\boldsymbol{\Phi}^{\theta}_{(0,t)\#}\mu_{0}\) _and_ \(\nu_{t}=\boldsymbol{\Phi}^{\theta}_{(0,t)\#}\nu_{0}\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{L}>0\) _such that, for every_ \(t_{1},t_{2}\in[0,T]\)_, it holds that_ \[W_{1}(\mu_{t_{1}},\mu_{t_{2}})\leq\bar{L}\cdot|t_{1}-t_{2}|^{\frac{1}{2}}\] _for every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta\|_{L^{2}}\leq\rho\)_, and for every_ \(\mu_{0}\) _such that_ \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{L}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds that_ \[W_{1}(\mu_{t},\nu_{t})\leq\bar{L}\|\theta_{1}-\theta_{2}\|_{L^{2}}\] _for every_ \(\theta_{1},\theta_{2}\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}}\leq\rho\)_, and for every initial condition_ \(\mu_{0}\) _such that_ \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\)_, where_ \(\mu_{t}=\boldsymbol{\Phi}_{(0,t)\#}^{\theta_{1}}\mu_{0}\) _and_ \(\nu_{t}=\boldsymbol{\Phi}_{(0,t)\#}^{\theta_{2}}\mu_{0}\)_._

Proof.: All the results follow from Proposition 3.2 and from the properties of the flow map presented in Proposition 2.1, combined with the Kantorovich duality (1.4) for the distance \(W_{1}\), and the change-of-variables formula (1.3). Since the argument is essentially the same for all the properties, we detail the computations only for the second point, i.e., the Lipschitz-continuous dependence on the initial distribution. Owing to (1.4), for any \(t\in[0,T]\) and any \(\varphi\in\operatorname{Lip}(\mathbb{R}^{2d})\) with Lipschitz constant \(\text{Lip}(\varphi)\leq 1\), it holds that \[W_{1}(\mu_{t},\nu_{t})\leq\int_{\mathbb{R}^{2d}}\varphi(x,y)\,d(\mu_{t}-\nu_{t})(x,y)=\int_{\mathbb{R}^{2d}}\varphi(\Phi_{(0,t)}^{\theta}(x),y)\,d(\mu_{0}-\nu_{0})(x,y)\leq\bar{L}W_{1}(\mu_{0},\nu_{0}),\] where the equality follows from the definition of push-forward and from (3.2), while the constant \(\bar{L}\) in the second inequality descends from the local Lipschitz estimate of \(\Phi_{(0,t)}^{\theta}\) established in Proposition 2.1.

### Mean-field optimal control

Using the transport equation (3.3), we can now formulate the mean-field optimal control problem that we aim to address.
To this end, we introduce the functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\), defined as follows: \[J(\theta)=\left\{\begin{aligned} &\int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}(x,y)+\lambda\int_{0}^{T}|\theta(t)|^{2}\,dt,\\ &\text{s.t.}\ \left\{\begin{aligned} &\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot(\mathcal{F}(t,x,\theta_{t})\mu_{t}(x,y))=0\quad t\in[0,T],\\ &\mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y),\end{aligned}\right.\end{aligned}\right. \tag{3.6}\] for every admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\). The objective is to find an optimal control \(\theta^{*}\) that minimizes \(J\), subject to the PDE constraint (3.3) being satisfied by the curve \(t\mapsto\mu_{t}\). The term "mean-field" emphasizes that \(\theta\) is shared by an entire population of input-target pairs, and the optimal control must depend on the distribution of the initial data. We observe that when the initial measure \(\mu_{0}\) is empirical, i.e. \[\mu_{0}:=\mu_{0}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{(X_{0}^{i},Y_{0}^{i})},\] then the minimization of (3.6) reduces to a classical finite-particle optimal control problem with ODE constraints. We now state the further regularity hypotheses that we require, in addition to the one contained in Assumption 1.

**Assumption 2**.: _For any given \(T>0\), the vector field \(\mathcal{F}\) satisfies the following._

* \((iv)\) _For every_ \(R>0\) _there exists a constant_ \(L_{R}>0\) _such that, for every_ \(x_{1},x_{2}\in B_{R}(0)\)_, it holds_ \[|\nabla_{x}\mathcal{F}(t,x_{1},\theta)-\nabla_{x}\mathcal{F}(t,x_{2},\theta)|\leq L_{R}(1+|\theta|^{2})|x_{1}-x_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }\theta\in\mathbb{R}^{m}.\]
* \((v)\) _There exists another constant_ \(L_{R}>0\) _such that, for every_ \(\theta_{1},\theta_{2}\in\mathbb{R}^{m}\)_, it holds_ \[|\nabla_{\theta}\mathcal{F}(t,x,\theta_{1})-\nabla_{\theta}\mathcal{F}(t,x,\theta_{2})|\leq L_{R}|\theta_{1}-\theta_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x\in B_{R}(0).\] _From this, it follows that_ \(|\nabla_{\theta}\mathcal{F}(t,x,\theta)|\leq L_{R}(1+|\theta|)\) _for every_ \(x\in B_{R}(0)\) _and for every_ \(\theta\in\mathbb{R}^{m}\)_._
* \((vi)\) _There exists another constant_ \(L_{R}>0\) _such that, for every_ \(\theta_{1},\theta_{2}\in\mathbb{R}^{m}\)_, it holds_ \[|\nabla_{x}\mathcal{F}(t,x,\theta_{1})-\nabla_{x}\mathcal{F}(t,x,\theta_{2})|\leq L_{R}(1+|\theta_{1}|+|\theta_{2}|)|\theta_{1}-\theta_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x\in B_{R}(0).\] _From this, it follows that_ \(|\nabla_{x}\mathcal{F}(t,x,\theta)|\leq L_{R}(1+|\theta|^{2})\) _for every_ \(x\in B_{R}(0)\) _and for every_ \(\theta\in\mathbb{R}^{m}\)_._
* \((vii)\) _There exists another constant_ \(L_{R}>0\) _such that_ \[|\nabla_{\theta}\mathcal{F}(t,x_{1},\theta)-\nabla_{\theta}\mathcal{F}(t,x_{2},\theta)|\leq L_{R}(1+|\theta|)|x_{1}-x_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x_{1},x_{2}\in B_{R}(0).\]

Additionally, it is necessary to specify the assumptions on the function \(\ell\) that quantifies the discrepancy between the output of the network and its corresponding label. **Assumption 3**.: _The function \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto\mathbb{R}_{+}\) is \(C^{1}\)-regular and non-negative. Moreover, for every \(R>0\), there exists a constant \(L_{R}>0\) such that, for every \(x_{1},x_{2}\in B_{R}(0)\), it holds_ \[|\nabla_{x}\ell(x_{1},y_{1})-\nabla_{x}\ell(x_{2},y_{2})|\leq L_{R}\left(|x_{1}-x_{2}|+|y_{1}-y_{2}|\right).
\tag{3.7}\] Let us begin by establishing a regularity result for the reduced final cost, which refers to the cost function without the regularization term. **Lemma 3.4** (Differentiability of the cost).: _Let \(T,R>0\) and \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) be such that \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\), and let us consider \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) and \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) that satisfy, respectively, Assumptions 1-2 and Assumption 3. Then, the reduced final cost_ \[J_{\ell}:\theta\in L^{2}([0,T];\mathbb{R}^{m})\mapsto\left\{\begin{array}{l}\int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}^{\theta}(x,y),\\ \mathrm{s.t.}\begin{cases}\partial_{t}\mu_{t}^{\theta}(x,y)+\nabla_{x}\big{(}\mathcal{F}(t,x,\theta_{t})\mu_{t}^{\theta}(x,y)\big{)}=0,\\ \mu_{t}^{\theta}|_{t=0}(x,y)=\mu_{0}(x,y),\end{cases}\end{array}\right. \tag{3.8}\] _is Frechet-differentiable. Moreover, using the standard Hilbert space structure of \(L^{2}([0,T],\mathbb{R}^{m})\), we can represent the differential of \(J_{\ell}\) at the point \(\theta\) as the function:_ \[\nabla_{\theta}J_{\ell}(\theta):t\mapsto\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta}(x),\theta(t)\big{)}\cdot\mathcal{R}_{(t,T)}^{\theta}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi_{(0,T)}^{\theta}(x),y\big{)}\,d\mu_{0}(x,y) \tag{3.9}\] _for a.e. \(t\in[0,T]\)._ Before proving the statement, we need to introduce the linear operator \(\mathcal{R}_{\tau,s}^{\theta}(x):\mathbb{R}^{d}\to\mathbb{R}^{d}\) with \(\tau,s\in[0,T]\), which is related to the linearization of the dynamics of the control system (2.1) along a trajectory, and which appears in (3.9). Given an admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), let us consider the corresponding trajectory curve \(t\mapsto\Phi_{(0,t)}^{\theta}(x)\) for \(t\in[0,T]\), i.e., the solution of (2.1) starting at the point \(x\in\mathbb{R}^{d}\) at the initial instant \(t=0\). Given any \(\tau\in[0,T]\), we consider the following linear ODE in the phase space \(\mathbb{R}^{d\times d}\): \[\begin{cases}\frac{d}{ds}\mathcal{R}_{(\tau,s)}^{\theta}(x)=\nabla_{x}\mathcal{F}(s,\Phi_{(0,s)}^{\theta}(x),\theta(s))\cdot\mathcal{R}_{(\tau,s)}^{\theta}(x)\quad\text{for a.e. }s\in[0,T],\\ \mathcal{R}_{(\tau,\tau)}^{\theta}(x)=\mathrm{Id}.\end{cases} \tag{3.10}\] We insist on the fact that, when we write \(\mathcal{R}_{(\tau,s)}^{\theta}(x)\), \(x\) denotes the starting point of the trajectory along which the dynamics has been linearized. We observe that, using Assumption 2-\((iv)-(vi)\) and the Caratheodory Theorem, it follows that (3.10) admits a unique solution, for every \(x\in\mathbb{R}^{d}\) and for every \(\tau\in[0,T]\). Since it is an elementary object in Control Theory, the properties of \(\mathcal{R}^{\theta}\) are discussed in the Appendix (see Proposition A.7). We just recall here that the following relation is satisfied: \[\mathcal{R}_{\tau,s}^{\theta}(x)=\nabla_{x}\Phi_{(\tau,s)}^{\theta}\big{|}_{\Phi_{(0,\tau)}^{\theta}(x)} \tag{3.11}\] for every \(\tau,s\in[0,T]\) and for every \(x\in\mathbb{R}^{d}\) (see, e.g., [10, Theorem 2.3.2]). Moreover, for every \(\tau,s\in[0,T]\) the following identity holds: \[\mathcal{R}_{\tau,s}^{\theta}(x)\cdot\mathcal{R}_{s,\tau}^{\theta}(x)=\mathrm{Id},\] i.e., the matrices \(\mathcal{R}_{\tau,s}^{\theta}(x)\) and \(\mathcal{R}_{s,\tau}^{\theta}(x)\) are inverses of each other.
From this fact, it is possible to deduce that \[\frac{\partial}{\partial\tau}\mathcal{R}_{\tau,s}^{\theta}(x)=-\mathcal{R}_{ \tau,s}^{\theta}(x)\cdot\nabla_{x}\mathcal{F}(\tau,\Phi_{(0,\tau)}^{\theta}(x ),\theta(\tau)) \tag{3.12}\] for almost every \(\tau,s\in[0,T]\) (see, e.g., [10, Theorem 2.2.3] for the details). Proof of Lemma 3.4.: Let us fix an admissible control \(\theta\in L^{2}([0,T];\mathbb{R}^{m})\) and let \(\mu_{\cdot}^{\theta}\in\mathcal{C}^{0}([0,T];\mathcal{P}_{c}(\mathbb{R}^{2d}))\) be the unique solution of the continuity equation (3.3), corresponding to the control \(\theta\) and satisfying \(\mu^{\theta}|_{t=0}=\mu_{0}\). According to Proposition 3.2, this curve can be expressed as \(\mu_{t}^{\theta}=\mathbf{\Phi}_{(0,t)\#}^{\theta}\mu_{0}\) for every \(t\in[0,T]\), where the map \(\mathbf{\Phi}_{(0,t)}^{\theta}=(\Phi_{(0,t)}^{\theta},\mathrm{Id}):\mathbb{R}^{2 d}\to\mathbb{R}^{2d}\) has been introduced in (3.2) as the flow of the extended control system (3.1). In particular, we can rewrite the terminal cost \(J_{\ell}\) defined in (3.8) as \[J_{\ell}(\theta)=\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)d\mu_{0}(x,y).\] In order to compute the gradient \(\nabla_{\theta}J_{\ell}\), we preliminarily need to focus on the differentiability with respect to \(\theta\) of the mapping \(\theta\mapsto\ell(\Phi^{\theta}_{(0,T)}(x),y)\), when \((x,y)\) is fixed. Indeed, given another control \(\vartheta\in L^{2}([0,T];\mathbb{R}^{m})\) and \(\varepsilon>0\), from Proposition A.6 it descends that \[\begin{split}\Phi^{\theta+\varepsilon\vartheta}_{(0,T)}(x)& =\Phi^{\theta}_{(0,T)}(x)+\varepsilon\xi^{\theta}(T)+o_{\theta}( \varepsilon)\\ &=\Phi^{\theta}_{(0,T)}(x)+\varepsilon\int_{0}^{T}\mathcal{R}^{ \theta}_{(s,T)}(x)\nabla_{\theta}\mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x), \theta(s))\vartheta(s)ds+o_{\theta}(\varepsilon)\end{split}\qquad \text{ as }\varepsilon\to 0, \tag{3.13}\] where \(o_{\theta}(\varepsilon)\) is uniform for every \(x\in B_{R}(0)\subset\mathbb{R}^{d}\), and as \(\vartheta\) varies in the unit ball of \(L^{2}\). Owing to Assumption 3, for every \(x,y,v\in B_{R}(0)\) we observe that \[|\ell(x+\varepsilon v+o(\varepsilon),y)-\ell(x,y)-\varepsilon\nabla_{x}\ell (x,y)\cdot v|\leq|\nabla_{x}\ell(x,y)|o(\varepsilon)+\frac{1}{2}L_{R}| \varepsilon v+o(\varepsilon)|^{2}\qquad\text{as }\varepsilon\to 0. \tag{3.14}\] Therefore, combining (3.13) and (3.14), we obtain that \[\ell(\Phi^{\theta+\varepsilon\vartheta}_{(0,T)}(x),y)-\ell(\Phi^{\theta}_{(0,T)}(x),y)=\varepsilon\int_{0}^{T}\big{(}\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}( x),y)\cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s, \Phi^{\theta}_{(0,s)}(x),\theta(s))\big{)}\cdot\vartheta(s)ds+o_{\theta}( \varepsilon).\] Since the previous expression is uniform for \(x,y\in B_{R}(0)\), then if we integrate both sides of the last identity with respect to \(\mu_{0}\), we have that \[J_{\ell}(\theta+\varepsilon\vartheta)-J_{\ell}(\theta)=\varepsilon\int_{ \mathbb{R}^{2d}}\int_{0}^{T}\big{(}\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y) \cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s,\Phi^{ \theta}_{(0,s)}(x),\theta(s))\big{)}\cdot\vartheta(s)\,ds\,d\mu_{0}(x,y)+o_{ \theta}(\varepsilon). \tag{3.15}\] This proves the Frechet differentiability of the functional \(J_{\ell}\) at the point \(\theta\). 
We observe that, from Proposition 2.1, Proposition A.7 and Assumption 2, it follows that the function \(s\mapsto\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)\cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x),\theta(s))\) is uniformly bounded in \(L^{2}\), as \(x,y\) vary in \(B_{R}(0)\subset\mathbb{R}^{d}\). Then, using the Fubini Theorem, the first term of the expansion (3.15) can be rewritten as \[\int_{0}^{T}\left(\int_{\mathbb{R}^{2d}}\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)\cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x),\theta(s))\,d\mu_{0}(x,y)\right)\cdot\vartheta(s)\,ds.\] Hence, from the previous asymptotic expansion and from the Riesz Representation Theorem, we deduce (3.9). We now prove the most important result of this subsection, concerning the Lipschitz regularity of the gradient \(\nabla_{\theta}J_{\ell}\). **Proposition 3.5**.: _Under the same assumptions and notations as in Lemma 3.4, we have that the gradient \(\nabla_{\theta}J_{\ell}:L^{2}([0,T],\mathbb{R}^{m})\to L^{2}([0,T],\mathbb{R}^{m})\) is Lipschitz-continuous on every bounded set of \(L^{2}\). More precisely, given \(\theta_{1},\theta_{2}\in L^{2}([0,T];\mathbb{R}^{m})\), there exists a constant \(\mathcal{L}(T,R,\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}})>0\) such that_ \[\big{\|}\nabla_{\theta}J_{\ell}(\theta_{1})-\nabla_{\theta}J_{\ell}(\theta_{2})\big{\|}_{L^{2}}\leq\mathcal{L}(T,R,\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}})\,\big{\|}\theta_{1}-\theta_{2}\big{\|}_{L^{2}}.\] Proof.: Let us consider two admissible controls \(\theta_{1},\theta_{2}\in L^{2}([0,T],\mathbb{R}^{m})\) such that \(\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}}\leq C\). In order to simplify the notations, given \(x\in B_{R}(0)\subset\mathbb{R}^{d}\), we define the curves \(x_{1}:[0,T]\to\mathbb{R}^{d}\) and \(x_{2}:[0,T]\to\mathbb{R}^{d}\) as \[x_{1}(t):=\Phi^{\theta_{1}}_{(0,t)}(x),\quad x_{2}(t):=\Phi^{\theta_{2}}_{(0,t)}(x)\] for every \(t\in[0,T]\), where the flows \(\Phi^{\theta_{1}},\Phi^{\theta_{2}}\) were introduced in (2.2). We recall that, by virtue of Proposition 2.1, \(x_{1}(t),x_{2}(t)\in B_{R}(0)\) for every \(t\in[0,T]\). Then, for every \(y\in B_{R}(0)\), we observe that \[\begin{split}\Big{|}\nabla_{\theta}&\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\mathcal{R}^{\theta_{2}}_{(t,T)}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &\leq\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &+\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}-\mathcal{R}^{\theta_{2}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &+\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{2}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\end{split} \tag{3.16}\] for a.e. \(t\in[0,T]\). We bound separately the three terms at the right-hand side of (3.16).
As regards the first addend, from Assumption 2-\((v)\), Assumption 3, Proposition A.7 and Lemma A.4, we deduce that there exists a positive constant \(C_{1}>0\) such that \[\begin{split}\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}\Big{|}&\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &\leq C_{1}\left(1+|\theta_{1}(t)|\right)\|\theta_{1}-\theta_{2}\|_{L^{2}}\end{split} \tag{3.17}\] for a.e. \(t\in[0,T]\). Similarly, using again Assumption 2-\((v)\), Assumption 3, and Proposition A.7 on the second addend at the right-hand side of (3.16), we obtain that there exists \(C_{2}>0\) such that \[\left|\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\right|\Big{|}\mathcal{R}_{(t,T)}^{\theta_{1}}(x)^{\top}-\mathcal{R}_{(t,T)}^{\theta_{2}}(x)^{\top}\Big{|}\left|\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\right|\leq C_{2}\left(1+|\theta_{1}(t)|\right)\|\theta_{1}-\theta_{2}\|_{L^{2}} \tag{3.18}\] for a.e. \(t\in[0,T]\). Moreover, there exists \(C_{3}>0\) such that the third term can be bounded as follows: \[\left|\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\right|\left|\mathcal{R}_{(t,T)}^{\theta_{2}}(x)^{\top}\right|\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\leq C_{3}\Big{(}(1+|\theta_{1}(t)|)\|\theta_{1}-\theta_{2}\|_{L^{2}}+|\theta_{1}(t)-\theta_{2}(t)|\Big{)} \tag{3.19}\] for a.e. \(t\in[0,T]\), where we used Assumption 2-\((v)-(vii)\), Proposition A.7 and Lemma A.4. Therefore, combining (3.16)-(3.19), we deduce that \[\left|\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\mathcal{R}_{(t,T)}^{\theta_{1}}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\mathcal{R}_{(t,T)}^{\theta_{2}}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\right|\leq\bar{C}\Big{[}(1+|\theta_{1}(t)|)\|\theta_{1}-\theta_{2}\|_{L^{2}}+|\theta_{1}(t)-\theta_{2}(t)|\Big{]} \tag{3.20}\] for a.e. \(t\in[0,T]\) and for a suitable constant \(\bar{C}>0\). We observe that the last inequality holds for every \(x,y\in B_{R}(0)\). Therefore, if we integrate both sides of (3.20) with respect to the probability measure \(\mu_{0}\), recalling the expression of the gradient of \(J_{\ell}\) reported in (3.9), we have that \[\left|\nabla_{\theta}J_{\ell}(\theta_{1})[t]-\nabla_{\theta}J_{\ell}(\theta_{2})[t]\right|\leq\bar{C}\Big{[}(1+|\theta_{1}(t)|)\|\theta_{1}-\theta_{2}\|_{L^{2}}+|\theta_{1}(t)-\theta_{2}(t)|\Big{]} \tag{3.21}\] for a.e. \(t\in[0,T]\), and this concludes the proof. From the previous result we can deduce that the terminal cost \(J_{\ell}:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) is locally semi-convex. **Corollary 3.6** (Local semiconvexity of the cost functional).: _Under the same assumptions and notations as in Lemma 3.4, let us consider a bounded subset \(\Gamma\subset L^{2}([0,T];\mathbb{R}^{m})\). Then, \(\nabla_{\theta}J:L^{2}([0,T])\to L^{2}([0,T])\) is Lipschitz continuous on \(\Gamma\). Moreover, there exists a constant \(\mathcal{L}(T,R,\Gamma)>0\) such that the cost functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) defined in (3.6) satisfies the following semiconvexity estimate:_ \[J\big{(}(1-\zeta)\theta_{1}+\zeta\theta_{2}\big{)}\leq(1-\zeta)J(\theta_{1})+\zeta J(\theta_{2})-(2\lambda-\mathcal{L}(T,R,\Gamma))\tfrac{\zeta(1-\zeta)}{2}\|\theta_{1}-\theta_{2}\|_{L^{2}}^{2} \tag{3.22}\] _for every \(\theta_{1},\theta_{2}\in\Gamma\) and for every \(\zeta\in[0,1]\).
In particular, if \(\lambda>\frac{1}{2}\mathcal{L}(T,R,\Gamma)\), the cost functional \(J\) is strictly convex over \(\Gamma\)._ Proof.: We recall that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), where \(J_{\ell}\) has been introduced in (3.8). Owing to Proposition 3.5, it follows that \(\nabla_{\theta}J_{\ell}\) is Lipschitz continuous on \(\Gamma\) with constant \(\mathcal{L}(T,R,\Gamma)\). This implies that \(J\) is Lipschitz continuous as well on \(\Gamma\). Moreover, it descends that \[J_{\ell}\big{(}(1-\zeta)\theta_{1}+\zeta\theta_{2}\big{)}\leq(1-\zeta)J_{\ell} (\theta_{1})+\zeta J_{\ell}(\theta_{2})+\mathcal{L}(T,R,\Gamma)\tfrac{\zeta(1- \zeta)}{2}\|\theta_{1}-\theta_{2}\|_{L^{2}}^{2}\] for every \(\theta_{1},\theta_{2}\in\Gamma\) and for every \(\zeta\in[0,1]\). On the other hand, recalling that \[\|(1-\zeta)\theta_{1}+\zeta\theta_{2}\|_{L^{2}}^{2}=(1-\zeta)\|\theta_{1}\|_{L ^{2}}^{2}+\zeta\|\theta_{2}\|_{L^{2}}^{2}-\zeta(1-\zeta)\|\theta_{1}-\theta_{2} \|_{L^{2}}^{2}\] for every \(\theta_{1},\theta_{2}\in L^{2}\), we immediately deduce (3.22). **Remark 3.1**.: _When the parameter \(\lambda>0\) that tunes the \(L^{2}\)-regularization is large enough, we can show that the functional \(J\) defined by (3.6) admits a unique global minimizer. Indeed, since the control identically \(0\) is an admissible competitor, we have that_ \[\inf_{\theta\in L^{2}}J(\theta)\leq J(0)=J_{\ell}(0),\] _where we observe that the right-hand side is not affected by the value of \(\lambda\). Hence, recalling that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), we have that the sublevel set \(\{\theta:J(\theta)\leq J_{\ell}(0)\}\) is included in the ball \(B_{\lambda}:=\{\theta:\|\theta\|_{L^{2}}^{2}\leq\frac{1}{\lambda}J_{\ell}(0)\}\). Since these balls are decreasing as \(\lambda\) increases, owing to Corollary 3.6, we deduce that there exists a parameter \(\bar{\lambda}>0\) such that the cost functional \(J\) is strongly convex when restricted to \(B_{\bar{\lambda}}\). Then, Lemma 3.4 guarantees that the functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) introduced in (3.6) is continuous with respect to the strong topology of \(L^{2}\), while the convexity implies that it is weakly lower semi-continuous as well. Being the ball \(B_{\bar{\lambda}}\) weakly compact, we deduce that the restriction to \(B_{\bar{\lambda}}\) of the functional \(J\) admits a unique minimizer \(\theta^{*}\). However, since \(B_{\bar{\lambda}}\) includes the sublevel set \(\{\theta:J(\theta)\leq J_{\ell}(0)\}\), it follows that \(\theta^{*}\) is actually the unique global minimizer. It is interesting to observe that, even though \(\lambda\) is chosen large enough to ensure existence (and uniqueness) of the global minimizer, it is not possible to conclude that the functional \(J\) is globally convex. This is essentially due to the fact that Corollary 3.6 holds only on bounded subsets of \(L^{2}\)._ Taking advantage of the representation of the gradient of the terminal cost \(J_{\ell}\) provided by (3.9), we can formulate the necessary optimality conditions for the cost \(J\) introduced in (3.6). In order to do that, we introduce the function \(p:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) as follows: \[p_{t}(x,y):=\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)\cdot\mathcal{R}^{\theta }_{(t,T)}(x), \tag{3.23}\] where \(\mathcal{R}^{\theta}_{(t,T)}(x)\) is defined according to (3.10). We observe that \(p\) (as well as \(\nabla_{x}\ell\)) should be understood as a row vector. 
Moreover, using (3.12), we deduce that, for every \(x,y\in\mathbb{R}^{d}\), the \(t\mapsto p_{t}(x,y)\) is solving the following backward Cauchy problem: \[\frac{\partial}{\partial t}p_{t}(x,y)=-p_{t}(x,y)\cdot\nabla_{x}\mathcal{F}(t, \Phi^{\theta}_{(0,t)}(x),\theta(t)),\qquad p_{T}(x,y)=\nabla_{x}\ell(\Phi^{ \theta}_{(0,T)}(x),y). \tag{3.24}\] Hence, we can equivalently rewrite \(\nabla_{\theta}J_{\ell}\) using \(p\): \[\nabla_{\theta}J_{\ell}(\theta)[t]=\int_{\mathbb{R}^{2d}}\nabla_{\theta} \mathcal{F}^{\top}\big{(}t,\Phi^{\theta}_{(0,t)}(x),\theta(t)\big{)}\cdot p_ {t}^{\top}(x,y)\,d\mu_{0}(x,y) \tag{3.25}\] for almost every \(t\in[0,T]\). Therefore, recalling that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), we deduce that the stationary condition \(\nabla_{\theta}J(\theta^{*})=0\) can be rephrased as \[\begin{cases}\partial_{t}\mu_{t}^{*}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta^{*}(t))\mu_{t}^{*}(x,y)\big{)}=0,&\mu_{t}^{*}|_{t=0}(x,y)=\mu_{0}(x,y ),\\ \partial_{t}p_{t}^{*}(x,y)=-p_{t}^{*}(x,y)\cdot\nabla_{x}\mathcal{F}(t,\Phi^{ \theta^{*}}_{(0,t)}(x),\theta^{*}(t)),&p_{t}^{*}|_{t=T}(x,y)=\nabla_{x}\ell( \Phi^{\theta^{*}}_{(0,T)}(x),y),\\ \theta^{*}(t)=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{\theta} \mathcal{F}^{\top}\big{(}t,\Phi^{\theta^{*}}_{(0,t)}(x),\theta^{*}(t)\big{)} \cdot p_{t}^{*\top}(x,y)\,d\mu_{0}(x,y).\end{cases} \tag{3.26}\] **Remark 3.2**.: _The computation of \(p\) through the backward integration of (3.24) can be interpreted as the control theoretic equivalent of the "back-propagation of the gradients". We observe that, in order to check whether (3.26) is satisfied, it is sufficient to evaluate \(p^{*}\) only on \(\operatorname{supp}(\mu_{0})\). Moreover, the evaluation of \(p^{*}\) on different points \((x_{1},y_{1}),(x_{2},y_{2})\in\operatorname{supp}(\mu_{0})\) involves the resolution of two uncoupled backward ODEs. This means that, when dealing with a measure \(\mu_{0}\) that charges only finitely many points, we can solve the equation (3.24) in parallel for every point in \(\operatorname{supp}(\mu_{0})\)._ In virtue of Proposition 3.5, we can study the gradient flow induced by the cost functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) on its domain. More precisely, given an admissible control \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\), we consider the gradient flow equation: \[\begin{cases}\dot{\theta}(\omega)=-\nabla_{\theta}J(\theta(\omega))\quad \text{for $\omega\geq 0$},\\ \theta(0)=\theta_{0}.\end{cases} \tag{3.27}\] In the next result we show that the gradient flow equation (3.27) is well-posed and that the solution is defined for every \(\omega\geq 0\). In the particular case of linear-control systems, the properties of the gradient flow trajectories has been investigated in [42]. **Lemma 3.7**.: _Let \(T,R>0\) and \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) be a probability measure such that \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\), and let us consider \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) and \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) that satisfy, respectively, Assumptions 1-2 and Assumption 3. 
Then, for every \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\), the gradient flow equation (3.27) admits a unique solution \(\omega\mapsto\theta(\omega)\) of class \(C^{1}\) that is defined for every \(\omega\in[0,+\infty)\)._ Proof.: Let us consider \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\), and let us introduce the sub-level set \[\Gamma:=\{\theta\in L^{2}([0,T],\mathbb{R}^{m}):J(\theta)\leq J(\theta_{0})\},\] where \(J\) is the functional introduced in (3.6) defining the mean-field optimal control problem. Using the fact that the end-point cost \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}_{+}\) is non-negative, we deduce that \(\Gamma\subset\{\theta\in L^{2}([0,T],\mathbb{R}^{m}):\|\theta\|_{L^{2}}^{2}\leq\frac{1}{\lambda}J(\theta_{0})\}\). Hence, from Proposition 3.5 it follows that the gradient field \(\nabla_{\theta}J\) is Lipschitz (and bounded) on \(\Gamma\). Therefore, using a classical result on ODEs in Banach spaces (see, e.g., [31, Theorem 5.1.1]), it follows that the initial value problem (3.27) admits a unique small-time solution \(\omega\mapsto\theta(\omega)\) of class \(C^{1}\) defined for \(\omega\in[-\delta,\delta]\), with \(\delta>0\). Moreover, we observe that \[\frac{d}{d\omega}J(\theta(\omega))=\langle\nabla_{\theta}J(\theta(\omega)),\dot{\theta}(\omega)\rangle=-\|\nabla_{\theta}J(\theta(\omega))\|_{L^{2}}^{2}\leq 0,\] and this implies that \(\theta(\omega)\in\Gamma\) for every \(\omega\in[0,\delta]\). Hence, it is possible to recursively extend the solution to every interval of the form \([0,M]\), with \(M>0\).

We observe that, under the current working assumptions, we cannot provide any convergence result for the gradient flow trajectories. This is not surprising since, when the regularization parameter \(\lambda>0\) is small, it is not even possible to prove that the functional \(J\) admits minimizers. Indeed, the argument presented in Remark 3.1 requires the regularization parameter \(\lambda\) to be sufficiently large. We conclude the discussion with an observation on a possible discretization of (3.27). If we fix a sufficiently small parameter \(\tau>0\), given an initial guess \(\theta_{0}\), we can consider the sequence of controls \((\theta_{k}^{\tau})_{k\geq 0}\subset L^{2}([0,T],\mathbb{R}^{m})\) defined through the Minimizing Movement Scheme: \[\theta_{0}^{\tau}=\theta_{0},\qquad\theta_{k+1}^{\tau}\in\arg\min_{\theta}\left[J(\theta)+\frac{1}{2\tau}\|\theta-\theta_{k}^{\tau}\|_{L^{2}}^{2}\right]\quad\text{for every }k\geq 0. \tag{3.28}\] **Remark 3.3**.: _We observe that the minimization problems in (3.28) are well-posed as soon as the functionals \(\theta\mapsto J_{\theta_{k}}^{\tau}(\theta):=J(\theta)+\frac{1}{2\tau}\|\theta-\theta_{k}^{\tau}\|_{L^{2}}^{2}\) are strictly convex on the bounded sublevel set \(K_{\theta_{0}}:=\{\theta:J(\theta)\leq J(\theta_{0}^{\tau})\}\), for every \(k\geq 0\). Hence, the parameter \(\tau>0\) can be calibrated by means of the estimates provided by Corollary 3.6, considering the bounded set \(K_{\theta_{0}}\). Then, using an inductive argument, it follows that, for every \(k\geq 0\), the functional \(J_{\theta_{k}}^{\tau}:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) admits a unique global minimizer \(\theta_{k+1}^{\tau}\).
Also for \(J_{\theta_{k}}^{\tau}\) we can derive the necessary conditions for optimality satisfied by \(\theta_{k+1}^{\tau}\), which are analogous to the ones formulated in (3.26), and which descend as well from the identity \(\nabla_{\theta}J_{\theta_{k}}^{\tau}(\theta_{k+1}^{\tau})=0\):_ \[\begin{cases}\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta_{k+1}^{\tau}(t))\mu_{t}(x,y)\big{)}=0,&\mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y),\\ \partial_{t}p_{t}(x,y)=-p_{t}(x,y)\cdot\nabla_{x}\mathcal{F}(t,\Phi_{(0,t)}^{\theta_{k+1}^{\tau}}(x),\theta_{k+1}^{\tau}(t)),&p_{t}|_{t=T}(x,y)=\nabla_{x}\ell(\Phi_{(0,T)}^{\theta_{k+1}^{\tau}}(x),y),\\ \theta_{k+1}^{\tau}(t)=\frac{1}{1+2\lambda\tau}\left(\theta_{k}^{\tau}(t)-\tau\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta_{k+1}^{\tau}}(x),\theta_{k+1}^{\tau}(t)\big{)}\cdot p_{t}^{\top}(x,y)\,d\mu_{0}(x,y)\right).\end{cases} \tag{3.29}\] _Finally, we observe that the mapping \(\Lambda_{\theta_{k}^{\tau}}^{\tau}:L^{2}([0,T],\mathbb{R}^{m})\to L^{2}([0,T],\mathbb{R}^{m})\) defined for a.e. \(t\in[0,T]\) as_ \[\Lambda_{\theta_{k}^{\tau}}^{\tau}(\theta)[t]:=\frac{1}{1+2\lambda\tau}\left(\theta_{k}^{\tau}(t)-\tau\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta}(x),\theta(t)\big{)}\cdot p_{t}^{\top}(x,y)\,d\mu_{0}(x,y)\right) \tag{3.30}\] _is a contraction on \(K_{\theta_{0}}\) as soon as_ \[\frac{\tau}{1+2\lambda\tau}\mathrm{Lip}\left(\nabla_{\theta}J_{\ell}|_{K_{\theta_{0}}}\right)<1.\] For every \(\tau>0\) such that the sequence \((\theta_{k}^{\tau})_{k\geq 0}\) is defined, we denote by \(\tilde{\theta}^{\tau}:[0,+\infty)\to L^{2}([0,T],\mathbb{R}^{m})\) the piecewise affine interpolation obtained as \[\tilde{\theta}^{\tau}(\omega)=\theta_{k}^{\tau}+\frac{\theta_{k+1}^{\tau}-\theta_{k}^{\tau}}{\tau}(\omega-k\tau)\quad\text{for }\omega\in[k\tau,(k+1)\tau]. \tag{3.31}\] We finally report a classical result concerning the convergence of the piecewise affine interpolation \(\tilde{\theta}^{\tau}\) to the gradient flow trajectory solving (3.27). **Proposition 3.8**.: _Under the same assumptions and notations as in Lemma 3.7, let us consider an initial point \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\) and a sequence \((\tau_{j})_{j\in\mathbb{N}}\) such that \(\tau_{j}\to 0\) as \(j\to\infty\), and let \((\tilde{\theta}^{\tau_{j}})_{j\in\mathbb{N}}\) be the sequence of piecewise affine curves defined by (3.31). Then, for every \(\Omega>0\), there exists a subsequence \((\tilde{\theta}^{\tau_{j_{k}}})_{k\in\mathbb{N}}\) converging uniformly on the interval \([0,\Omega]\) to the solution of (3.27) starting from \(\theta_{0}\)._ Proof.: The proof follows directly from [41, Proposition 2.3].

### Finite particles approximation

In this section, we study the stability of the mean-field optimal control problem (3.6) with respect to finite-sample distributions.
More precisely, assume that we are given samples \(\{(X_{0}^{i},Y_{0}^{i})\}_{i=1}^{N}\) of size \(N\geq 1\), independently and identically distributed according to \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\), and consider the empirical loss minimization problem \[\inf_{\theta\in L^{2}([0,T];\mathbb{R}^{m})}J^{N}(\theta):=\begin{cases}\frac{1}{N}\sum_{i=1}^{N}\ell\big{(}X^{i}(T),Y^{i}(T)\big{)}+\lambda\int_{0}^{T}|\theta(t)|^{2}\,dt\\ \text{s.t.}\begin{cases}\dot{X}^{i}(t)=\mathcal{F}(t,X^{i}(t),\theta(t)),&\dot{Y}^{i}(t)=0,\\ \big{(}X^{i}(t),Y^{i}(t)\big{)}\big{|}_{t=0}=(X_{0}^{i},Y_{0}^{i}),&i\in\{1,\ldots,N\}.\end{cases}\end{cases} \tag{3.32}\] By introducing the empirical measure \(\mu_{0}^{N}\in\mathcal{P}_{c}^{N}(\mathbb{R}^{2d})\), defined as \[\mu_{0}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{(X_{0}^{i},Y_{0}^{i})},\] the cost function in (3.32) can be rewritten as \[J^{N}(\theta)=\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)\,d\mu_{0}^{N}(x,y)+\lambda\|\theta\|_{L^{2}}^{2} \tag{3.33}\] for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), and the empirical loss minimization problem in (3.32) can be recast as a mean-field optimal control problem with initial datum \(\mu_{0}^{N}\). In this section we are interested in studying the asymptotic behavior of the functional \(J^{N}\) as \(N\) tends to infinity. More precisely, we consider a sequence of probability measures \((\mu_{0}^{N})_{N\geq 1}\) such that \(\mu_{0}^{N}\) charges uniformly \(N\) points, and such that \[W_{1}(\mu_{0}^{N},\mu_{0})\ \underset{N\to+\infty}{\longrightarrow}\ 0.\] Then, in Proposition 3.9 we study the uniform convergence of \(J^{N}\) and of \(\nabla_{\theta}J^{N}\) to \(J\) and \(\nabla_{\theta}J\), respectively, where \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) is the functional defined in (3.6) and corresponding to the limiting measure \(\mu_{0}\). Moreover, in Theorem 3.10, assuming the existence of a region where the functionals \(J^{N}\) are uniformly strongly convex, we provide an estimate of the so-called _generalization error_ in terms of the distance \(W_{1}(\mu_{0}^{N},\mu_{0})\). **Proposition 3.9** (Uniform convergence of \(J^{N}\) and \(\nabla_{\theta}J^{N}\)).: _Let us consider a probability measure \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) and a sequence \((\mu_{0}^{N})_{N\geq 1}\) such that \(\mu_{0}^{N}\in\mathcal{P}_{c}^{N}(\mathbb{R}^{2d})\) for every \(N\geq 1\). Let us further assume that \(W_{1}(\mu_{0}^{N},\mu_{0})\to 0\) as \(N\to\infty\), and that there exists \(R>0\) such that \(\operatorname{supp}(\mu_{0}),\operatorname{supp}(\mu_{0}^{N})\subset B_{R}(0)\) for every \(N\geq 1\). Given \(T>0\), let \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) and \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) satisfy, respectively, Assumptions 1-2 and Assumption 3, and let \(J,J^{N}:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) be the cost functionals defined in (3.6) and (3.32), respectively.
Then, for every bounded subset \(\Gamma\subset L^{2}([0,T],\mathbb{R}^{m})\), we have that_ \[\lim_{N\to\infty}\ \sup_{\theta\in\Gamma}|J^{N}(\theta)-J(\theta)|=0 \tag{3.34}\] _and_ \[\lim_{N\to\infty}\ \sup_{\theta\in\Gamma}\|\nabla_{\theta}J^{N}(\theta)-\nabla_{\theta}J(\theta)\|_{L^{2}}=0, \tag{3.35}\] _where \(J\) was introduced in (3.6), and \(J^{N}\) is defined as in (3.33)._ Proof.: Since we have that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\) and \(J^{N}(\theta)=J_{\ell}^{N}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), it is sufficient to prove (3.34)-(3.35) for \(J_{\ell}\) and \(J_{\ell}^{N}\), where we set \[J_{\ell}^{N}(\theta):=\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)\,d\mu_{0}^{N}(x,y)\] for every \(\theta\in L^{2}\) and for every \(N\geq 1\). We first observe that, for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) such that \(\|\theta\|_{L^{2}}\leq\rho\), from Proposition 3.3 it follows that the corresponding solutions of (3.3) with initial data \(\mu_{0}\) and \(\mu_{0}^{N}\) are supported in \(B_{\bar{R}}(0)\) for every \(t\in[0,T]\), for some \(\bar{R}>0\). Then, denoting by \(t\mapsto\mu_{t}^{N}\) and \(t\mapsto\mu_{t}\) the solutions of the continuity equation (3.3) driven by the control \(\theta\) and with initial datum, respectively, \(\mu_{0}^{N}\) and \(\mu_{0}\), we compute \[\begin{split}|J_{\ell}^{N}(\theta)-J_{\ell}(\theta)|&=\left|\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)\,\big{(}d\mu_{0}^{N}-d\mu_{0}\big{)}(x,y)\right|=\left|\int_{\mathbb{R}^{2d}}\ell(x,y)\,\big{(}d\mu_{T}^{N}-d\mu_{T}\big{)}(x,y)\right|\\ &\leq\bar{L}_{1}\bar{L}_{2}W_{1}(\mu_{0}^{N},\mu_{0}),\end{split} \tag{3.36}\] where we have used (3.2) and Proposition 3.2 in the second identity, and we have indicated with \(\bar{L}_{1}\) the Lipschitz constant of \(\ell\) on \(B_{\bar{R}}(0)\), while \(\bar{L}_{2}\) descends from the continuous dependence of solutions of (3.3) on the initial datum (see Proposition 3.3). We insist on the fact that both \(\bar{L}_{1},\bar{L}_{2}\) depend on \(\rho\), i.e., the upper bound on the \(L^{2}\)-norm of the controls. We now address the uniform convergence of \(\nabla_{\theta}J_{\ell}^{N}\) to \(\nabla_{\theta}J_{\ell}\) on bounded sets of \(L^{2}\). As before, let us consider an admissible control \(\theta\) such that \(\|\theta\|_{L^{2}}\leq\rho\). Hence, using the representation provided in (3.9), for a.e. \(t\in[0,T]\) we have: \[\begin{split}\big{|}\nabla_{\theta}J_{\ell}^{N}(\theta)[t]&-\nabla_{\theta}J_{\ell}(\theta)[t]\big{|}=\\ &\left|\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta}(x),\theta(t)\big{)}\cdot\mathcal{R}_{(t,T)}^{\theta}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi_{(0,T)}^{\theta}(x),y\big{)}\,\big{(}d\mu_{0}^{N}-d\mu_{0}\big{)}(x,y)\right|.\end{split} \tag{3.37}\] In order to prove uniform convergence in \(L^{2}\) norm, we have to show that the integrand is Lipschitz-continuous in \((x,y)\) for a.e. \(t\in[0,T]\), where the Lipschitz constant has to be \(L^{2}\)-integrable as a function of the \(t\) variable.
First of all, combining Assumption 2\(-(v)\) and Lemma A.2, we can prove that there exists constants \(C_{1},\tilde{L}_{3}>0\) (depending on \(\rho\)) such that \[\begin{split}|\nabla_{\theta}\mathcal{F}(t,\Phi^{\theta}_{(0,t)} (x),\theta(t))|\leq& C_{1}(1+|\theta(t)|),\\ |\nabla_{\theta}\mathcal{F}(t,\Phi^{\theta}_{(0,t)}(x_{1}), \theta(t))-\nabla_{\theta}\mathcal{F}(t,\Phi^{\theta}_{(0,t)}(x_{2}),\theta( t))|\leq&\bar{L}_{3}\bar{L}_{2}(1+|\theta(t)|)|x_{1}-x_{2}|\end{split} \tag{3.38}\] for a.e. \(t\in[0,T]\). We recall that the quantity \(\bar{L}_{2}>0\) (that already appeared in (3.36)) represents the Lipschitz constant of the flow \(\Phi_{(0,t)}\) with respect to the initial datum. Moreover, from Proposition A.7, it descends that \[\begin{split}|\mathcal{R}^{\theta}_{(t,T)}(x)|\leq& C_{2},\\ |\mathcal{R}^{\theta}_{(t,T)}(x_{1})-\mathcal{R}^{\theta}_{(t,T)}( x_{2})|\leq&\bar{L}_{4}|x_{1}-x_{2}|\end{split} \tag{3.39}\] for every \(t\in[0,T]\), where the constants \(C_{2},\bar{L}_{4}\) both depend on \(\rho\). Finally, owing to Assumption 3 and Proposition 2.1, we deduce \[\begin{split}|\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)|\leq& C_{3},\\ |\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x_{1}),y_{1})-\nabla_{x} \ell(\Phi^{\theta}_{(0,T)}(x_{2}),y_{2})|\leq&\bar{L}_{5}(\bar{L} _{2}|x_{1}-x_{2}|+|y_{1}-y_{2}|)\end{split} \tag{3.40}\] for every \(x,y\in B_{R}(0)\), where the constants \(C_{3},\bar{L}_{2}\) and the Lipschitz constant \(\bar{L}_{5}\) of \(\nabla_{x}\ell\) depend, once again, on \(\rho\). Combining (3.38), (3.39) and (3.40), we obtain that there exists a constant \(\tilde{L}_{\rho}>0\) such that \[|\nabla_{\theta}J^{N}[t]-\nabla_{\theta}J[t]|\leq\tilde{L}_{\rho}(1+|\theta( t)|)W_{1}(\mu_{0}^{N},\mu_{0}),\] for a.e. \(t\in[0,T]\). Observing that the right-hand side is \(L^{2}\)-integrable in \(t\), the previous inequality yields \[\|\nabla_{\theta}J^{N}-\nabla_{\theta}J\|_{L^{2}}\leq\tilde{L}_{\rho}(1+\rho) W_{1}(\mu_{0}^{N},\mu_{0}),\] and this concludes the proof. In the next result we provide an estimate of the _generalization error_ in terms of the distance \(W_{1}(\mu_{0}^{N},\mu_{0})\). In this case, the important assumption is that there exists a sequence \((\theta^{\star,N})_{N\geq 1}\) of local minimizers of the functionals \((J^{N})_{N\geq 1}\), and that it is contained in a region where \((J^{N})_{N\geq 1}\) are uniformly strongly convex. **Theorem 3.10**.: _Under the same notations and hypotheses as in Proposition 3.9, let us further assume that the functional \(J\) admits a local minimizer \(\theta^{\star}\) and, similarly, that, for every \(N\geq 1\), \(\theta^{\star,N}\) is a local minimizer for \(J^{N}\). Moreover, we require that there exists a radius \(\rho>0\) such that, for every \(N\geq\bar{N}\), \(\theta^{\star,N}\in B_{\rho}(\theta^{\star})\) and the functional \(J^{N}\) is \(\eta-\)strongly convex in \(B_{\rho}(\theta^{\star})\), with \(\eta>0\). Then, there exists a constant \(C>0\) such that, for every \(N\geq\bar{N}\), we have_ \[\bigg{|}\int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}^{\theta^{\star,N}}(x,y)- \int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}^{\theta^{\star}}(x,y)\bigg{|}\leq C \left(W_{1}(\mu_{0}^{N},\mu_{0})+\frac{1}{\sqrt{\eta}}\sqrt{W_{1}(\mu_{0}^{N}, \mu_{0})}\right). 
\tag{3.41}\] Proof.: According to our assumptions, the control \(\theta^{\star,N}\in B_{\rho}(\theta^{\star})\) is a local minimizer for \(J^{N}\), and, being \(J^{N}\) strongly convex on \(B_{\rho}(\theta^{\star})\) for \(N\geq\bar{N}\), we deduce that \(\{\theta^{\star,N}\}=\arg\min_{B_{\rho}(\theta^{\star})}J^{N}\). Furthermore, from the \(\eta\)-strong convexity of \(J^{N}\), it follows that for every \(\theta_{1},\theta_{2}\in B_{\rho}(\theta^{\star})\), it holds \[\langle\nabla_{\theta}J^{N}(\theta_{1})-\nabla_{\theta}J^{N}(\theta_{2}),\, \theta_{1}-\theta_{2}\rangle\geq\eta||\theta_{1}-\theta_{2}||_{L^{2}}^{2}.\] According to Proposition 3.9, we can pass to the limit in the latter and deduce that \[\langle\nabla_{\theta}J(\theta_{1})-\nabla_{\theta}J(\theta_{2}),\,\theta_{1}- \theta_{2}\rangle\geq\eta||\theta_{1}-\theta_{2}||_{L^{2}}^{2}\] for every \(\theta_{1},\theta_{2}\in B_{\rho}(\theta^{\star})\). Hence, \(J\) is \(\eta\)-strongly convex in \(B_{\rho}(\theta^{\star})\) as well, and that \(\{\theta^{\star}\}=\arg\min_{B_{\rho}(\theta^{\star})}J\). Therefore, from the \(\eta\)-strong convexity of \(J^{N}\) and \(J\), we obtain \[J^{N}(\theta^{\star})-J^{N}(\theta^{\star,N})\geq\frac{\eta}{2}|| \theta^{\star,N}-\theta^{\star}||_{L^{2}}^{2}\] \[J(\theta^{\star,N})-J^{N}(\theta^{\star})\geq\frac{\eta}{2}|| \theta^{\star,N}-\theta^{\star}||_{L^{2}}^{2}.\] Summing the last two inequalities, we deduce that \[\eta||\theta^{\star,N}-\theta^{\star}||_{L^{2}}^{2}\leq\left(J^{N}(\theta^{\star})- J(\theta^{\star})\right)+\left(J^{N}(\theta^{\star,N})-J(\theta^{\star,N})\right) \leq 2C_{1}W_{1}(\mu_{0}^{N},\mu_{0}), \tag{3.42}\] where the second inequality follows from the local uniform convergence of Proposition 3.9. We are now in position to derive a bound on the generalization error: \[\begin{split}\left|\int_{\mathbb{R}^{2d}}\ell(x,y)\left(d\mu_{T}^{ \theta^{*,N}}-d\mu_{T}^{\theta^{*}}\right)(x,y)\right|&=\left| \int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta^{*,N}}(x),y)\,d\mu_{0}^{N}(x,y) -\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta^{*}}(x),y)\,d\mu_{0}(x,y) \right|\\ &\leq\int_{\mathbb{R}^{2d}}\left|\ell(\Phi_{(0,T)}^{\theta^{*,N}}( x),y)-\ell(\Phi_{(0,T)}^{\theta^{*}}(x),y)\right|\,d\mu_{0}^{N}(x,y)\\ &\quad+\left|\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta^{*}} (x),y)\big{(}d\mu_{0}^{N}(x,y)-d\mu_{0}(x,y)\big{)}\right|\\ &\leq\bar{L}\sup_{x\in\mathrm{supp}(\mu_{0}^{N})}\left|\Phi_{(0, T)}^{\theta^{*,N}}(x)-\Phi_{(0,T)}^{\theta^{*}}(x)\right|+\bar{L}_{R}W_{1}( \mu_{0}^{N},\mu_{0}),\end{split} \tag{3.43}\] where \(\bar{L}\) and \(\bar{L}_{R}\) are constants coming from Assumption 3 and Proposition 2.1. Then, we combine Proposition 2.1 with the estimate in (3.42), in order to obtain \[\sup_{x\in\mathrm{supp}(\mu_{0}^{N})}\left|\Phi_{(0,T)}^{\theta^{*,N}}(x)- \Phi_{(0,T)}^{\theta^{*}}(x)\right|\leq C_{2}\|\theta^{*,N}-\theta^{*}\|_{L^{2 }}\leq C_{2}\sqrt{\frac{2C_{1}}{\eta}W_{1}(\mu_{0}^{N},\mu_{0})}.\] Finally, from the last inequality and (3.43), we deduce (3.41). **Remark 3.4**.: _Since the functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) defined in (3.6) is continuous (and, in particular, lower semi-continuous) with respect to the strong topology of \(L^{2}\), the locally uniform convergence of the functionals \(J^{N}\) to \(J\) (see Proposition 3.9) implies that \(J^{N}\) is \(\Gamma\)-converging to \(J\) with respect to the strong topology of \(L^{2}\). However, this fact is of little use, since the functionals \(J,J^{N}\) are not strongly coercive. 
On the other hand, if we equip \(L^{2}\) with the weak topology, in general the functional \(J\) is not lower semi-continuous. In our framework, the only circumstance where one can hope for \(\Gamma\)-convergence with respect to the weak topology corresponds to the highly-regularized scenario, i.e., when the parameter \(\lambda>0\) is sufficiently large. Therefore, in the situations of practical interest where \(\lambda\) is small, we cannot rely on this tool; the crucial obstruction is that the dynamics (2.1) is nonlinear with respect to the control variable. Indeed, in the case of the affine-control systems considered in [44], it is possible to establish \(\Gamma\)-convergence results in the \(L^{2}\)-weak topology (see [43] for an application to diffeomorphisms approximation). Finally, we report that in [46], in order to obtain the \(L^{2}\)-strong equi-coercivity of the functionals, the authors included the \(H^{1}\)-seminorm of the controls in the cost._

### Convex regime and previous result

In order to conclude our mean-field analysis, we now compare our results with the ones obtained in the similar framework of [7], where the regularization parameter \(\lambda\) was assumed to be _sufficiently large_, leading to a convex regime in the sublevel sets (see Remark 3.1). We recall below the main results presented in [7].

**Theorem 3.11**.: _Given \(T,R,R_{T}>0\), and an initial datum \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) with \(\mathrm{supp}(\mu_{0})\subset B_{R}(0)\), let us consider a terminal condition \(\psi_{T}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) such that \(\mathrm{supp}(\psi_{T})\subset B_{R_{T}}(0)\) and \(\psi_{T}(x,y)=\ell(x,y)\ \forall x,y\in B_{R}(0)\). Let \(\mathcal{F}\) satisfy [7, Assumptions 1-2] and \(\ell\in C^{2}(\mathbb{R}^{d}\times\mathbb{R}^{d},\mathbb{R})\). Assume further that \(\lambda>0\) is large enough. Then, there exists a triple \((\mu^{*},\theta^{*},\psi^{*})\in\mathcal{C}([0,T],\mathcal{P}_{c}(\mathbb{R}^{2d}))\times Lip([0,T],\mathbb{R}^{m})\times\mathcal{C}^{1}([0,T],\mathcal{C}_{c}^{2}(\mathbb{R}^{2d}))\) solution of_ \[\begin{cases}\partial_{t}\mu_{t}^{*}(x,y)+\nabla_{x}\cdot(\mathcal{F}(t,x,\theta^{*}(t))\mu_{t}^{*}(x,y))=0,&\mu_{t}^{*}|_{t=0}(x,y)=\mu_{0}(x,y),\\ \partial_{t}\psi_{t}^{*}(x,y)+\nabla_{x}\psi_{t}^{*}(x,y)\cdot\mathcal{F}(t,x,\theta^{*}(t))=0,&\psi_{t}^{*}|_{t=T}(x,y)=\ell(x,y),\\ \theta^{*^{\top}}(t)=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{x}\psi_{t}^{*}(x,y)\cdot\nabla_{\theta}\mathcal{F}(t,x,\theta^{*}(t))\,d\mu_{t}^{*}(x,y),\end{cases} \tag{3.44}\] _where \(\psi^{*}\in\mathcal{C}^{1}([0,T],\mathcal{C}_{c}^{2}(\mathbb{R}^{2d}))\) is in characteristic form. Moreover, the control solution \(\theta^{*}\) is unique in a ball \(\Gamma_{C}\subset L^{2}([0,T],\mathbb{R}^{m})\) and continuously dependent on the initial datum \(\mu_{0}\)._

We observe that requiring \(\lambda>0\) to be large enough is crucial to obtain local convexity of the cost functional and, consequently, existence and uniqueness of the solution. However, in the present paper we have made no assumption on the magnitude of \(\lambda\); hence, as already noticed in Remark 3.1, we might end up in a non-convex regime. Nevertheless, in Proposition 3.12 we show that, in the case of \(\lambda\) sufficiently large, the previous approach and the current one are "equivalent".

**Proposition 3.12**.: _Under the same hypotheses as in Theorem 3.11, let \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) be the functional defined in (3.6).
Then, \(\theta^{*}\) satisfies (3.44) if and only if it is a critical point for \(J\)._ Proof.: According to Lemma 3.4, the gradient of the functional \(J\) at \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) is defined for a.e. \(t\in[0,T]\) as \[\nabla_{\theta}J(\theta)[t]=\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi^{\theta}_{(0,t)}(x),\theta(t)\big{)}\cdot\mathcal{R}^{\theta}_{(t,T)}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi^{\theta}_{(0,T)}(x),y\big{)}\,d\mu_{0}(x,y)+2\lambda\theta(t).\] Hence, if we set the previous expression equal to zero, we obtain the characterization of the critical point \[\theta(t)=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi^{\theta}_{(0,t)}(x),\theta(t)\big{)}\cdot\mathcal{R}^{\theta}_{(t,T)}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi^{\theta}_{(0,T)}(x),y\big{)}\,d\mu_{0}(x,y) \tag{3.45}\] for a.e. \(t\in[0,T]\). On the other hand, according to Theorem 3.11, the optimal \(\theta\) satisfies for a.e. \(t\in[0,T]\) the following \[\theta(t) =-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\big{(}\nabla_{x}\psi_{t}(x,y)\cdot\nabla_{\theta}\mathcal{F}(t,x,\theta(t))\big{)}^{\top}\,d\mu_{t}(x,y) \tag{3.46}\] \[=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}(t,\Phi^{\theta}_{(0,t)}(x),\theta(t))\cdot\nabla_{x}\psi_{t}^{\top}(\Phi^{\theta}_{(0,t)}(x),y)\,d\mu_{0}(x,y).\] Hence, to conclude that \(\nabla_{\theta}J=0\) is equivalent to the condition stated in Theorem 3.11, we are left to show that \[\mathcal{R}^{\theta}_{(t,T)}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi^{\theta}_{(0,T)}(x),y\big{)}=\nabla_{x}\psi_{t}^{\top}(\Phi^{\theta}_{(0,t)}(x),y), \tag{3.47}\] where the operator \(\mathcal{R}^{\theta}_{(t,T)}(x)\) is defined as the solution of (3.10). First of all, we recall that \((t,x,y)\mapsto\psi(t,\Phi^{\theta}_{(0,t)}(x),y)\) is defined as the characteristic solution of the second equation in (3.44) and, as such, it satisfies \[\psi_{t}(x,y)=\ell(\Phi^{\theta}_{(t,T)}(x),y)\] for every \(t\in[0,T]\) and for every \(x,y\in B_{R_{T}}(0)\). By taking the gradient with respect to \(x\), we obtain that \[\nabla_{x}\psi_{t}(x,y)=\nabla_{x}\ell(\Phi^{\theta}_{(t,T)}(x),y)\cdot\nabla_{x}\Phi^{\theta}_{(t,T)}\big{|}_{x}\] for all \(x,y\in B_{R_{T}}(0)\). Hence, using (3.11), we deduce that \[\nabla_{x}\psi_{t}(\Phi^{\theta}_{(0,t)}(x),y)=\nabla_{x}\ell(\Phi^{\theta}_{(t,T)}\circ\Phi^{\theta}_{(0,t)}(x),y)\cdot\nabla_{x}\Phi^{\theta}_{(t,T)}\big{|}_{\Phi^{\theta}_{(0,t)}(x)}=\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)\cdot\mathcal{R}^{\theta}_{(t,T)}(x),\] which proves (3.47).

## 4 Algorithm

In this section, we present our training procedure, which is derived from the necessary optimality conditions related to the minimizing movement scheme (see (3.29)). Since the mean-field optimal control problem as presented in (3.6) is numerically intractable (especially in high dimension), in practice we always consider the functional corresponding to the finite-particle approximation (see (3.32)). For its resolution, we employ an algorithm belonging to the family of shooting methods, which consists of a forward evolution of the trajectories, a backward evolution of the adjoint variables, and an update of the controls. Variants of this method have already been employed in different works, e.g.
[33, 26, 6, 7, 14], under the name of _method of successive approximations_, and they have proven to be an alternative way of training NeurODEs for a range of tasks, including high-dimensional problems. In our case, we start with a random guess for the control parameter \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\). Subsequently, we solve the necessary optimality conditions specified in equation (3.29) for a suitable \(\tau>0\) to obtain an updated control parameter \(\theta_{1}\). More precisely, since the last identity in (3.29) has the form \(\theta_{1}=\Lambda^{\tau}_{\theta_{0}}(\theta_{1})\), the computation of \(\theta_{1}\) is performed via fixed-point iterations of the mapping \(\Lambda^{\tau}_{\theta_{0}}\), which is defined as in (3.30). In this regard, we recall that \(\Lambda^{\tau}_{\theta_{0}}\) is a contraction if \(\tau\) is small enough. The scheme that we implemented is presented in Algorithm 1.

**Remark 4.1**.: _It is interesting to observe that, in the highly-regularized regime considered in [7], the authors managed to obtain a contractive map directly from the necessary conditions for optimality, and they did not need to consider the minimizing movement scheme. This is rather natural since, when the parameter \(\lambda>0\) that tunes the \(L^{2}\)-penalization is large enough, the functional associated with the optimal control problem is strongly convex in the sublevel set corresponding to the control \(\theta\equiv 0\), as discussed in Remark 3.1. However, as reported in [7], determining the appropriate value for \(\lambda\) in each application can be challenging. On the other hand, from the practitioners' perspective, dealing with high regularization is not always desirable, since the machine learning task that the system should learn is encoded in the final-time cost. The authors highlighted the complexity involved in selecting a regularization parameter that is large enough to achieve contractivity, while ensuring that the resulting controls are not so small (due to the high regularization) as to be of little use._

These considerations motivated us to consider a scenario where the regularization parameter does not need to be set sufficiently large. From a numerical perspective, the parameter \(\tau\) in Equation (3.29) (coming from the minimizing movement scheme) plays the role of the _learning rate_, and it provides the missing amount of convexity, addressing the stability issues related to the resolution of optimal control problems in a non-convex regime. These kinds of instabilities were already known in the Soviet literature on numerical optimal control (see the review paper [14]), and various solutions have been proposed to address them. For example, in [40] the authors proposed an iterative method based on the Maximum Principle and on an augmented Hamiltonian, with an approach that is somewhat reminiscent of minimizing movements. More recently, in the framework of NeurODEs, another stabilization strategy was proposed in [33]; it differs from ours since it enforces similarity between the evolution of state and co-state variables after the control update. Implicitly, the approach of [33] leads to a penalization of significant changes in the controls. On the other hand, in our approach this penalization is more explicit, and it is enforced via the memory term of the minimizing movement scheme.
To the best of our knowledge, this is the first instance where a regularization based on the minimizing movement scheme is employed for training NeurODEs.

**Remark 4.2**.: _Although we formulate and theoretically analyze our problem within the mean-field framework, it is not advantageous to numerically solve the forward equation as a partial differential equation. In [7], various numerical methods for solving PDEs were employed and compared. However, these methods encounter limitations when applied to high-dimensional data, which is often the case in Machine Learning scenarios. Therefore, in this study, we employ a particle method to solve both the forward partial differential equation and the backward dynamics. This particle-based approach involves reformulating the PDE as a system of ordinary differential equations in which particles represent mathematical collocation points that discretize the continuous fields. By employing this particle method, we address the challenges associated with high-dimensional data, enabling efficient numerical solutions for the forward and backward dynamics._

To conclude this section, we briefly present the forward and the backward systems that are solved during the execution of the method. For the sake of simplicity, we will focus on the case of an encoder. The objective is to minimize the following function: \[J(\theta)=\frac{1}{N}\sum_{i=1}^{N}\ell(X_{\mathcal{A}_{r}}^{i}(T),Y^{i}(0))+\frac{\lambda}{2}\left\|\theta\right\|_{2}^{2}, \tag{4.1}\] where \(\mathcal{A}_{r}\) denotes the active indices in the bottleneck, i.e. at \(t_{r}=T\), of the state-vector \(X^{i}(T)\). The latter denotes the encoded output at time \(T\) for the \(i\)-th particle, while \(Y^{i}(0)\) represents the corresponding target at time \(0\) (which we recall is the same at time \(T\), being \(\dot{Y}^{i}\equiv 0\) for every \(i=1,\ldots,N\)). For each particle \(i\) and every \(t\) such that \(t_{j}\leq t\leq t_{j+1}\), the forward dynamics can be described as follows: \[\begin{cases}\dot{X}_{\mathcal{I}_{j}}^{i}(t)=0,\\ \dot{X}_{\mathcal{A}_{j}}^{i}(t)=\mathcal{G}_{j}\big{(}t,X_{\mathcal{A}_{j}}^{i}(t),\theta(t)\big{)},\end{cases} \tag{4.2}\] subject to the initial condition \(X^{i}(0)=X_{\mathcal{A}_{0}}^{i}(0)=X_{0}^{i}\in\mathbb{R}^{d}\). In the same interval \(t_{j}\leq t\leq t_{j+1}\), the backward dynamics reads \[\begin{cases}\dot{P}_{\mathcal{I}_{j}}^{i}(t)=0,\\ \dot{P}_{\mathcal{A}_{j}}^{i}(t)=-P_{\mathcal{A}_{j}}^{i}(t)\cdot\nabla_{x_{\mathcal{A}_{j}}}\mathcal{G}_{j}\big{(}t,X_{\mathcal{A}_{j}}^{i}(t),\theta(t)\big{)},\end{cases} \tag{4.3}\] where the final co-state is \[P_{k}^{i}(T)=\begin{cases}-\partial_{x_{k}}\ell(X_{\mathcal{A}_{r}}^{i}(T),Y^{i}(0)),&\text{if }k\in\mathcal{A}_{r},\\ 0,&\text{if }k\notin\mathcal{A}_{r}.\end{cases}\] We notice that, for \(t_{j}\leq t\leq t_{j+1}\) and every \(i\in\{1,\dots,N\}\), we have \[\mathcal{F}(t,X^{i}(t),\theta(t))=\mathcal{F}(t,(X_{\mathcal{A}_{j}}^{i},X_{\mathcal{I}_{j}}^{i})(t),\theta(t))=\begin{pmatrix}\mathcal{G}_{j}(t,X_{\mathcal{A}_{j}}^{i}(t),\theta(t))\\ 0\end{pmatrix}, \tag{4.4}\] and, consequently, we deduce that \[\nabla_{x}\mathcal{F}(t,X^{i}(t),\theta(t))=\begin{pmatrix}\nabla_{x_{\mathcal{A}_{j}}}\mathcal{G}_{j}(t,X_{\mathcal{A}_{j}}^{i}(t),\theta(t))&0\\ 0&0\end{pmatrix}, \tag{4.5}\] where the null blocks are due to the fact that, for \(t_{j}\leq t\leq t_{j+1}\), \(\nabla_{x}\mathcal{F}_{k}(t,x,\theta)=0\) if \(k\in\mathcal{I}_{j}\), and \(\nabla_{x_{\mathcal{I}_{j}}}\mathcal{G}_{j}(t,x,\theta)=0\).
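To give a concrete picture of how the forward pass (4.2), the backward sweep (4.3), and the control update interact, the following is a minimal NumPy sketch for a toy encoder with one residual block per layer. It is only an illustration under our own assumptions: the quadratic loss, the \(\tanh\) activation, the masks, the particle count and all variable names are ours, a single explicit gradient step replaces the fixed-point iterations of \(\Lambda^{\tau}_{\theta_{0}}\), and the co-state is written with the standard backpropagation sign convention rather than the one of (4.3).

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, K, h, lam, lr = 2, 64, 20, 0.1, 1e-3, 0.1   # dim, particles, layers, dt, lambda, step

# Active-component masks: both components active in the first half of the layers,
# only the first one afterwards (a toy "bottleneck").
masks = [np.array([1.0, 1.0]) if k < K // 2 else np.array([1.0, 0.0]) for k in range(K)]

X0 = rng.normal(size=(N, d))                        # input particles
Y = np.stack([np.sign(X0[:, 0]), np.zeros(N)], 1)   # augmented targets, as in Section 5.1
W = [np.zeros((d, d)) for _ in range(K)]            # controls theta = (W_k, b_k)_k
b = [np.zeros(d) for _ in range(K)]

def sigma(z):  return np.tanh(z)
def dsigma(z): return 1.0 - np.tanh(z) ** 2

for it in range(200):
    # forward pass: X_{k+1} = X_k + h * mask_k * sigma(W_k X_k + b_k)
    Xs, Zs = [X0], []
    for k in range(K):
        Z = Xs[-1] @ W[k].T + b[k]
        Zs.append(Z)
        Xs.append(Xs[-1] + h * masks[k] * sigma(Z))
    # terminal cost with ell(x, y) = |x - y|^2 / 2 and final co-state grad_x ell
    P = Xs[-1] - Y
    J = 0.5 * np.mean(np.sum(P ** 2, axis=1))
    if it % 50 == 0:
        print(it, float(J))
    # backward sweep for the co-states and gradient step on the controls
    for k in reversed(range(K)):
        V = masks[k] * dsigma(Zs[k]) * P             # mask * sigma'(z) * p_{k+1}, per particle
        gW = h * (V.T @ Xs[k]) / N + lam * h * W[k]  # gradient of the discretized cost in W_k
        gb = h * V.mean(axis=0) + lam * h * b[k]
        P = P + h * V @ W[k]                         # p_k = p_{k+1} + h * W_k^T (...)
        W[k] -= lr * gW
        b[k] -= lr * gb
```

The frozen inactive components correspond exactly to the null blocks in (4.4) and (4.5): the mask simply zeroes the velocity and its derivatives on the indices in \(\mathcal{I}_{j}\).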
In the case of an Autoencoder, the structure of the forward and backward dynamics is analogous. **Remark 4.3**.: _From the calculations reported above it is evident that the matrices and the vectors involved in our forward and backward dynamics are quite sparse (see (4.5) and (4.4)), and that the state and co-state variables contain components that are constant in many sub-intervals (see (4.2) and (4.3)). Hence, in the practical implementation, especially when dealing with an Autoencoder, we do not actually need to double the original state variables and to introduce the shadow ones, but we can simply overwrite those values and, in this way, we obtain a more memory-efficient code. A similar argument holds as well for the co-state variables. Moreover, we expect the control variable \(\theta\) to have several null components during the evolution. This relates to Remark 2.1 and descends from the fact that, even though in our model \(\theta\in\mathbb{R}^{m}\) for every \(t\in[0,T]\), in the internal sub-intervals \([t_{j},t_{j+1}]\) only few of its components are influencing the dynamics. Hence, owing to the \(L^{2}\)-squared regularization on \(\theta\), it results that, if in an interval \([t_{j},t_{j+1}]\) a certain component of \(\theta\) is not affecting the velocity, then it is convenient to keep it null._ ## 5 Numerical Experiments In this section, we present a series of numerical examples to illustrate the practical application of our approach. We consider datasets of varying dimensions, ranging from low-dimensional data to a more typical Machine Learning dataset such as MNIST. Additionally, we provide justifications and insights into some of the choices made in our theoretical analysis. For instance, we examine the process of choosing the components to be deactivated during the modeling phase, and we investigate whether this hand-picked selection can lead to any issues or incorrect results. In this regard, in our first experiment concerning a classification task, we demonstrate that this a priori choice does not pose any problem, as the network effectively learns to separate the dataset into two classes before accurately classifying them. Furthermore, as we already pointed out, we have extended some of the assumptions from [7] to accommodate the use of a smooth approximation of the ReLU function. This extension is not merely a theoretical exercise, since in our second numerical example we show how valuable it is to leverage unbounded activation functions. While both of these examples involve low-dimensional data and may not be representative of typical tasks for an Autoencoder architecture, we address this limitation in our third experiment by performing a reconstruction task on the MNIST dataset. Lastly, we present noteworthy results obtained from analyzing the performance on MNIST, highlighting specific behaviors that warrant further investigation in future research. 
The layers of the networks that we employ in all our experiments have the form: \[\mathbb{R}^{d}\ni X=\big{(}X_{\mathcal{A}_{j}},X_{\mathcal{I}_{j}}\big{)}^{ \top}\mapsto\phi_{n}^{W,b}(X)=\big{(}X_{\mathcal{A}_{j}},X_{\mathcal{I}_{j}} \big{)}^{\top}+h\Big{(}\sigma\left(W_{\mathcal{A}_{j}}\cdot X_{\mathcal{A}_{j }}+b_{\mathcal{A}_{j}}\right),0\Big{)}^{\top},\] where \(\mathcal{A}_{j},\mathcal{I}_{j}\) are, respectively, the sets of active and inactive components at the layer \(n\), \(b_{\mathcal{A}_{j}}\) are the components of \(b\in\mathbb{R}^{d}\) belonging to \(\mathcal{A}_{j}\), while \(W_{\mathcal{A}_{j}}\) is the square sub-matrix of \(W\in\mathbb{R}^{d\times d}\) corresponding to the active components. Finally, the activation function \(\sigma\) will be specified case by case. ### Bidimensional Classification In our initial experiment, we concentrate on a bidimensional classification task that has been extensively described in [7]. Although this task deviates from the typical application of Autoencoders, where the objective is data reconstruction instead of classification, we believe it gives valuable insights on how our model works. The objective is to classify particles sampled from a standard Gaussian distribution in \(\mathbb{R}^{2}\) based on the sign of their first component. Given an initial data point \(x_{0}\in\mathbb{R}^{2}\), denoted by \(x_{0}[i]\) with \(i=1,2\) representing its \(i-\)th component, we assign a positive label \(+1\) to it if \(x_{0}[1]>0\), and a negative label \(-1\) otherwise. To incorporate the labels into the Autoencoder framework, we augment the labels to obtain a positive label \([1,0]\) and a negative one \([-1,0]\). In such a way, we obtain target vectors in \(\mathbb{R}^{2}\), i.e., with the same dimension as the input data-points in the first layer. The considered architecture is an Autoencoder comprising twenty-one layers, corresponding to \(T=2\) and \(dt=0.05\). The first seven layers maintain a constant active dimension equal to \(2\), followed by seven layers of active dimension \(1\). Finally, the last seven layers, representing the prototype of a decoder, have again constant active dimension \(2\), restoring the initial one. A sketch of the architecture is presented on the right side of Figure 4. We underline that we make use of the observation presented in Remark 4.3 to construct the implemented network, and we report that we employ the hyperbolic tangent as activation function. The next step is to determine which components to deactivate, i.e., we have to choose the sets \(\mathcal{I}_{j}\) for \(j=1,\ldots,2r\): the natural choice is to deactivate the second component, since the information which the classification is based on is contained in the first component (the sign) of the input data-points. Since we use the memory-saving regime of Remark 4.3, we observe that, in the decoder, the particles are "projected" onto the \(x\)-axis, as their second component is deactivated and set equal to \(0\). Then, in the decoding phase, both the components have again the possibility of evolving. This particular case is illustrated on the left side of Figure 4. Now, let us consider a scenario where the network architecture remains the same, but instead of deactivating the second component, we turn-off the first component. This has the effect of "projecting" the particles onto the \(y\)-axis in the encoding phase. The results are presented in Figure 5, where an interesting effect emerges. 
Figure 4: Left: Classification task performed when the turned-off component is the natural one. Right: sketch of the AutoencODE architecture considered.

Figure 5: Left: Initial phase, i.e., separation of the data along the \(y\)-axis. Center: Encoding phase, i.e., only the second component is active. Right: Decoding phase and classification result after the "unnatural turn off". Notice that, for a nice clustering of the classified data, we have increased the number of layers from \(20\) to \(40\). However, we report that the network accomplishes the task even if we use the same structure as in Figure 4.

In the initial phase (left), where the particles can evolve in the whole space \(\mathbb{R}^{2}\), the network is capable of rearranging the particles in order to separate them. More precisely, in this part, the relevant information for the classification (i.e., the sign of the first component) is transferred to the second component, which will not be deactivated. Therefore, once the data-points are projected onto the \(y\)-axis in the bottleneck (middle), two distinct clusters are already formed, corresponding to the two classes of particles. Finally, when the full dimension is restored, the remaining task consists in moving these clusters towards the respective labels, as demonstrated in the plot on the right of Figure 5. This numerical evidence confirms that our a priori choice (even when it is very unnatural) of the components to be deactivated does not affect the network's ability to learn and classify the data. Finally, while studying this low-dimensional numerical example, we test one of the assumptions that we made in the theoretical setting. In particular, we want to check whether it is reasonable to assume that the cost landscape is convex around local minima, as assumed in Theorem 3.10. In Table 1, we report the smallest and highest eigenvalues of the Hessian matrix of the loss function recorded during the training process, i.e., starting from a random initial guess, until the convergence to an optimal solution.

### Parabola Reconstruction

In our second numerical experiment, we focus on the task of reconstructing a two-dimensional parabola. To achieve this, we sample points from the parabolic curve and we use them as the initial data for our network. The network architecture consists of a first block of seven layers with active dimension 2, followed by seven additional layers with active dimension 1. Together, these two blocks represent the encoding phase, in which the set of active components is \(\mathcal{A}_{j}=\{0\}\) for \(j=7,\ldots,14\). Similarly to the previous example, the points at the 7-th layer are "projected" onto the \(x\)-axis, and for the six subsequent layers they are constrained to stay in this subspace. After the 14-th layer, the original active dimension is restored, and the particles can move in the whole space \(\mathbb{R}^{2}\), aiming at reaching their original positions. Despite the low dimensionality of this task, it provides an interesting application that allows us to observe the distinct phases of our model, which are presented in Figure 6. Notably, in the initial seven layers, the particles show quite tiny movements (top left of Figure 6). This is because the relevant information to reconstruct the position is encoded in the first component, which is kept active in the bottleneck.
On the other hand, if in the encoder we had chosen to deactivate the first component instead of the second one, we would expect the points to move considerably before the projection takes place, as was the case in the previous classification task. During the second phase (top right of Figure 6), the particles separate along the \(x\)-axis, preparing for the final decoding phase, which proves to be the most challenging to learn (depicted in the bottom left of Figure 6).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Epochs & 0 & 80 & 160 & 240 & 320 & 400 & 480 & 560 & 640 & 720 \\ \hline Min Eigenvalue & -1.72e-2 & -1.19e-2 & -1.09e-2 & -8.10e-3 & -3.44e-3 & -6.13e-3 & 6.80e-4 & 7.11e-4 & 7.25e-4 & 7.33e-4 \\ Max Eigenvalue & 3.78e-2 & 2.84e-1 & 7.30e-1 & 9.34e-1 & 1.11 & 1.18 & 1.22 & 1.25 & 1.26 & 1.27 \\ \hline \end{tabular} \end{table} Table 1: Minimum and maximum eigenvalues of the Hessian matrix across epochs.

Figure 6: Top Left: Initial phase. Top Right: Encoding phase. Bottom Left: Decoding phase. Bottom Right: network's reconstruction with alternative architecture.

Based on our theoretical knowledge and the results from initial experiments, we attempt to improve the performance of the AutoencODE network by modifying its structure. One possible approach is to design the network in a way that allows more time for the particles to evolve during the decoding phase, while reducing the time spent in the initial and bottleneck phases. Indeed, we try to use 40 layers instead of 20, and most of the new ones are allocated in the decoding phase. The result is illustrated in the bottom right of Figure 6, where we observe that changing the network's structure has a significant positive impact on the reconstruction quality, leading to better results. This result is inspired by the heuristic observation that the particles "do not need to move" in the first two phases. On this point, a more theoretical analysis of the network's structure will be further discussed in the next paragraph, where we perform a sanity check and relate the need for extra layers to the Lipschitz constant of the trained network. This experiment highlights an important observation regarding the choice of activation functions. Specifically, it becomes evident that certain bounded activation functions, such as the hyperbolic tangent, are inadequate for moving the particles back to their original positions during the decoding phase. The bounded nature of these activation functions limits their ability to move a sufficiently large range of values, which can lead to the points getting stuck at suboptimal positions and failing to reconstruct the parabolic curve accurately. To overcome this limitation and achieve successful reconstruction, it is necessary to employ unbounded activation functions that allow for a wider range of values, in particular the well-known Leaky ReLU function. An advantage of our approach is that our theory permits the use of smooth approximations for well-known activation functions, such as the Leaky ReLU (2.4). Specifically, we employ the following smooth approximation of the Leaky ReLU function: \[\sigma_{smooth}(x)=\alpha x+\frac{1}{s}\log\big{(}1+e^{sx}\big{)}, \tag{5.1}\] where letting \(s\) approach infinity ensures convergence to the original Leaky ReLU function. While alternative approximations are available, we employed (5.1) in our study.
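For concreteness, the following short sketch evaluates (5.1) and checks numerically how it approaches its limit as \(s\) grows; the slope \(\alpha=0.1\), the grid of test points, and the convention that the limit has slopes \(\alpha\) and \(1+\alpha\) (which is what (5.1) converges to) are our own illustrative assumptions.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # Limit of sigma_smooth below as s -> infinity:
    # slope alpha for x < 0 and slope 1 + alpha for x > 0.
    return alpha * x + np.maximum(x, 0.0)

def sigma_smooth(x, alpha=0.1, s=10.0):
    # Smooth approximation (5.1): alpha*x + (1/s) * log(1 + exp(s*x)).
    # np.logaddexp(0, s*x) computes log(1 + exp(s*x)) in a numerically stable way.
    return alpha * x + np.logaddexp(0.0, s * x) / s

x = np.linspace(-3.0, 3.0, 601)
for s in (1.0, 10.0, 100.0):
    gap = np.max(np.abs(sigma_smooth(x, s=s) - leaky_relu(x)))
    print(f"s = {s:6.1f}   max |sigma_smooth - leaky_relu| = {gap:.4f}")
```

The maximal gap is attained at the origin and equals \(\log(2)/s\), so the approximation can be made arbitrarily sharp while remaining smooth.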
This observation emphasizes the importance of considering the characteristics and properties of activation functions when designing and training neural networks, and it motivates our goal in this work to encompass unbounded activation functions in our working assumptions.

### MNIST Reconstruction

In this experiment, we apply the AutoencODE architecture and our training method to the task of reconstructing images from the MNIST dataset. The MNIST dataset contains 70000 grayscale images of handwritten digits ranging from zero to nine. Each image has a size of \(28\times 28\) pixels and has been normalized. This dataset is commonly used as a benchmark for image classification tasks or for evaluating image recognition and reconstruction algorithms. However, our objective in this experiment is not to compare our reconstruction error with state-of-the-art results, but rather to demonstrate the applicability of our method to high-dimensional data, and to highlight interesting phenomena that we encounter. In general, when performing an autoencoder reconstruction task, the goal is to learn a lower-dimensional representation of the data that captures its essential features. On the other hand, determining the dimension of the lower-dimensional representation, often referred to as the _latent dimension_, requires setting a hyperparameter, i.e., the width of the bottleneck's layers, which might depend on the specific application. We now discuss the architecture we employed and the choice we made for the latent dimension. Our network consists of twenty-three layers, with the first ten layers serving as the encoder, where the dimension of the layers is gradually reduced from the initial value \(d_{0}=784\) to a latent dimension of \(d_{r}=32\). Then, this latent dimension is kept in the bottleneck for three layers, and the last ten layers act as a decoder and, symmetrically to the encoder, increase the width of the layers from 32 back to \(d_{2r}=784\). Finally, for each layer we employ a smooth version of the Leaky ReLU, see (5.1), as activation function. The architecture is visualized in Figure 7, while the achieved reconstruction results are presented in Figure 8. We observe that, once again, we made use of Remark 4.3 for the implementation of the AutoencODE-based model.

**Latent dimensionality in the bottleneck:** One of the first findings that we observe in our experiments pertains to the latent dimension of the network and to the intrinsic dimension of the dataset. The problem of determining the intrinsic dimension has been the object of previous studies such as [47, 16, 17], where it was estimated to be approximately equal to 13 in the case of the MNIST dataset. On this interesting topic, we also mention the paper [32], where a maximum likelihood estimator was proposed and datasets of images were considered, and the recent contribution [35]. Finally, the model of the _hidden manifold_ has been formulated and studied in [23]. Notably, our network exhibits an interesting characteristic: starting from weights and biases initialized at 0, the training process automatically identifies an intrinsic dimensionality of 13. Namely, we observe that the latent vectors of dimension 32 corresponding to each image in the dataset are sparse vectors with 13 non-zero components, forming a consistent support across all latent vectors derived from the original images.
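One simple way to check this sparsity is to count which bottleneck components are numerically non-zero across the dataset; a rough sketch is given below. Here `encode` is a placeholder for the trained encoder half of the network, and the tolerance is an arbitrary choice of ours, not a quantity prescribed by the method.

```python
import numpy as np

def latent_support(encode, X, tol=1e-6):
    # `encode` is assumed to map a batch of flattened images (N, 784)
    # to bottleneck vectors (N, 32); it stands in for the trained encoder.
    Z = encode(X)
    used = np.abs(Z) > tol
    support = np.flatnonzero(used.any(axis=0))   # components used by at least one image
    common = np.flatnonzero(used.all(axis=0))    # components used by every image
    return support, common

# If training collapses onto a 13-dimensional subspace as described above,
# one expects len(support) == len(common) == 13, with the same index set.
```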
To further analyze this phenomenon, we compute the means of all the latent vectors for each digit and we compare them, as depicted in the left and middle of Figure 9. These mean vectors always have exactly the same support of dimension 13, and, interestingly, we observe that digits that share similar handwritten shapes, such as the numbers 4 and 9 or digits 3 and 5, actually have latent means that are close to each other. Additionally, we explore the generative capabilities of our network by allowing the latent means to evolve through the decoding phase, aiming to generate new images consistent with the mean vector. On the right of Figure 9, we present the output of the network when using a latent vector corresponding to the mean of all latent vectors representing digit 3. This intriguing behavior of our network warrants further investigation into its ability to detect the intrinsic dimension of the input data, and into the exploration of its generative potential. Previous studies have demonstrated that the ability of neural networks to converge to simpler solutions is significantly influenced by the initial parameter values (see e.g. [15]). Indeed, in our case we have observed that this phenomenon only occurs when initializing the parameters with zeros. Moreover, it is worth mentioning that this behavior does not seem to appear in standard Autoencoders without residual connections.

Figure 7: Architecture used for the MNIST reconstruction task. The inactive nodes are marked in green.

Figure 8: Reconstruction of some numbers achieved by the Autoencoder.

**Sanity check of the network's architecture.** An advantage of interpreting neural networks as discrete approximations of dynamical systems is that we can make use of typical results on the numerical resolution of ODEs in order to better analyze our results. Indeed, we notice that, according to well-known results, in order to solve a generic ODE we need to take as discretization step-size \(dt\) a value smaller than the inverse of the Lipschitz constant of the vector field driving the dynamics. We recall that the quantity \(dt\) is related to the number of layers of the network through the relation \(n_{\text{layers}}=\frac{T}{dt}\), where \(T\) is the right endpoint of the evolution interval \([0,T]\). In our case, we choose the amplitude of \(dt\) _a priori_, we train the network and, once we have computed \(\theta^{*}\), we compare _a posteriori_ the discretization step-size chosen at the beginning with the quantity \(\Delta=\frac{1}{\mathrm{Lip}(\mathcal{F}(t,X,\theta^{*}))}\) for each time-node \(t\) and every datum \(X\). In Figure 10, we show the time discretization \(dt\) in orange and in blue the quantity \(\Delta\), for the case of a wrongly constructed autoencoder (on the left) and the correct one (on the right). From these plots, we can perform a "sanity check" and make sure that the number of layers that we chose is sufficient to solve the task. Indeed, in the wrong autoencoder on the left, we see that in the last layer the quantity \(\Delta\) is smaller than \(dt\), and this violates the condition that guarantees the stability of the explicit Euler discretization. By contrast, the introduction of two symmetric layers to the network (corresponding to the plot on the right of Figure 10) allows the network to satisfy everywhere the relation \(\Delta>dt\).
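A minimal sketch of this a posteriori check is given below: the local Lipschitz constant of the trained vector field is estimated layer by layer by finite differences on the data cloud, and its inverse is compared with \(dt\). The sampling-based estimator, the perturbation size, and the placeholder `F_layers` (one callable per layer evaluating the trained vector field on a batch) are our own simplifications, not the procedure used to produce Figure 10.

```python
import numpy as np

def stability_check(F_layers, X, dt, eps=1e-3, rng=np.random.default_rng(0)):
    """Estimate Delta_k = 1 / Lip(F(t_k, ., theta*)) on the points X for each layer k."""
    deltas = []
    for F in F_layers:
        U = rng.normal(size=X.shape)
        U /= np.linalg.norm(U, axis=1, keepdims=True)       # random unit directions
        growth = np.linalg.norm(F(X + eps * U) - F(X), axis=1) / eps
        lip = np.max(growth) + 1e-12                          # crude directional estimate
        deltas.append(1.0 / lip)
    return np.array(deltas)

# Usage: flag the layers where the explicit Euler condition Delta_k > dt fails.
# deltas = stability_check(F_layers, X_train, dt)
# print(np.where(deltas <= dt)[0])
```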
Moreover, we also notice that during the encoding phase the inverse of the Lipschitz constant of \(\mathcal{F}\) is quite high, which means that the vector field does not need to move the points much. This suggests that we could get rid of some of the layers in the encoder and only keep the necessary ones, i.e., the ones in the decoder where \(\Delta\) is small and a finer discretization step-size is required. We report that this last observation is consistent with the results recently obtained in [11]. Finally, we also draw attention to the work [45], which shares a similar spirit with our experiments, since the Lipschitz constant of the layers is the main subject of investigation. In their study, the authors employ classical results on the numerical integration of ordinary differential equations in order to understand how to constrain the weights of the network with the aim of designing stable architectures. This approach leads to networks with non-expansive properties, which is highly advantageous for mitigating instabilities in various scenarios, such as testing adversarial examples [25], training generative adversarial networks [3], or solving inverse problems using deep learning.

Figure 9: Left: Comparing two similar latent means. Center: again two similar latent means. Right: Output of the decoding of one latent mean.

Figure 10: Left: wrong autoencoder detected with \(\Delta\). Right: correct version of the same autoencoder.

**Entropy across layers.** We present our first experiments on the study of the information propagation within the network, where some intriguing results appear. This phenomenon is illustrated in Figure 11, where we examine the entropy across the layers after the network has been trained. We introduce two different measures of entropy, depicted in the two graphs of the figure. First, we consider the well-known Shannon entropy, denoted as \(H(E)\), which quantifies the information content of a discrete random variable \(E\), distributed according to a discrete probability measure \(p:\Omega\to[0,1]\) such that \(p(e)=p(E=e)\). The Shannon entropy is computed as follows: \[H(E)=\mathbb{E}\big{[}-\log(p(E))\big{]}=\sum_{e}-p(e)\log(p(e)).\] In our context, the random variable of interest is \(E=\sum_{j=1}^{N}\mathds{1}_{|X_{0}^{i}-X_{0}^{j}|\leq\epsilon}\), where \(X_{0}^{i}\) represents a generic image from the MNIST dataset. Additionally, we introduce another measure of entropy, denoted as \(\mathcal{E}\), which quantifies the probability that the dataset can be partitioned into ten clusters corresponding to the ten different digits. This quantity has been introduced in [21] and it is defined as \[\mathcal{E}=\mathbb{P}\left(X\in\bigcup_{i=1}^{k}B_{\varepsilon}(X_{0}^{i})\right),\] where \(\varepsilon>0\) is a small radius, and \(X_{0}^{1},\ldots,X_{0}^{k}\) are samples from the dataset. Figure 11 suggests the existence of a distinct pattern in the variation of information entropy across the layers, which offers a hint for further investigations. Let us first focus on the Shannon entropy: as the layers' dimensionality decreases in the encoding phase, there is an expected decrease of entropy, reflecting the compression and reduction of information in the lower-dimensional representation. The bottleneck layer, where the dimension is kept constant, represents a critical point where the entropy reaches a minimum. This indicates that the information content is highly concentrated and compressed in this latent space.
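Both quantities can be estimated directly from the particle positions at each layer. The sketch below indicates one way to do it; the pairwise-distance counting, the choice of radius, the sub-sampling of \(k\) centers, and the suggestion to work on a small subsample of the dataset are our own simplifications and not the estimators used for Figure 11.

```python
import numpy as np

def shannon_entropy_of_neighbor_counts(X, eps):
    # E_i = number of points within distance eps of X_i; H(E) is the Shannon
    # entropy of the empirical distribution of these counts.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # use a small subsample of X
    counts = (D <= eps).sum(axis=1)
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log(p))

def clustering_entropy(X, eps, k, rng=np.random.default_rng(0)):
    # Empirical version of  E = P( X in union of eps-balls around k sampled points ).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return (D <= eps).any(axis=1).mean()

# Evaluating both functions on the particles of every layer traces curves
# analogous to those reported in Figure 11.
```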
Then, during the decoding phase, the Shannon entropy does not revert to its initial value but instead exhibits a slower increase. This behavior suggests that the network retains some of the learned structure and information from the bottleneck layer. Something similar happens for the second measure of entropy: at the beginning, the data is unlikely to be highly clustered, since two distinct images of the same digit may be quite distant from one another. In the inner layers, this probability increases until it reaches its maximum (rather close to 1) in the bottleneck, where the data can then be fully partitioned into clusters of radius \(\epsilon\). As for the Shannon entropy, the information from the bottleneck layer is retained during the decoding phase, which is why the entropy remains constant for a while and then decreases again, but more slowly. It is worth noticing that in both cases the entropy does not fully return to its initial level. This might be attributed to the phenomenon of mode collapse, where the network fails to capture the full variability in the input data and instead produces similar outputs for different inputs, hence inducing some sort of _implicit bias_. Mode collapse is often considered undesirable in generative models, as it hinders the ability to generate diverse and realistic samples. However, in the context of understanding data structure and performing clustering, the network's capability to capture the main modes or clusters of the data can be seen as a positive aspect. The network learns to extract salient features and represent the data in a compact and informative manner, enabling tasks such as clustering and classification. Further investigation is needed to explore the relationship between the observed entropy patterns, mode collapse, and the overall performance of the network on different tasks.

Figure 11: Left: Shannon entropy across layers. Right: Our measure of entropy across layers.

## Acknowledgments

The authors would like to thank Prof. Giuseppe Savaré for the fruitful discussions during his stay in Munich. Moreover, the authors are grateful to Dr. Oleh Melnyk for the suggestion on the extension of the dynamical model to the U-net architecture. This work has been funded by the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. C.C. and M.F. also acknowledge the partial support of the project "Online Firestorms And Resentment Propagation On Social Media: Dynamics, Predictability and Mitigation" of the TUM Institute for Ethics in Artificial Intelligence and of the DFG Project "Implicit Bias and Low Complexity Networks" within the DFG SPP 2298 "Theoretical Foundations of Deep Learning". A.S. acknowledges the partial support from INdAM-GNAMPA.

## Appendix A

**Lemma A.1** (Boundedness of trajectories).: _Let us consider the controlled system_ \[\dot{x}=\mathcal{F}(t,x,\theta),\quad x(0)=x_{0},\] _where \(\mathcal{F}:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\) satisfies Assumption 1, and \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\).
Then, for every \(R>0\) and any \(x_{0}\in B_{R}(0)\), we have that \(x(t)\in B_{\bar{R}}(0)\) for every \(t\in[0,T]\), where \(\bar{R}=(R+L_{R}(1+\|\theta\|_{L^{1}}))e^{L_{R}(1+\|\theta\|_{L^{1}})}\)._ Proof.: According to Assumption 1\(-(ii)\) on \(\mathcal{F}\), the trajectories can be bounded as follows: \[|x(t)|\leq|x_{0}|+\int_{0}^{t}|\mathcal{F}(s,x(s),\theta(s))|\,ds\leq|x_{0}|+L_{R}\int_{0}^{t}(1+|x(s)|)(1+|\theta(s)|)\,ds\] for every \(t\in[0,T]\). Using Gronwall's lemma, it follows that \[|x(t)|\leq\Big{(}|x_{0}|+L_{R}(1+\|\theta\|_{L^{1}})\Big{)}e^{L_{R}(1+\|\theta\|_{L^{1}})}.\] **Lemma A.2** (Flow's dependency on initial datum).: _For every \(t\in[0,T]\), let us consider the flow mapping \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) defined in (2.2) and driven by the control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\). Let us assume that the controlled dynamics \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfies Assumption 1. Then, for every \(R>0\), and every \(x_{1},x_{2}\in B_{R}(0)\), it follows that_ \[|\Phi^{\theta}_{(0,t)}(x_{1})-\Phi^{\theta}_{(0,t)}(x_{2})|\leq e^{L_{R}(1+\|\theta\|_{L^{1}})}|x_{1}-x_{2}|,\] _where \(\bar{R}\) is defined as in Lemma A.1, and \(L_{R}\) is prescribed by Assumption 1-(ii)._ Proof.: Let us denote with \(t\mapsto x_{1}(t),t\mapsto x_{2}(t)\) the solutions of (2.1) driven by \(\theta\) and starting, respectively, from \(x_{1}(0)=x_{1},x_{2}(0)=x_{2}\). Then, for every \(t\in[0,T]\), we have \[|x_{1}(t)-x_{2}(t)| \leq|x_{1}-x_{2}|+\int_{0}^{t}|\mathcal{F}(s,x_{1}(s),\theta(s))-\mathcal{F}(s,x_{2}(s),\theta(s))|\,ds\] \[\leq|x_{1}-x_{2}|+L_{R}\int_{0}^{t}(1+|\theta(s)|)|x_{1}(s)-x_{2}(s)|\,ds,\] by using Assumption 1\(-(ii)\). As before, the statement follows from Gronwall's Lemma. **Lemma A.3** (Flow's dependency on time).: _Under the same assumptions and notations as in Lemma A.2, for every \(R>0\), for every \(x\in B_{R}(0)\) and for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), we have that_ \[|\Phi^{\theta}_{(0,t_{2})}(x)-\Phi^{\theta}_{(0,t_{1})}(x)|\leq L_{R}(1+\bar{R})(1+\|\theta\|_{L^{2}})|t_{2}-t_{1}|^{\frac{1}{2}}\] _for every \(0\leq t_{1}<t_{2}\leq T\), where \(\bar{R}\) is defined as in Lemma A.1, and \(L_{R}\) is prescribed by Assumption 1-(ii). Moreover, if \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\cap L^{\infty}([0,T],\mathbb{R}^{m})\), then, for every \(0\leq t_{1}<t_{2}\leq T\), it holds:_ \[|\Phi^{\theta}_{(0,t_{2})}(x)-\Phi^{\theta}_{(0,t_{1})}(x)|\leq L_{\bar{R}}(1+\bar{R})(1+\|\theta\|_{L^{2}})|t_{2}-t_{1}|.\] Proof.: If we denote by \(t\mapsto x(t)\) the solution of (2.1) driven by the control \(\theta\), then \[|x(t_{2})-x(t_{1})|\leq\int_{t_{1}}^{t_{2}}|\mathcal{F}(s,x(s),\theta(s))|\,ds\leq\int_{t_{1}}^{t_{2}}L_{\bar{R}}(1+\bar{R})(1+|\theta(s)|)\,ds.\] The thesis follows by using Cauchy-Schwarz for \(\theta\in L^{2}\), or from basic estimates if \(\theta\in L^{\infty}\). **Lemma A.4** (Flow's dependency on controls).: _For every \(t\in[0,T]\), let \(\Phi^{\theta_{1}}_{(0,t)},\Phi^{\theta_{2}}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be the flows defined in (2.2) and driven, respectively, by \(\theta_{1},\theta_{2}\in L^{2}([0,T],\mathbb{R}^{m})\). Let us assume that the controlled dynamics \(\mathcal{F}:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\) satisfies Assumption 1.
Then, for every \(R>0\) and for every \(x\in B_{R}(0)\), it holds that_ \[|\Phi^{\theta_{1}}_{(0,t)}(x)-\Phi^{\theta_{2}}_{(0,t)}(x)|\leq L_{\bar{R}}(1+\|\theta_{1}\|_{L^{2}}+\|\theta_{2}\|_{L^{2}})e^{L_{R}(1+\|\theta_{1}\|_{L^{1}})}\left\|\theta_{1}-\theta_{2}\right\|_{L^{2}},\] _where \(\bar{R}\) is defined as in Lemma A.1, and \(L_{\bar{R}}\) is prescribed by Assumption 1-(ii)._ Proof.: By using Assumption 1\(-(ii),(iii)\) and the triangle inequality, we obtain that \[|\Phi^{\theta_{1}}_{(0,t)}(x)-\Phi^{\theta_{2}}_{(0,t)}(x)| \leq\int_{0}^{t}|\mathcal{F}(s,x_{1}(s),\theta_{1}(s))-\mathcal{F}(s,x_{2}(s),\theta_{2}(s))|\,ds\] \[\leq\int_{0}^{t}|\mathcal{F}(s,x_{1}(s),\theta_{1}(s))-\mathcal{F}(s,x_{2}(s),\theta_{1}(s))|\,ds\] \[\quad+\int_{0}^{t}|\mathcal{F}(s,x_{2}(s),\theta_{1}(s))-\mathcal{F}(s,x_{2}(s),\theta_{2}(s))|\,ds\] \[\leq L_{R}\int_{0}^{t}(1+|\theta_{1}(s)|)|x_{1}(s)-x_{2}(s)|\,ds+L_{\bar{R}}(1+\left\|\theta_{1}\right\|_{L^{2}}+\left\|\theta_{2}\right\|_{L^{2}})\left\|\theta_{1}-\theta_{2}\right\|_{L^{2}}.\] The statement follows again by applying Gronwall's Lemma. **Proposition A.5** (Differentiability with respect to trajectory perturbations).: _Let us assume that the controlled dynamics \(\mathcal{F}\) satisfies Assumptions 1-2. Given an admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) and a trajectory \(t\mapsto x(t)=\Phi^{\theta}_{(0,t)}(x_{0})\) with \(x_{0}\in B_{R}(0)\), let \(\xi:[0,T]\to\mathbb{R}^{d}\) be the solution of the linearized problem_ \[\begin{cases}\dot{\xi}(t)=\nabla_{x}\mathcal{F}(t,x(t),\theta(t))\xi(t),\\ \xi(\bar{t})=v,\end{cases}\] (A.1) _where \(\bar{t}\in[0,T]\) is the instant of perturbation and \(v\) is the direction of perturbation of the trajectory. Then, for every \(t\in(\bar{t},T)\), it holds_ \[|\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t})+\epsilon v)-\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t}))-\epsilon\xi(t)|\leq C|v|^{2}\epsilon^{2}\] _where \(C\) is a constant depending on \(T,R,\left\|\theta\right\|_{L^{2}}\)._ Proof.: For \(t\geq\bar{t}\), let us denote with \(t\mapsto y(t):=\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t})+\epsilon v)\) the solution of the modified problem, obtained by perturbing the original trajectory with \(\epsilon v\) at instant \(\bar{t}\). Then, since \(\xi\) solves (A.1), we can write \[|y(t)-x(t)-\epsilon\xi(t)| =|\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t})+\epsilon v)-\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t}))-\epsilon\xi(t)|\] \[\leq\int_{\bar{t}}^{t}|\mathcal{F}(s,y(s),\theta(s))-\mathcal{F}(s,x(s),\theta(s))-\epsilon\nabla_{x}\mathcal{F}(s,x(s),\theta(s))\xi(s)|ds\] \[\leq\int_{\bar{t}}^{t}|\mathcal{F}(s,y(s),\theta(s))-\mathcal{F}(s,x(s),\theta(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))(y(s)-x(s))|ds\] \[\quad+\int_{\bar{t}}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)-\epsilon\xi(s)|ds\] \[\leq\int_{\bar{t}}^{t}\left[\int_{0}^{1}|\nabla_{x}\mathcal{F}(s,x(s)+\tau(y(s)-x(s)),\theta(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)|d\tau\right]ds\] \[\quad+\int_{\bar{t}}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)-\epsilon\xi(s)|ds\] for every \(t\geq\bar{t}\). We now address the two integrals separately.
Using Assumption 2\(-(iv)\) and the result of Lemma A.4, we obtain the following bound \[\int_{\bar{t}}^{t}\left[\int_{0}^{1}|\nabla_{x}\mathcal{F}(s,x(s )+\tau(y(s)-x(s))),\theta(s)-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x( s)|d\tau\right]ds\] \[\leq\int_{\bar{t}}^{t}L_{R}(1+|\theta(s)|^{2})\frac{1}{2}|y(s)-x (s)|^{2}ds\] \[\leq\frac{1}{2}L_{\bar{R}}\left(1+\|\theta\|_{L^{2}}^{2}\right)e^{ 2L_{R}(1+\|\theta\|_{L^{1}})}|\epsilon v|^{2}\] Similarly, for the second integral, owing to Assumption 2\(-(iv)\), we can compute: \[\int_{\bar{t}}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)- \epsilon\xi(s)|ds\leq\int_{\bar{t}}^{t}L_{R}(1+|\theta(s)|^{2})(1+\bar{R})|y(s )-x(s)-\epsilon\xi(s)|ds\] Finally, by combining the two results together and using Gronwall's Lemma, we prove the statement. **Proposition A.6** (Differentiability with respect to control perturbations).: _Consider the solution \(\xi\) of the linearized problem_ \[\begin{cases}\dot{\xi}(t)=\nabla_{x}\mathcal{F}(t,x^{\theta}(t),\theta(t))\xi(t) +\nabla_{\theta}\mathcal{F}(t,x^{\theta}(t),\theta(t))\nu(t)\\ \xi(0)=0\end{cases}\] (A.2) _where the control \(\theta\) is perturbed at the initial time with \(\theta+\epsilon\nu\), when starting with an initial datum \(x_{0}\in B_{R}(0)\). Then,_ \[|\Phi^{\theta+\epsilon\nu}_{(0,t)}(x_{0})-\Phi^{\theta}_{(0,t)}(x_{0})- \epsilon\xi(t)|\leq C||\nu||_{L^{2}}^{2}\epsilon^{2}\] (A.3) _where \(C\) is a constant depending on \(T,\bar{R},L_{R},\left\|\theta\right\|_{L^{1}}\). Moreover, we have that for every \(t\in[0,T]\)_ \[\xi(t)=\int_{0}^{t}\mathcal{R}^{\theta}_{(s,t)}(x_{0})\cdot\nabla_{\theta} \mathcal{F}(s,x^{\theta}(s),\theta(s))\nu(s)\,ds,\] (A.4) _where \(\mathcal{R}^{\theta}_{(s,t)}(x_{0})\) has been defined in (3.10)._ Proof.: We first observe that the dynamics in (A.2) is affine in the \(\xi\) variable. Moreover, Assumptions 1-2 guarantee that the coefficients are \(L^{1}\)-regular in time. Hence, from the classical Caratheodory Theorem we deduce the existence and the uniqueness of the solution of (A.2). Finally, the identity (A.4) follows as a classical application of the resolvent map (see,e.g., in [10, Theorem 2.2.3]). Let us denote with \(t\mapsto x(t)\) and \(t\mapsto y(t)\) the solutions of Cauchy problem (2.1) corresponding, respectively, to the admissible controls \(\theta\) and \(\theta+\epsilon\nu\). We observe that, in virtue of Lemma A.1, we have that there exists \(\bar{R}>0\) such that \(x(t),y(t)\in B_{\bar{R}}(0)\) for every \(t\in[0,T]\). 
Then, recalling the definition of the flow map provided in (2.2), we compute \[|y(t)-x(t)-\epsilon\xi(t)| =|\Phi^{\theta+\epsilon\nu}_{(0,t)}(x_{0})-\Phi^{\theta}_{(0,t)}( x_{0})-\epsilon\xi(t)|\] \[\leq\int_{0}^{t}|\mathcal{F}(s,y(s),\theta(s)+\epsilon\nu(s))- \mathcal{F}(s,x(s),\theta(s))-\epsilon\dot{\xi}(s)|\,ds\] \[\leq\int_{0}^{t}|\mathcal{F}(s,y(s),\theta(s)+\epsilon\nu(s))- \mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))\] \[\qquad\qquad-\epsilon\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+ \epsilon\nu(s))\cdot(y(s)-x(s))|\,ds\] \[\quad+\int_{0}^{t}|\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))- \mathcal{F}(s,x(s),\theta(s))-\epsilon\nabla_{\theta}\mathcal{F}(s,x(s), \theta(s))\cdot\nu(s)|\,ds\] \[\quad+\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+ \epsilon\nu(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)|ds\] \[\quad+\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)- x(s)-\epsilon\xi(s)|\,ds.\] We now handle each term separately: \[\int_{0}^{t}|\mathcal{F}(s,y(s),\theta(s)+\epsilon\nu(s))- \mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))-\epsilon\nabla_{x}\mathcal{F}(s, x(s),\theta(s)+\epsilon\nu(s))(y(s)-x(s))|\,ds\] (A.5) \[\leq\int_{0}^{t}\left[\int_{0}^{1}L_{\bar{R}}(1+|\theta(s)+ \epsilon\nu(s)|^{2})\tau|y(s)-x(s)|^{2}d\tau\right]ds\] \[\leq L_{R}^{3}(1+\left\|\theta\right\|_{L^{2}}+\epsilon\left\| \nu\right\|_{L^{2}})^{4}e^{2L_{R}(1+\left\|\theta\right\|_{L^{1}})}\left\|\nu \right\|_{L^{2}}^{2}\epsilon^{2}\] where we used Assumption 2\(-(iv)\) and Lemma A.4. By using Assumption 2\(-(v)\), we obtain the following bounds for the second integral: \[\int_{0}^{t}|\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))- \mathcal{F}(s,x(s),\theta(s))-\nabla_{\theta}\mathcal{F}(s,x(s),\theta(s)) \cdot\epsilon\nu(s)|\,ds\] (A.6) \[\leq\int_{0}^{t}\left[\int_{0}^{1}L_{R}|\nu(s)|^{2}\epsilon^{2} \tau d\tau\right]ds=\frac{1}{2}L_{R}\left\|\nu\right\|_{L^{2}}^{2}\epsilon^{2}.\] Similarly, the third integral can be bounded by using Assumption 2\(-(vi)\) and Lemma A.4, and it yields \[\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s ))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)|\,ds\] (A.7) \[\leq\int_{0}^{t}L_{R}(1+|\theta(s)|+\epsilon|\nu(s)|)\epsilon|y(s)- x(s)||\nu(s)|\,ds\] \[\leq L_{R}^{2}(1+\left\|\theta\right\|_{L^{2}}+\epsilon\left\| \nu\right\|_{L^{2}})^{2}e^{L_{R}(1+\left\|\theta\right\|_{L^{1}})}\left\|\nu \right\|_{L^{2}}^{2}\epsilon^{2}.\] Finally, the fourth integral can be bounded using Assumption 2\(-(iv)\) as follows: \[\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)-\epsilon\xi(s)|ds \leq\int_{0}^{t}L_{R}(1+\tilde{R})(1+|\theta(s)|^{2})|y(s)-x(s)-\epsilon\xi(s)| \,ds.\] (A.8) Hence, by combining (A.5), (A.6), (A.7) and (A.8), the thesis follows from Gronwall Lemma. **Proposition A.7** (Properties of the resolvent map).: _Let us assume that the controlled dynamics \(\mathcal{F}\) satisfies Assumptions 1-2. 
Given an admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) and a trajectory \(t\mapsto x(t)=\Phi^{\theta}_{(0,t)}(x)\) with \(x\in B_{R}(0)\), for every \(\tau\in[0,T]\) the resolvent map \(\mathcal{R}^{\theta}_{(\tau,\cdot)}(x):[0,T]\to\mathbb{R}^{d\times d}\) is the curve \(s\mapsto\mathcal{R}^{\theta}_{(\tau,s)}(x_{0})\) that solves_ \[\begin{cases}\frac{d}{ds}\mathcal{R}^{\theta}_{(\tau,s)}(x)=\nabla_{x} \mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x),\theta(s))\cdot\mathcal{R}^{\theta}_{( \tau,s)}(x)\quad\text{for $a.e.$ $s\in[0,T]$},\\ \mathcal{R}^{\theta}_{(\tau,\tau)}(x)=\mathrm{Id}.\end{cases}\] (A.9) _Then for every \(\tau,s\in[0,T]\), there exists a constant \(C_{1}\) depending on \(T,R,\left\|\theta\right\|_{L^{2}}\) such that_ \[|\mathcal{R}^{\theta}_{(\tau,s)}(x)|:=\sup_{v\neq 0}\frac{|\mathcal{R}^{ \theta}_{(\tau,s)}(x)\cdot v|}{|v|}\leq C_{1}.\] (A.10) _Moreover, for every \(x,y\in B_{R}(0)\), there exists a constant \(C_{2}\) depending on \(T,R,\left\|\theta\right\|_{L^{2}}\) such that_ \[|\mathcal{R}^{\theta}_{(\tau,s)}(x)-\mathcal{R}^{\theta}_{(\tau,s)}(y)|:=\sup _{v\neq 0}\frac{|\mathcal{R}^{\theta}_{(\tau,s)}(x)\cdot v-\mathcal{R}^{\theta }_{(\tau,s)}(y)\cdot v|}{|v|}\leq C_{2}|x-y|.\] (A.11) _Finally, if \(\theta_{1},\theta_{2}\) satisfy \(\left\|\theta_{1}\right\|,\left\|\theta_{2}\right\|\leq\rho\), then there exists a constant \(C_{3}\) depending on \(T,R,\rho\) such that_ \[|\mathcal{R}^{\theta_{1}}_{(\tau,s)}(x)-\mathcal{R}^{\theta_{2}}_{(\tau,s)}(x )|:=\sup_{v\neq 0}\frac{|\mathcal{R}^{\theta_{1}}_{(\tau,s)}(x)\cdot v- \mathcal{R}^{\theta_{2}}_{(\tau,s)}(x)\cdot v|}{|v|}\leq C_{3}\|\theta_{1}- \theta_{2}\|_{L^{2}}.\] (A.12) Proof.: We first prove the boundedness of the resolvent map. Let us fix \(v\in\mathbb{R}^{d}\) with \(v\neq 0\), and let us define \(\xi(s):=\mathcal{R}^{\theta}_{(\tau,s)}(x)\cdot v\) for every \(s\in[0,T]\). Then, in virtue of Assumption 2\(-(vi)\), we have: \[|\xi(s)|\leq|\xi(\tau)|+\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma, \Phi^{\theta}_{(0,\sigma)}(x),\theta(\sigma))\right|\left|\xi(\sigma)\right|d \sigma\leq|v|+L_{R}\int_{0}^{t}(1+\theta(\sigma)^{2})|\xi(\sigma)|\,d\sigma,\] and, by Gronwall's Lemma, we deduce (A.10). Similarly as before, given \(x,y\in B_{R}(0)\) and \(v\neq 0\), let us define \(\xi^{x}(s):=\mathcal{R}^{\theta}_{(\tau,s)}(x)\cdot v\) and \(\xi^{y}(s):=\mathcal{R}^{\theta}_{(\tau,s)}(y)\cdot v\) for every \(s\in[0,T]\). Then, we have that \[|\xi^{x}(s)-\xi^{y}(s)| \leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{ \theta}_{(0,\sigma)}(x),\theta(\sigma))\xi^{x}(\sigma)-\nabla_{x}\mathcal{F}( \sigma,\Phi^{\theta}_{(0,\sigma)}(y),\theta(\sigma))\xi^{y}(\sigma)\right|\,d\sigma\] \[\leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{ \theta}_{(0,\sigma)}(x),\theta(\sigma))-\nabla_{x}\mathcal{F}(\sigma,\Phi^{ \theta}_{(0,\sigma)}(y),\theta(\sigma))\right|\left|\xi^{y}(\sigma)\right|d\sigma\] \[\quad+\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{ \theta}_{(0,\sigma)}(x),\theta(\sigma))\right|\left|\xi^{x}(\sigma)-\xi^{y}( \sigma)\right|d\sigma\] \[\leq C_{1}|v|\int_{\tau}^{s}L_{R}(1+\theta(\sigma)^{2})\left|\Phi^ {\theta}_{(0,\sigma)}(x)-\Phi^{\theta}_{(0,\sigma)}(y)\right|\,d\sigma\] \[\quad+\int_{\tau}^{s}L_{R}(1+\theta(\sigma)^{2})|\xi^{x}(\sigma)- \xi^{y}(\sigma)|\,d\sigma,\] where we used (A.10) and Assumption 2\(-(iv)\). Hence, combining Lemma A.2 with Gronwall's Lemma, we deduce (A.11). 
Finally, we prove the dependence of the resolvent map on different controls \(\theta_{1},\theta_{2}\in L^{2}([0,T];\mathbb{R}^{m})\). Given \(x\in B_{R}(0)\) and \(v\neq 0\), let us define \(\xi^{\theta_{1}}(s):=\mathcal{R}^{\theta_{1}}_{(\tau,s)}(x)\cdot v\) and \(\xi^{\theta_{2}}(s):=\mathcal{R}^{\theta_{2}}_{(\tau,s)}(x)\cdot v\) for every \(s\in[0,T]\). Then, we compute \[|\xi^{\theta_{1}}(s)-\xi^{\theta_{2}}(s)| \leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0, \sigma)}^{\theta_{1}}(x),\theta_{1}(\sigma))\xi^{\theta_{1}}(\sigma)-\nabla_{x} \mathcal{F}(\sigma,\Phi_{(0,\sigma)}^{\theta_{2}}(x),\theta_{2}(\sigma))\xi^{ \theta_{2}}(\sigma)\right|\,d\sigma\] \[\leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0, \sigma)}^{\theta_{1}}(x),\theta_{1}(\sigma))-\nabla_{x}\mathcal{F}(\sigma,\Phi _{(0,\sigma)}^{\theta_{2}}(x),\theta_{2}(\sigma))\right|\left|\xi^{\theta_{1}} (\sigma)\right|d\sigma\] \[\quad+\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0, \sigma)}^{\theta_{2}}(y),\theta(\sigma))\right|\left|\xi^{\theta_{1}}(\sigma) -\xi^{\theta_{2}}(\sigma)\right|d\sigma\] \[\leq C_{1}|v|\int_{\tau}^{s}L_{R}(1+\theta_{1}(\sigma)^{2})\left| \Phi_{(0,\sigma)}^{\theta_{1}}(x)-\Phi_{(0,\sigma)}^{\theta_{2}}(x)\right|\,d\sigma\] \[\quad+C_{1}|v|\int_{\tau}^{s}L_{R}(1+|\theta_{1}(\sigma)|+| \theta_{2}(\sigma)|)|\theta_{1}(\sigma)-\theta_{2}(\sigma)|\,d\sigma\] \[\quad+\int_{\tau}^{s}L_{R}(1+\theta(\sigma)^{2})|\xi^{\theta_{1} }(\sigma)-\xi^{\theta_{2}}(\sigma)|\,d\sigma,\] where we used Assumption 2\(-(iv)\)-\((vi)\).
2305.14262
A partially stripped massive star in a Be binary at low metallicity: A missing link towards Be X-ray binaries and double neutron star mergers
Standard binary evolutionary models predict a significant population of core helium-burning stars that lost their hydrogen-rich envelope after mass transfer via Roche-lobe overflow. However, there is a scarcity of observations of such stripped stars in the intermediate mass regime (~1.5 - 8$ M_{\odot}$), which are thought to be prominent progenitors of SN Ib/c. Especially at low metallicity, a significant fraction of these stars is expected to be only partially stripped, retaining a significant amount of hydrogen on their surfaces. For the first time, we discovered a partially stripped massive star in a binary with a Be-type companion located in the Small Magellanic Cloud (SMC) using a detailed spectroscopic analysis. The stripped-star nature of the primary is revealed by the extreme CNO abundance pattern and very high luminosity-to-mass ratio, which suggest that the primary is likely shell-hydrogen burning. Our target SMCSGS-FS 69 is the most luminous and most massive system among the known stripped star + Be binaries, with Mstripped ~3$ M_{\odot}$ and MBe ~17$ M_{\odot}$. Binary evolutionary tracks suggest an initial mass of Mini $\gtrsim 12 M_{\odot}$ for the stripped star and predict it to be in a transition phase towards a hot compact He star, which will eventually produce a stripped-envelope supernova. Our target marks the first representative of a so-far missing evolutionary stage in the formation pathway of Be X-ray binaries and double neutron star mergers.
V. Ramachandran, J. Klencki, A. A. C. Sander, D. Pauli, T. Shenar, L. M. Oskinova, W. -R. Hamann
2023-05-23T17:15:35Z
http://arxiv.org/abs/2305.14262v2
# A partially stripped massive star in a Be binary at low metallicity+ ###### Abstract Standard binary evolutionary models predict a significant population of core helium-burning stars that lost their hydrogen-rich envelope after mass transfer via Roche-lobe overflow. However, there is a scarcity of observations of such stripped stars in the intermediate-mass regime (\(\sim 1.5-8\,M_{\odot}\)), which are thought to be prominent progenitors of SN Ib/c. Especially at low metallicity, a significant fraction of these stars are expected to be only partially stripped, retaining a significant amount of hydrogen on their surfaces. For the first time, we discovered a partially stripped massive star in a binary with a Be-type companion located in the Small Magellanic Cloud (SMC) using a detailed spectroscopic analysis. The stripped-star nature of the primary is revealed by the extreme CNO abundance pattern and very high luminosity-to-mass ratio, which suggest that the primary is likely shell-hydrogen burning. Our target SMCSGS-FS 69 is the most luminous and most massive system among the known stripped star + Be binaries, with \(M_{\rm stripped}\sim 3\,M_{\odot}\) and \(M_{\rm Be}\sim 17\,M_{\odot}\). Binary evolutionary tracks suggest an initial mass of \(M_{\rm ini}\gtrsim 12\,M_{\odot}\) for the stripped star and predict it to be in a transition phase towards a hot compact He star, which will eventually produce a stripped-envelope supernova. Our target marks the first representative of an as-yet-missing evolutionary stage in the formation pathway of Be X-ray binaries and double neutron star mergers. ## 1 Introduction Massive stars are frequently found in binary or multiple systems where components are in close proximity to one another, making the interaction between the two stars inevitable as the primary grows and evolves (Sana et al. 2012, 2014; Moe & Di Stefano 2017). The components will be subjected to substantial interactions involving the transfer of mass and angular momentum, which will have profound effects on the fundamental parameters and final fates of both stars. Such interactions frequently lead to the stripping of the primary's envelope (Kippenhahn & Weigert 1967; Paczynski 1967), which can produce hot and compact He-core stars with a thin layer of hydrogen on top (see, e.g., Yoon et al. 2010, 2017; Claeys et al. 2011). Depending on their initial masses, the stripped-envelope primaries would have spectral characteristics ranging from hot subdwarfs to Wolf-Rayet (WR) stars (e.g., Paczynski 1967; Vanbeveren 1991; Eldridge et al. 2008; Gotberg et al. 2017). Secondaries, on the other hand, would evolve into rapidly rotating stars (e.g., de Mink et al. 2013; Renzo & Gotberg 2021), which can have disk emission features similar to Be stars (Pols et al. 1991; Shao & Li 2014; Bodensteiner et al. 2020b; Hastings et al. 2021). Interestingly, stripped-envelope stars with masses between low-mass subdwarfs and classical WR stars are rarely observed. This intermediate-mass regime \(\sim 1.5-8\,M_{\odot}\) at solar metallicity (Gotberg et al. 2017) gets wider up to \(\lesssim 17\,M_{\odot}\) at SMC metallicity (Z=0.2\(\,Z_{\odot}\); Shenar et al. 2020b). Moreover, the intermediate-mass stripped-envelope stars (hereafter stripped stars) are predicted to be a long-lived core He-burning phase. They are considered to be the progenitors of Ib/c supernovae and major sources of far-UV ionizing flux (Gotberg et al. 2017). The only known intermediate-mass hot He star is the qWR star HD 45166 (Groh et al. 
2008), whereas other hot and compact stripped star candidates in the Galaxy are in the subdwarf mass range of \(<1.5\,M_{\odot}\) (Wang et al. 2021; Schootemeijer et al. 2018; Gilkis & Shenar 2023). For HD 45166, a new study yields a strong magnetic field, meaning that this star likely does not follow standard binary evolution (Shenar et al. 2023). In recent literature, there is growing evidence for stripped stars, but many of them are partially stripped OB-type giants of a few solar radii, often with a significant residual H-layer (\(X_{\rm H}>50\%\)) on their surface. These include originally proposed X-ray quiet black hole + Be binaries, such as LB1 and HR6819 (Liu et al. 2019; Rivinius et al. 2020), which were later disputed (e.g., Abdul-Masih et al. 2020; Bodensteiner et al. 2020a; Shenar et al. 2020a; El-Badry & Quataert 2021) or were revealed to be partially stripped star + Be binaries (Frost et al. 2022). Another similar object is HD 15124, which is currently undergoing mass transfer (El-Badry et al. 2022b). All these objects are in the Milky Way and have masses \(\lesssim 1.5\,M_{\odot}\). In contrast, Irrgang et al. (2022) reported \(\gamma\) Columbae to be a partially stripped pulsating core (\(\sim 4-5\,M_{\odot}\)) of a massive star that has a spectral appearance similar to that of a B subgiant but with altered surface abundances. However, there is no evidence of a mass-accreted secondary in this system. In the Large Magellanic Cloud (LMC), notable systems are NGC1850 BH1 (El-Badry & Burdge 2022; Saracino et al. 2023) and NGC 2004#115 (Lennon et al. 2022; El-Badry et al. 2022a), which are speculated to contain a black hole or low-mass stripped star. Growing observational evidence for these partially stripped stars compared to the apparent absence of fully stripped He stars in the intermediate-mass regime raises questions about our understanding of binary evolution. Recent evolutionary models by Klencki et al. (2022) suggest that binary evolution at low metallicity favors partial envelope stripping and slow mass transfer, leading to a large population of partially stripped donors. Due to their predicted surface properties, these systems are likely hiding among OB binaries as apparent main sequence (MS) or supergiant stars (e.g., Pauli et al. 2022). Identifying and characterizing such systems at low metallicity would yield sharp constraints on binary evolution and, in turn, on the origin of gravitational wave sources, ultra-luminous X-ray sources, stripped-envelope supernovae, and ionization in the early Universe. In this paper we report the first observational evidence of a partially stripped star + Be binary at low metallicity. ## 2 Observations In a previous study (Ramachandran et al. 2019) we analyzed the optical spectra of OB stars in the Wing of the SMC taken in 2010 with the Fiber Large Array Multi-Element Spectrograph (FLAMES) on the ESO Very Large Telescope (VLT). Three of the standard settings of the Giraffe spectrograph were used for this survey: LR02 (resolving power \(R=6000\), 3960-4567 Å), LR03 (\(R=7500\), 4501-5071 Å), and HR15N (\(R=19200\), 6442-6817 Å). Details of the observation, data reduction, and extraction of the spectra are described in Ramachandran et al. (2019). As a spectroscopic follow-up, we collected additional epochs in 2019 for most of this sample.
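As a back-of-the-envelope orientation (an editor's note, not a statement from the paper), the quoted resolving powers correspond to the following velocity widths of one resolution element, \(\Delta v=c/R\):

```python
# Velocity width of one resolution element, dv = c / R, for the Giraffe settings above.
C_KMS = 299_792.458  # speed of light [km/s]

for setting, R in [("LR02", 6000), ("LR03", 7500), ("HR15N", 19200)]:
    print(f"{setting}: dv ~ {C_KMS / R:.0f} km/s")
# LR02: ~50 km/s, LR03: ~40 km/s, HR15N: ~16 km/s
```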
In this work we carefully inspected the spectra of Be stars and other fast rotators in this sample and discovered that SMCSGS-FS 69 shows significant radial velocity (RV) variations up to 45 km s\({}^{-1}\) (see Fig. 1 and more details in Appendix A). Based on a single-epoch optical spectrum the star was initially classified as B0.5 (II)e (Ramachandran et al. 2019). With a detailed inspection of the spectra, we found that SMCSGS-FS 69 is a double-line spectroscopic binary, consisting of both broad- and narrow-line components (Fig. 2), indicating that it is a potential post-mass-transfer binary. In addition, high-resolution H-band (1.51-1.70 \(\mu\)m) spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey (Majewski et al. 2017) are available in the archive, which we used to study the Be companion. Furthermore, our target was observed with Gaia and has listed proper motions and parallaxes. The radial velocities from the optical spectra and the Gaia proper motions agree well with those of the SMC Wing. The negative values of the Gaia parallaxes further support that this object is not a foreground Galactic star. In addition to the spectra, we used various photometric data (from UV to infrared) from VizieR to construct the spectral energy distribution (SED). We also utilized data from the Transiting Exoplanet Survey Satellite (TESS) for this system. We extracted the light curves and found that the variability is not consistent with the orbital period, but rather indicates rotational modulations induced by the Be star (see Appendix B for details). ## 3 Analysis In the optical range the overall spectrum of SMCSGS-FS 69 resembles an early B-type supergiant except for the following: a) disk emission features in H\(\alpha\) and H\(\beta\); b) the presence of extended broad wings and narrow absorption in H\(\gamma\) and H\(\delta\); c) a combination of strong narrow absorption and weak broad components in multiple He i lines; and d) the strength of CNO absorption lines different from typical supergiant spectra. These features imply that the observed spectrum is a composite of a slowly rotating partially stripped B supergiant-like star and a fast-rotating MS star with disk emission (Fig. 2). All the metal lines are narrow, suggesting they are mainly from the stripped star. Although H\(\alpha\) is in emission, it is not clear whether this is entirely from the Be secondary or if there is a contribution from the stripped primary. Despite the low S/N, all the Brackett series lines in the APOGEE spectra (Fig. 3) display broad disk emission with multiple peaks, which are mostly from the secondary Be star. We performed the spectral analysis of SMCSGS-FS 69 using the PoWR model atmosphere code (see Grafener et al. 2002; Hamann & Grafener 2004; Sander et al. 2015, for more details). Initially, we chose PoWR SMC grid models (Hainich et al. 2019) as a starting point for the investigation and further computed additional models with tailored parameters for the primary and secondary. Figure 1: Radial velocity variation in Si iii\(\lambda\)4553 (left) and H\(\beta\) (right). The colors indicate the two observation epochs in Julian dates (see legend). Figure 2: Observed spectra of SMCSGS-FS 69 (blue solid lines) displaying narrow and broad components. The composite model (dashed red) is the weighted sum of the stripped star primary (dotted brown) and the rapidly rotating Be star secondary (dashed green) model spectra with effective temperatures of 24 kK and 28 kK, respectively. The weaker absorption core in H\(\gamma\) likely results from filled-in disk emission.
The full spectral and SED fit is shown in Fig. 11. In the spectral fitting method, we started with the analysis of the primary since it has a major contribution in the optical. First, we estimated the projected rotation velocity (\(v\sin i\)) of the narrow-lined primary from metal absorption lines. We used a combined Fourier transform (FT) and goodness-of-fit (GOF) analysis employing the iacob-broad tool (Simon-Diaz & Herrero 2014). We applied this method to several metal lines and found the overall mean to deduce \(v\sin i\) and macro-turbulent (\(v_{\rm mac}\)) velocities. Subsequently, these values, along with instrumental broadening, were used to convolve the model spectra to match the observations. The main diagnostic we use to constrain the temperature of the primary is the He and Si ionization balance based on He i/He ii and Si iii/Si iv line ratios. The pressure-broadened wings of the Balmer lines are the primary diagnostics for the surface gravity. We considered H\(\gamma\) and H\(\delta\) since they are less impacted by wind and disk emissions. However, in our case H\(\gamma\) and H\(\delta\) have contributions from both primary and secondary. Thus we simultaneously adjusted the luminosity ratios and surface gravities in the primary and the secondary models to match the observations. Since the ionization balance also reacts to gravity, we simultaneously re-adjusted \(T_{*}\) and \(\log g_{*}\) to achieve a good fit to the observed spectra. The final uncertainty in the primary parameters reflects the overall fit quality and is limited by the model grid resolution. Constraining the secondary parameters is challenging, but we find that assuming cooler temperatures results in stronger and broader components in the metal and He i lines, whereas hotter temperatures lead to pronounced He ii lines in the composite spectra. To account for the broad lines from the secondary, we had to use a very high \(v\sin i\). Although we initially used grid models computed with typical SMC abundances (Trundle et al. 2007), they do not satisfactorily reproduce the observed CNO lines. To match the observed strength of the CNO absorption lines, we had to drastically increase N and decrease C and O in the primary model. Most B supergiants in the SMC do not show such pronounced abundance variations. This is illustrated in Fig. 3, where we compare the optical spectra of SMCSGS-FS 69 to that of an SMC B supergiant. The N abundance is determined by analyzing multiple N ii and N iii lines, while the C and O abundances are only upper limits as most of the C and O lines are either very faint or within the noise. In addition to the CNO abundances, we varied the H mass fraction (\(X_{\rm H}\)) in the primary models between 0.5 and 0.73 and found that slightly He-enriched models better represent the observations. The remaining elements either had their abundance values scaled to one-fifth of solar or adopted typical SMC abundances derived from OB stars (Trundle et al. 2007). We also checked for the overall broadening of metal lines by varying the micro-turbulence \(\xi\) in the range of 10-20 km s\({}^{-1}\). Since a UV spectrum is not available for this object, we can only constrain the wind parameters (\(\dot{M}\) and \(v_{\infty}\)) from H\(\alpha\). The H\(\alpha\) profile shows a single but asymmetric emission peak (Fig. 12).
If this emission is only contributed by the primary stripped star, it can be modeled as a result of a strong and slow stellar wind (log \(\dot{M}\) \(\approx\) -6.2 and \(v_{\infty}\)\(\approx\) 600 km s\({}^{-1}\)), as illustrated in Fig. 13 (left). However, since infrared spectra (Fig. 14) clearly showcase multi-peak disk emission features, we cannot exclude disk emission components in H\(\alpha\). We can alternatively reproduce the asymmetric H\(\alpha\) profile with a combination of strong disk emission from the Be star and a weak absorption component (log \(\dot{M}\) = -7.2) from the stripped star. Consequently, the wind mass-loss rate could be lower or higher depending on the \begin{table} \begin{tabular}{l c c} \hline \hline & Stripped star & Be star \\ \hline \(T_{*}\) (kK) & 24\({}^{+2}_{-1}\) & 28\({}^{+2}_{-3}\) \\ \(T_{2/3}\) (kK) & 21\({}^{+2}_{-1}\) & 28\({}^{+2}_{-3}\) \\ \(\log g_{*}\) (cm s\({}^{-2}\)) & 2.65\({}^{+0.2}_{-0.1}\) & 3.7\({}^{+0.2}_{-0.2}\) \\ \(\log g_{2/3}\) (cm s\({}^{-2}\)) & 2.4\({}^{+0.2}_{-0.1}\) & 3.7\({}^{+0.2}_{-0.2}\) \\ flux \(f/f_{\rm tot}\) (V) & 0.65 & 0.35 \\ \(\log L\) (\(L_{\odot}\)) & 4.7\({}^{+0.1}_{-0.1}\) & 4.7\({}^{+0.15}_{-0.15}\) \\ \(R_{*}\) (\(R_{\odot}\)) & 13\({}^{+1}_{-1.5}\) & 8.7\({}^{+2}_{-1.5}\) \\ \(R_{2/3}\) (\(R_{\odot}\)) & 17\({}^{+1.5}_{-2}\) & 8.7\({}^{+2}_{-1.5}\) \\ \(v\sin i\) (km s\({}^{-1}\)) & 50\({}^{+10}_{-10}\) & 400\({}^{+100}_{-100}\) \\ \(v_{\rm mac}\) (km s\({}^{-1}\)) & 20\({}^{+20}_{-10}\) & 50 (fixed) \\ \(\xi\) (km s\({}^{-1}\)) & 12\({}^{+3}_{-3}\) & 14 (fixed) \\ \(X_{\rm H}\) (mass fr.) & 0.59\({}^{+0.1}_{-0.1}\) & 0.737\({}^{*}\) \\ \(X_{\rm He}\) (mass fr.) & 0.40\({}^{+0.1}_{-0.1}\) & 0.26\({}^{*}\) \\ \(X_{\rm C}/10^{-5}\) (mass fr.) & \(\lesssim\) 1 & 21\({}^{*}\) \\ \(X_{\rm N}/10^{-5}\) (mass fr.) & 120\({}^{+20}_{-20}\) & 3\({}^{*}\) \\ \(X_{\rm O}/10^{-5}\) (mass fr.) & \(\lesssim\) 7 & 113\({}^{*}\) \\ \(X_{\rm Si}/10^{-5}\) (mass fr.) & 11\({}^{+2}_{-2}\) & 13\({}^{*}\) \\ \(X_{\rm Mg}/10^{-5}\) (mass fr.) & 19\({}^{+3}_{-3}\) & 10\({}^{*}\) \\ \(E_{\rm B-V}\) (mag) & 0.13\({}^{+0.03}_{-0.03}\) & \\ \(M_{\rm spec}\) (\(M_{\odot}\)) & 2.8\({}^{+1.5}_{-0.8}\) & 17\({}^{+9}_{-7}\) \\ log \(Q_{\rm H}\) (s\({}^{-1}\)) & 47.3 & 47.5 \\ log \(Q_{\rm He\,H}\) (s\({}^{-1}\)) & 32.9 & 37.5 \\ \hline \end{tabular} * \({}^{(*)}\) Abundances of the Be star are adopted from Trundle et al. (2007) which corresponds to typical values for OB stars in the SMC \end{table} Table 1: Fundamental parameters and abundances derived for SMCSGS-FS 69 using spectroscopic analysis. Figure 3: Comparison of CNO lines in the spectra of SMCSGS-FS 69 (red) and an SMC B supergiant (AV 242) of similar \(T_{\rm eff}\) (black). The spectra demonstrate that carbon and oxygen are substantially reduced, while nitrogen is highly enriched in SMCSGS-FS 69. Be disk emission strength. To precisely constrain the wind parameters, it is necessary to obtain UV spectra and to disentangle the components using multi-epoch optical spectra. We determine the luminosity \(L\) and color excess \(E_{\rm B-V}\) by fitting the composite model SED to the photometry (first panel of Fig. 1). The model flux is diluted with the SMC Wing distance modulus of 18.7 mag (Cignoni et al. 2009). By fitting the normalized spectra, we are able to place constraints on the luminosity ratio, thus the SED fitting by composite model yields both primary and secondary luminosities. 
Both components were found to have the same luminosity, even though the primary stripped star contributes approximately 60-65% of the light in the optical range. ## 4 Results and discussion Our spectroscopic analysis reveals that while the estimated temperature and gravity of the primary are in the range of B supergiants, the luminosity, radius, and consequently mass are considerably lower. The spectroscopic mass of the primary (\(2-4.3M_{\odot}\)) is strikingly low compared to what is expected for an early B supergiant. Notably, the partially stripped star's He ii flux contribution is much smaller than expected from fully stripped stars (e.g., Gotberg et al. 2017), which is mainly due to the much lower temperature. The spectroscopic mass of the Be star secondary is less constrained (\(10-26M_{\odot}\)) but consistent with a MS star. A complete overview of the derived parameters of both the primary and the secondary is given in Table 1. In Fig. 4, we illustrate that, unlike other SMC B stars including supergiants, our primary star shows an extreme surface abundance pattern that cannot be explained via rotational mixing (e.g., as calculated in the tracks by Brott et al. 2011). The derived nitrogen abundance is a factor of 4 higher than for typical SMC B supergiants and ten times that for average B stars (Dufton et al. 2005; Hunter et al. 2007). On the other hand, carbon and oxygen are depleted by more than a factor of \(\sim\) 15 compared to typical SMC stars. While silicon abundances are in agreement, a slight enrichment in magnesium is detected. A comparison of the derived surface abundances with typical B stars and B supergiants in the SMC is given in the Appendix Table D.1. The unusual abundance pattern can be explained by the removal of external layers via mass transfer, exposing the inner CNO-processed layers of the star. This would be in line with the exceptionally high luminosity-to-mass ratio of the primary, which we illustrate in Fig. 5 and compare to relations of pure He stars and (early) MS stars. Notably, its luminosity (\(\log L/L_{\odot}\sim 4.7\)) is much higher than that of a pure He star of the same mass (\(\log L/L_{\odot}\sim 4.0\)), which suggests that most of the luminosity is produced in a leftover envelope layer via H-shell burning. Our primary is located in the upper left corner of Fig. 5, close to the Eddington limit, which is characteristic of stars that have lost a significant amount of mass via mass transfer. Altogether, these clues indicate a partially stripped star nature for the primary. In the Hertzsprung-Russell (HR) diagram in Fig. 6 we compare SMCSGS-FS 69 with the other stripped star candidates reported in the literature, including fully and partially stripped objects. The newly discovered SMCSGS-FS 69 stands out as the most luminous (and massive) of all and is the first one detected at SMC metallicity. The HRD in Fig. 6 shows again that its mass is much lower than that of a MS star with a similar luminosity (\(\sim 15M_{\odot}\)). To interpret the evolutionary status of the system, we employed the stellar evolution code MESA (Paxton et al. 2015, 2018, 2019) to compute a binary model for SMCSGS-FS 69 (see Appendix E for details). We found several solutions that are able to reproduce many of the observed parameters. Notably, based on a grid of \(\sim 2300\) binary evolution models, we find that the luminosity of the stripped primary can only be reproduced by models in which the current mass of the stripped star is \(\gtrsim 3M_{\odot}\).
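As a quick numerical cross-check (not part of the original analysis), the spectroscopic masses quoted above follow from the surface gravities and radii listed in Table 1 via \(M_{\rm spec}=g_{*}R_{*}^{2}/G\); a minimal sketch using the nominal Table 1 values:

```python
# Spectroscopic mass M = g * R^2 / G from log g [cgs] and R in solar radii.
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
R_SUN = 6.957e10  # solar radius [cm]
M_SUN = 1.989e33  # solar mass [g]

def spectroscopic_mass(log_g, radius_rsun):
    """Return the spectroscopic mass in solar masses."""
    return (10.0 ** log_g) * (radius_rsun * R_SUN) ** 2 / G / M_SUN

print(f"{spectroscopic_mass(2.65, 13.0):.1f}")  # stripped primary: ~2.8 M_sun
print(f"{spectroscopic_mass(3.70, 8.7):.1f}")   # Be secondary: ~14 M_sun, within the quoted 17 (+9/-7)
```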
The closest match can be obtained with a Case B model in which the primary has an initial mass of \(12.2M_{\odot}\) and is stripped via Case B mass transfer to produce a partially stripped star of \(4.6M_{\odot}\) (with a He core of \(3.6M_{\odot}\)). This is illustrated by the magenta line in Figures 7 and 3. The Case B model (marginally) matches most of the properties derived in our spectral analysis, except for the surface O abundance (which is a factor of \(\sim 3\) too large). In order to reproduce the luminosity of the Be star companion, we find that at least \(\sim 40\%\) of the mass transferred during a Case B interaction needs to be accreted by the secondary (see the example of an \(11.7M_{\odot}\) secondary accreting \(3M_{\odot}\) in Fig. 11). This may be in tension with models where the accretion efficiency is regulated via surface rotation of the accretor, which generally predicts negligible accretion during Case B evolution (Sen et al., 2022; Pauli et al., 2022), but may reach higher values depending on the assumed angular momentum budget (\(\sim 30\%\) in the model by Renzo and Gotberg, 2021). We find that an alternative solution in which no mass accretion is strictly required could possibly be obtained if the stripped star originates from a more massive primary (\(16.5M_{\odot}\)) that interacts already during the MS (i.e., Case A mass transfer), and we observe the stripped product while it is still core-H burning. This scenario is illustrated by the green line in Fig. 7 and in Fig. 2. The Case A model matches the observed surface abundances and rotation velocity of the stripped star very well, but is inconsistent regarding the current mass (by \(\gtrsim 4M_{\odot}\)) and surface gravity (by \(\gtrsim 0.45\) dex). While the determination of the surface gravity is affected by the rotationally broadened lines of the secondary, the discrepancy in \(\log g\) is so large that we consider the Case A scenario (under the current evolutionary calculation scheme) to be less likely. We estimate the relative rate of the two scenarios to be comparable in the population of the SMC within a factor of \(\sim\)2 (see Appendix E). Figure 4: N/C vs. N/O abundances of the stripped star (yellow star) compared to SMC B stars (triangles for giants and crosses for main sequence) from Hunter et al. (2009). The dashed lines are tracks from Brott et al. (2011) for the SMC with different initial rotations for \(M_{\rm ini}=12\,M_{\odot}\). The location of the stripped star at the top right corner demonstrates that its surface CNO pattern is too extreme to be explained by standard stellar evolution. Figure 5: Location of the SMCSGS-FS 69 components compared to mass-luminosity relations for pure He stars (green line) and early MS stars (\(X_{\rm H}=0.6\), blue line), from Grafener et al. (2011). The position of the partially stripped primary is shown by a filled star, while the Be secondary is shown by an open star along with their respective uncertainties. For comparison, Galactic hot subdwarfs in binaries (blue triangles) and partially stripped stars (red triangles) are indicated. The stripped primary is located in the upper left corner, which is a characteristic of a star that has lost most of its envelope via mass transfer.
Regardless of whether the stripped star in SMCSGS-FS 69 is a product of Case A or Case B mass transfer evolution, its pre-interaction mass of \(\gtrsim 12M_{\odot}\) guarantees that it sits firmly in the mass regime for the formation of neutron stars (NSs), making it the first such stripped star found to date. Our binary evolution models suggest that in \(1-1.5\) Myr the primary will explode as a type IIb/Ib supernova (with \(\sim 0.02M_{\odot}\) of H left at the surface) and form a NS. If the system remains bound, it will later evolve into a Be X-ray binary. The favored Case B model tentatively suggests a long orbital period of hundreds of days at this stage. Such wide Be X-ray binaries are thought to be the direct progenitors of common-envelope events, leading to double NS mergers (Tauris et al., 2017; Vigna-Gomez et al., 2020; Klencki et al., 2021). The properties of Be X-ray binaries in the SMC seem to point toward moderate accretion efficiencies in their prior mass transfer evolution (Vinciguerra et al., 2020; Igoshev et al., 2021), in agreement with our Case B evolutionary model for SMCSGS-FS 69. These considerations emphasize the significance of SMCSGS-FS 69 as a newly discovered intermediate evolutionary stage in the formation pathway of Be X-ray binaries and double NS mergers. The discovery of a massive stripped star + Be binary in a transition phase allows binary evolution to be constrained and hints at the existence of more such binaries at low metallicity. Moreover, SMCSGS-FS 69 can act as a template for identifying hidden systems in typical OB populations. These partially stripped transition stages are more luminous due to their H-shell burning and are visible in the optical due to the cooler surface temperatures. Even though the transition phase is short-lived (\(\lesssim 10\%\) of the He-burning lifetime), binary evolution models predict tens or more similar objects to be hiding among the known OB star population of the SMC (Klencki et al., in prep), motivating a large-scale search for binary-interaction products in this low-metallicity environment. The growing population of partially stripped stars, including SMCSGS-FS 69, further raises the question of why we discover systems during a short transition stage, but so far do not observe intermediate-mass stripped stars in their hot, compact stage. Evolutionary models usually predict that stripped stars settle at hotter temperatures, but with lower luminosities (see, e.g., Fig. 7). It is presently unclear whether these objects are just very hard to observe or whether this stage might not be regularly reached by binary evolution, contrary to current predictions. The presence or absence of such a compact stripped-star population will have a big impact on population synthesis predictions, for example, due to the different evolutionary fates and ionizing fluxes. ###### Acknowledgements. We thank the anonymous referee for useful suggestions. We would like to thank Ylva Gotberg and Cole Johnston for helpful discussions. VR and AACS are supported by the Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) in the form of an Emmy Noether Research Group - Project-ID 445674056 (S.A.406/1-1, PI Sander) and funding from the Federal Ministry of Education and Research (BMBF) and the Baden-Württemberg Ministry of Science as part of the Excellence Strategy of the German Federal and State Governments. JK acknowledges support from the ESO Fellowship.
DP acknowledges financial support by the Deutsches Zentrum für Luft- und Raumfahrt (DLR) grant FKZ 50OR2005. TS acknowledges support from the European Union's Horizon 2020 under the Marie Sklodowska-Curie grant agreement No 101024605. This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #512 "Multiwavelength view on massive stars in the era of multimessenger astronomy". Figure 6: Location of SMCSGS-FS 69 on the HR diagram compared to other stripped star candidates from the literature. Bonn SMC tracks are plotted for comparison (Brott et al., 2011). The symbols are the same as in Fig. 5. Figure 7: Two potential binary evolution pathways leading to the formation of a stripped star in SMCSGS-FS 69. The Case B mass transfer evolution roughly matches most of the derived surface properties, though it notably requires \(\gtrsim 40\%\) accretion efficiency to reproduce the secondary (Fig. 11). The Case A mass transfer evolution does not have that requirement, but it overpredicts the current mass of the stripped star.
2304.03437
Echo disappears: momentum term structure and cyclic information in turnover
We extract cyclic information in turnover and find it can explain the momentum echo. The reversal in recent month momentum is the key factor that cancels out the recent month momentum and excluding it makes the echo regress to a damped shape. Both rational and behavioral theories can explain the reversal. This study is the first explanation of the momentum echo in U.S. stock markets.
Haoyu Wang, Junpeng Di, Yuegu Xie
2023-04-07T01:34:43Z
http://arxiv.org/abs/2304.03437v1
# Echo disappears: momentum term structure and cyclic information in turnover ###### Abstract We extract cyclic information in turnover and find it can explain the momentum "echo". The reversal in recent-month momentum is the key factor that cancels out the recent-month momentum, and excluding it makes the "echo" regress to a damped shape. Both rational and behavioral theories can explain the reversal. This study is the first explanation of the momentum "echo" in U.S. stock markets. momentum echo, turnover, wavelet method. 1 Haoyu Wang, Xiamen University, School of Management, 422 S. South Road, Siming District, Xiamen, Fujian Province, China. Zip code: 361005. Email: [email protected]. Phone: +86 18217137360. ## 1 Introduction It has seemingly become a widely accepted empirical finding that momentum has an echo-like structure, indicating that the intermediate-month return is dominant in price momentum. Novy-Marx (2012) claims that firms' returns from 12 to 7 months prior to portfolio formation primarily drive momentum, while strategies based on recent past performance generate less profitable returns. This empirical finding leaves theoretical research in a confusing situation, because both rational and behavioral theories predict a damped momentum, and the echo-like structure is quite difficult to reach theoretically. Several possible explanations, like the 12-month effect (Jegadeesh, 1993; Heston and Sadka, 2008), earnings momentum (Chan et al., 1996), and disposition effects (Grinblatt and Han, 2005), fail to explain this phenomenon empirically (Novy-Marx, 2012). Consequently, this inconsistency between empirical findings and theory remains unresolved and is a central challenge to momentum research. We find the "echo" is caused by a combination of reversal effects in recent-month momentum. The reversals are agglomerated in high cyclic turnover stocks in short horizons (or scales) and cannot be detected using overall turnover. Excluding the reversals from momentum, the momentum "echo" regresses to a damped shape. The reversals, spanned by the Fama-French three factors plus the Pastor and Stambaugh (2003) liquidity factor, have two potential explanations: a rational one and a behavioral one. The rational one attributes the reversals to hedging market risk, because the reversals have significantly negative loadings on the market risk factor; relatedly, Barinov (2014) has claimed that aggregate volatility risk is embedded in turnover premiums. The behavioral explanation is more plausible: the reversals reflect a correction of overconfidence (Statman et al., 2006; Chou et al., 2013). Overconfident investors with good past performance trade more but realize their biased self-attribution when the price returns to average, and as the market performs better, their biased self-attribution becomes more
2306.16125
A Framework for Identifying Depression on Social Media: MentalRiskES@IberLEF 2023
This paper describes our participation in the MentalRiskES task at IberLEF 2023. The task involved predicting the likelihood of an individual experiencing depression based on their social media activity. The dataset consisted of conversations from 175 Telegram users, each labeled according to their evidence of suffering from the disorder. We used a combination of traditional machine learning and deep learning techniques to solve four predictive subtasks: binary classification, simple regression, multiclass classification, and multi-output regression. We approached this by training a model to solve the multi-output regression case and then transforming the predictions to work for the other three subtasks. We compare the performance of two modeling approaches: fine-tuning a BERT-based model directly for the task or using its embeddings as inputs to a linear regressor, with the latter yielding better results. The code to reproduce our results can be found at: https://github.com/simonsanvil/EarlyDepression-MentalRiskES
Simon Sanchez Viloria, Daniel Peix del Río, Rubén Bermúdez Cabo, Guillermo Arturo Arrojo Fuentes, Isabel Segura-Bedmar
2023-06-28T11:53:07Z
http://arxiv.org/abs/2306.16125v2
# A Framework for Identifying Depression on Social Media: MentalRiskES@lberLEF 2023 ###### Abstract This paper describes our participation in the MentalRiskES task at IberLEF 2023. The task involved predicting the likelihood of an individual experiencing depression based on their social media activity. The dataset consisted of conversations from 175 Telegram users, each labeled according to their evidence of suffering from the disorder. We used a combination of traditional machine learning and deep learning techniques to solve four predictive subtasks: binary classification, simple regression, multiclass classification, and multi-output regression. We approached this by training a model to solve the multi-output regression case and then transforming the predictions to work for the other three subtasks. We compare the performance of two modeling approaches: fine-tuning a BERT-based model directly for the task or using its embeddings as inputs to a linear regressor, with the latter yielding better results. The code to reproduce our results can be found at: [https://github.com/simonsanvil/EarlyDepression-MentalRiskES](https://github.com/simonsanvil/EarlyDepression-MentalRiskES) Mental Health, Natural Language Processing, Depression, Social Media, Machine Learning, Deep Learning, Transformers, Sentence Embeddings ## 1 Introduction Mental health is a growing concern in our society. According to the World Health Organization (WHO), 1 in 4 people will be affected by mental disorders at some point in their lives [1]. In addition, the COVID-19 pandemic has had a negative impact on the mental health of the general population, with an increase in the number of people suffering from mental disorders [2]. Thus, it is becoming increasingly important to evaluate the use of new technologies to assess the risk of mental illness and the healthcare needs of the population [3]. At the same time, social media platforms such as _Telegram_ have become a popular way for people to express their feeling and emotions. Telegram is a free, end-to-end encrypted messaging service that allows users to send and receive messages and media files in private chats or groups that can be focused on particular topics and allow any user to observe or actively participate. These characteristics make Telegram a suitable source for text-mining [4]. With this context, an interesting approach is to use Natural Language Processing (NLP) techniques to analyze the language used by people who suffer from mental illness and discover patterns that can be used to identify them and provide the necessary support. The MentalRiskES task at IberLEF 2023 [5] aims to promote the development of NLP solutions specifically for Spanish-speaking social media. They propose three main areas of focus for early-risk detection: eating disorders (Task 1), depression (Task 2), and non-defined disorders (Task 3). In this work, we present our proposed solution to Task 2 of the 2023 edition of MentalRiskES. This task involves evaluating the likelihood of a Telegram user experiencing depression based on their comments within mental-health-focused groups. The task is split into four predictive subtasks (2a, 2b, 2c, 2d) according to the type of output required. Our main contributions and findings can be then summarized threefold: 1. We conducted experiments using various language models based on BERT [6] to solve the task. 
We found that a RoBERTa model [7] that had been previously fine-tuned on a Spanish corpus to identify suicide behavior [8] tended to yield the most accurate results. This suggests that fine-tuning for an intermediate task can improve results for related tasks, which is supported by existing literature [9, 10]. 2. Our approach to solving the task consisted of training only with the labels of the regression subtasks (2b, 2d), as we deemed them the most informative. Additionally, we show that you can use the labels of 2d to recover the labels of the other three subtasks. The models trained to target task 2d achieved the best results across all subtasks, even outperforming those that targeted 2b in the simple regression metrics. 3. We attempted two different predictive modeling approaches to solve the task using the language model (LM) mentioned above. The first one involved extracting the _sentence embeddings_ of the messages of each user and using them as features to train and evaluate classic linear and non-linear machine-learning regressors. In the second one, we fine-tuned the LM directly for the subtask. The first approach proved advantageous in terms of allowing for quicker, more comprehensive experimentation and resulted in models that achieved the best overall performance when evaluated on the test set. The rest of the paper is organized as follows: In the next section, we analyze the dataset used for the task (Section 2). Then, we describe in detail our methodology for training and evaluating the models (Section 3). Finally, we discuss the results obtained (Section 4) and present our conclusions and future lines of work (Section 5). ## 2 Dataset Analysis The dataset given for the task consisted of a total of _6,248_ individual messages from 175 Telegram users, each with a variable number of messages (see figure 1). The annotation process consisted of labeling each user based on the evidence from their conversation history of suffering from depression. Thus, a total of 10 annotators were used for the tasks. Each was asked to assign one of the following four labels to each user: * **suffer+in favour**: Indicates evidence (from text messages) of the user suffering from depression but is also receptive/willing to help and overcome it. * **suffer+against**: Indicates evidence of the user suffering from depression but is against receiving or providing help to overcome it. * **suffer+other**: Indicates evidence of the user suffering from depression, but there's not enough information to assign them to any further category (against or in favour) * **control**: Indicates no evidence of the user suffering from depression. Furthermore, these labels were represented differently to support each of the four subtasks of MentalRiskES: simple classification (_task 2a_), binary regression (_task 2b_), multiclass classification (_task 2c_), and multi-output regression (_task 2d_). In the classification tasks (2a, 2c), the label assigned to each user was the class that obtained the majority vote from the annotators, with the labels being "1" for the "suffer" classes and "0" for the control in the case of task 2a. For the regression tasks (2b, 2d), the values of the labels were presented as numeric probabilities in \([0,1]\) representing the confidence of the respective class. They were calculated by adding the number of annotators who gave the classification and dividing by 10 (the total number of annotators). 
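To make the vote-to-label mapping concrete, here is a small hypothetical example (the votes below are invented for illustration):

```python
# Ten annotators vote on one user; each class label becomes its vote fraction.
from collections import Counter

classes = ["suffer+in favour", "suffer+against", "suffer+other", "control"]
votes = ["suffer+in favour"] * 6 + ["suffer+against"] * 2 + ["control"] * 2  # 10 votes

counts = Counter(votes)
probabilities = {c: counts.get(c, 0) / len(votes) for c in classes}
print(probabilities)
# {'suffer+in favour': 0.6, 'suffer+against': 0.2, 'suffer+other': 0.0, 'control': 0.2}
```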
For task 2b, this was presented as one number representing the probability of suffering from depression, while for task 2d, each subject label was presented with four numbers representing the probability of each class. Appendix A shows examples of how this data was given. The following figure displays the label distribution for each task in the training set. We can see how over 94 (\(\sim\)54%) of users were classified as having depression. Furthermore, there is an imbalance in the labels for the classification tasks due to the "suffer" label being divided into different categories (leading to an over-representation of the "control" label). Additionally, the "suffer+other" category is underrepresented when compared to the other three. Figure 1: Plot of the count of users by the number of messages in the training set and their assigned label. x-axis: number of messages per user. y-axis: count of users with that amount of messages. ## 3 Methodology We proceeded to evaluate different techniques to solve each of the four subtasks. Two main predictive-modeling approaches were explored: The first one involved fine-tuning a pre-trained language model on each subtask and the second was about training a standard ML regressor using sentence embeddings encoded from the user's messages as features. The following section describes the steps taken for each approach, first describing how the data was pre-processed and later explaining the training and evaluation process done for each subtask. ### Data Processing and Augmentation Independent of the approach taken to train the models, the data was pre-processed and augmented in the same way. The first thing we did was group all the messages by the user they belonged to and concatenated them into a single string, obtaining a total of 175 messages (one per user). This was done to obtain a single representation of each user's conversation history (from which the labels were assigned) to be able to use it as input for the models. To prepare for training, the data was split into training and validation sets, leaving a random 26 (15%) users in the latter for stratified cross-validation, where each set receives the same proportion of samples of each class [11]. The stratification was done using the labels of task c to ensure equal representation of the classes in both sets. To increase the amount of data available for training and, at the same time, attempt to model early detection (obtaining predictions early on in the lifetime of the message history), we augmented the training set by adding observations that only contained _half_ of their messages. Figure 2: Distribution of labels assigned to the users for each task. In tasks a and d: 1 = evidence of suffering from depression. Task d: The sum of the four labels adds up to 1. This was done by first sorting the messages of each user in the training set by its date and then only taking the first half, the resulting dataset was then appended to the original training set to obtain a new one with twice the number of observations to be used for training. ### Solving all substaks by solving for regression By the discussion in section 2, it should be clear to see that not all labels of the subtasks give the same amount of information about the condition of the subject and the likelihood of predicting it based on the available data. Indeed, it's clear that the probability values of task 2b give more information about confidence in predicting depression than the simple binary labels of task 2a. 
For the same reasons, the labels of task 2d are more informative than those of task 2c as they give the full probability distribution across the four classes. Furthermore, we can show that it's possible to use the multi-output regression labels (2d) to recover the labels of the other three subtasks. To illustrate, the multiclass classification labels of task 2c can be recovered by selecting the class in the distribution that has the highest probability. Moreover, we can obtain the labels of task 2a by simply converting these classes into binary (1 for the "suffer" classes and 0 for all others). Lastly, the labels of task 2b can be obtained by summing the probabilities of the "suffer" classes in the distribution. We have confirmed this by applying these modifications to the labels of the training set for task 2d and comparing them to the original labels of the other three tasks. This observation led us to consider using models that solve for more than one subtask by only training it with the labels of task 2b or 2d. This allowed us to reduce the number of models that had to be trained and focus on solving for a single data modality (regression on \([0,1]\)). We approached simple regression in a standard way training models, training models to minimize the Mean Squared Error between the output values and the real ones. Additionally, we included the post-processing step of clipping the output predictions of models of this type to the [0,1] range to ensure that they were valid probabilities. Multi-output regression using standard machine learning regression, on the other hand, wasn't as trivial as in the simple regression case. The models we worked on didn't support multi-output regression out of the box. The approach we did involve training four regressors for each model, one for each class, and then combining the predictions. We explored two methods for this: training independent regressors or training them in a chain as explained by figure 3. The full details of the process are described in appendix D. Finally, similar to the simple regression case, the predictions of the multi-output models were post-processed by dividing each of the four values by their sum to obtain a vector whose values add up to one. That is, \(\hat{y}_{i}=\frac{\hat{y}_{i}}{\sum_{i}\hat{y}_{i}}\) for each \(i\)-th class. This was done to ensure that the predictions were valid probability distributions over the classes. ### Modeling Approaches #### 3.3.1 Training a regressor with sentence embeddings A sentence embedding is a semantically meaningful real-valued vector representation of a sentence, obtained from the outputs of the hidden layers of a language model. The properties of this representation are so that sentences that express similar meanings are mapped (encoded) closer to each other in the vector space [13]. In this way, the process of encoding text as numeric vectors can be used directly to extract features for a classifier or regressor, which will try to learn from the semantic information of these encodings to predict the label of their corresponding messages. Note, however, that this approach requires the need to have a pre-trained model to perform this encoding. Furthermore, it assumes that the model will be good enough at capturing the semantic information of the texts given as input, enough for the classifier/regressor to learn from it. 
Assuming that this is the case, this approach has the advantage that it is much faster to train these kinds of regressors with regular CPUs, with the most time-consuming part being obtaining the embeddings of the training/evaluation messages, which only has to be done once. However, it is necessary to evaluate different encoding models and different classifiers/regressors (prediction models) to find the best combination for the task at hand. As such, we conducted experiments using different language models to find the best encoding model. Particularly, we tested three different versions of BERT [6] trained with different corpora in Spanish. These versions are described in table 1. Additionally, we experimented with over 10 different regressors, including Least Squares Linear regression [14], Random Forest [15], and Gradient Boosting [16], among others. These models were chosen due to their ease of implementation and the fact that they are commonly used in the literature [17]. The process of training and evaluating these models proceeded then as follows: First, the training set was encoded using the language model and the resulting embeddings were used as features for a regressor. The regressor was then trained using the labels of task 2d (the most informative ones) and the resulting model was used to predict the labels of the validation set. The predictions were then evaluated with the root mean squared error (RMSE). This process was repeated for each combination of language model and regressor. Appendix B contains the results of this experiment. Based on that, roberta-suicide-es was deemed to be the best model for encoding the texts. Additionally, appendix C shows a detailed report of the evaluation of the best regression model with these embeddings. Figure 3: Graphical representation of the two methods used to implement multi-output regression taken from [12]. In (a), the regressors are trained independently with the same input, while in (b) they are trained in a chain with the predictions of the previous ones being passed as features to the next. #### 3.3.2 Fine-tuning a Language Model for Regression Apart from the approach mentioned above, we also experimented with the pure Deep Learning (DL) approach of taking a language model and fine-tuning it with the labels of the corresponding subtask. The model we fine-tuned was a version of RoBERTa pre-trained for detecting suicidal behavior from texts in Spanish [8]. We chose this model due to the fact of having been trained previously for a task that shares similar characteristics to ours. Intermediate fine-tuning has been proven to improve the results of downstream tasks by prior literature [9, 10]. The HuggingFace Transformers [20] and Pytorch [21] libraries in Python were utilized for loading the model weights and implementing the training loop. We changed the head of the pre-trained model to a linear layer consisting of output dimension 1 for simple regression or dimension 4 for multi-output regression. The models were trained using an NVIDIA T4 GPU for a total of 30 epochs, where the weights of the pre-trained model remained fully frozen for the first half and then were progressively unfrozen each epoch after that as in [22]. We used an Adam Optimizer with Mean-Squared Error (MSE) for the simple regression models and a Cross-Entropy loss function for multi-regression (since the labels consisted of numeric probabilities). 
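A minimal sketch of this fine-tuning setup (an editor's illustration, not the authors' code; the checkpoint identifier is a placeholder and the gradual unfreezing is reduced to a single initial freeze):

```python
# Swap in a regression head on top of the pre-trained encoder and freeze the encoder.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "roberta-suicide-es"  # placeholder: substitute the actual Hugging Face id
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT,
    num_labels=1,                  # single real-valued output for task 2b
    problem_type="regression",     # built-in MSE loss on real-valued labels
    ignore_mismatched_sizes=True,  # the original classification head has another shape
)

# Keep the pre-trained encoder frozen at first; only the new linear head is updated.
for param in model.base_model.parameters():
    param.requires_grad = False
# In the schedule described above, encoder layers are unfrozen progressively in later epochs.
```

For task 2d, the head would instead have four outputs and the built-in regression loss would be replaced by the cross-entropy-style objective described next.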
Furthermore, since the output for task 2d consisted of a probability distribution over the four classes, we experimented with a custom loss function that adds a term to the standard cross-entropy loss to penalize outputs whose sum is different from one. However, this did not improve the results empirically as compared with simply normalizing the outputs of the predictions after inference. The formula of this loss is shown in equation 1. Other hyperparameters are shown in table 2. \[\mathcal{L}_{\text{custom}}=\mathcal{L}_{cross-entropy}+\epsilon(1-\sum_{i \in[1,4]}\hat{y}_{i})^{2} \tag{1}\] In the equation above, \(\hat{y}\) is the output of the model, \(y\) is the target label, \(\epsilon\) is a hyperparameter that controls the weight of the penalty term, and \(y_{i}\) is the \(i\)-th element of the target label. \begin{table} \begin{tabular}{c c} \hline \hline Model & Description \\ \hline RoBERTa-base-bne [7] & RoBERTa model [18] trained with data from Spain’s National Library. \\ RoBERTa-suicide-es [8] & RoBERTa-base-bne fine-tuned for suicide detection. \\ BETO [19] & Variant of BERT [6] trained with Spanish corpora. \\ \hline \hline \end{tabular} \end{table} Table 1: Pre-trained BERT-based models used in our experiments. \begin{table} \begin{tabular}{c c} \hline \hline Hyperparameters & Value \\ \hline Optimizer & AdamW \\ Learning rate & \(1e^{-5}\) \\ Max Tokens & 1024 \\ Num Epochs & 30 \\ Batch Size & 1 \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameters for fine-tuning a RoBERTa model for regression tasks. ## 4 Results Using the approaches mentioned in the prior section, we came up with different models to solve the four subtasks of Task 2 of MentalRiskES. The results in this section are obtained from selecting the best-performing models after evaluating the different approaches and hyperparameters on the validation set. The final predictions were obtained from a test set of messages from 149 subjects never observed during the training process and evaluated against the task's true labels. In the tables below, we report the relevant metrics obtained for each subtask and compare them against the ones obtained from baseline models provided by the organizers of the competition. In particular, we report both _absolute_ metrics, obtained after observing all the messages of each subject, and _early detection_ metrics, obtained after incrementally observing the messages across several rounds. Additionally, table 11 displays the inference-time CO\({}_{2}\) emissions and energy consumption of each model, based on computing their _absolute_ predictions on the test set. These values were estimated using the _codecarbon_ python library [23]. For the absolute metrics, we show the accuracy, precision, recall, and F1 scores for the classification tasks (2a and 2c) and the root mean squared error (RMSE) and coefficient of determination (\(R^{2}\)) for the regression tasks (2b and 2d). The early detection metrics include the _early-risk detection_ metric (erde) computed after observing different rounds of messages as well as other metrics (more details are provided in the competition guidelines [5]). The metrics are shown along with the name of the model used to obtain them. The models are named as follows: _[task name][model name][approach]_. For example, _task2b_roberta-suicide-es_fine-tuning_ refers to the model trained with the task 2b (binary classification) labels by fine-tuning the Roberta model pre-trained for suicide detection. 
The "_approach_" can be either _embeddings_ or _fine-tuning_ for the two approaches described in section 3. Furthermore, all ML regressors trained with embeddings as features were Ridge regressors, and all embeddings were obtained using roberta-suicide-es encodings as this combination yielded the best results in the evaluation set. The _embeddings_ approaches for task 2d also include the multi-regression method used (_ind_ indicating that independent regressors were used and _chain_ for chained regressors). ### Results for task 2a: binary classification \begin{table} \begin{tabular}{l c c c c} \hline \hline & accuracy & macro\_precision & macro\_recall & macro\_f1 \\ \hline **2d\_roberta\_embeddings\_ind** & **0.705** & **0.717** & **0.727** & **0.703** \\ BaseLine - Roberta Large & 0.698 & 0.759 & 0.718 & 0.690 \\ **2d\_roberta\_embeddings\_chain** & **0.691** & **0.711** & **0.755** & **0.682** \\ **2b\_roberta\_embeddings** & **0.691** & **0.713** & **0.764** & **0.681** \\ **2d\_roberta-suicide-es\_fine-tuning** & **0.671** & **0.695** & **0.764** & **0.655** \\ BaseLine - Deberta & 0.664 & 0.788 & 0.691 & 0.642 \\ **2b\_roberta-suicide-es\_fine-tuning** & **0.638** & **0.663** & **0.735** & **0.616** \\ BaseLine - Roberta Base & 0.631 & 0.744 & 0.658 & 0.605 \\ \hline \hline \end{tabular} \end{table} Table 3: Task A absolute Metric Results ### Results for task 2b: Simple Regression ### 4.3 Results for task 2c: Multiclass Classification ### Carbon Emissions ## 5 Conclusions The results show that the approaches considered in this work were successful at modeling each of the predictive subtasks, with at least one of our models outperforming the baselines in most cases. We can make the following observations: * The best-performing approach across all tasks seems to be the one that uses the embeddings of the messages as input to a multi-output regression model (task 2d). At least one model trained with this approach reached the top ranking for tasks 2a, 2b, and 2d absolute ranking metrics and outperformed the baseline absolute metrics across all tasks. * Most notably, the regression method that uses multi-output chained regressors obtained the best metrics for task 2d across all models, outperforming the fine-tuning approach by over 20% in the absolute metrics and reaching the second highest spot in the early-risk metrics for this task. * Models trained for multi-output regression perform very well for binary classification and simple regression tasks, even outperforming the models trained for simple regression targets in their own subtask. This suggests that using one model to solve for multiple targets was indeed a good approach to this problem. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & p@10 & p@20 & p@30 & p@5 & p@50 \\ \hline BaseLine - Deberta & 0.300 & 0.338 & 0.350 & 0.250 & 0.250 \\ **2d\_roberta\_embeddings\_ind** & **0.300** & **0.300** & **0.292** & **0.600** & **0.280** \\ BaseLine - Roberta Large & 0.275 & 0.263 & 0.275 & 0.350 & 0.350 \\ **2d\_roberta\_embeddings\_chain** & **0.275** & **0.275** & **0.250** & **0.550** & **0.240** \\ BaseLine - Roberta Base & 0.300 & 0.225 & 0.192 & 0.250 & 0.250 \\ **2d\_roberta-suicide-es\_fine-tuning** & **0.075** & **0.113** & **0.167** & **0.150** & **0.145** \\ \hline \hline \end{tabular} \end{table} Table 10: Task D ranking Metric Results: Ranked by p@30 \begin{table} \begin{tabular}{l c c c c} \hline \hline & duration (secs) & emissions (kgCO2eq) & cpu\_energy & ram\_energy \\ model & & & & \\ \hline **2b\_roberta-suicide-es\_fine-tuning** & **5.74** & **1.56e-06** & 7.97e-06 & 2.53e-07 \\ **2d\_roberta-suicide-es\_fine-tuning** & **7.393** & **2.00e-06** & 1.03e-05 & 2.58e-07 \\ **2b\_roberta\_embeddings** & 23.287 & 6.70e-06 & 3.23e-05 & 2.94e-06 \\ **2d\_roberta\_embeddings\_ind** & 23.976 & 6.94e-06 & 3.33e-05 & 3.25e-06 \\ **2d\_roberta\_embeddings\_chain** & 23.721 & 7.14e-06 & 3.29e-05 & 4.63e-06 \\ \hline \hline \end{tabular} \end{table} Table 11: Estimated CO\({}_{2}\) emissions of each model from predicting all messages on the test set (absolute). Estimations were obtained with the codecarbon python library [23] using a Macbook Pro (2021) w/ M1 Pro and 16GB of RAM for inference. The models with the lowest emissions are highlighted in bold. * The models obtained with a pure DL approach from fine-tuning a RoBERTa model are estimated to produce over 3-4x _less_ emissions at inference time than the hybrid approach from training linear regressors on sentence embeddings. This gap is likely because the fine-tuning approach requires less computation at inference time than the hybrid approach, which requires the computation of the sentence embeddings before feeding them to multiple regressors, while the fine-tuning approach is made in one forward pass. Another finding we can conclude from these insights is that while our models achieve great results in the absolute ranking metrics, they do not perform as well for the metrics that assess early-risk performance. In our work, we did not model explicitly for an early detection scenario; we only added information about prior messages through data augmentation. This limitation means our models may not perform as well in real-world situations where we aim to detect signs of depression in a conversation early on. Thus, it may be important to explore different training approaches to improve the performance of early-risk detection. This might include directly employing online learning to predict and update the model as new messages come in or incorporating an ensemble of models to make independent decisions about a message's risk level and combining them for a final decision (as seen in [23]). Additionally, we may also look into more efficient implementations of the hybrid approach to minimize the disparity in emissions compared to pure DL models. These improvements are crucial when considering the deployment of our models in real-world situations and will be the focus of future work.
2303.13683
OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resources constraints by decoupling the training and the searching stages. The computationally expensive process of training the OFA neural network is done only once, and then it is possible to perform multiple searches for subnetworks extracted from this trained network according to each deployment scenario. In this work we aim to give one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem. A Pareto frontier is then populated with efficient, and already trained, neural architectures exhibiting distinct trade-offs among the conflicting objectives. This could be achieved by using any multi-objective evolutionary algorithm during the search stage, such as NSGA-II and SMS-EMOA. In other words, the neural network is trained once, the searching for subnetworks considering different hardware constraints is also done one single time, and then the user can choose a suitable neural network according to each deployment scenario. The conjugation of OFA and an explicit algorithm for multi-objective optimization opens the possibility of a posteriori decision-making in NAS, after sampling efficient subnetworks which are a very good approximation of the Pareto frontier, given that those subnetworks are already trained and ready to use. The source code and the final search algorithm will be released at https://github.com/ito-rafael/once-for-all-2
Rafael C. Ito, Fernando J. Von Zuben
2023-03-23T21:30:29Z
http://arxiv.org/abs/2303.13683v1
# OFA2: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search ###### Abstract Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resources constraints by decoupling the training and the searching stages. The computationally expensive process of training the OFA neural network is done only once, and then it is possible to perform multiple searches for subnetworks extracted from this trained network according to each deployment scenario. In this work we aim to give one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem. A Pareto frontier is then populated with efficient, and already trained, neural architectures exhibiting distinct trade-offs among the conflicting objectives. This could be achieved by using any multi-objective evolutionary algorithm during the search stage, such as NSGA-II and SMS-EMOA. In other words, the neural network is trained once, the searching for subnetworks considering different hardware constraints is also done one single time, and then the user can choose a suitable neural network according to each deployment scenario. The conjugation of OFA and an explicit algorithm for multi-objective optimization opens the possibility of a posteriori decision-making in NAS, after sampling efficient subnetworks which are a very good approximation of the Pareto frontier, given that those subnetworks are already trained and ready to use. The source code and the final search algorithm will be released at [https://github.com/ito-rafael/once-for-all-2](https://github.com/ito-rafael/once-for-all-2). Neural Architecture Search, Hardware-aware NAS, AutoMI, Multi-objective Optimization ## I Introduction Designing and training a deep neural network requires some user expertise and is often one of the most time-consuming steps of the machine learning pipeline. The field of AutoML aims to automate this process and started gaining some attention in recent years [1]. Specifically, the neural architecture search (NAS) subfield attempts to solve the problem of designing and searching a neural network given the demands of a learning task. This helps to alleviate the neural network designer from hand-crafting the architecture in a usually tedious trial and error procedure, while saving computational resources. At the very beginning, the NAS algorithms were strictly focused on the automation process, leaving aside the computational resources concerns. In fact, the first NAS algorithms are supported by search strategies based on reinforcement learning [2][3] or evolutionary algorithms [4][5], and the search cost took thousands of GPU days. Only later, when gradient-based search strategies were introduced, the computational burden was finally addressed [6]. The search cost was reduced to less than 10 GPU days, making it much more accessible for researchers to contribute to the field by improving the algorithms and proposing new ones. However, with the recent progress in the IoT area and with the increasing amount of edge devices and edge computing, bringing machine learning into these devices may require the deployment of the application confined in different hardware settings, which is usually translated to hardware constraints such as latency, FLOPS and energy. 
Therefore, searching for an architecture for each specific device target leads to a computationally prohibited scenario, since training a single deep learning model can take weeks or even months to finish. Besides not scaling well with the number of deployment scenarios, each additional training also presents an environmental burden. In fact, using NAS to search for a Transformer-like architecture [7][8], a popular architecture in the Natural Language Processing (NLP) area, is reported to cause as much as 5 lifetime cars in CO\({}_{2}\) emission [9]. The Once-for-All (OFA) [10] is a NAS framework that Fig. 1: Once-for-All NAS framework overview attempts to address both the problem of having different deployment scenarios and the CO\({}_{2}\) emission. This is done by decoupling the training stage from the search stage. In other words, this framework trains a single supernetwork once, and then performs multiple searches of subnetworks nested inside the already trained supernetwork for distinct deployment requirements. Since the training stage is the most costly step in the NAS process and it is done a single time, this cost can be amortized by the number of deployment scenarios. Figure 1 illustrates the decoupling feature of the OFA framework. Although the search stage of the OFA framework is much cheaper than the training stage, it does not mean that the cost of this step is negligible. Moreover, the user must define a hardware constraint (latency or FLOPS) a priori to the search stage. This means that one search is required for each different deployment scenario, which is not very efficient. In this work we propose a multi-objective perspective to the search stage of the OFA framework, in such way that non-dominated neural networks for a representative amount of deployment scenarios are all produced by a single search procedure. This means that now the training stage of the OFA supernetwork is done once, but also the search stage of the OFA framework is done only a single time, and the output of this last stage now covers roughly all hardware trade-offs instead of a single constraint, as depicted in Figure 2. Furthermore, we conduct experiments with those non-dominated neural networks as components of an ensemble, and compare performance against the single architectures and ensembles formed by architectures found by the original OFA framework and architectures randomly sampled from the OFA search space (without any search procedure). We list the main contributions of this paper as follows: * We extend the Once-for-All framework by proposing a multi-objective perspective for its search stage and solve this multi-objective optimization problem by finding the optimal architectures for different hardware constraints (e.g.: latency or FLOPS) all at once. * We compare the performance of architectures randomly sampled from the OFA search space, architectures found by the original OFA search algorithm and our multi-objective optimization method, and explicitly show the benefits in both accuracy and usage of computational resources compared to the others. * We conduct experiments with ensembles and we empirically show that choosing efficient architectures to form the ensemble beats the performance of ensembles composed of random architectures and also the ensembles formed by architectures returned by the original OFA search. 
## II Methods Our work is an extension for the Once-for-All framework that aims to improve the search stage by selecting diverse architectures with better performance all at once. Since the training stage represents the step that demands most of the computational resources, we decided to take advantage of the fact that the OFA supernetwork was already trained and has its weights publicly available, and we focused on improving the search stage. More specifically, we formulated the search stage as a multi-objective optimization problem and solved it making the search more robust and leading to more efficient final architectures. Lately, we use the set of solutions obtained from the previous step to form ensembles and conduct a set of experiments comparing different scenarios. The dataset used for both the training and searching stages is the ImageNet [11], a standard dataset in the computer vision area that consists of 1,281,167 images in the training set and 50,000 images in the validation set, organized in 1,000 categories. Next, we describe the training stage of the OFA framework just for completeness, and then we report the contributions of this work, including the changes in the search stage and the employment of efficient and diverse candidate solutions as components of state-of-the-art ensembles. ### _Training stage_ The search spaces used in the NAS frameworks are typically divided into two categories: cell-based and one-shot. While the cell-based approach consists in finding two cell structures (normal and reduction) in the form of a directed acyclic graph (DAG) and lately stacking these cells multiple times to form the final neural network architecture, the one-shot approach heavily relies on weight sharing, meaning that when one model inside this search space is trained, all other models that share these parameters will also have their weights updated (they are the same after all). The OFA framework presents a one-shot search space during the training stage. The full OFA supernetwork is formed by 5 convolutional units placed in sequence. Each unit have 3 different parameters that can have their values changed: depth (number of layers) chosen from {2, 3, 4}, channel expansion ratio (number of channels) chosen from {3, 4, 6} and the convolution kernel size chosen from {3, 5, 7}. The fourth and last parameter Fig. 2: Comparison between the search stage of the OFA and OFA\({}^{2}\). that describes the search space is the resolution of the image that will be cropped from the original ImageNet and used as the input of the network. Its value is chosen from {128, 132,..., 224}, totalizing 25 different input resolutions. Table I summarizes the OFA search space and the possible options to each hyperparameter. With the choices available, the amount of different subnetworks inside the OFA search space is approximately \(10^{19}\). The first step is to train the OFA network at its full capacity, that is, each of the 5 units having 4 layers, 6 channels with convolutions kernels of size 7x7. The resolution is randomly sampled during all the training. Then, after training the full network, the next step is to fine-tune the parameters training the subnetworks nested in the OFA supernetwork. This is achieved with the _Progressive Shrinking_ (PS) algorithm, introduced alongside the Once-for-All framework. The PS algorithm can be viewed as a general pruning technique, but instead of shrinking only a single dimension as it is usually done, it shrinks 4 dimensions gradually and in a sequence. 
It starts reducing the kernel size of the convolutional filters from 7x7 to 5x5, and then from 5x5 to 3x3, nesting the 3x3 kernel at the center of the 5x5 kernel, which is nested at the center of the 7x7 (elastic kernel size). Then skip connections are added to decrease the number of layers inside a unit (elastic depth), and finally it sorts the filters by its importance calculating the L1 norm of each channels' weights (elastic width). The main concepts are based on weights sharing, such that when the weights of a 3x3 convolutional kernel is being updated it also updates the 5x5 and 7x7 kernels, and also on the idea of sorting the subnetworks according to both accuracy and computational resources. Figure 3 illustrates the progressive shrinking algorithm. The shrinking process is completely done in one hyperparameter at a time before moving to the next one (vertical axis of Figure 3) and always starts with the hyperparameters options that makes the neural network larger towards smaller architectures (horizontal axis of Figure 3). For example, with respect to the depth, the network is first trained with 4 layers, then with 3 and lately with 2. Please refer to the original paper for more detailed information about the training. A final remark is related to the _neural-network-twins_. Since evaluating the architecture accuracy on the validation set of the ImageNet dataset and measuring the latency of a model in a specific hardware can be expensive, the authors built an accuracy and a latency predictor as substitutes to speed up the search stage. The accuracy predictor is a three-layer feedforward neural network with 400 hidden nodes in each layer and the latency predictor is built as a lookup table [12] one for each different hardware being considered. ### _Search stage_ The original framework performs an evolutionary search on the trained OFA network to find an architecture with the meeting requirements based on an input provided by the user. This means that the resource constraints must be defined a priori to the search process and that the output of the framework is a single architecture. Here we propose the search process in a multi-objective perspective [13], where the goal is to minimize two conflicting objective functions that represent the top-1 error and a hardware constraint (latency or FLOPS). We tested two multi-objective optimization (MOO) algorithms in our experiments, both being evolutionary and population-based: the NSGA-II [14], which uses the concepts of non-dominated sorting and crowding distance sorting to solve the problem, and the SMS-EMOA [15], which uses the hypervolume metric to perform the evolution, besides the non-dominated sorting concept as well. The whole problem was modeled using a multi-objective optimization framework called pymoo [16], which uses the NumPy library as its backend [17]. For the evaluation of the neural network architectures, a Titan X GPU with 11 GB of memory was used and the code was implemented using the PyTorch framework [18]. #### Iii-B1 Genotype The first step to solve the search stage as a MOO problem is to define the genotype encoding that will be used by the evolutionary multi-objective algorithm (EMOA) to represent an individual of the population, where each individual represents a full neural network and the population Fig. 3: Progressive Shrinking (PS) algorithm. 
\begin{table} \begin{tabular}{|c c c c|} \hline PS & property & options & \# choices \\ \hline elastic resolution & input resolution & {224,..., 128} & 25 \\ elastic kernel & kernel size & {3, 5, 7} & 3 \\ elastic depth & layers & {2, 3, 4} & 3 \\ elastic width & channels & {3, 4, 6} & 3 \\ \hline \end{tabular} \end{table} TABLE I: OFA search space is just a set of neural networks considered at each iteration of the algorithm. For this, we simply flatten the hyperparameters used to represent an architecture from Table I in a one dimensional array. Since the final architecture is formed by 5 units in sequence and each unit may have up to 4 layers, we have a total of 20 entries to represent the convolutional kernel size (ks) and 20 entries to represent the width (w). We also have 5 entries related to the depth (d) of each unit showing the number of layers considered, and one last entry informing the resolution (\(x\)) of the image that will be cropped from the dataset and used as the input of the neural network. This gives us a total of 46 variables for the encoding of each individual that represents a full neural network. Figure 4 illustrates this representation of an individual. Given the encoding of an individual, in order to build the full architecture associated with that individual, the first step is to separate each variable according to the hyperparameter it represents, following the scheme depicted in Figure 4. Figure 4 illustrates a numerical example of an individual and Figure 4 shows the respective division of its values according to the hyperparameters associated. After that, we can group these hyperparameters per unit. If the depth of the unit being considered is equal to 4, i.e. the unit has 4 layers, then all 4 values of the kernel size and all 4 values of the width are valid and will be used to build the model. If the depth of the unit is equal to 3 layers, then we simply discard the last value of both the kernel size and width. If the depth is equal to 2, then we discard the last 2 entries of both kernel size and width. Figure 4 illustrates this procedure. Finally, with all hyperparameters values of the 5 units, we can build the full architecture, as shown in Figure 4. #### Iv-B2 Objective Functions The objective functions we want to optimize is the top-1 accuracy (maximize) and another conflicting objective, represented by a resource constraint, such as latency of FLOPS (minimize). For this, we use the accuracy and latency predictors provided by the OFA framework. That is, given the encoding of an architecture we use the accuracy predictor (the same regardless the hardware) to estimate the model accuracy, and we use specific latency lookup table for each hardware to get the predicted latency. There are a few hardware options available to choose the latency lookup table, such as the Google Pixel 2 mobile phone, the NVIDIA GTX 1080 Ti GPU, CPU or even FLOPS. We chose the Samsung Galaxy Note 10 for our experiments. After obtaining the final population of architectures using a MOO algorithm with these predictors, we took each of the final models and performed an evaluation under the validation set of the ImageNet. During the evaluation, the resolution of the squared image was randomly chosen from {160, 176, 192, 208, 224}. #### Iv-B3 Operators Next, we define the four operators that will take place on the individuals during the iterations of evolutionary algorithms: sampling, mutation, crossover and selection. 
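Before detailing the operators, the decoding scheme described above (Figure 4) can be made concrete with a short sketch. This is an illustration under our own naming, not the released code: 46 genes are split into 20 kernel sizes, 20 expansion ratios, 5 depths and 1 resolution, and the unused layer entries are discarded according to each unit's depth.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Subnet:
    kernel_sizes: List[List[int]]  # per unit, one kernel size per active layer
    widths: List[List[int]]        # per unit, one expansion ratio per active layer
    depths: List[int]              # number of layers of each of the 5 units
    resolution: int                # input image resolution

def decode(individual: List[int]) -> Subnet:
    """Decode a 46-gene individual: genes 0-19 kernel sizes, 20-39 widths, 40-44 depths, 45 resolution."""
    ks, w = individual[0:20], individual[20:40]
    depths, resolution = individual[40:45], individual[45]
    kernel_sizes, widths = [], []
    for unit in range(5):
        d = depths[unit]
        # Keep only the entries of the active layers; the rest are discarded when depth < 4.
        kernel_sizes.append(ks[unit * 4:unit * 4 + d])
        widths.append(w[unit * 4:unit * 4 + d])
    return Subnet(kernel_sizes, widths, depths, resolution)
```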
The first three operators are the same in both multi-objective optimization algorithms used to find the final architectures of the OFA search stage (NSGA-II and SMS-EMOA, discussed in the next section). The sampling operator is related to the initialization of the algorithm, that is, it defines the initial population of individuals at iteration zero. In our case we simply used random sampling, meaning that the values for the depth, width and kernel size were randomly chosen among their respective valid choices (d \(\in\) {2, 3, 4}, w \(\in\) {3, 4, 6}, ks \(\in\) {3, 5, 7}).
Fig. 4: Genotype encoding used to represent an individual of the population.
Fig. 5: Encoding of an individual of the population.
The mutation operator is used to promote diversity among the solutions, which might prevent the algorithm from getting stuck in a local minimum. In our experiments we defined that each gene of the chromosome has a 10% probability of having its value replaced by one of the valid choices (including the same value), which is the same probability used by the evolutionary search proposed in the OFA framework. The crossover operator, also known as recombination, takes two solutions as parents and combines them to generate a child solution. Here we chose the uniform crossover, which means that the value of each gene of the child solution is randomly taken from one of the parent solutions with equal probability. Finally, the selection operator defines a criterion for choosing the individuals of the current population that will be used to generate the offspring, that is, the next generation of individuals. It usually incorporates the fitness function (objective functions) somehow, and in our case this operator is implicitly defined according to which of the two MOO algorithms is used during the evolutionary search. #### II-B4 MOO Algorithms The first MOO algorithm considered is the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [14]. In this algorithm, the selection operator first chooses individuals based on the rank of their non-dominated front. When the number of individuals already selected plus the number of individuals in the front currently being considered exceeds the previously defined population size, the individuals of this last front are selected based on their crowding distance (Manhattan distance in the objective space). The selection based on the rank of the front always favors the best solutions with respect to the objective functions, while the crowding distance selection aims to spread the solutions toward less explored regions. In our experiments, we defined the population size to be 100 and ran the algorithm for 1,000 generations. The second MOO algorithm considered is the SMS-EMOA [15]. This algorithm guides the search aiming to maximize the hypervolume measure (or s-metric), which is commonly applied to compare the performance of different evolutionary multi-objective optimization algorithms (EMOA). This algorithm combines both the concept of non-dominated sorting and the hypervolume measure as the selection operator. Similarly to the previous scenario, we used a population size of 100 and ran this algorithm for 1,000 generations. It is important to note that other MOO algorithms could have been used in this stage. ### _Ensemble_ After the search stage we end up with a final population in which each individual represents an efficient architecture with a different trade-off between the objective functions, i.e., accuracy and latency.
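Before describing the ensemble experiments, the search stage outlined above can be sketched with pymoo as follows. This is an illustrative setup under our own assumptions: `accuracy_predictor` and `latency_predictor` stand in for the OFA neural-network-twins, and the integer handling is simplified.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class OFASubnetSearch(ElementwiseProblem):
    """Two minimization objectives: predicted top-1 error and predicted latency."""

    def __init__(self, accuracy_predictor, latency_predictor):
        # 46 genes; each gene is an index into the valid choices of its hyperparameter
        # (3 options for depth/width/kernel entries, 25 for the resolution entry).
        xu = np.array([2] * 45 + [24], dtype=float)
        super().__init__(n_var=46, n_obj=2, xl=np.zeros(46), xu=xu)
        self.accuracy_predictor = accuracy_predictor  # assumed callable: genes -> top-1 accuracy (%)
        self.latency_predictor = latency_predictor    # assumed callable: genes -> latency (ms)

    def _evaluate(self, x, out, *args, **kwargs):
        genes = np.round(x).astype(int)
        out["F"] = [100.0 - self.accuracy_predictor(genes), self.latency_predictor(genes)]

# problem = OFASubnetSearch(accuracy_predictor, latency_predictor)
# result = minimize(problem, NSGA2(pop_size=100), ("n_gen", 1000), verbose=True)
```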
We then perform a series of experiments grouping different individuals to form ensembles. There are three scenarios considered for the experiments, regarding the pool of neural architectures from which the components of the ensemble are taken. #### II-C1 Random components This group of architectures consists of 100 neural networks sampled from the OFA search space without any search procedure. We randomly choose the value of each gene (depth, kernel size, expansion ratio) among its valid options. The procedure to get these architectures is the same as the one used to define the initial population in the MOO algorithms. #### II-C2 OFA components The second group of architectures is the one obtained from the evolutionary algorithm of the original OFA framework. Since this search is guided by a specific hardware constraint, we need to perform a full search for each requested architecture, resulting in 9 runs of the search algorithm for the 9 architectures in this group. The restrictions used as input of the searches are latencies from 15 ms up to 55 ms, increasing by 5 ms at each step. #### II-C3 Efficient components The last set of architectures considered in the ensemble experiments are the architectures obtained from the MOO algorithm, which optimizes both accuracy and latency. Since the evolutionary algorithm is population-based, all architectures are found at once at the end of the search procedure. Each of them is optimal considering a trade-off between the conflicting objectives. #### II-C4 Voting schemes In order to evaluate the performance of the ensemble, we use two voting schemes. In the first one, called "hard voting", the output of the ensemble is decided according to the most-voted top-1 class among the participants of the ensemble. If there is a draw in votes between two or more classes, we check the occurrences of these classes on the second most likely output of each model (top-2 output) and decide by the most frequent. If there is still a draw in votes, we keep checking the top-3 up to the top-5 output. After that, if the draw still exists, we look at the architectures that voted for the classes with the same number of votes on the top-1 output and choose the output of the biggest model to be the output of the ensemble (according to the premise that larger models have more flexibility and might be more reliable than smaller models). In this scheme, the output of each neural network has the same importance, regardless of how certain the model is about its output.
Fig. 6: Comparison between the NSGA-II and SMS-EMOA final population.
The second voting scheme also takes into account the probability assigned to each class in the output. For this, we take the last layer of each neural network and append a softmax layer straight after it. To decide the output of the ensemble, we sum the probabilities for each class among all participants of the ensemble, and take the class with the highest accumulated value. This helps to alleviate one problem of the hard voting scheme, which is the fact that a vote from a model with low confidence in its output has the same weight as a vote from a model with high confidence. This second voting scheme, called "soft voting", provides a way to weight the vote of each architecture according to the confidence of the model in its output class, which can be beneficial in some cases.
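The two voting schemes can be summarised with the sketch below. It is a simplified illustration: the hard-voting tie-break only goes one level deep here, whereas the procedure described above checks up to the top-5 outputs and falls back to the largest model.

```python
import numpy as np

def soft_vote(probs: np.ndarray) -> int:
    """probs: (n_models, n_classes) softmax outputs; sum the confidences and pick the best class."""
    return int(np.argmax(probs.sum(axis=0)))

def hard_vote(probs: np.ndarray) -> int:
    """Majority vote over top-1 predictions; break ties with the top-2 counts (simplified)."""
    top1 = probs.argmax(axis=1)
    counts = np.bincount(top1, minlength=probs.shape[1])
    tied = np.flatnonzero(counts == counts.max())
    if len(tied) == 1:
        return int(tied[0])
    # Count how often each tied class appears among the two most likely outputs of every model.
    top2 = np.argsort(probs, axis=1)[:, -2:]
    return int(max(tied, key=lambda c: np.sum(top2 == c)))
```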
## III Results and Discussion ### _Searching for efficient architectures_ To solve the multi-objective optimization problem formulated for the search stage of the Once-for-All framework, we proposed using two MOO algorithms: NSGA-II and SMS-EMOA. Figure 6 shows the final architectures found by each of these algorithms after the 1,000 generations. We can see that the final population approximates the typical Pareto front for MOO problems with two conflicting objective functions. Moreover, it appears that while the SMS-EMOA finds slightly better architectures with lower latency, the NSGA-II finds better architectures from the middle up to the end of the latency axis. Apart from the region with higher latency, though, the predicted accuracies found by both algorithms are very similar. In Figure 7 we plot the progression of the hypervolume measure (s-metric) of the NSGA-II and the SMS-EMOA across the 1,000 generations. This metric is commonly used to compare the performance of different evolutionary multi-objective optimization algorithms (EMOA) and requires a reference point in the objective space to be calculated, which for the figure in question is the point (100, 25). Since the NSGA-II presented a higher value for the hypervolume measure after the 1,000 generations of the evolutionary search, we decided to use the population found by this algorithm for the following experiments with the ensembles.
Fig. 7: Comparison between the NSGA-II and SMS-EMOA hypervolume.
Figure 8 shows the progression of the populations across the iterations of the algorithm. We can see that the individuals of the initial population (red) are spread across the objective space, which is understandable, since these individuals are randomly sampled from the OFA search space. Even though they are far from being efficient in terms of both accuracy and latency compared to the optimal solutions, over the generations they progressively approximate the Pareto front, as we can see for the individuals after 10 (green), 100 (orange) and 1,000 generations (blue). This indicates that even though the neural networks of the OFA search space have their weights already trained, a search is indeed needed to retrieve efficient architectures.
Fig. 8: Progression of the solutions for the NSGA-II MOO algorithm.
Next, in Figure 9 we show the comparison between the 100 individuals of the final population searched with the NSGA-II algorithm in a MOO approach, the 100 individuals randomly sampled from the OFA search space and the 9 individuals obtained with 9 runs of the original evolutionary algorithm of the OFA framework for 9 different latency constraints (15 ms to 55 ms, with a step of 5 ms). The advantage of posing the search process as a multi-objective optimization problem is clear here, since all solutions are found in a single search, giving the user the power to make a choice a posteriori on which architecture to select. The performance predictors are useful to speed up the search process. However, if the predictions are not accurate, the resulting architectures of the search process will not present the same results during inference. In order to check the reliability of the accuracy predictor, we took each of the architectures shown in Figure 9 and measured the real accuracy on the validation set of the ImageNet. The results are shown in Figure 10. We can see that although the top-1 % accuracies are not exactly the same, the shape of the curve is, which means that the accuracy predictor works well apart from a constant offset.
Luckily, the domination concept used by the MOO algorithms are not dependent on the absolute values with respect to the objective functions, relying instead on a comparative approach, which means that as long the shape of the curves on the objective space is the same, we should not have any problem with these types of evolutionary algorithms. ### _Ensemble_ With the architectures represented in Figures 9 and 10, we now perform a series of experiments grouping these neural networks to form ensembles and to compare the impact of each search method on the results of the committee machines. For each population we realize experiments considering both the hard and soft voting mechanisms. Furthermore, instead of taking only the accuracy of the ensemble into account, we also consider two approaches regarding the latency metric of the ensembles. In the first approach, the latency of the ensemble is defined to be the sum of all architectures participating in the ensemble. This strategy is based on the premise that we have a single hardware with limited amount of memory to implement the ensemble, and therefore we need to load each model one at a time to evaluate its performance. In the second approach, the latency of the ensemble is equal to the model's latency that has the highest value among those networks participating of the ensemble. This strategy is based on the premise that parallelization is viable, and therefore all models can be evaluated simultaneously. This, of course, requires a limited amount of models in the ensemble, due to memory scaling. As a consequence, we test ensembles with the number of components varying from 2 up to 8 (totalizing 7 different ensemble size) due to this limitation. #### Vi-B1 Efficient components The first group of architectures considered in our experiments with ensemble are those of the last population obtained from the NSGA-II search described before, and we call it the efficient population. We then propose two different sampling techniques to form the ensembles. For both of them, the first step is to sort the neural networks of this population by latency, which means that the model with index 0 is the model with the lowest latency (most left model in Figure 10) and the model with index 99 is the one with the highest latency (most right model in Figure 10). In the first method, we sample 43 random different combinations of these architectures to make up the committee. We start with ensembles containing 2 architectures, then we sample more 43 random different combinations for ensembles with size 3, so on and so forth, up to ensembles composed by 8 neural networks. This method will be used as a comparison to the performance of the ensembles with components from the random architecture's population. The second approach uses a subset of the efficient population and aims to mimic the architectures found by the OFA search. For that, we take the already sorted by latency architectures of the efficient population and choose the architectures immediately before the latency constraint of 15 ms, then 20 ms, up to 55 ms. This approach will be put side by side of the ensemble formed with the architectures found with the OFA search. Figure 11 shows all non-dominated architectures for the first method considering the sum of latencies and the soft voting scheme. #### Vi-B2 Random components The experiments with ensembles formed by the population of random architectures follow the first method of the ensembles with efficient architectures. 
That is, we take 43 different combinations of architectures for each of the ensembles with 2, 3,..., 8 models in their composition, out of the 100 models available. All of this is done once for the hard voting scheme and once for the soft voting scheme, both with the sum and the maximum of the latencies in the ensemble. Figure 12 shows an example of a comparison between the ensembles with efficient (blue) and random (green) architectures, using the soft voting scheme and considering the highest latency on the committee. We can see that the ensembles found using the efficient architectures consistently outperform the ones found with random architectures. This same statement is valid for roughly all other experiments related to these two populations.
Fig. 9: Comparison between the different search methods using the accuracy predictor.
Fig. 10: Comparison between the different search methods considering the real accuracy evaluated on the ImageNet validation set.
#### III-B3 OFA components The experiments with ensembles formed by the architectures found by the OFA search follow the second method of the ensembles with efficient architectures. That is, we take 43 different combinations of architectures for each of the ensembles with 2, 3,..., 8 models in their composition, out of the 9 models available. Figure 13 illustrates a comparison between the ensembles of size 6, using the hard voting scheme and taking the sum of the latencies of all models in the committee. Again we see that the ensemble with efficient architectures outperforms the ones with the architectures from the OFA search. ## IV Conclusion We introduced OFA\({}^{2}\), a technique that extends the OFA framework by formulating the search stage as a multi-objective optimization problem. Solving this problem from the multi-objective perspective can be seen as decoupling the previously joint search and deployment stages. While in the original OFA framework the user must make a decision prior to the search with respect to the desired hardware constraint, this decision can now be postponed and made a posteriori. Instead of searching for a unique architecture, we now get a pool of non-dominated solutions all at once at the end of the search procedure, each of them with a unique trade-off regarding the conflicting objective functions (accuracy and latency). Besides improving the efficiency of the search stage, our method is also capable of finding better architectures with respect to the accuracy evaluated on the ImageNet dataset. Furthermore, we conduct a series of experiments related to the selection of architectures to form ensembles. We show that employing these efficient and diverse candidate solutions as components for ensembles may further improve the overall performance.
2303.02956
Fault Awareness in the MPI 4.0 Session Model
The latest version of MPI introduces new functionalities like the Session model, but it still lacks fault management mechanisms. Past efforts produced tools and MPI standard extensions to manage fault presence, including ULFM. These measures are effective against faults but do not fully support the new additions to the standard. In this paper, we combine the fault management possibilities of ULFM with the new Session model functionality introduced in version 4.0 of the standard. We focus on the communicator creation procedure, highlighting criticalities and proposing a method to circumvent them. The experimental campaign shows that the proposed solution does not significantly affect applications' execution time and scalability while better managing the insurgence of faults.
Roberto Rocco, Gianluca Palermo, Daniele Gregori
2023-03-06T08:01:31Z
http://arxiv.org/abs/2303.02956v1
# Fault Awareness in the MPI 4.0 Session Model ###### Abstract. The latest version of MPI introduces new functionalities like the Session model, but it still lacks fault management mechanisms. Past efforts produced tools and MPI standard extensions to manage fault presence, including ULFM. These measures are effective against faults but do not fully support the new additions to the standard. In this paper, we combine the fault management possibilities of ULFM with the new Session model functionality introduced in version 4.0 of the standard. We focus on the communicator creation procedure, highlighting criticalities and proposing a method to circumvent them. The experimental campaign shows that the proposed solution does not significantly affect applications' execution time and scalability while better managing the insurgence of faults. HPC, MPI Sessions, ULFM, Fault Tolerance
This reduces the coordination need, making the Session approach more scalable and a better fit for the exascale scenario. While the World model initialisation process happens entirely through the MPI_Init call, the Session model requires a more complex procedure. Figure 1 summarises the Session model creation flow.
Most of the functions are local and thus require no communication, except for the MPI_Comm_create_from_group call, which needs participation from all the processes part of the future communicator. After the communicator generation, the application operates analogously to the World model. ### The User Level Fault Mitigation extension The MPI standard does not specify the execution behaviour after the incurrence of a fault, practically precluding any fault management technique. Many efforts proposed solutions to circumvent this limitation, with ULFM (Bordes et al., 2017) being the one currently receiving the most attention. It is an MPI standard extension that allows the execution to continue even if a process stops working and introduces calls to propagate faults, remove them and reach an agreement even in a faulty scenario (with the _revoke_, _shrink_ and _agree_ functions respectively). However, faults can impact the execution's correctness, requiring additional recovery techniques. ULFM considers faults as an abrupt and unplanned termination of computation. Other fault types, like silent data corruption and timing errors, usually require ad-hoc measures and are thus excluded from the ULFM analysis. Moreover, ULFM does not provide any recovery mechanism, expecting application developers to choose it depending on their needs. The ULFM fault management features apply from communicator creation to its destruction. This assumption makes the ULFM approach viable in the Session model since the behaviour after communicator creation is the same as the World one. However, ULFM does not specify the application behaviour if faults happen before and during the MPI_Init call. This limitation does not heavily impact the World model since no MPI routine is callable before initialisation, so applications perform it as soon as possible. On the other hand, due to the added flexibility from the Session model, we can avoid calling the MPI_Init function or invoke the Session initialisation procedure multiple times and further in the execution, extending the period for a possible undefined behaviour. In this work, we try to enlarge the ULFM-defined behaviour, analysing the Session model initialisation phase. ### Related work Aside from the ULFM extension, the other main direction in the MPI fault tolerance field is the Reinit proposal (Bordes et al., 2017; Bordes et al., 2017). Its main feature is the possibility to perform the MPI_Init function multiple times. In case of failure, no repair process is necessary since re-initialising the communication is sufficient. The approach provided by the Reinit solution is similar to the multiplicity of the sessions but focuses only on the World model and would bring no additional benefit in the Session one compared to ULFM. Out of all the efforts leveraging the ULFM extension, (Bordes et al., 2017) analysed the MPI_Comm_create_from_group function, proposing a solution that enables the completion of the call even with faults among the involved processes. They introduce a Liveness Discovery Algorithm that queries the execution and proactively discovers failures. This solution is promising since it deals with a critical function of the Session initialisation process but limits its scope to the World model. A completely different approach comes from the Session Working Group (responsible for the Session model development inside the MPI standard). 
They are currently working on extending the idea behind the Session model to obtain isolated communication _Bubbles_(Bordes et al., 2017). This work should open up for malleability properties in MPI, allowing dynamic reconfiguration of the execution. Moreover, those benefits also affect fault tolerance since they allow handling faults as resource changes. While being a promising direction, the fault tolerance analysis requires additional efforts: the Bubble idea can represent the faults' consequences but needs effective detection and propagation mechanisms. This work is orthogonal to the Bubbles proposal since it operates at different levels while keeping the approaches compatible. ## 3. Failures in the initialisation flows In this section, we analyse the effects of faults on the initialisation flows of the two MPI execution models. For reference, we use an MPI implementation with ULFM features enabled. Otherwise, fault incurrence would have just stopped the execution regardless of our measures. In the World model, the initialisation procedure consists only of the MPI_Init function. It involves all the processes part of the execution and produces the starting communicators. If someone does not participate in the call, the others will wait until it joins. If that process cannot join, the execution enters a deadlock state, which is also the case if some process fails before the function invocation. From the execution viewpoint, it is impossible to decide whether the function will eventually terminate or is in a deadlock Figure 1. The Session model execution flow, from the Session creation to the communicator generation. Boxes in orange represent local operations, the ones in green the produced handles, and the ones in blue functions that require synchronisation. state: this decision is analogous to the _Halting Problem_(Hari and Scholkopf, 1998). The deadlock-vulnerable function is, however, the first call of every MPI application, minimising the vulnerability time window. Overall, the deadlock risk is acceptable, so ULFM can focus on handling faults in the rest of the application. In the Session model, we face a different situation: almost all the functions of the initialisation flow are local, thus requiring no communication and providing a result even with faults. The only non-local operation is the MPI_Comm_create_from_group call, which needs the participation of all the processes involved in the communication creation. Similarly to the MPI_Init function, the communicator creation call cannot distinguish between faults and delays and exposes a deadlock whenever a process does not participate. Differently from the MPI_Init function, however, the initialisation is not unique and can happen at any point of the application execution, even multiple times per session. These considerations imply that the deadlock risk has a high impact and moderate probability of verifying, making the problem relevant. ## 4. The Horizon Sets The effort (Hari and Scholkopf, 1998) analysed the MPI_Comm_create_from_group function from the fault management perspective, removing the deadlock eventuality from its behaviour. The authors leveraged a Liveness Discovery Algorithm (LDA) to explore the group and remove failed processes. Their approach requires an already-created communicator, including all the processes that intend to be part of the new one, so they opted for MPI_COMM_WORLD. Since MPI_COMM_WORLD is usable only in the World model, that approach is valid only if another communicator is available. 
However, to create such a communicator, we can only use the MPI_Comm_create_from_group function, which can still deadlock. The situation resembles a _chicken-egg_ problem. Nonetheless, if we provide a correct communicator, we can use the LDA algorithm and remove the deadlock eventuality. This analysis aims to create such a communicator, thus reducing the deadlock vulnerability but preserving the application performance. To better visualise the problem, we introduce a set of notions. For a given group of processes \(G\), we define as \(Create(C_{G})\) the call which produces a communicator \(C_{G}\) with all the processes part of \(G\). Moreover, we refer to the _superset_\(S(G)\) as the set of communicators containing at least all the processes of \(G\). From these definitions, we can deduce these simple assumptions: 1. \(\forall G:C_{G}\in S(G)\land\mbox{MPI\_COMM\_WORLD}\in S(G)\) 2. \(\forall G1,G2:C_{G2}\in S(G1)\Rightarrow G2\supseteq G1\) The first assumption guarantees that the superset of each group is never empty, while the other defines the relationship between different supersets. The requirement for the Liveness Discovery Algorithm (LDA) and the deadlock removal is the existence of a call to create a communicator part of the superset of the group passed as a parameter. Formally: \[\begin{split} LDA\,\mbox{applicable}\,\,\mbox{for}\,\,Create(C_{G}) \Leftrightarrow\\ \exists K\in S(G)\,|\,Create(K)\,\,\mbox{prior}\,\,\mbox{to}\, \,Create(C_{G})\end{split} \tag{1}\] Given the above analysis, if we want to minimise the deadlock eventuality, we must define a communicator set \(W\) such that \(\forall G\,\exists K\in W\,|\,K\in S(G)\). We should initialise all the communicators inside \(W\) before the first communicator creation call so that, for each successive \(Create(C_{G})\), Equation 1 is valid. Moreover, following the Assumption A.1, we can deduce that the minimal \(W\) set contains only a communicator equivalent to MPI_COMM_WORLD. Given this conclusion, then _naive_ solution to the deadlock vulnerability problem is the generation of a global communicator during the first call of MPI_Session_init, which will happen before any other operation. This solution, however, features many problems: * The function MPI_Session_init is local, thus requiring no communication between processes. Communicator creation needs coordination, changing the function's characteristics. * The MPI_Session_init invocation must happen as soon as possible to minimise the vulnerability window, compromising the Session model flexibility. * One of the Session model benefits consists in its scalability due to not requiring the application to initialise the entire global communicator, but just the needed portions. This solution loses that benefit. This _naive_ solution is thus insufficient, but it is a good starting point. Its main weakness is its need to create a global communicator, regardless of the characteristics of the application. Moreover, its use costs some of the Session model benefits, so we opted for a different approach. While global communicator usage is the safest option, it is not the only one. 
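For intuition, Equation 1 and the naive solution can be read as the following check, with process groups modelled as Python frozensets of ranks (purely illustrative, not the library implementation):

```python
def lda_applicable(group, created_groups):
    """Equation 1: the LDA can be used for Create(C_G) iff some already-created
    communicator spans a superset of G."""
    return any(group <= k for k in created_groups)

# Naive solution: W contains only a communicator equivalent to MPI_COMM_WORLD,
# so every later creation call can rely on the LDA.
world = frozenset(range(8))
print(lda_applicable(frozenset({1, 3}), {world}))  # True
```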
To further the analysis, we propose the use of _Horizon sets_ \(H(t)\), defined such that the following formulation is valid: \[\begin{split}\forall G:Create(C_{G})\,\,\mbox{prior}\,\,\mbox{to}\,\,t\Rightarrow\\ \exists C\in H(t)\,|\,C\in S(G)\wedge Create(C)\,\,\mbox{prior}\,\,\mbox{to}\,\,t\end{split} \tag{2}\] In other words, at each time \(t\), a Horizon set contains at least an element of the superset of all the groups passed as a parameter in a previous communicator creation call. Moreover, combining Equation 2 with Equation 1, we deduce that the LDA is applicable if there exists an element of the superset of the group passed as a parameter inside a Horizon set. Formally: \[\begin{split}\forall Create(C_{G})\,\,\mbox{happening at}\,\,t+1:\\ LDA\,\,\mbox{applicable for}\,\,Create(C_{G})\Leftrightarrow\\ \exists K\in H(t)\mid K\in S(G)\end{split} \tag{3}\] Equation 3 links Horizon sets to our problem, so we can reason on the former to solve the latter. Horizon sets evolve with time, growing in size with new communicators. The collection of all the communicators created before time \(t\) is a Horizon set, and we can use it to decide whether the LDA algorithm is applicable. However, that Horizon set may not be the lowest-cardinality one. We define the minimal Horizon set as \(H^{*}(t)\). Focusing on the minimal Horizon set is mandatory if we want to reduce the overhead due to communicator management. We can analyse the evolution of \(H^{*}(t)\) by considering all the possible topology cases of communicator creations, as shown in Figure 2. Figure 2. Evolution of the Horizon set (in blue) when introducing a new communicator (in red). The grey rectangle represents the global communicator, each white circle a process inside it. The behaviour depends on whether there is an inclusion relationship (cases 2a and 2c) or not (case 2b). In general, the addition of a communicator to the minimal Horizon set must not take place if another communicator containing it already exists (Figure 2a). On the other hand, the newly inserted communicator removes all the communicators it includes from the Horizon set (Figure 2c). Following these concepts, we can keep track of \(H^{*}(t)\) and use it to remove the deadlock eventuality from some communicator creation calls. Inside the application code, calls to \(Create(C_{G})\) refer to invocations of the functions MPI_Comm_create_from_group or MPI_Init (which creates MPI_COMM_WORLD). By keeping track of those functions, we can update the minimal Horizon set correctly and use it to limit the deadlock vulnerability of the Session model initialisation flow. Moreover, we provide a new ad-hoc collective function MPIX_Horizon_from_group, which includes the communicator of the group passed as a parameter inside the Horizon set. While its behaviour is analogous to the communicator creation functions, it allows the user to provide the application communication intent.
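The update rules illustrated in Figure 2 translate into a short bookkeeping routine that can be run at every communicator creation call. The sketch below is illustrative Python only (process groups are represented as plain sets of ranks); the actual functionality is implemented by intercepting the relevant MPI calls at the PMPI level, as described in the next section:

```python
def update_horizon(horizon, new_group):
    """Maintain the minimal Horizon set H*(t) when Create(C_G) happens.

    Cases of Figure 2:
      2a) an existing communicator already contains the new one -> no change;
      2b) no inclusion relationship -> simply add the new communicator;
      2c) the new communicator contains existing ones -> it replaces them.
    """
    if any(existing >= new_group for existing in horizon):
        return horizon                                    # case 2a
    kept = [g for g in horizon if not (g <= new_group)]   # drop included ones (case 2c)
    kept.append(new_group)                                # cases 2b and 2c
    return kept


def lda_applicable(horizon, group):
    """Equation 3: the LDA is applicable iff the Horizon set holds a superset of G."""
    return any(g >= group for g in horizon)


# Toy usage with three creation calls on ranks 0..7.
horizon = []
for g in (frozenset({0, 1, 2, 3}), frozenset({0, 1}), frozenset({4, 5})):
    print(sorted(g), "LDA applicable:", lda_applicable(horizon, g))
    horizon = update_horizon(horizon, g)
print("minimal Horizon set:", horizon)
```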
## 5. Experimental Results We integrated the proposed Horizon set management functionalities inside the Legio library (Buller et al., 2018; Buller et al., 2018) since it already implements the LDA algorithm and operates at the same abstraction level. All the added functionalities leverage the PMPI layer to avoid tampering with the application code directly. The experimental campaign evaluates the proposed solution overhead and scalability in a real-world scenario. In particular, we integrate the solution with the embarrassingly parallel (EP) and data traffic (DT) benchmarks, part of the NAS collection (Buller et al., 2018), and execute them on the IT4Innovations Karolina cluster, featuring nodes with 2 x AMD Zen 2 EPYC(tm) 7H12, 2.6 GHz processors and 256 GB of RAM, each running 128 MPI processes. We use the latest version of OpenMPI featuring ULFM (v5.0.0) and implementing MPI standard 4.0. We changed the two benchmarks to leverage the Session model. We use a "C" size workload for both applications, the maximum available for their executions. We ran tests using different network sizes to evaluate the scalability impact of our solution. We ran both applications with the naive solution, with Horizon set support, and without fault management support. For each configuration, we measured the total time to complete ten executions. Figure 3 shows the benchmarks' execution time, normalised to the values without fault management support. The results from the two benchmarks' executions differ due to their peculiarities. In particular, we used the EP benchmark (Figure 3(b)) to stress the solutions' scalability, while DT focuses on the overall execution impact (Figure 3(a)). We can see from the figures that the naive approach always has more overhead than the proposed Horizon set solution. Moreover, the naive approach overhead grows with the network size, as visible in Figure 3(b). On the other hand, the Horizon set solution does not heavily impact the application scalability since it does not require the initialisation of the global communicator present in the naive approach. Aside from performance-oriented experiments, we performed functional tests to demonstrate the resiliency of our approach. Those tests showed that an application leveraging our solutions is more deadlock-resilient, managing to complete even with faults in the execution. ## 6. Conclusion In this effort, we analysed and limited the impact of faults in MPI applications using the Session model. We motivated the need for Horizon sets, and we analysed their behaviour. We showed our proposed implementation of the Horizon set management functionalities and tested it using benchmarks in a real-world scenario. Results showed that the Horizon set solution does not compromise the performance and scalability of executions, reduces the deadlock vulnerability to faults and performs better than the naive one. Future work in this area includes interfacing with the MPI Bubbles proposal and additional evaluation on new benchmarks that could further stress the capabilities of the Session model.
2307.01770
Fast Optimal Transport through Sliced Wasserstein Generalized Geodesics
Wasserstein distance (WD) and the associated optimal transport plan have been proven useful in many applications where probability measures are at stake. In this paper, we propose a new proxy of the squared WD, coined min-SWGG, that is based on the transport map induced by an optimal one-dimensional projection of the two input distributions. We draw connections between min-SWGG and Wasserstein generalized geodesics in which the pivot measure is supported on a line. We notably provide a new closed form for the exact Wasserstein distance in the particular case where one of the distributions is supported on a line, allowing us to derive a fast computational scheme that is amenable to gradient descent optimization. We show that min-SWGG is an upper bound of WD and that it has a complexity similar to that of Sliced-Wasserstein, with the additional feature of providing an associated transport plan. We also investigate some theoretical properties such as metricity, weak convergence, computational and topological properties. Empirical evidence supports the benefits of min-SWGG in various contexts, such as gradient flows, shape matching and image colorization.
Guillaume Mahey, Laetitia Chapel, Gilles Gasso, Clément Bonet, Nicolas Courty
2023-07-04T15:20:41Z
http://arxiv.org/abs/2307.01770v2
# Fast Optimal Transport through Sliced Wasserstein Generalized Geodesics ###### Abstract Wasserstein distance (WD) and the associated optimal transport plan have been proven useful in many applications where probability measures are at stake. In this paper, we propose a new proxy of the squared WD, coined min-SWGG, that is based on the transport map induced by an optimal one-dimensional projection of the two input distributions. We draw connections between min-SWGG and Wasserstein generalized geodesics in which the pivot measure is supported on a line. We notably provide a new closed form for the exact Wasserstein distance in the particular case of one of the distributions supported on a line allowing us to derive a fast computational scheme that is amenable to gradient descent optimization. We show that min-SWGG is an upper bound of WD and that it has a complexity similar to as Sliced-Wasserstein, with the additional feature of providing an associated transport plan. We also investigate some theoretical properties such as metricity, weak convergence, computational and topological properties. Empirical evidences support the benefits of min-SWGG in various contexts, from gradient flows, shape matching and image colorization, among others. ## 1 Introduction Gaspard Monge, in his seminal work on Optimal Transport (OT) [42], studied the following problem: how to move with minimum cost the probability mass of a source measure to a target one, for a given transfer cost function? At the heart of OT is the optimal map that describes the optimal displacement; the Monge problem can be reformulated as an assignment problem. It has been relaxed by [33] as finding a plan that describes the amount of mass moving from the source to the target. Beyond this optimal plan, an interest of OT is that it defines a distance between probability measures, the Wasserstein distance (WD). Recently, OT has been successfully employed in a wide range of machine learning applications, in which the Wasserstein distance is estimated from the data, such as supervised learning [30], natural language processing [38] or generative modelling [5]. Its ability to provide meaningful distances between empirical distributions is at the core of distance-based algorithms such as kernel-based methods [60] or \(k\)-nearest neighbors [6]. The optimal transport plan has also been used successfully in many applications where a matching between empirical samples is sought such as color transfer [55], domain adaptation [19] and positive-unlabeled learning [15]. Solving the OT problem is computationally intensive; the most common algorithmic tools to solve the discrete OT problem are borrowed from combinatorial optimization and linear programming, leading to a cubic complexity with the number of samples that prevents its use in large scale application [53]. To reduce the computation burden, regularizing the OT problem with e.g. an entropic term allows defining solvers with a quadratic complexity [23]. Other methods based on the existence of a closed form of OT were also devised to efficiently compute a proxy of WD as sketched hereafter. **Projections-based OT.** Sliced-Wasserstein distance (SWD) [10, 56] leverages 1D-projections of distributions to give a lower approximation of the Wasserstein distance, relying on the closed form of OT for 1D probability distributions, leading to a linearithmic time complexity. While SWD averages WDs computed over several 1D projections, max-SWD [24] keeps only the most informative projection. 
This framework provides efficient algorithms that can handle millions of points and have topological properties similar to those of WD [45]. Other works restrict SWD and max-SWD to projections onto low dimensional subspaces [40, 52] to provide more robust estimation of those OT metrics. Though effective as a surrogate for WD, those methods do not provide a transport plan in the original space \(\mathbb{R}^{d}\). To overcome this limitation, [44] aims at computing transport plans in a subspace, which are then extrapolated to the original space. **Pivot measure-based OT.** Other research works rely on a pivot, i.e., intermediate, measure. They decompose the OT metric into Wasserstein distances between each input measure and the considered pivot measure. They exhibit better properties such as statistical sample complexity or computational gains [29, 65]. Even though the OT problems are split, they are still expensive when dealing with large sample size distributions, notably when only two distributions are involved. **Contributions.** We introduce a new proxy of the squared WD that exploits the principles of the aforesaid approximations of the OT metric. The original idea is to rely on projections and a one-dimensional assignment of the projected distributions to compute the new proxy. The approach is well-grounded as it hinges on the notion of Wasserstein generalized geodesics [4] with a pivot measure supported on a line. The main features of the method are as follows: i) its computational complexity is on par with SW, ii) it provides an optimal transport plan through the 1D assignment problem, iii) it acts as an upper bound of WD, and iv) it is amenable to optimization to find the optimal pivot measure. As an additional contribution, we establish a closed form of WD when an input measure is supported on a line. **Outline.** Section 2 presents some background on OT. Section 3 formulates our new WD proxy and provides some of its topological properties and a numerical computation scheme. Section 4 builds upon the notion of Wasserstein generalized geodesics to reformulate our OT metric approximation as the Sliced Wasserstein Generalized Geodesics (SWGG), along with its optimal variant coined min-SWGG. The reformulation allows deriving additional topological properties and an optimization scheme. Finally, Section 5 provides experimental evaluations. **Notations.** Let \(\langle\cdot,\cdot\rangle\) be the Euclidean inner product on \(\mathbb{R}^{d}\) and let \(\mathbb{S}^{d-1}=\{\mathbf{u}\in\mathbb{R}^{d}\text{ s.t. }\|\mathbf{u}\|_{2}=1\}\) be the unit sphere. We denote \(\mathcal{P}(\mathbb{R}^{d})\) the set of probability measures on \(\mathbb{R}^{d}\) endowed with the \(\sigma\)-algebra of Borel sets and \(\mathcal{P}_{2}(\mathbb{R}^{d})\subset\mathcal{P}(\mathbb{R}^{d})\) those with finite second-order moment, i.e. \(\mathcal{P}_{2}(\mathbb{R}^{d})=\{\mu\in\mathcal{P}(\mathbb{R}^{d})\text{ s.t. }\int_{\mathbb{R}^{d}}\|\mathbf{x}\|_{2}^{2}d\mu(\mathbf{x})<\infty\}\). Let \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) be the subspace of \(\mathcal{P}_{2}(\mathbb{R}^{d})\) defined by empirical measures with \(n\) atoms and uniform masses. For any measurable function \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\), we denote \(f_{\#}\) its push-forward, namely for \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and for any measurable set \(A\subset\mathbb{R}^{d}\), \(f_{\#}\mu(A)=\mu(f^{-1}(A))\), with \(f^{-1}(A)=\{\mathbf{x}\in\mathbb{R}^{d}\text{ s.t. }f(\mathbf{x})\in A\}\).
## 2 Background on Optimal Transport **Definition 2.1** (Wasserstein distance).: The squared WD [63] between \(\mu_{1},\mu_{2}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) is defined as: \[W_{2}^{2}(\mu_{1},\mu_{2})\ \stackrel{{\text{def}}}{{=}}\inf_{\pi\in \Pi(\mu_{1},\mu_{2})}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|\mathbf{x}-\mathbf{y} \|_{2}^{2}d\pi(\mathbf{x},\mathbf{y}) \tag{1}\] with \(\Pi(\mu_{1},\mu_{2})=\{\pi\in\mathcal{P}_{2}(\mathbb{R}^{d}\times\mathbb{R}^{ d})\text{ s.t. }\pi(\mathbb{R}^{d}\times A)=\mu_{2}(A)\text{ and }\pi(A\times \mathbb{R}^{d})=\mu_{1}(A)\), \(\forall A\) measurable set of \(\mathbb{R}^{d}\}\). The \(\arg\min\) of eq (1) is called the optimal transport plan. Denoted \(\pi^{*}\), it expresses how to move the probability mass from \(\mu_{1}\) to \(\mu_{2}\) with minimum cost. In some cases, \(\pi^{*}\) is of the form \((Id,T)_{\#}\mu_{1}\) for a measurable map \(T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), _i.e._ there is no mass splitting during the transport. This map is called a Monge map and is denoted \(T^{\mu_{1}\rightarrow\mu_{2}}\) (or \(T^{1\to 2}\) for a shorthand). Thus, one has \(W_{2}^{2}(\mu_{1},\mu_{2})=\inf_{T\text{ s.t. }T_{\#}\mu_{1}=\mu_{2}}\int_{\mathbb{R}^{d}}\|\mathbf{x}-T(\mathbf{x})\|_{2}^{2}d\mu_{ 1}(\mathbf{x})\). This occurs, for instance, when \(\mu_{1}\) has a density w.r.t. the Lebesgue measure [12] or when \(\mu_{1}\) and \(\mu_{2}\) are in \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\)[58]. Endowed with the WD, the space \(\mathcal{P}_{2}(\mathbb{R}^{d})\) is a geodesic space. Indeed, since there exists a Monge map \(T^{1\to 2}\) between \(\mu_{1}\) and \(\mu_{2}\), one can define a geodesic curve \(\mu^{1\to 2}:[0,1]\rightarrow\mathcal{P}_{2}(\mathbb{R}^{d})\)[31] as: \[\forall t\in[0,1],\ \mu^{1\to 2}(t)\ \stackrel{{\text{def}}}{{=}} (tT^{1\to 2}+(1-t)Id)_{\#}\mu_{1} \tag{2}\] which represents the shortest path w.r.t. Wasserstein distance in \(\mathcal{P}_{2}(\mathbb{R}^{d})\) between \(\mu_{1}\) and \(\mu_{2}\). The Wasserstein mean between \(\mu_{1}\) and \(\mu_{2}\) corresponds to \(t=0.5\) and we simply denote it \(\mu^{1\to 2}\). This notion of geodesic allows the study of the curvature of the Wasserstein space [1]. It was shown that the Wasserstein space is of positive curvature [51], _i.e._ it respects the following inequality: \[W_{2}^{2}(\mu_{1},\mu_{2})\geq 2W_{2}^{2}(\mu_{1},\nu)+2W_{2}^{2}(\nu,\mu_{2}) -4W_{2}^{2}(\mu^{1\to 2},\nu) \tag{3}\] for all pivot measures \(\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Solving and approximating Optimal Transport.The Wasserstein distance between empirical measures \(\mu_{1},\mu_{2}\) with \(n\)-atoms can be computed in \(\mathcal{O}(n^{3}\log n)\), preventing the use of OT for large scale applications [11]. Several algorithms have been proposed to lower this complexity, for example the Sinkhorn algorithm [23] that provides an approximation in near \(\mathcal{O}(n^{2})\)[2]. Notably, when \(\mu_{1}=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}\) and \(\mu_{2}=\frac{1}{n}\sum_{i=1}^{n}\delta_{y_{i}}\) are 1D distributions, computing WD can be done by matching the sorted empirical samples, leading to an overall complexity of \(\mathcal{O}(n\log n)\). More precisely, let \(\sigma\) and \(\tau\) two permutation operators s.t. \(x_{\sigma(1)}\leq x_{\sigma(2)}\leq...\leq x_{\sigma(n)}\) and \(y_{\tau(1)}\leq y_{\tau(2)}\leq...\leq y_{\tau(n)}\). Then, the 1D Wasserstein distance is given by: \[W_{2}^{2}(\mu_{1},\mu_{2})=\frac{1}{n}\sum_{i=1}^{n}(x_{\sigma(i)}-y_{\tau(i)} )^{2}. 
\tag{4}\] Sliced WD.The Sliced-Wasserstein distance (SWD) [56] aims to scale-up the computation of OT by relying on the closed form for 1D distributions. It is defined as the expectation of 1D-WD computed along projection directions \(\theta\in\mathbb{S}^{d-1}\) over the unit sphere: \[\text{SW}_{2}^{2}(\mu_{1},\mu_{2})\ \stackrel{{\text{def}}}{{=}}\int_{ \mathbb{S}^{d-1}}W_{2}^{2}(P_{\#}^{\theta}\mu_{1},P_{\#}^{\theta}\mu_{2})d \omega(\theta), \tag{5}\] where \(P_{\#}^{\theta}\mu_{1}\) and \(P_{\#}^{\theta}\mu_{2}\) are projections onto the direction \(\theta\in\mathbb{S}^{d-1}\) with \(P^{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(\mathbf{x}\mapsto\langle\mathbf{x},\theta\rangle\) and where \(\omega\) is the uniform distribution \(\mathbb{S}^{d-1}\). Since the integral in eq (5) is intractable, one resorts, in practice, to Monte-Carlo estimation to approximate the SWD. Its computation only involves projections and permutations. For \(L\) directions, the computational complexity is \(\mathcal{O}(dLn+Ln\log n)\) and the memory complexity is \(\mathcal{O}(Ld+Ln)\). However, in high dimension, many projections are necessary to approximate accurately SWD and many projections lead to 1D-WD close to 0. This issue is well known in the SW community [68], where different ways of performing effective sampling have been proposed [46; 49; 50] such as distributional or hierarchical slicing. In particular, this motivates the definition of max-Sliced-Wasserstein [24] which keeps only the most informative slice: \[\text{max-SW}_{2}^{2}(\mu_{1},\mu_{2})\ \stackrel{{\text{def}}}{{=}} \ \max_{\theta\in\mathbb{S}^{d-1}}W_{2}^{2}(P_{\#}^{\theta}\mu_{1},P_{\#}^{ \theta}\mu_{2}). \tag{6}\] While being a non convex problem, it can be optimized efficiently using a gradient ascent scheme. SW-like distances are attractive since they are fast to compute and enjoy theoretical properties: they are proper metrics and metricize the weak convergence. However, they do not provide an OT plan. Projected WD.An other quantity of interest based on the 1D-WD is the projected Wasserstein distance (PWD) [57]. It leverages the permutations of the projected distributions in 1D in order to derive couplings between the original distributions. Let \(\mu_{1}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\) and \(\mu_{2}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{y}_{i}}\) in \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\), we have: \[\text{PWD}_{2}^{2}(\mu_{1},\mu_{2})\ \stackrel{{\text{def}}}{{=}} \ \int_{\mathbb{S}^{d-1}}\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{x}_{\sigma_{\theta}(i)}- \mathbf{y}_{\tau_{\theta}(i)}\|_{2}^{2}d\omega(\theta), \tag{7}\] where \(\sigma_{\theta},\tau_{\theta}\) are the permutations obtained by sorting \(P_{\#}^{\theta}\mu_{1}\) and \(P_{\#}^{\theta}\mu_{2}\). As some permutations are not optimal, we straightforwardly have \(W_{2}^{2}\leq\text{PWD}_{2}^{2}\). Note that, some permutations can appear really irrelevant in the original space, leading to an overestimation of \(W_{2}^{2}\) (typically when the distributions are multi-modal or with support lying in a low dimension manifold, see Supp. A.1 for a discussion). In this paper, we restrict ourselves to empirical distributions with the same number of samples. They are defined as \(\mu_{1}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\) and \(\mu_{2}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{y}_{i}}\) in \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\). 
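For reference, the 1D closed form of eq. (4) and the Monte-Carlo estimator of SW in eq. (5) take only a few lines of numpy. The sketch below is illustrative only (uniform weights, equal sample sizes) and is not the authors' implementation:

```python
import numpy as np

def wasserstein2_1d(x, y):
    """Squared 2-Wasserstein distance between two 1D empirical measures
    with n atoms and uniform weights, eq. (4): sort and match."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean((xs - ys) ** 2)

def sliced_wasserstein2(X, Y, n_proj=100, seed=None):
    """Monte-Carlo estimate of SW_2^2, eq. (5): average 1D costs over
    random directions drawn uniformly on the unit sphere."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    thetas = rng.normal(size=(n_proj, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    costs = [wasserstein2_1d(X @ th, Y @ th) for th in thetas]
    return float(np.mean(costs))

# Toy usage with two 2D point clouds of equal size.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = rng.normal(loc=2.0, size=(200, 2))
print(sliced_wasserstein2(X, Y, n_proj=50, seed=0))
```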
Note that the results presented therein can be extended to any discrete measures by mainly using cumulative functions instead of permutations and transport plans instead of transport maps (see Supp. A.2). ## 3 Definition and properties of min-SWGG The fact that PWD overestimates \(W_{2}^{2}\) motivates the introduction of our new loss function coined min-SWGG which keeps only the most informative permutations. Afterwards, we derive a property of distance and grant an estimation of min-SWGG via random search of the directions. **Definition 3.1** (Swgg and min-SWGG).: Let \(\mu_{1},\mu_{2}\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) and \(\theta\in\mathbb{S}^{d-1}\). Denote by \(\sigma_{\theta}\) and \(\tau_{\theta}\) the permutations obtained by sorting the 1D projections \(P_{\#}^{\theta}\mu_{1}\) and \(P_{\#}^{\theta}\mu_{2}\). We define respectively SWGG and min-SWGG as: \[\text{SWGG}_{2}^{2}(\mu_{1},\mu_{2},\theta) \stackrel{{\text{def}}}{{=}} \ \frac{1}{n}\sum_{i=1}^{n}\|\mathbf{x}_{\sigma_{\theta}(i)}-\mathbf{y}_{\tau_{ \theta}(i)}\|_{2}^{2}, \tag{8}\] \[\text{min-SWGG}_{2}^{2}(\mu_{1},\mu_{2}) \stackrel{{\text{def}}}{{=}} \ \min_{\theta\in\mathbb{S}^{d-1}}\text{SWGG}_{2}^{2}(\mu_{1},\mu_{2},\theta). \tag{9}\] One shall remark that the function SWGG corresponds to the building block of PWD in eq. (7). One main feature of min-SWGG is that it comes with a transport map. Let \(\theta^{*}\in\text{argmin SWGG}(\mu_{1},\mu_{2},\theta)\) be the optimal projection direction. The associated transport map is: \[T(\mathbf{x}_{i})=\mathbf{y}_{\tau_{\theta^{*}}^{-1}(\sigma_{\theta^{*}}(i))},\quad \forall 1\leq i\leq n. \tag{10}\] We now give some theoretical properties of the quantities min-SWGG and SWGG. Their proofs are deferred in Supp. A.3. **Proposition 3.2** (Distance and Upper bound).: Let \(\theta\in\mathbb{S}^{d-1}\). \(\text{SWGG}_{2}(\cdot,\cdot,\theta)\) defines a distance on \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\). Moreover, min-SWGG is an upper bound of \(W_{2}^{2}\), and \(W_{2}^{2}\leq\text{min-SWGG}_{2}^{2}\leq\text{PWD}_{2}^{2}\), with equality between \(W_{2}^{2}\) and min-SWGG when \(d>2n\). **Remark 3.3**.: Similarly to max-SW, min-SWGG retains only one optimal direction \(\theta^{*}\in\mathbb{S}^{d-1}\). However, the two distances strongly differ: i) min-SWGG is an upper bound and max-SW a lower bound of \(W_{2}^{2}\), ii) the optimal \(\theta^{*}\) differs (see Supp. A.4 for an illustration), and iii) max-SW does not provide a transport plan between \(\mu_{1}\) and \(\mu_{2}\). Solving eq. (9) can be achieved using a random search, by sampling \(L\) directions \(\theta\in\mathbb{S}^{d-1}\) and keeping only the one leading to the lowest value of SWGG. This gives an overall computational complexity of \(\mathcal{O}(Ldn+Ln\log n)\) and a memory complexity of \(\mathcal{O}(dn)\). In low dimension, the random search estimation is effective: covering all possible permutations through \(\mathbb{S}^{d-1}\) can be done with a low number of directions. In high dimension, many more directions \(\theta\) are needed to have a relevant approximation, typically \(\mathcal{O}(L^{d-1})\). This motivates the definition of gradient descent techniques for finding \(\theta^{*}\). ## 4 SWGG as minimizing along the Wasserstein generalized geodesics Solving problem in eq. (9) amounts to optimize over a set of admissible permutations. That problem is hard since SWGG is non convex w.r.t. \(\theta\) and piecewisely constant, thus not differentiable over \(\mathbb{S}^{d-1}\). 
Indeed, as long as the permutations remain the same for different directions \(\theta\), the value of SWGG is constant. Whenever the permutations change, the objective SWGG "jumps" as illustrated in Fig. 1. In this section, we tackle this problem by providing an alternative formulation of min-SWGG that allows smoothing the different kinks of SWGG, hence, making min-SWGG amenable to optimization. This formulation relies on Wasserstein generalized geodesics we introduce hereinafter. We show that this alternative formulation brings in computational advantages and allows establishing some additional topological properties and deriving an efficient optimization scheme. We also provide a new closed form for the Wasserstein distance \(W_{2}^{2}(\mu_{1},\mu_{2})\) when either \(\mu_{1}\) or \(\mu_{2}\) is supported on a line. ### SWGG based on Wasserstein Generalized Geodesics Wasserstein generalized geodesics (see Supp. B for more details) were first introduced in [4] in order to ensure the convergence of Euler scheme for Wasserstein Gradient Flows. This concept has been used notably in [29, 44, 65] to speed up some computations and to derive some additional properties (see Supp. C for more details on related works). Generalized geodesics lay down on a pivot measure \(\nu\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) to transport the distribution \(\mu_{1}\) toward \(\mu_{2}\). Indeed, one can leverage the optimal transport maps \(T^{\nu\to\mu_{1}}\) and \(T^{\nu\to\mu_{2}}\) to construct a curve \(t\mapsto\mu_{g}^{1\to 2}(t)\) linking \(\mu_{1}\) to \(\mu_{2}\) as \[\mu_{g}^{1\to 2}(t)\ \stackrel{{\text{def}}}{{=}}\ \left((1-t)T^{\nu\to\mu_{1}}+tT^{\nu\to\mu_{2}}\right)_{\#}\nu,\qquad\forall t \in[0,1]. \tag{11}\] The related generalized Wasserstein mean corresponds to \(t=0.5\) and is denoted \(\mu_{g}^{1\to 2}\). Intuitively, the optimal transport maps between \(\nu\) and \(\mu_{i},i=1,2\) give rise to a sub-optimal transport map between \(\mu_{1}\) and \(\mu_{2}\): \[T_{\nu}^{1\to 2}\ \stackrel{{\text{def}}}{{=}}\ T^{\nu\to\mu_{2}}\circ T^{\mu_{1}\to\nu}\quad\text{with}\quad(T_{\nu}^{1 \to 2})_{\#}\mu_{1}=\mu_{2}. \tag{12}\] One can be interested in the cost obtained by the transportation of \(\mu_{1}\) to \(\mu_{2}\) via the transport map \(T_{\nu}^{1\to 2}\), known as the \(\nu\)-based Wasserstein distance [47] and defined as \[W_{\nu}^{2}(\mu_{1},\mu_{2})\ \stackrel{{\text{def}}}{{=}}\ \int_{\mathbb{R}^{d}}\|\mathbf{x}-T_{\nu}^{1\to 2}(\mathbf{x})\|_{2}^{2}d \mu_{1}(\mathbf{x})=2W_{2}^{2}(\mu_{1},\nu)+2W_{2}^{2}(\nu,\mu_{2})-4W_{2}^{2}(\mu _{g}^{1\to 2},\nu). \tag{13}\] Figure 1: (Left) Empirical distributions with examples of 2 sampled lines (Right) that gives rise to 2 possible values of SWGG when \(\theta\in[0,2\pi]\). Notably, the second part of eq. (13) straddles the square Wasserstein distance with eq. (3). Remarkably, the computation of \(W_{\nu}^{2}\) can be efficient if the pivot measure \(\nu\) is appropriately chosen. As established in Lemma 4.6, it is the case when \(\nu\) is supported on a line. Based on these facts, we propose hereafter an alternative formulation of SWGG. **Definition 4.1** (Pivot measure).: Let \(\mu_{1}\) and \(\mu_{2}\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\). 
We restrain the pivot measure \(\nu\) to be the Wasserstein mean of the measures \(Q_{\#\mu}^{\theta}\mu_{1}\) and \(Q_{\#}^{\theta}\mu_{2}\): \[\mu_{\theta}^{1\to 2}\ \stackrel{{\mathrm{def}}}{{=}}\arg \min_{\mu\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})}W_{2}^{2}(Q_{\#}^{\theta}\mu_{ 1},\mu)+W_{2}^{2}(\mu,Q_{\#}^{\theta}\mu_{2}),\] where \(\theta\in\mathbb{S}^{d-1}\) and \(Q^{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{d}\), \(\mathbf{x}\mapsto\theta\langle\mathbf{x},\theta\rangle\) is the projection onto the subspace generated by \(\theta\). One shall notice that \(Q_{\#}^{\theta}\mu_{1}\) and \(Q_{\#}^{\theta}\mu_{2}\) are supported on the line defined by the direction \(\theta\), so is the pivot measure \(\nu=\mu_{\theta}^{1\to 2}\). We are now ready to re-express the metric SWGG. **Proposition 4.2** (SWGG based on generalized geodesics).: Let \(\theta\in\mathbb{S}^{d-1}\), \(\mu_{1},\mu_{2}\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) and \(\mu_{\theta}^{1\to 2}\) be the pivot measure. Let \(\mu_{\theta,\theta}^{1\to 2}\) be the generalized Wasserstein mean between \(\mu_{1}\) and \(\mu_{2}\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) with pivot measure \(\mu_{\theta}^{1\to 2}\). Then, \[\text{SWGG}_{2}^{2}(\mu_{1},\mu_{2},\theta)=2W_{2}^{2}(\mu_{1},\mu_{\theta}^{1 \to 2})+2W_{2}^{2}(\mu_{\theta}^{1\to 2},\mu_{2})-4W_{2}^{2}(\mu_{g,\theta}^{1 \to 2},\mu_{\theta}^{1\to 2}). \tag{14}\] The proof is in D.1. From Proposition 4.2, SWGG is the \(\mu_{\theta}^{1\to 2}\)-based Wasserstein distance between \(\mu_{1}\) and \(\mu_{2}\). This alternative formulation allows establishing additional properties for min-SWGG. ### Theoretical properties Additionally to the properties derived in Section 3 (SWGG is a distance and min-SWGG an upper bound of \(W_{2}^{2}\)), we provide below other theoretical guarantees. **Proposition 4.3** (Weak Convergence).: min-SWGG metrizes the weak convergence in \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\). In other words, let \((\mu_{k})_{k\in\mathbb{N}}\) be a sequence of measures in \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) and \(\mu\in\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\). We have: \[\mu_{k}\xrightarrow[k]{\mathcal{L},2}\mu\iff\text{min-SWGG}_{2}^{2}(\mu_{k},\mu)\underset{k}{\longrightarrow}0,\] where \(\xrightarrow[k]{\mathcal{L},2}\) stands for the weak convergence of measure i.e. \(\int_{\mathbb{R}^{d}}fd\mu_{k}\to\int_{\mathbb{R}^{d}}fd\mu\) for all continuous bounded functions \(f\). Beyond the weak convergence, min-SWGG possesses the translation property, _i.e._ the translations can be factored out as Wasserstein distance does (see (53, remark 2.19) for a recall). **Proposition 4.4** (Translation).: Let \(T^{u}\) (resp. \(T^{v}\)) be the map \(\mathbf{x}\mapsto\mathbf{x}-\mathbf{u}\) (resp. \(x\mapsto\mathbf{x}-\mathbf{v}\)), with \(\mathbf{u},\mathbf{v}\) vectors of \(\mathbb{R}^{d}\). We have: \[\text{min-SWGG}_{2}^{2}(T^{u}_{\#}\mu_{1},T^{v}_{\#}\mu_{2})=\text{min-SWGG}_{2 }^{2}(\mu_{1},\mu_{2})+\|\mathbf{u}-\mathbf{v}\|_{2}^{2}-2\langle\mathbf{u}-\mathbf{v},\mathbf{m_ {1}}-\mathbf{m_{2}}\rangle\] where \(\mathbf{m_{1}}=\int_{\mathbb{R}^{d}}\mathbf{x}d\mu_{1}(\mathbf{x})\) and \(\mathbf{m_{2}}=\int_{\mathbb{R}^{d}}\mathbf{x}d\mu_{2}(\mathbf{x})\) are the means of \(\mu_{1}\), \(\mu_{2}\). This property is useful in some applications such as shape matching, in which translation invariances are sought. The proofs of the two Propositions are deferred to Supp. D.2 and D.3. **Remark 4.5** (Equality).: min-SWGG and \(W_{2}^{2}\) are equals in different cases. 
First, [43] showed that it is the case whenever \(\mu_{1}\) is the shift and scaling of \(\mu_{2}\) (see Supp. C.1 for a full discussion). In Lemma 4.6, we will state that it is also the case when at least one of the two distributions is supported on a line. ### Efficient computation of SWGG SWGG defined in eq. (14) involves computing three WDs that are fast to compute, with an overall \(\mathcal{O}(dn+n\log n)\) complexity, as detailed below. Building on this result, we provide an optimization scheme that allows optimizing over \(\theta\) with \(\mathcal{O}(sdn+sn\log sn)\) operations at each iteration, with \(s\) a (small) integer. We first start by giving a new closed form for WD whenever one distribution is supported on a line, that proves useful to derive an efficient computation scheme. New closed form for WD.The following lemma states that \(W_{2}^{2}(\mu_{1},\mu_{2})\) admits a closed form whenever \(\mu_{2}\) is supported on a line. For that, it relies on the computation of WD between \(\mu_{2}\) and the orthogonal projection of \(\mu_{1}\) onto the linear subspace defined by the line. Additionally, the optimal transport map \(T^{1\to 2}\) is also given by an explicit formulation. **Lemma 4.6**.: Let \(\mu_{1},\mu_{2}\) in \(\mathcal{P}_{2}^{n}(\mathbb{R}^{d})\) with \(\mu_{2}\) supported on a line of direction \(\theta\in\mathbb{S}^{d-1}\). We have: \[W_{2}^{2}(\mu_{1},\mu_{2})=W_{2}^{2}(\mu_{1},Q_{\#}^{\theta}\mu_{1})+W_{2}^{2} (Q_{\#}^{\theta}\mu_{1},\mu_{2}) \tag{15}\] with \(Q^{\theta}\) as in Def. 4.1. Note that \(W_{2}^{2}(\mu_{1},Q_{\#}^{\theta}\mu_{1})=\frac{1}{n}\sum\|\mathbf{x}_{i}-Q^{\theta }(\mathbf{x}_{i})\|_{2}^{2}\) and that \(W_{2}^{2}(Q_{\#}^{\theta}\mu_{1},\mu_{2})=W_{2}^{2}(P_{\#}^{\theta}\mu_{1},P _{\#}^{\theta}\mu_{2})\) are WD between 1D distributions. Additionally, the optimal transport map is given by \(T^{1\to 2}=T^{Q_{\#}^{\theta}\mu_{1}\to\mu_{2}}\circ T^{\mu_{1}\to Q_{\#}^{ \theta}\mu_{1}}=T^{Q_{\#}^{\theta}\mu_{1}\to\mu_{2}}\circ Q^{\theta}\). In particular, the map \(T^{1\to 2}\) can be obtained via the permutations of the 1D distributions \(P_{\#}^{\theta}\mu_{1}\) and \(P_{\#}^{\theta}\mu_{2}\). The proof is in Supp. D.4. Efficient computation of SWGG.Eq. (14) is defined as the Wasserstein distance between a distribution (either \(\mu_{1}\) or \(\mu_{2}\) or \(\mu_{g,\theta}^{1\to 2}\)) and a distribution supported on a line (\(\mu_{\theta}^{1\to 2}\)). As detailed in Supp. D.5, computation of eq. (14) results in three Wasserstein distances between distributions and their projections: i) \(W_{2}^{2}(\mu_{1},Q_{\#}^{\theta}\mu_{1})\), ii) \(W_{2}^{2}(\mu_{2},Q_{\#}^{\theta}\mu_{2})\), iii) \(W_{2}^{2}(\mu_{g,\theta}^{1\to 2}\), \(\mu_{\theta}^{1\to 2}\)), and one dimensional Wasserstein distance \(W_{2}^{2}(P_{\#}^{\theta}\mu_{1},P_{\#}^{\theta}\mu_{2})\), resulting in a \(\mathcal{O}(dn+n\log n)\) complexity. Optimization scheme for min-SWGGThe term \(W_{2}^{2}(\mu_{g,\theta}^{1\to 2},\mu_{\theta}^{1\to 2})\) in eq. (14) is non continuous w.r.t. \(\theta\). Indeed, the generalized mean \(\mu_{g,\theta}^{1\to 2}\) only depends on the transport maps \(T^{\mu_{g}^{1\to 2}\to\mu_{1}}\) and \(T^{\mu_{\theta}^{1\to 2}\to\mu_{2}}\), that are constant as long as the projection direction \(\theta\) leads to the same permutations \(\sigma_{\theta}\) and \(\tau_{\theta}\). 
Hence, we rely on a smooth surrogate \(\widetilde{\mu_{g,\theta}^{1\to 2}}\) of the generalized mean and we aim at minimizing the following objective function: \[\widetilde{\text{SWGG}_{2}^{2}(\mu_{1},\mu_{2},\theta)}\ \overset{\text{def}}{=}\ 2W_{2}^{2}(\mu_{1},\mu_{\theta}^{1\to 2})+2W_{2}^{2}(\mu_{ \theta}^{1\to 2},\mu_{2})-4W_{2}^{2}(\widetilde{\mu_{g,\theta}^{1\to 2}},\mu_{ \theta}^{1\to 2}). \tag{16}\] To define \(\widetilde{\mu_{g,\theta}^{1\to 2}}\), one option would be to use entropic maps in eq. (11) but at the price of a quadratic time complexity. We rather build upon the blurred Wasserstein distance [26] to define \(\widetilde{\mu_{g,\theta}^{1\to 2}}\) as it can be seen as an efficient surrogate of entropic transport plans in 1D. In 1D, \(\widetilde{\mu_{g,\theta}^{1\to 2}}\) can be efficiently approximated by adding an empirical Gaussian noise followed by a sorting pass. In our case, it resorts in making \(s\) copies of each sorted projections \(P^{\theta}(\mathbf{x}_{\sigma(i)})\) and \(P^{\theta}(\mathbf{y}_{\tau(i)})\) respectively, to add an empirical Gaussian noise of deviation \(\sqrt{\epsilon}/2\) and to compute averages of sorted blurred copies \(\mathbf{x}_{\sigma^{s}}^{s}\), \(\mathbf{y}_{\tau^{s}}^{s}\). We finally have \((\widetilde{\mu_{g,\theta}^{1\to 2}})_{i}=\frac{1}{2s}\sum_{k=(i-1)s+1}^{s}\mathbf{x}_{ \sigma^{s}(k)}^{s}+\mathbf{y}_{\tau^{s}(k)}^{s}\). [26] showed that this blurred WD has the same asymptotic properties as the Sinkhorn divergence. The surrogate \(\widetilde{\text{SWGG}}(\mu_{1},\mu_{2},\theta)\) is smoother w.r.t. \(\theta\) and can thus be optimized with a gradient descent, converging towards a local minima. Once the optimal direction \(\theta^{*}\) is found, min-SWGG resorts to be the solution provided by \(\text{SWGG}(\mu_{1},\mu_{2},\theta^{*})\). Fig. 2 illustrates the effect of the smoothing on a toy example and more details are given in Supp. D.6. The computation of \(\widetilde{\text{SWGG}}(\mu_{1},\mu_{2},\theta)\) is summarized by Alg. 1. Figure 2: Illustration of the smoothing effect in the same setting as in Fig. 1. (Left) Two sets of generalized Wasserstein means are possible, depending on the direction of the sampled line w.r.t. \(\theta_{1}\) and \(\theta_{2}\), giving rise to 2 different values for SWGG. (Middle) The surrogate provides a smooth transition between the two sets of generalized Wasserstein means as the direction \(\theta\) changes, (Right) providing a smooth approximation of SWGG that is amenable to optimization. ## 5 Experiments We highlight that min-SWGG is fast to compute, gives an approximation of WD and the associated transport plan. We start by comparing the random search and gradient descent schemes for finding the optimal direction in 5.1. In subsection 5.2, we illustrate the weak convergence property of min-SWGG through a gradient flow application to match distributions. We then implement an efficient algorithm for colorization of gray scale images in 5.3, thanks to the new closed form of WD. We finally evaluate min-SWGG on a shape matching application in 5.4. When possible from the context, we compare min-SWGG with the main methods for approximating WD namely SW, max-SW, Sinkhorn [23], factored coupling [29] and subspace robust WD (SRW) [52]. Supp. E provides additional results on the behavior of min-SWGG and experiments on other tasks such as color transfer or on data sets distance computation. ### Computing min-SWGG Let consider Gaussian distributions in dimension \(d\in\{2,20,200\}\). 
We first sample \(n=1000\) points from each distribution to define \(\mu_{1}\) and \(\mu_{2}\). We then compute min-SWGG\({}_{2}^{2}(\mu_{1},\mu_{2})\) with different schemes, either by random search, by simulated annealing [54] or by gradient descent; we report the results in Fig. 3 (left). For the random search scheme, we repeat each experiment 20 times and we plot the average value of min-SWGG \(\pm\) 2 times the standard deviation. For the gradient descent, we select a random initial \(\theta\). We observe that, in low dimension, all schemes provide similar values of min-SWGG. When the dimension increases, optimizing the direction \(\theta\) leads to a better approximation of the true Wasserstein distance (see the plots' titles in Fig. 3). On Fig. 3 (right), we compare the empirical runtime evaluation for min-SWGG with different competitors for \(d=3\) between \(n\) samples from Gaussian distributions, with \(n\in\{10^{2},10^{3},10^{4},5\times 10^{4},10^{5}\}\). We observe that, as expected, min-SWGG with random search is as fast as SW with a super linear complexity. With the optimization process, it is faster than SRW for a given number of samples. We also note that SRW is more demanding in memory and hence does not scale as well as min-SWGG. Figure 3: (Left) evolution of min-SWGG with different numbers of projections and with the dimension \(d\) in \(\{2,20,200\}\). (Right) Runtimes. ### 5.2 Gradient Flows We highlight the weak convergence property of min-SWGG. Starting from a random initial distribution, we aim at moving the particles of a source distribution \(\mu_{1}\) to a target one \(\mu_{2}\) by reducing the objective min-SWGG\({}_{2}^{2}(\mu_{1},\mu_{2})\) at each step. We compare both variants of min-SWGG with SW, max-SW and PWD, relying on the code provided in [37] for running the experiment; we report the results on Fig. 4. We consider several target distributions, representing diverse scenarios, and fix \(n=100\). We run each experiment 10 times and report the mean \(\pm\) the standard deviation. In every case, one can see that \(\mu_{1}\) moves toward \(\mu_{2}\) and that all methods tend to have similar behavior. One can notice though that, for the distribution with \(d=500\), min-SWGG with the optimization scheme leads to the best alignment of the distributions. Figure 4: Log 2-WD between different source and target distributions as a function of the number of iterations. ### 5.3 Gray scale image colorization Lemma 4.6 states that WD has a closed form when one of the two distributions is supported on a line, allowing us to compute the WD and the OT map with a complexity of \(\mathcal{O}(dn+n\log n)\). This particular situation arises for instance with RGB images (\(\mu_{1},\mu_{2}\in\mathcal{P}_{2}^{n}(\mathbb{R}^{3})\)), where black and white images are supported on a line (the line of gray). One can deal with the problem of colorization through color transfer, where a black and white image is the source and a colorful image the target. Our fast procedure allows considering large images without sub-sampling. Fig. 5 gives an example of colorization of an image of size 1280\(\times\)1024 that was computed in less than 0.2 seconds, while being totally intractable for the \(\mathcal{O}(n^{3}\log n)\) solver of WD. Figure 5: Point cloud source and target (left); colorization of the image (right). This procedure can be lifted to pan-sharpening [64] where one aims at constructing a super-resolution multi-chromatic satellite image with the help of a super-resolution mono-chromatic image (source) and a low-resolution multi-chromatic image (target). Obtained results are given in the Supp. E.4.
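To illustrate how the closed form of Lemma 4.6 is exploited in this colorization setting, here is a minimal numpy sketch that computes the exact squared WD and the induced assignment when one of the two point clouds is supported on a line (an illustrative sketch under the uniform-weight, equal-size setting; not the authors' implementation):

```python
import numpy as np

def w2_to_line(X, Y, theta):
    """Exact W_2^2(mu_1, mu_2) and assignment when mu_2 lies on the line
    spanned by the unit vector theta (Lemma 4.6): orthogonal projection
    cost plus the 1D cost between the projections."""
    proj_X = X @ theta                       # P^theta_# mu_1
    proj_Y = Y @ theta                       # P^theta_# mu_2 (Y already lies on the line)
    Q_X = np.outer(proj_X, theta)            # Q^theta(x) = theta <x, theta>
    cost_proj = np.mean(np.sum((X - Q_X) ** 2, axis=1))
    sx, sy = np.argsort(proj_X), np.argsort(proj_Y)
    cost_1d = np.mean((proj_X[sx] - proj_Y[sy]) ** 2)
    T = np.empty(len(X), dtype=int)          # T[i]: index of the target point matched to x_i
    T[sx] = sy
    return cost_proj + cost_1d, T

# Toy usage: a colorful cloud in R^3 versus a cloud supported on the gray line.
rng = np.random.default_rng(0)
theta = np.ones(3) / np.sqrt(3.0)            # direction of the gray axis in RGB space
X = rng.random((500, 3))                     # color pixels of the colorful image
Y = np.outer(rng.random(500), theta)         # gray pixels, supported on the line
cost, T = w2_to_line(X, Y, theta)
print(cost, T[:5])
```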
### 5.4 Point clouds registration Iterative Closest Point (ICP) is an algorithm for aligning point clouds based on their geometries [7]. Roughly, its most popular version defines a one-to-one correspondence between point clouds, computes a rigid transformation (namely translation, rotation or reflection), moves the source point cloud using the transformation, and iterates the process until convergence. The rigid transformation is the solution of the Procrustes problem, _i.e._ \(\arg\min_{(\Omega,t)\in O(d)\times\mathbb{R}^{d}}\|\Omega(\mathbf{X}-t)-\mathbf{Y}\|_{2}^{2}\), where \(\mathbf{X},\mathbf{Y}\) are the source and the target point clouds and \(O(d)\) the space of orthogonal matrices of dimension \(d\). This Procrustes problem can be solved using an SVD [59] for instance. We perform the ICP algorithm with different variants to compute the one-to-one correspondence: nearest neighbor (NN) correspondence, OT transport map (for small size datasets) and min-SWGG transport map. Note that SW, PWD, SRW, factored coupling and Sinkhorn cannot be run in this context where a one-to-one correspondence is mandatory; subspace detours [44] is irrelevant in this context (see Supp. E.5). We evaluate the results of the ICP algorithm considering 3 datasets of different sizes on: i) the quality of the final alignment, measured by the Sinkhorn divergence between the re-aligned and target point cloud; ii) the speed of the algorithm, given by the time until convergence. Results are shown in Table 1 and more details about the setup, together with a deeper analysis of the results, can be found in Supp. E.5. One can see that the assignment provided by OT-based methods is better than NN. min-SWGG allows working with large datasets, while OT fails to provide a solution for \(n=150000\). ## 6 Conclusion In this paper, we hinge on the properties of SWD and on the Wasserstein generalized geodesics to define min-SWGG, a new upper bound of the Wasserstein distance that comes with an associated transport map. Topological properties of SWGG are provided, showing that it defines a metric and that min-SWGG metrizes the weak convergence of measures. We also proposed two algorithms for computing min-SWGG, by either a random search scheme or gradient descent after smoothing the generalized geodesics definition of min-SWGG. We illustrate its behavior in several experimental setups, notably showcasing its interest in applications where a transport map is needed. The set of permutations that is covered by min-SWGG is the one induced by projections and permutations on the line. It is a subset of the original Birkhoff polytope and it would be interesting to characterise how these two sets relate. In particular, in the case of empirical realisations of continuous distributions, the behavior of min-SWGG, when \(n\) grows, needs to be investigated. In addition, the fact that min-SWGG and WD coincide when \(d>2n\) calls for embedding the distributions in higher dimensional spaces to benefit from the more expressive power of projection onto the line. Another important consideration is to have a theoretical upper bound for min-SWGG. \begin{table} \begin{tabular}{l c c c} \(n\) & 500 & 3000 & 150 000 \\ \hline NN & 3.54 (**0.02**) & 96.9 (**0.30**) & 23.3 (**59.37**) \\ OT & 0.32 (0.18) & 48.4 (58.46) &.
\\ min-SWGG & **0.05** (0.04) & **37.6** (0.90) & **6.7** (105.75) \\ \end{tabular} \end{table} Table 1: Sinkhorn Divergence between final transformation on the source and the target. Timings in seconds are into parenthesis. Best values are boldfaced. An example of a point clouds (\(n=3000\)) is provided on the left.
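As a complement to the point-cloud registration experiment of Section 5.4, the following sketch chains the random-search variant of min-SWGG (eqs. (8)-(10)) with the SVD-based Procrustes step of a single ICP iteration. It is an illustrative numpy sketch under the equal-size, uniform-weight setting and is not the authors' code:

```python
import numpy as np

def min_swgg_map(X, Y, n_dir=200, seed=None):
    """Random-search min-SWGG: best cost and permutation T matching x_i to y_{T[i]}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    best_cost, best_T = np.inf, None
    for _ in range(n_dir):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        sx, sy = np.argsort(X @ theta), np.argsort(Y @ theta)
        T = np.empty(n, dtype=int)
        T[sx] = sy                                   # eq. (10): compose the two sortings
        cost = np.mean(np.sum((X - Y[T]) ** 2, axis=1))   # eq. (8)
        if cost < best_cost:
            best_cost, best_T = cost, T
    return best_cost, best_T

def icp_step(X, Y):
    """One ICP iteration: min-SWGG correspondence, then the rigid
    transformation (rotation/reflection plus translation) solved by SVD."""
    _, T = min_swgg_map(X, Y)
    Yc = Y[T]
    mx, my = X.mean(axis=0), Yc.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - mx).T @ (Yc - my))
    Omega = U @ Vt                   # orthogonal Procrustes solution
    return (X - mx) @ Omega + my     # moved source point cloud

# Toy usage: recover a rotated and shifted copy of a 3D cloud.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
a = np.pi / 6
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
Y = X @ R.T + np.array([1.0, -2.0, 0.5])
X_moved = icp_step(X, Y)
print(np.mean(np.sum((X_moved - Y) ** 2, axis=1)))  # residual alignment error after one step
```

In practice, the correspondence and the Procrustes step would be iterated until convergence, as in the standard ICP loop.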
2305.18458
Conditional Support Alignment for Domain Adaptation with Label Shift
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained based on the labeled samples on the source domain and unlabelled ones in the target domain. The dominant existing methods in the field that rely on the classical covariate shift assumption to learn domain-invariant feature representation have yielded suboptimal performance under the label distribution shift between source and target domains. In this paper, we propose a novel conditional adversarial support alignment (CASA) whose aim is to minimize the conditional symmetric support divergence between the source's and target domain's feature representation distributions, aiming at a more helpful representation for the classification task. We also introduce a novel theoretical target risk bound, which justifies the merits of aligning the supports of conditional feature distributions compared to the existing marginal support alignment approach in the UDA settings. We then provide a complete training process for learning in which the objective optimization functions are precisely based on the proposed target risk bound. Our empirical results demonstrate that CASA outperforms other state-of-the-art methods on different UDA benchmark tasks under label shift conditions.
Anh T Nguyen, Lam Tran, Anh Tong, Tuan-Duy H. Nguyen, Toan Tran
2023-05-29T05:20:18Z
http://arxiv.org/abs/2305.18458v1
# Conditional Support Alignment for Domain Adaptation with Label Shift ###### Abstract Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained based on the labeled samples on the source domain and unlabelled ones in the target domain. The dominant existing methods in the field that rely on the classical covariate shift assumption to learn domain-invariant feature representation have yielded suboptimal performance under the label distribution shift between source and target domains. In this paper, we propose a novel conditional adversarial support alignment (CASA) whose aim is to minimize the conditional symmetric support divergence between the source's and target domain's feature representation distributions, aiming at a more helpful representation for the classification task. We also introduce a novel theoretical target risk bound, which justifies the merits of aligning the supports of conditional feature distributions compared to the existing marginal support alignment approach in the UDA settings. We then provide a complete training process for learning in which the objective optimization functions are precisely based on the proposed target risk bound. Our empirical results demonstrate that CASA outperforms other state-of-the-art methods on different UDA benchmark tasks under label shift conditions. ## 1 Introduction The remarkable success of modern deep learning models often relies on the assumption that training and test data are independent and identically distributed (i.i.d), contrasting the types of real-world problems that can be solved. The violation of that i.i.d. assumption leads to the data distribution shift, or out-of-distribution (OOD) issue, which negatively affects the generalization performance of the learning models (Torralba and Efros, 2011; Li et al., 2017) and renders them impracticable. One of the most popular settings for the OOD problem is unsupervised domain adaptation (UDA) (Ganin and Lempitsky, 2015; David et al., 2010) in which the training process is based on fully-labeled samples from a source domain and completely-unlabeled samples from a target domain, aiming for a good generalization performance on the unseen target data. While the covariate shift assumption has been extensively studied under the UDA problem setting, with reducing the feature distribution divergence between domains as the dominant approach (Ganin and Lempitsky, 2015; Tzeng et al., 2017; Shen et al., 2018; Courty et al., 2017; Liu et al., 2019; Long et al., 2015, 2017, 2016, 2014), the label shift assumption (\(p(y)\) varies between domains, while \(p(x|y)\) is unchanged) remains vastly underexplored in comparison. Compared to the covariate shift assumption, the label shift assumption is often more reasonable in several real-world settings, e.g., the healthcare industry, where the distribution of diseases in medical diagnosis may change across hospitals, while the conditional distribution of symptoms given diseases remains unchanged. In the presence of such label shift, Zhao et al. (2019); Tachet des Combes et al. (2020) introduced upper bounds on the performance of existing methods and showed that enforcing domain-invariance of learned representation can provably affect generalization performance. 
Several UDA methods that explicitly consider the label shift assumption often rely on estimating the importance weights of the source and target label distribution and strictly require the conditional distributions \(p(x|y)\) or \(p(z|y)\) to be domain-invariant Lipton et al. (2018); Tachet des Combes et al. (2020); Azizzadenesheli et al. (2019). Another popular UDA under label shift framework is enforcing domain invariance of representation \(z\) w.r.t some _relaxed_ divergences Wu et al. (2019); Tong et al. (2022). Wu et al. (2019) proposed reducing \(\beta\)-admissible distribution divergence to prevent cross-label mapping in conventional domain-invariant approaches. However, choosing inappropriate \(\beta\) values can critically reduce the performance of this method under extreme label shifts (Wu et al., 2019; Li et al., 2020). This paper aims to develop a new theoretically sound approach for UDA under the label shift. Our method is relatively related to but different from the adversarial support alignment (ASA) (Tong et al., 2022) method, which utilizes the symmetric support divergence (SSD) to align the support of the marginal feature distribution of the source and the target domains and achieve impressive domain adaptation results under severe label shift. However, one of the critical drawbacks of the ASA method is that reducing the marginal support divergence indiscriminately may make the learned representation susceptible to conditional distribution misalignment. We illustrate this observation intuitively in Figure 1. Furthermore, the results in Tong et al. (2022) have not been verified rigorously due to the lack of a theoretical guarantee for classification performance in the target domain. We theoretically verify the benefit of our proposed method by introducing a novel generalization error bound for the learning process based on a new conditional SSD. The main contributions of our paper are summarized as follows: * We propose a novel conditional adversarial support alignment (CASA) to align the support of the conditional feature distributions on the source and target domains, aiming for a more label-informative representation for the classification task. * We provide a new theoretical upper bound for the target risk for the learning process of our CASA. We then introduce a complete training scheme for our proposed CASA by minimizing that bound. * We provide experimental results on several benchmark tasks in UDA, which consistently demonstrate the empirical benefits of our proposed method compared to other relevant existing UDA approaches. ## 2 Methodology ### Problem Statement Let us consider a classification framework where \(\mathcal{X}\subset\mathbb{R}^{d}\) represents the input space and \(\mathcal{Y}=\{y_{1},y_{2},\ldots,y_{K}\}\) denotes the output space consisting of \(K\) classes. The classes are represented by one-hot vectors \(y_{k}\) in \(\mathbb{R}^{K}\). A domain is then defined by \(P(x,y)\in\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}\), where \(\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}\) is the set of Figure 1: Illustration of the learned latent space of different domain-invariant frameworks under label shift for a binary classification problem. It can be seen that support alignment can mitigate the high error rate induced by distribution alignment, whereas conditional support alignment can achieve the best representation by explicitly aligning the supports of class-conditioned latent distributions. 
joint probability distributions on \(\mathcal{X}\times\mathcal{Y}\). The set of conditional distributions of \(x\) given \(y\), \(P(x|y)\), is denoted as \(\mathcal{P}_{\mathcal{X}|\mathcal{Y}}\), and the set of probability marginal distributions on \(\mathcal{X}\) and \(\mathcal{Y}\) is denoted as \(\mathcal{P}_{\mathcal{X}}\) and \(\mathcal{P}_{\mathcal{Y}}\), respectively. We also denote \(P_{X|Y=y_{k}}\) as \(P_{X|k}\) for convenience. Consider an UDA framework, in which the source domain \(D^{S}=\left\{\left(x_{j}^{S},y_{i}^{S}\right)\right\}_{i=1}^{n_{S}}\), where \(\left(x_{j}^{S},y_{i}^{S}\right)\sim P^{S}(x,y)\), and \(y_{i}^{S}\sim P^{S}(y)\); and the target domain \(D^{T}=\left\{x_{j}^{T}\right\}_{j=1}^{n_{T}}\), where \(x_{j}^{t}\sim P_{X}^{T}\subset\mathcal{P}_{\mathcal{X}}\). Without loss of generality, we assume that both the source and target domains consists of \(K\) classes, i.e., \(\mathcal{Y}=\left\{y_{1},\ldots,y_{K}\right\}\). In this paper, we focus on the UDA setting with label shift, which assumes that \(P^{S}(y)\neq P^{T}(y)\) while the conditional distributions \(P^{S}(x|y)\) and \(P^{T}(x|y)\) remain unchanged. Nevertheless, unlike relevant works such as Tachet des Combes et al. (2020), Lipton et al. (2018), we do not make the strong assumption about the invariance of the conditional input distribution \(P(x|y)\) or conditional feature distribution \(P(z|y)\) between those domains. A classifier (or hypothesis) is defined by a function : \(\mathcal{X}\mapsto\Delta_{K}\), where \(\Delta_{K}=\left\{\boldsymbol{\pi}\in\mathbb{R}^{K}:\|\boldsymbol{\pi}\|_{1}= 1\wedge\pi\geq\boldsymbol{0}\right\}\) is the \(K\)-simplex, and an induced scoring function \(g:\mathcal{X}\mapsto\mathbb{R}^{K}\). Consider a loss function \(\ell:\mathbb{R}^{K}\times\mathbb{R}^{K}\rightarrow\mathbb{R}_{+}\), satisfying \(\ell(y,y)=0,\ \forall y\in\mathcal{Y}\). Given a scoring function \(g\), we define its associated classifier as \(h_{g}\), i.e., \(h_{g}(\mathbf{x})=\hat{y}\) with \(\hat{y}\in\operatorname*{argmin}_{y\in\mathcal{Y}}\ell[g(\mathbf{x}),y]\). The \(\ell\)-risk of a scoring function \(g\) over a distribution \(P_{X\times Y}\) is then defined by \(\mathcal{L}_{P}(g):=\mathbb{E}_{(x,y)\sim P}[\ell(g(x),y)]\), and the classification mismatch of \(g\) with a classifier \(h\) by \(\mathcal{L}_{P}(g,h):=\mathbb{E}_{x\sim P_{\mathbb{X}}}[\ell(g(x),h(x))]\). For convenience, we denote the source and target risk of scoring function or classifier \(g\) as \(\mathcal{L}_{S}(g)\) and \(\mathcal{L}_{T}(g)\), respectively. ### A target risk bound based on support misalignment Similar to Ganin et al. (2016), we assume that the hypothesis \(g\) can be decomposed as \(g=c\circ f\), where \(c:\mathcal{X}\rightarrow\mathcal{Y}\) is the classifier and \(f:\mathcal{X}\rightarrow\mathcal{Z}\) is the feature extractor. Here, \(\mathcal{Z}\) represents the latent space. Let us denote the domain discriminator as \(r:\mathcal{Z}\rightarrow\left\{0,1\right\}\), and the probability marginal distribution of latent variable \(Z\in\mathcal{Z}\) induced by the source and target domains as \(P_{Z}^{S}\) and \(P_{Z}^{T}\), respectively. We next introduce an existing upper bound for the target risk \(\mathcal{L}\) based on the support alignment-based divergence between \(P_{Z}^{S}\) and \(P_{Z}^{T}\). 
The bound is different from those in previous domain-invariant works, which rely on several types of divergence such as \(\mathcal{H}\Delta\mathcal{H}\)-divergence (Ben-David et al., 2010), Wasserstein distance (Courty et al., 2017) or maximum mean discrepancy (MMD) (Long et al., 2015). **Definition 1** (_Source-guided uncertainty_ (Dhouib and Maghsudi, 2022)).: Let \(\mathbb{H}\) be a hypothesis space, let \(\ell\) be a loss function, and let \(\mathcal{L}_{P}(\cdot)\) be the associated risk for a distribution \(P\). The source-guided uncertainty (or source-conditioned confidence) of a function \(g\in\mathbb{H}\) associated with \(\ell\) is defined by: \[\mathcal{C}_{\mathbb{H}}(g)=\inf_{h\in\mathbb{H}}\mathcal{L}_{T}(g,h)+\mathcal{ L}_{S}(h), \tag{1}\] where \(\mathcal{L}_{T}(g,h)\) is the classification mismatch of \(g\) and \(h\) on \(P_{X}^{T}\). _Remark 1_.: One important property of the source-guided uncertainty is that if \(h_{g}\in\mathbb{H}\), then we have \(\mathcal{C}_{\mathbb{H}}(g)\leq\mathcal{L}_{T}(g,h_{g})+\mathcal{L}_{S}(h_{g})\). Dhouib and Maghsudi (2022) showed that when \(g(x)\) is in the \(K\)-dimensional probability simplex and \(\ell\) is the cross-entropy loss, then \(\mathcal{L}_{T}(g,h_{g})\) is upper-bounded by the average entropy of \(g(x)\), i.e., \(\mathcal{L}_{T}(g,h_{g})\leq \mathbb{E}_{x\sim P_{X}^{T}}H(g(x))\), where \(H(g)\) denotes the entropy of the function \(g\). Hence, minimizing the conditional entropy of the predictor \(g(x)\) on the target domain, as has been done in previous domain adaptation algorithms (Shu et al., 2018; Kirchmeyer et al., 2022; Saito et al., 2019; Tong et al., 2022), together with the source risk \(\mathcal{L}_{S}(h_{g})\), effectively minimizes the source-guided uncertainty \(\mathcal{C}_{\mathbb{H}}(g)\). To derive a target error bound for the marginal support alignment approach (Dhouib and Maghsudi, 2022), we restate the following definition. **Definition 2** (_Integral measure discrepancy_).: Let \(\mathbb{F}\) be a family of nonnegative functions over \(\mathbb{X}\), containing the null function. The Integral Measure Discrepancy (IMD) associated to \(\mathbb{F}\) between two distributions \(\mathcal{Q}\) and \(\mathcal{P}\) over \(\mathbb{X}\) is \[\operatorname{IMD}_{\mathbb{F}}\left(\mathcal{Q},\mathcal{P}\right):=\sup_{f\in \mathbb{F}}\int f\;\mathrm{d}\mathcal{Q}-\int f\;\mathrm{d}\mathcal{P}. \tag{2}\] Intuitively, this discrepancy aims to capture the distance between measures with different masses. Now, we recapitulate the IMD-based domain adaptation bound in Dhouib and Maghsudi (2022). **Proposition 1** (Domain adaptation bound using IMD, Dhouib and Maghsudi (2022)).: _Let \(\mathbb{H}\) be a hypothesis space, \(g\) be a score function, and \(\ell\) be a loss function satisfying the triangle inequality. Consider the localized hypothesis space \(\mathbb{H}^{r}\coloneqq\{h\in\mathbb{H};\mathcal{L}_{S}(h)\leq r\}\) and the localized nonnegative function set \(\mathbb{F}_{\epsilon}=\{f;f(z)\geq 0,\mathbb{E}_{P_{Z}^{S}}[f]\leq\epsilon\}\). Then, for any \(r_{1},r_{2}\geq 0\), we have_ \[\mathcal{L}_{T}(g)\leq\mathcal{C}_{\mathbb{H}^{r_{1}}}(g)+\mathrm{ IMD}_{\mathbb{F}_{r_{1}+r_{2}}}(P_{Z}^{T},P_{Z}^{S})+\inf_{h\in\mathbb{H}^{r_{2}}} \mathcal{L}_{T}(h)+\mathcal{L}_{S}(h). \tag{3}\] By incorporating the Lipschitz property into the functions \(f\in\mathbb{F}\), Dhouib and Maghsudi (2022) demonstrated that the bound presented in Proposition 1 can justify the optimization objective in Tong et al. (2022) using the following divergence. 
**Definition 3** (_Symmetric support divergence_).: Assuming that \(d\) is a proper distance on the latent space \(\mathcal{Z}\). The symmetric support divergence (SSD) between two probability distributions \(P_{Z}\) and \(Q_{Z}\) is then defined by: \[\mathcal{D}_{\mathrm{supp}}(P_{Z},Q_{Z})=\mathbb{E}_{z\sim P_{Z}}[d(z, \mathrm{supp}(Q_{Z}))]+\mathbb{E}_{z\sim Q_{Z}}[d(z,\mathrm{supp}(P_{Z}))],\] where \(\mathrm{supp}(P_{Z})\) and \(\mathrm{supp}(Q_{Z})\) are the corresponding supports of \(P_{Z}\) and \(Q_{Z}\), respectively. Unlike the work in Dhouib and Maghsudi (2022) that extends the bound with \(\beta\)-admissible distances, our proposed method goes beyond interpreting the bound solely in terms of support divergences Tong et al. (2022). In particular, our method also incorporates the label structures of the domains, allowing a more comprehensive analysis of the underlying relationships. ### A novel conditional SSD-based domain adaptation bound In this work, we first introduce a definition for a novel _conditional symmetric support divergence_ between the conditional distributions \(P_{Z|Y}^{S}\) and \(P_{Z|Y}^{T}\). For simplicity, we also denote \(d\) as a well-defined distance on the conditional \(\mathcal{Z}|\mathcal{Y}\) space. **Definition 4** (_Conditional symmetric support divergence_).: The conditional symmetric support divergence (CSSD) between the conditional distributions \(P_{Z|Y}^{S}\) and \(P_{Z|Y}^{T}\) is defined by \[\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{S},P_{Z|Y}^{T})\] \[= \sum_{y\in\mathcal{Y}}P^{S}(Y=y)\mathbb{E}_{z\sim P_{Z|Y=y}^{S}} [d(z,\mathrm{supp}(P_{Z|Y=y}^{T}))]+P^{T}(Y=y)\mathbb{E}_{z\sim P_{Z|Y=y}^{T} }[d(z,\mathrm{supp}(P_{Z|Y=y}^{S}))].\] In comparison to the SSD in Definition 3, our CSSD takes into account the class proportions in both source and target domains. As a result, the localized IMD considers per-class localized functions, which are defined as the \((\boldsymbol{\epsilon},P_{Z|Y}^{S})\)-localized nonnegative function denoted by \(\mathbb{F}_{\boldsymbol{\epsilon}}\). Specifically, \(\mathbb{F}_{\boldsymbol{\epsilon}}=\{f;f(z)\geq 0,\mathbb{E}_{P_{Z|k}^{S}}[f] \leq\epsilon_{k},\quad k=1\dots K\}\) with \(\boldsymbol{\epsilon}=(\epsilon_{1},\dots,\epsilon_{K})\geq 0\). In the following lemma, we introduce upper bounds for the IMD using CSSD (the corresponding proof is provided in the Appendix). **Lemma 1** (Upper bound IMD using CSSD).: _Let \(\mathbb{F}\) be a set of nonnegative and \(1\)-Lipschitz functions, and let \(\mathbb{F}_{\boldsymbol{\epsilon}}\) be a \((\boldsymbol{\epsilon},P_{Z|Y}^{S})\)-localized nonnegative function. Then, we can bound the IMD w.r.t the conditional support domain and CSSD, respectively, as_ \[\mathrm{IMD}_{\mathbb{F}_{\boldsymbol{\epsilon}}}(P_{Z}^{T},P_{Z}^{S}) \leq\sum_{k=1}^{K}q_{k}\mathbb{E}_{z\sim P_{Z|k}^{T}}[d(z, \mathrm{supp}(P_{Z|k}^{S}))]+q_{k}\delta_{k}+p_{k}\epsilon_{k}, \tag{4}\] \[\mathrm{IMD}_{\mathbb{F}_{\boldsymbol{\epsilon}}}(P_{Z}^{T},P_{Z}^{S}) \leq\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{T},P_{Z|Y}^{S})+\sum_ {k=1}^{K}p_{k}\delta_{k}+q_{k}\gamma_{k}, \tag{5}\] _where \(\delta_{k}\coloneqq\sup_{z\in\mathrm{supp}\,P_{Z|k}^{S},f\in\mathbb{F}_{ \boldsymbol{\epsilon}}}f(z)\), \(\gamma_{k}\coloneqq\sup_{z\in\mathrm{supp}\,P_{Z|k}^{T},f\in\mathbb{F}_{ \boldsymbol{\epsilon}}}f(z)\), \(p_{k}=P^{S}(Y=y_{k})\) and \(q_{k}=P^{T}(Y=y_{k})\)._ We now provide a novel upper bound of the target risk based on CSSD in the following theorem, which is the straightforward result from Lemma 1. 
**Theorem 1** (Domain adaptation bound via CSSD).: _Let \(\mathbb{H}\) be a hypothesis space, \(g\) be a score function, and \(\ell\) be a loss function satisfying the triangle inequality. Consider the localized hypothesis space \(\mathbb{H}^{\mathbf{r}}\coloneqq\{h\in\mathbb{H};\mathcal{L}_{\mathcal{S}_{k}}(h)\leq r_{k},k=1\dots K\}\), and assume that all the assumptions on \(\mathbb{F}_{\mathbf{\epsilon}}\) in Lemma 1 are fulfilled. Then, for any \(\mathbf{r}^{1}=(r_{1}^{1},\dots,r_{K}^{1})\geq 0,\mathbf{r}^{2}=(r_{1}^{2}, \dots,r_{K}^{2})\geq 0\) that satisfy \(r_{k}^{1}+r_{k}^{2}=\epsilon_{k}\), we have:_ \[\mathcal{L}_{T}(g)\leq\mathcal{C}_{\mathbb{H}^{\mathbf{r}^{1}}}(g)+\mathcal{D}_{\mathrm{ supp}}^{c}(P_{Z|Y}^{T},P_{Z|Y}^{S})+\sum_{k=1}^{K}q_{k}\delta_{k}+p_{k}\gamma_{k} +\inf_{h\in\mathbb{H}^{\mathbf{r}^{2}}}\mathcal{L}_{\mathcal{S}}(h)+\mathcal{L}_{T}(h). \tag{6}\] _Remark 2_.: In the case that we do not incorporate the label information, the IMD can be bounded as \[\mathrm{IMD}_{\mathbb{F}_{\mathbf{\epsilon}}}(P_{Z}^{T},P_{Z}^{S})\leq\mathbb{E}_{z\sim P_{Z}^{T}}[d(z,\mathrm{supp}(P_{Z}^{S}))]+\delta+\epsilon, \tag{7}\] where \(\delta=\sup_{z\in\mathrm{supp}(P_{Z}^{S}),f\in\mathbb{F}_{\mathbf{\epsilon}}}f(z)\). Similar to the findings in Dhouib and Maghsudi (2022), the inequality in Equation (7) provides a justification for minimizing the SSD proposed in Tong et al. (2022). Notably, this inequality extends to the case where \(\epsilon\geq 0\), and thus recovers the bound with SSD in Dhouib and Maghsudi (2022) as a special case. Note that in order to make a fair comparison between Equations (4) and (7), we assume that \(\sum_{k}\epsilon_{k}p_{k}=\epsilon\), making \(\mathbb{F}_{\mathbf{\epsilon}}\subseteq\mathbb{F}_{\epsilon}\). In comparison to the upper bound in Equation (7), our expectation satisfies \[\sum_{k}q_{k}\mathbb{E}_{z\sim P_{Z|k}^{T}}[d(z,\mathrm{supp}(P_{Z|k}^{S}))] \geq\mathbb{E}_{z\sim P_{Z}^{T}}[d(z,\mathrm{supp}(P_{Z}^{S}))],\] due to the fact that \(d(z,\mathrm{supp}(P_{Z|k}^{S}))\geq d(z,\mathrm{supp}(P_{Z}^{S}))\) and that Jensen's inequality holds. However, this inequality does not imply that our bound is less tight than the bound using SSD. When considering the remaining part, we can observe that \(\sum_{k}q_{k}\delta_{k}\leq\delta\) since \(\mathrm{supp}(P_{Z|k}^{S})\subseteq\mathrm{supp}(P_{Z}^{S})\) for any \(k=1,\dots,K\). In other words, there is a trade-off between the distance to the support space and the uniform norm (sup norm) of the function on the supports. _Remark 3_.: The proposed bound shares several similarities with other target error bounds in the UDA literature (Ben-David et al., 2010; Acuna et al., 2021). In particular, these bounds all upper-bound the target risk with a source risk term, a domain divergence term, and an ideal joint risk term. The main difference is that we use the conditional symmetric support divergence instead of the \(\mathcal{H}\Delta\mathcal{H}\)-divergence in Ben-David et al. (2010) and the \(f\)-divergence in Acuna et al. (2021), making our bound more suitable for problems with large degrees of label shift, as a lower value of CSSD does not necessarily increase the target risk under large label shift, unlike the \(\mathcal{H}\Delta\mathcal{H}\)-divergence and \(f\)-divergence (Tachet des Combes et al., 2020; Zhao et al., 2019). This is intuitively illustrated by Figure 1 and also empirically validated by the experiment results in Section 3. 
Furthermore, the localized hypothesis spaces \(\mathbb{H}^{\mathbf{r}^{1}}\) and \(\mathbb{H}^{\mathbf{r}^{2}}\) are reminiscent of the localized adaptation bound proposed in Zhang et al. (2020). While lower values of \(\mathbf{r}^{1},\mathbf{r}^{2}\) can make the term \(\sum_{k=1}^{K}q_{k}\delta_{k}+p_{k}\gamma_{k}\) smaller, the source-guided uncertainty term and the ideal joint risk term can increase as a result. In our final optimization procedure, we assume the ideal joint risk term and the \(\sum_{k=1}^{K}q_{k}\delta_{k}+p_{k}\gamma_{k}\) values to be small, and minimize the source-guided uncertainty (see Section 2.4.1) and the CSSD via a proxy (see Section 2.4.3) to reduce the target domain risk. ### Training scheme for our CASA Based on our theoretical guarantee in Theorem 1, we propose the Conditional Adversarial Support Alignment (CASA) algorithm for the UDA problem setting under label shift. The main motivation of CASA is to minimize both the source-guided uncertainty \(\mathcal{C}_{\mathbb{H}^{\mathbf{r}^{1}}}(g)\) and the CSSD \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{T},P_{Z|Y}^{S})\) in Eq. (6). #### 2.4.1 Minimizing source-guided uncertainty As stated in Remark 1, minimizing the source risk and the target conditional entropy effectively reduces the source-guided uncertainty \(\mathcal{C}_{\mathbb{H}^{\mathbf{r}^{1}}}(g)\), which is the first term in the target risk bound of Eq. (6). Minimizing the prediction entropy has also been extensively studied and has resulted in many effective semi-supervised (Laine and Aila, 2017; Dai et al., 2017; Tarvainen and Valpola, 2017; Grandvalet and Bengio, 2004) and UDA algorithms (Shu et al., 2018; Kirchmeyer et al., 2022; Liang et al., 2021). Hence, the total loss of CASA first includes the classification loss \(\ell_{cls}\) on source samples and the conditional entropy loss on target samples, defined as follows: \[\mathcal{L}_{y}(g)=\frac{1}{n_{S}}\sum_{i=1}^{n_{S}}\ell_{cls}(g(x_{i}^{S}),y_ {i}^{S})\quad\text{and}\quad\mathcal{L}_{ce}(g)=-\frac{1}{n_{T}}\sum_{i=1}^{n_{ T}}g(x_{i}^{T})^{\top}\ln g(x_{i}^{T}).\] #### 2.4.2 Enforcing Lipschitz hypothesis The risk bound in Eq. (6) suggests regularizing the Lipschitz continuity of the classifier \(c\). Inspired by the success of virtual adversarial training by Miyato et al. (2018) on domain adaptation tasks (Shu et al., 2018; Tong et al., 2022), we instead enforce a locally-Lipschitz constraint on the classifier, which is a relaxation of the global Lipschitz constraint, by enforcing consistency in the norm-ball around each representation sample \(z\). We observe that enforcing the local Lipschitz constraint of \(g=c\circ f\) instead of \(c\) leads to better performance in empirical experiments, which may suggest the benefit of ensuring Lipschitz continuity of the feature representation function \(f\) (Wu et al., 2019; Shu et al., 2018) in addition to the feature classifier \(c\). Hence, the following term is added to the overall loss: \[\mathcal{L}_{v}(c,f)=\frac{1}{n_{S}}\sum_{i=1}^{n_{S}}\max_{\|r\|<\epsilon}\mathrm{D}_{\mathrm{KL}}\left(g(x_{i}^{S})\|g(x_{i}^{S} +r)\right)+\frac{1}{n_{T}}\sum_{i=1}^{n_{T}}\max_{\|r\|<\epsilon} \mathrm{D}_{\mathrm{KL}}\left(g(x_{i}^{T})\|g(x_{i}^{T}+r)\right). \tag{8}\] #### 2.4.3 Minimizing conditional symmetric support divergence The next natural step for tightening the target risk bound in Eq. (6) is to minimize \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{S},P_{Z|Y}^{T})\). 
However, it is challenging to directly optimize this term since, in a UDA setting, we have no access to the labels of the target samples. Motivated by the use of pseudo-labels to guide the training process in the domain adaptation literature (French et al., 2018; Chen et al., 2019; Long et al., 2018; Zhang et al., 2019), we alternatively consider minimizing \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|\widehat{Y}}^{S},P_{Z|\widehat{Y}}^{T})\) as a proxy for minimizing \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{S},P_{Z|Y}^{T})\), with \(\widehat{Y}\) as pseudo-labels. To mitigate the error accumulation issue of using pseudo-labels under large domain shift (Zhang et al., 2019; Liu et al., 2021), we employ the entropy conditioning technique of Long et al. (2018) in our implementation of CASA. The next proposition demonstrates that the support divergence between the joint distributions \(P_{Z,Y}^{S},P_{Z,Y}^{T}\) can be used to estimate \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{S},P_{Z|Y}^{T})\). **Proposition 2**.: _Assume that \(P^{S}(\widehat{Y}=y)>0\) and \(P^{T}(\widehat{Y}=y)>0\) for all \(y\in\mathcal{Y}\), and that there exists a well-defined distance, denoted by \(d\), on the space \(\mathcal{Z}\times\mathcal{Y}\). Then \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{S},P_{Z|Y}^{T})=0\) if and only if \(\mathcal{D}_{\mathrm{supp}}(P_{Z,\widehat{Y}}^{S},P_{Z,\widehat{Y}}^{T})=0\)._ Proposition 2 motivates us to alternatively minimize the joint support divergence \(\mathcal{D}_{\mathrm{supp}}(P_{Z,\widehat{Y}}^{S},P_{Z,\widehat{Y}}^{T})\) to tighten the target error bound, without using any explicit estimate of the marginal label distribution shift as performed in Lipton et al. (2018) and Tachet des Combes et al. (2020). Moreover, minimizing this joint support divergence can be performed efficiently in a one-dimensional space. In particular, Tong et al. (2022) indicated that a log-loss discriminator \(r:\mathcal{X}\rightarrow[0,1]\), trained with the binary cross-entropy loss to discriminate between two distributions \(P\) and \(Q\), can be used to estimate \(\mathcal{D}_{\mathrm{supp}}(P,Q)\). Exploiting this result, we can align the supports of \(P_{Z,\widehat{Y}}^{S}\) and \(P_{Z,\widehat{Y}}^{T}\) with the optimal discriminator \(r^{*}\) that is trained to discriminate between these distributions, which are represented as the outer product \(Z\otimes\widehat{Y}\), similar to Long et al. (2018). Consequently, our model incorporates the following domain discriminator loss and support alignment loss to minimize the conditional support divergence \(\mathcal{D}_{\mathrm{supp}}^{c}(P_{Z|Y}^{S},P_{Z|Y}^{T})\) in the error bound specified in Eq. (6): \[\mathcal{L}_{d}(r) =-\frac{1}{n_{S}}\sum_{i=1}^{n_{S}}\ln\left[r(s(x_{i}^{S}))\right]- \frac{1}{n_{T}}\sum_{i=1}^{n_{T}}\ln\left[1-r(s(x_{i}^{T}))\right]; \tag{9}\] \[\mathcal{L}_{align}(f) =\frac{1}{n_{S}}\sum_{i=1}^{n_{S}}d\left(r(s(x_{i}^{S})),\{r(s(x_ {j}^{T}))\}_{j=1}^{n_{T}}\right)+\frac{1}{n_{T}}\sum_{i=1}^{n_{T}}d\left(r(s(x_ {i}^{T})),\{r(s(x_{j}^{S}))\}_{j=1}^{n_{S}}\right), \tag{10}\] where \(s(x)=f(x)\otimes g(x)\) and \(u\otimes v=uv^{T}\). 
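To make Eqs. (9) and (10) concrete, a rough PyTorch-style sketch of the two terms is given below. This is our own illustration rather than the authors' released implementation: the distance \(d\) is taken here to be the absolute difference of the one-dimensional discriminator outputs, and the entropy-conditioning technique of Long et al. (2018) mentioned above is omitted for brevity.

```python
import torch

def joint_embedding(z, y_hat):
    # s(x) = f(x) (x) g(x): outer product of the feature vector and the
    # predicted class probabilities, flattened to one vector per sample.
    return torch.bmm(z.unsqueeze(2), y_hat.unsqueeze(1)).flatten(start_dim=1)

def discriminator_loss(r, s_src, s_tgt):
    # Eq. (9): log-loss of the domain discriminator r on the joint embeddings.
    return -(torch.log(r(s_src) + 1e-8).mean()
             + torch.log(1.0 - r(s_tgt) + 1e-8).mean())

def support_alignment_loss(r, s_src, s_tgt):
    # Eq. (10): support alignment in the one-dimensional discriminator output
    # space, with d(a, B) taken as the distance from a to the nearest element of B.
    d_src = r(s_src).view(-1)
    d_tgt = r(s_tgt).view(-1)
    pairwise = torch.abs(d_src.unsqueeze(1) - d_tgt.unsqueeze(0))  # (n_S, n_T)
    return pairwise.min(dim=1).values.mean() + pairwise.min(dim=0).values.mean()
```

In the alternating scheme of Eqs. (11)-(12) below, `discriminator_loss` would be minimized over \(r\) with the feature extractor frozen, while `support_alignment_loss` would be minimized over \(f\) (and \(c\)) with \(r\) frozen.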
Overall, the training process of our proposed algorithm, CASA, can be formulated as an alternating optimization problem (see Algorithm 1), where \(\lambda_{align},\lambda_{y},\lambda_{ce},\lambda_{v}\) are the weight hyper-parameters associated with the respective loss terms: \[\min_{f,c}\mathcal{L}_{y}(g)+\lambda_{align}\mathcal{L}_{align}(f)+ \lambda_{ce}\mathcal{L}_{ce}(g)+\lambda_{v}\mathcal{L}_{v}(g), \tag{11}\] \[\min_{r}\mathcal{L}_{d}(r). \tag{12}\] ``` 0:\(D^{S}=\left\{\left(x_{i}^{S},y_{i}^{S}\right)\right\}_{i=1}^{n_{S}}\), \(D^{T}=\left\{x_{j}^{T}\right\}_{j=1}^{n_{T}}\) 0: Feature extractor \(f\), classifier \(c\), domain discriminator \(r\) 1:for number of training iterations do 2: Sample minibatch from source \(\left\{\left(x_{i}^{S},y_{i}^{S}\right)\right\}_{i=1}^{m}\) and target \(\left\{x_{i}^{T}\right\}_{i=1}^{m}\) 3: Update \(r\) according to Eq. (12) 4: Update \(f,c\) according to Eq. (11) 5:endfor ``` **Algorithm 1** Conditional Adversarial Support Alignment ## 3 Experiments **Datasets.** We focus on visual domain adaptation tasks and empirically evaluate our proposed algorithm CASA on benchmark UDA datasets **USPS \(\rightarrow\) MNIST**, **STL \(\rightarrow\) CIFAR** and **VisDA-2017**. We further conduct experiments on the **Office-31** dataset and provide the results in Appendix. **Evaluation setting.** To assess CASA's robustness to label shift, we adopt the experimental protocol of Garg et al. (2020). We simulate label shift using the Dirichlet distribution, ensuring a balanced label distribution in the source domain and a target domain label distribution (\(P_{Y}^{T}(y)\)) following \(Dir(\alpha)\), with \(\alpha\) values of \(10,3.0,1.0,0.5\). Lower \(\alpha\) values indicate more severe label shift. We also include a no label shift setting, denoted as \(\alpha=\) None, where both source and target label distributions are balanced. For each method and dataset, we perform 5 runs with varying target label distributions across different levels of label shift and report average per-class accuracy on the target domain's test set as evaluation metrics. **Baselines.** We assess the performance of CASA by comparing it with various existing UDA algorithms, including: No DA (training using solely labeled source samples), DANN (Ganin et al., 2016), CDAN (Long et al., 2018) for aligning conditional feature distribution, VADA (Shu et al., 2018) which combines distribution alignment with enforcing the cluster assumption, IWDAN and IWCDAN (Tachet des Combes et al., 2020) for distribution alignment with importance weighting, sDANN (Wu et al., 2019) for aligning feature distribution with \(\beta\)-relaxed divergence, ASA (Tong et al., 2022) for aligning support of marginal feature distribution, PCT (Tanwisuth et al., 2021) for aligning class-level feature prototypes, and SENTRY (Prabhu et al., 2021) which optimizes entropy with predictive consistency. Note that IWDAN and IWCDAN rely on importance weighting methods that involve estimating the target label distribution. CDAN, IWCDAN and SENTRY employ target pseudolabels to enhance performance under label shift. Further implementation details are provided in Appendix. **Main results.** We report the results on USPS\(\rightarrow\)MNIST, STL\(\rightarrow\)CIFAR and VisDA-2017 in Table 1, 2 and 3 respectively. Among the methods focusing on distribution alignment, such as DANN, CDAN, and VADA, they tend to achieve the highest accuracy scores under \(\alpha=\) None. 
However, their performances degrade \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Algorithm & \(\alpha=\text{None}\) & \(\alpha=10.0\) & \(\alpha=3.0\) & \(\alpha=1.0\) & \(\alpha=0.5\) & AVG \\ \hline No DA & \(73.9\pm 2.4\) & \(73.8\pm 2.5\) & \(73.5\pm 2.2\) & \(73.9\pm 2.3\) & \(73.8\pm 2.3\) & \(73.8\) \\ \hline DANN [15] & \(97.8\pm 1.1\) & \(96.1\pm 1.0\) & \(89.6\pm 2.0\) & \(78.7\pm 2.5\) & \(71.0\pm 3.3\) & \(86.6\) \\ CDAN [14] & \(97.9\pm 0.2\) & \(97.6\pm 0.2\) & \(93.2\pm 1.9\) & \(81.7\pm 2.9\) & \(68.4\pm 3.8\) & \(87.8\) \\ VADA [16] & \(\mathbf{98.1}\pm 0.1\) & \(\mathbf{98.0}\pm 0.1\) & \(\mathbf{96.8}\pm 1.1\) & \(84.9\pm 2.5\) & \(76.8\pm 3.9\) & \(90.9\) \\ \hline IWDAN [13] & \(97.5\pm 2.5\) & \(97.1\pm 3.2\) & \(90.4\pm 4.2\) & \(81.3\pm 4.1\) & \(73.3\pm 4.5\) & \(87.9\) \\ IWCDAN [13] & \(97.8\pm 2.3\) & \(97.5\pm 2.0\) & \(91.4\pm 3.1\) & \(82.6\pm 2.9\) & \(73.8\pm 3.6\) & \(88.7\) \\ sDANN [15] & \(87.4\pm 0.6\) & \(90.7\pm 0.7\) & \(92.1\pm 1.9\) & \(89.4\pm 2.2\) & \(85.2\pm 3.4\) & \(89.0\) \\ ASA [17] & \(94.1\pm 0.6\) & \(93.7\pm 0.8\) & \(94.1\pm 0.2\) & \(90.8\pm 2.2\) & \(84.7\pm 1.1\) & \(91.5\) \\ PCT [17] & \(97.4\pm 0.2\) & \(97.2\pm 0.2\) & \(94.3\pm 3.6\) & \(82.3\pm 4.4\) & \(71.8\pm 4.8\) & \(88.6\) \\ SENTRY [12] & \(97.5\pm 2.0\) & \(91.5\pm 1.8\) & \(91.4\pm 2.2\) & \(84.7\pm 2.5\) & \(82.3\pm 3.1\) & \(89.5\) \\ \hline CASA (Ours) & \(98.0\pm 0.1\) & \(98.0\pm 0.2\) & \(\mathbf{97.2}\pm 0.5\) & \(\mathbf{96.7}\pm 0.6\) & \(\mathbf{88.3}\pm 1.7\) & \(\mathbf{95.6}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Per-class average accuracies on USPS\(\rightarrow\)MNIST. Bold and underscore denote the best and second-best methods respectively \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Algorithm & \(\alpha=\text{None}\) & \(\alpha=10.0\) & \(\alpha=3.0\) & \(\alpha=1.0\) & \(\alpha=0.5\) & AVG \\ \hline No DA & \(69.9\pm 0.6\) & \(69.8\pm 0.7\) & \(69.7\pm 0.3\) & \(68.8\pm 0.9\) & \(67.9\pm 1.0\) & \(69.2\) \\ \hline DANN [15] & \(75.3\pm 0.3\) & \(74.3\pm 0.4\) & \(73.5\pm 0.8\) & \(70.0\pm 2.1\) & \(67.1\pm 3.0\) & \(72.0\) \\ CDAN [14] & \(75.5\pm 0.3\) & \(74.1\pm 0.3\) & \(73.5\pm 0.3\) & \(71.3\pm 1.7\) & \(67.6\pm 2.9\) & \(72.4\) \\ VADA [14] & \(\mathbf{77.1}\pm 0.3\) & \(75.5\pm 0.4\) & \(73.8\pm 0.7\) & \(71.3\pm 1.5\) & \(68.0\pm 1.0\) & \(73.1\) \\ \hline IWDAN [13] & \(72.9\pm 0.8\) & \(72.6\pm 1.0\) & \(71.8\pm 0.9\) & \(70.6\pm 1.0\) & \(69.5\pm 1.1\) & \(71.5\) \\ IWCDAN [13] & \(72.1\pm 1.0\) & \(72.0\pm 0.4\) & \(71.5\pm 0.5\) & \(71.9\pm 0.7\) & \(69.9\pm 0.9\) & \(71.5\) \\ sDANN [15] & \(72.8\pm 0.4\) & \(72.0\pm 0.3\) & \(72.0\pm 0.8\) & \(71.4\pm 1.1\) & \(70.1\pm 0.9\) & \(71.7\) \\ ASA [17] & \(72.7\pm 0.4\) & \(72.2\pm 0.5\) & \(72.1\pm 0.8\) & \(71.5\pm 1.0\) & \(69.8\pm 0.7\) & \(71.7\) \\ PCT [17] & \(75.0\pm 0.2\) & \(76.1\pm 0.2\) & \(75.0\pm 0.9\) & \(70.9\pm 1.6\) & \(68.3\pm 2.3\) & \(73.1\) \\ SENTRY [12] & \(76.7\pm 0.5\) & \(76.6\pm 1.3\) & \(75.2\pm 1.2\) & \(71.2\pm 1.0\) & \(67.0\pm 1.6\) & \(73.3\) \\ \hline CASA (Ours) & \(76.9\pm 0.6\) & \(\mathbf{76.8}\pm 0.6\) & \(\mathbf{75.8}\pm 0.5\) & \(\mathbf{74.2}\pm 1.1\) & \(\mathbf{71.7}\pm 1.2\) & \(\mathbf{75.1}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Results on STL\(\rightarrow\)CIFAR. 
Same setup as Table 1 \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Algorithm & \(\alpha=\text{None}\) & \(\alpha=10.0\) & \(\alpha=3.0\) & \(\alpha=1.0\) & \(\alpha=0.5\) & AVG \\ \hline No DA & \(55.6\pm 0.7\) & \(56.0\pm 0.7\) & \(55.5\pm 0.8\) & \(55.2\pm 0.9\) & \(55.1\pm 0.8\) & \(55.5\) \\ \hline DANN [15] & \(\mathbf{75.5}\pm 0.4\) & \(71.3\pm 0.4\) & \(68.4\pm 1.8\) & \(62.2\pm 2.2\) & \(56.4\pm 1.6\) & \(66.8\) \\ CDAN [14] & \(75.0\pm 0.3\) & \(72.5\pm 0.4\) & \(69.8\pm 2.8\) & \(61.3\pm 2.9\) & \(56.3\pm 3.1\) & \(67.0\) \\ VADA [14] & \(75.2\pm 0.9\) & \(72.3\pm 1.2\) & \(69.6\pm 1.6\) & \(59.2\pm 1.8\) & \(52.6\pm 2.3\) & \(65.8\) \\ \hline IWDAN [13] & \(74.1\pm 0.9\) & \(73.3\pm 0.9\) & \(71.4\pm 1.1\) & \(65.3\pm 2.3\) & \(59.7\pm 2.6\) & \(68.8\) \\ IWCDAN [13] & \(73.5\pm 1.1\) & \(72.5\pm 0.4\) & \(69.6\pm 1.9\) & \(69.2\pm 2.8\) & \(57.2\pm 3.4\) & \(67.1\) \\ sDANN [15] & \(72.8\pm 1.1\) & \(72.2\pm 1.1\) & \(71.2\pm 2.3\) & \(64.9\pm 3.3\) & \(62.5\pm 3.1\) & \(68.7\) \\ ASA [17] & \(66.4\pm 0.3\) & \(65.3\pm 1.3\) & \(6 significantly as the severity of label shift increases. For instance, under \(\alpha=0.5\) in USPS\(\rightarrow\)MNIST task, DANN and CDAN perform worse than source-only training by 2.8% and 5.4%, respectively. On the other hand, baseline methods in the third group that explicitly address label distribution shift, such as ASA, sDANN and IWCDAN, often outperform distribution alignment methods under severe label shift (\(\alpha\in\{1.0,0.5\}\)). However, they fall behind these methods when label shift is mild or nonexistent (\(\alpha\in\{\text{None},10.0\}\)) by a large margin of 2-4% in the STL\(\rightarrow\)CIFAR task. In contrast, CASA outperforms baseline methods on 11 out of 15 transfer tasks. It achieves the second-highest average accuracies when there is no label shift in the USPS\(\rightarrow\)MNIST and STL\(\rightarrow\)CIFAR tasks, and outperforms the second-best methods by 3.6%, 1.6% and 0.7% under \(\alpha=0.5\) in the USPS\(\rightarrow\)MNIST, STL\(\rightarrow\)CIFAR and VisDA-2017 tasks, respectively. The robustness of CASA to different label shift levels is further demonstrated by its highest average accuracy results, surpassing the second-best methods by 4.1%, 1.8% and 1.0% on the 3 benchmark datasets. In comparison to IWCDAN, which tries to align the conditional feature _distribution_ also by using pseudo labels, CASA consistently provides higher accuracy across 15 transfer tasks, surpassing IWCDAN by a margin of 3.6% in terms of average accuracy on the STL\(\rightarrow\)CIFAR dataset. We hypothesize that aligning the conditional feature support is more robust to label distribution shift than aligning the conditional feature distribution. On the other hand, the performance of CASA is also consistently higher than that of ASA, which agrees with our theoretical analysis 2 that justifies minimizing CSSD over minimizing SSD. **Analysis of individual loss terms.** To study the impact of each loss term in Eq.(11), we provide additional experiment results, which consist of the average accuracy over 5 different random runs on USPS\(\rightarrow\)MNIST in Table 4. It is evident that the conditional support alignment loss term \(\mathcal{L}_{align}\), conditional entropy loss term \(\mathcal{L}_{ce}\) and virtual adversarial loss term \(\mathcal{L}_{v}\) all improve the model's performance across different levels of label shift. 
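As a side note on reproducibility, the Dirichlet-based label-shift protocol and the per-class average accuracy metric described in the evaluation setting above can be sketched in a few lines; the function names and defaults here are ours and purely illustrative.

```python
import numpy as np

def dirichlet_label_shift(labels, alpha, num_classes, rng=None):
    # Draw target class proportions from Dir(alpha) and subsample the target
    # split accordingly; returns the indices of the retained samples.
    rng = np.random.default_rng() if rng is None else rng
    props = rng.dirichlet(alpha * np.ones(num_classes))
    per_class = [np.where(labels == k)[0] for k in range(num_classes)]
    n_total = min(int(len(per_class[k]) / max(props[k], 1e-8))
                  for k in range(num_classes))
    keep = [rng.choice(per_class[k], size=int(props[k] * n_total), replace=False)
            for k in range(num_classes)]
    return np.concatenate(keep)

def per_class_average_accuracy(y_true, y_pred, num_classes):
    accs = [np.mean(y_pred[y_true == k] == k)
            for k in range(num_classes) if np.any(y_true == k)]
    return float(np.mean(accs))
```

Smaller values of \(\alpha\) concentrate the sampled class proportions on a few classes, which is precisely the regime in which the distribution-alignment baselines above degrade.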
**Visualization of learned feature embeddings under severe label shift.** We first conduct an experiment to visualize the effectiveness of our proposed method. Images from three classes (3, 5, and 9) are selected from USPS and MNIST datasets, following Tong et al. (2022), to create source and target domains, respectively. The label probabilities are equal in the source domain, while they are \([22.9\%,64.7\%,12.4\%]\) in the target domain. We compare the per-class accuracy scores, Wasserstein distance \(\mathcal{D}_{W}\), \(\mathcal{D}_{\mathrm{supp}}^{c}\) and 2D feature distribution of CDAN (Long et al., 2018), ASA (Tong et al., 2022) and CASA. Fig. 2 shows that CASA achieves a higher target average accuracy, resulting in a clearer separation among classes and more distinct feature clusters compared to CDAN and ASA. Although ASA reduces the overlap of different target classes to some extent, its approach that only enforces support alignment between marginals does not fully eradicate the overlap issue. CASA tackles this drawback by considering discriminative class information during the support alignment in feature embeddings. The plot also demonstrates that CASA effectively reduces \(\mathcal{D}_{\mathrm{supp}}^{c}\) through the proxy objective in Proposition 2. Our observations are consistent with those made in Tong et al. (2022), namely that lower values of \(\mathcal{D}_{\mathrm{supp}}^{c}\) tend to correlate with higher accuracy values under label distribution shift. In contrast, this observation is not true for the case of conventional distribution divergence, such as Wasserstein distance. ## 4 Related works A dominant approach for tackling the UDA problem is learning domain-invariant feature representation, based on the theory of Ben-David et al. (2006), which suggests minimizing the \(\mathcal{H}\Delta\mathcal{H}\)-divergence between the two marginal distributions \(P_{Z}^{S},P_{Z}^{T}\). More general target risk bound than that of Ben-David et al. (2006) have been developed by extending the problem setting to multi-source domain adaptation (Mansour et al., 2008; Zhao et al., 2018; Phung et al., 2021), or considering discrepancy distance (Mansour et al., 2009), hypothesis-independent disparity discrepancy (Zhang et al., 2019), or PAC-Bayesian bounds with weighted majority vote learning (Germain et al., 2016). Numerous methods have been proposed to align the distribution of feature representation between source and target domains, using Wasserstein distance (Courty et al., 2017; Shen et al., 2018; Lee and Raginsky, 2018), maximum mean discrepancy (Long et al., 2014, 2015, 2016), Jensen-Shannon divergence (Ganin and Lempitsky, 2015; Tzeng et al., 2017), or first and second moment of the concerned distribution (Sun and Saenko, 2016; Peng et al., 2019),. However, recent works have pointed out the limits of enforcing invariant feature representation distribution, particularly when the marginal label distribution differs significantly between domains (Johansson et al., 2019; Zhao et al., 2019; Wu et al., 2019; Tachet des Combes et al., 2020). Based on these theoretical results, different methods have been proposed to tackle UDA under label shift, often by minimizing \(\beta\)-relaxed Wasserstein distance (Tong et al., 2022; Wu et al., 2019), or estimating the importance weight of label distribution between source and target domains (Lipton et al., 2018; Tachet des Combes et al., 2020; Azizzadenesheli et al., 2019). 
Our proposed method leverages the _Symmetric Support Divergence_ proposed by Tong et al. (2022). Different from the work in Tong et al. (2022) that aims to align the support of marginal feature distributions in source and target domains, the key idea of our proposed method is to tackle the label shift by aligning the conditional distributions of the latent given the label, targeting a label-informative representation. We theoretically justify our proposed method by introducing a novel theoretical target risk bound inspired by the work of Dhouib and Maghsudi (2022). ## 5 Conclusion In this paper, we propose a novel CASA framework to handle the UDA problem under label distribution shift. The key idea of our work is to learn a more discriminative and useful representation for the classification task by aligning the supports of the conditional distributions between the source and target domains. We next provide a novel theoretical error bound on the target domain and then introduce a complete training process for our proposed CASA. Our experimental results consistently show that our CASA framework outperforms other relevant UDA baselines on several benchmark tasks. We plan to employ and extend our proposed method to more challenging problem settings, such as domain generalization and universal domain adaptation.
2306.01459
Topological methods for studying contextuality: $N$-cycle scenarios and beyond
Simplicial distributions are combinatorial models describing distributions on spaces of measurements and outcomes that generalize non-signaling distributions on contextuality scenarios. This paper studies simplicial distributions on $2$-dimensional measurement spaces by introducing new topological methods. Two key ingredients are a geometric interpretation of Fourier--Motzkin elimination and a technique based on collapsing of measurement spaces. Using the first one, we provide a new proof of Fine's theorem characterizing non-contextual distributions on $N$-cycle scenarios. Our approach goes beyond these scenarios and can describe non-contextual distributions on scenarios obtained by gluing cycle scenarios of various sizes. The second technique is used for detecting contextual vertices and deriving new Bell inequalities. Combined with these methods, we explore a monoid structure on simplicial distributions.
Aziz Kharoof, Selman Ipek, Cihan Okay
2023-06-02T11:36:31Z
http://arxiv.org/abs/2306.01459v1
# Topological methods for studying contextuality: ###### Abstract Simplicial distributions are combinatorial models describing distributions on spaces of measurements and outcomes that generalize non-signaling distributions on contextuality scenarios. This paper studies simplicial distributions on 2-dimensional measurement spaces by introducing new topological methods. Two key ingredients are a geometric interpretation of Fourier-Motzkin elimination and a technique based on collapsing of measurement spaces. Using the first one, we provide a new proof of Fine's theorem characterizing non-contextual distributions on \(N\)-cycle scenarios. Our approach goes beyond these scenarios and can describe non-contextual distributions on scenarios obtained by gluing cycle scenarios of various sizes. The second technique is used for detecting contextual vertices and deriving new Bell inequalities. Combined with these methods, we explore a monoid structure on simplicial distributions. ###### Contents * 1 Introduction * 2 Simplicial distributions * 2.1 Two-dimensional distributions with binary outcomes * 2.2 Gluing and extending distributions * 2.3 Polytope of simplicial distributions * 2.4 Monoid structure on simplicial distributions * 3 Distributions on the classical \(N\)-disk * 3.1 Fourier-Motzkin elimination * 3.1.1 Application to the Diamond scenario * 3.2 Extending to the classical \(N\)-disk * 3.3 Bouquet of classical \(N\)-disks * 4 Distributions on the \(N\)-cycle scenario and beyond * 4.1 Topological proof of Fine's theorem Collapsing measurement spaces * 5.1 Application to Bell inequalities * 5.2 Detecting contextual vertices * 5.3 Contextual vertices of the cycle scenario * 5.4 Conclusion ## 1 Introduction Quantum contextuality is a fundamental feature of collections of probability distributions obtained from quantum measurements. In a classical setting, experimental statistics are derivable from a joint probability distribution. Measurements of quantum observables, however, do not satisfy this principle, leading to violations of Bell inequalities, or more generally, non-contextual inequalities, which serve as a witness of this quintessentially non-classical phenomenon. That such violations were _necessary_ was first discovered by Bell [1]. Later, Fine [2, 3] showed that such inequalities were also _sufficient_ for recovering a classical description in the well-known Clauser-Horne--Shimony-Holt (CHSH) scenario [4]. A systematic study of contextuality scenarios using sheaf theory was introduced by Abramsky-Brandenburger in [5]. Later topological ideas from group cohomology were introduced to the study of contextuality [6], with an emphasis on investigating quantum advantage in measurement-based quantum computation. More recently, a unified framework for the study of contextuality was introduced, based on combinatorial representations of topological spaces known as simplicial sets [7]. The basic objects in this theory are called simplicial distributions. This theory subsumes the theory of non-signaling distributions and goes beyond by formulating the notion of distributions on spaces rather than sets. Contextuality can be formulated in this generality. Initial applications of simplicial distributions in [7] included a new topological proof of Fine's theorem for the CHSH scenario. A novel feature of this approach is its flexibility in realizing measurement scenarios as topological spaces. Such expressiveness allows for contextuality to be characterized topologically in multiple ways. 
For instance, one realization of the CHSH scenario is topologically equivalent to a disk consisting of four triangles, while another realization, also appearing in [8], is given by a punctured torus. While the former allows for an analysis similar in spirit to that of Fine, the latter work supplies an alternative proof by the classifying the extreme distributions on the torus. In this paper, we go beyond these examples and consider a generalization Figure 1: Flower scenario. of \(N\)-cycle scenarios [9, 10] which we call _flower scenarios_. The flower scenario is obtained by gluing various cycle scenarios of arbitrary size as in Fig. (1). This scenario is a particular example of a class of 2-dimensional measurement spaces. Given a 1-dimensional simplicial set, i.e., a graph, the cone construction produces a 2-dimensional simplicial set. This construction introduces a new vertex and a set of triangles connecting each edge on the graph to the new vertex. For a 1-dimensional space \(X\), we will write \(\mathcal{C}(X)\) for the cone space. We will write \(L_{N}\) for the space obtained by gluing \(N\) edges in the shape on a line. **Theorem 4.9**.: _Let \(\mathcal{C}(X)\) denote the flower scenario (Fig. (1)), the cone of \(X\) obtained by gluing the lines \(L_{N_{1}},\cdots,L_{N_{k}}\) at their end points. A simplicial distribution \(p\in\mathsf{sDist}(\mathcal{C}(X))\) is non-contextual if and only if for every \(N\)-circle \(C\) on \(X\) the restriction \(p|_{C}\) satisfies the \(N\)-circle inequalities1._ Footnote 1: In the literature what we refer to as \(N\)-circle inequalities are known as \(N\)-cycle inequalities. We diverge in terminology by emphasizing the underlying topological space, which is a circle. The primary technique that goes into the proof of this result is the Fourier-Motzkin (FM) elimination [11], a version of Gaussian elimination for inequalities. In Section 3.1 we present a topological interpretation of FM elimination. A measurement space is represented by a simplicial set whose simplices correspond to measurements. In this paper, we will restrict our attention to 2-dimensional simplicial sets, that is, those obtained by gluing triangles. Our outcome space will be fixed to a canonical choice obtained from \(\mathbb{Z}_{2}=\{0,1\}\) (known as the nerve space) so that the measurements labeling the edges have binary outcomes. In this setting, non-contextuality is characterized by Bell inequalities consisting of variables corresponding to probabilities of measurements on the edges. For our topological proof of Fine's theorem we consider a particular triangulation of the disk, which we refer to as a _classical \(N\)-disk_. On these disks any simplicial distribution turns out to be non-contextual, hence the name classical. If we start from a distribution on the boundary of a disk, the \(N\)-circle inequalities appear as the sufficient and necessary condition for extending such a distribution from the boundary to the entire disk (Proposition 3.10). Now, given two such classical disks glued at a common edge, the topological interpretation of FM elimination is that the boundary of the new space is formed by taking the union of the boundaries of the disks and omitting the common edge; see Fig. (8). The elimination of the edge is the geometric interpretation of removing the variable by FM elimination. 
This key idea allows us to characterize the extension condition from the boundary of a bouquet of classical disks, i.e., a collection of disks glued at a common edge, by a collection of circle inequalities (Corollary 3.12). This extension result is the main ingredient of the proof of Theorem 4.9 that characterizes non-contextual distributions on the flower scenario Fig. (1). Note that this scenario generalizes bipartite Bell scenarios where Alice performs 2 measurements, and Bob performs \(m\) measurements, and all measurements have binary outcomes [12]. Our next main contribution is the collapsing of measurement spaces to detect contextual vertices of simplicial distributions (Section 5). To study simplicial distributions on the cone space we introduce a technique based on collapsing edges. Let \(\pi:X\to X/\sigma\) denote the map that collapses an edge \(\sigma\) of the graph. Applying the cone construction gives a map \(\mathcal{C}\pi:\mathcal{C}(X)\to\mathcal{C}(X/\sigma)\) between the cone spaces. A simplicial distribution on the cone of the collapsed measurement space can be extended via \(\mathcal{C}\pi\) to give a simplicial distribution on the cone of the original measurement space. We denote this map by \[(\mathcal{C}\pi)^{*}:\mathsf{sDist}(\mathcal{C}(X/\sigma))\to\mathsf{sDist}( \mathcal{C}(X)).\] In Theorem 5.2 we show that for a simplicial distribution \(p\in\mathsf{sDist}(\mathcal{C}(X/\sigma))\) and its image \(q=(\mathcal{C}\pi)^{*}(p)\) the following holds: 1. \(p\) is a vertex if and only if \(q\) is a vertex. 2. \(p\) is contextual if and only if \(q\) is contextual. 3. \(p\) is strongly contextual if and only if \(q\) is strongly contextual. 4. \(p\) is a deterministic distribution if and only if \(q\) is a deterministic distribution. In particular, parts (1) and (2) imply that contextual vertices map to contextual vertices under the collapsing map. This method is very powerful in detecting vertices. Let \(n_{X}\) denote the number of generators of the fundamental group of the graph \(X\). Then the number of contextual vertices in \(\mathsf{sDist}(\mathcal{C}(X))\) is lower bounded by \((2^{n_{X}}-1)2^{|X_{0}|-1}\) where \(|X_{0}|\) denotes the number of vertices of the graph (Theorem 5.10). In addition, we use this method to derive new Bell inequalities from known ones. For example, the Froissart inequalities [13] of the scenario given by the cone of the bipartite graph \(K_{3,3}\) produce new Bell inequalities for the cone of the graph obtained by collapsing one of the edges (Section 5.1). Finally, we explore a new algebraic feature of simplicial distributions first introduced in [14]. The set of simplicial distributions \(\mathsf{sDist}(X)\) has a monoid structure. Together with its polytope structure, this gives a convex monoid. The restriction of the monoid structure to deterministic distributions gives a group structure, and this group acts on simplicial distributions. Using this action, we can generate more vertices from those obtained from the collapsing technique. Our other contributions are as follows: (1) For a 2-dimensional measurement space \(X\) we show that \(\mathsf{sDist}(X)\) is a convex polytope (Proposition 2.15) and provide the \(H\)-description (Corollary 2.16). (2) We describe the monoid structure on \(\mathsf{sDist}(X)\) (Section 2.4) and describe the action of the set of deterministic distributions on Bell inequalities and contextual vertices (Example 2.24). (3) The 1-cycle scenario obtained as the cone of a circle (Fig. 
(17)) is a new scenario that cannot be realized in the conventional non-signaling picture. More generally, we describe the polytope of simplicial distributions on the cone of the wedge \(\vee_{i=1}^{n}C_{1}\) of 1-circles (Proposition 5.8). ## 2 Simplicial distributions The theory of simplicial distributions is introduced in [7]. A simplicial distribution is defined for a space of measurements and outcomes. In this formalism spaces are represented by combinatorial objects known as simplicial sets. More formally, a _simplicial set_\(X\) consists of a sequence of sets \(X_{n}\) for \(n\geq 0\) and the simplicial structure maps: * Face maps \(d_{i}:X_{n}\to X_{n-1}\) for \(0\leq i\leq n\). * Degeneracy maps \(s_{j}:X_{n}\to X_{n+1}\) for \(0\leq j\leq n\). These maps are subject to simplicial relations (see, e.g., [15]). An \(n\)-simplex is called _degenerate_ if it lies in the image of a degeneracy map, otherwise it is called _non-degenerate_. Geometrically only the non-degenerate simplices are relevant. Among the non-degenerate simplices there are ones that are not a face of another non-degenerate simplex. Those simplices we will refer to as _generating simplices_. Throughout the paper when we refer to an edge (1-simplex) or a triangle (2-simplex) of a simplicial set we mean a non-degenerate one. In this paper we will focus on spaces obtained by gluing triangles. **Example 2.1**.: The triangle, denoted by \(\Delta^{2}\), is the simplicial set with simplices \[(\Delta^{2})_{n}=\{\sigma^{a_{0}\cdots a_{n}}:\,0\leq a_{0}\leq\cdots a_{n} \leq 2,\,a_{i}\in\mathbb{Z}\}.\] The \(i\)-th face map deletes the \(i\)-th index: \(d_{i}(\sigma^{a_{0}\cdots a_{n}})=\sigma^{a_{0}\cdots a_{i-1}a_{i+1}\cdots a_{n}}\), and the \(j\)-th degeneracy map copies the \(j\)-th index: \(s_{j}(\sigma^{a_{0}\cdots a_{n}})=\sigma^{a_{0}\cdots a_{j}a_{j}\cdots a_{n}}\). The simplex \(\sigma^{012}\) is the generating simplex. Any other simplex can be obtained by applying a sequence of face and degeneracy maps. In general, we can define \(\Delta^{d}\) consisting of \(n\)-simplices of the form \(\sigma^{a_{0}\cdots a_{n}}\) where \(0\leq a_{0}\leq\cdots\leq a_{n}\leq d\). This simplicial set represents the topological \(d\)-simplex. Of particular interest are \(\Delta^{0}\) and \(\Delta^{1}\) representing a point and an edge, respectively. The gluing operation can be specified by introducing relations between the generating simplices. The simplest example is obtained by gluing two triangles along a face. **Example 2.2**.: The diamond space \(D\) is defined as follows: * Generating simplices: \(\sigma^{012}_{A}\) and \(\sigma^{012}_{B}\). * Identifying relation: \(d_{1}\sigma^{012}_{A}=d_{1}\sigma^{012}_{B}\). We can define other versions by changing the faces. We will write \(D_{ij}\) for the diamond whose identifying relation is \(d_{i}\sigma^{012}_{A}=d_{j}\sigma^{012}_{B}\). Next we introduce the notion of maps between simplicial sets. A _map \(f:X\to Y\) of simplicial sets_ consists of a sequence \(f_{n}:X_{n}\to Y_{n}\) of functions that respect the face and the degeneracy maps. Given a simplex \(\sigma\in X_{n}\) we will write \(f_{\sigma}\) for \(f_{n}(\sigma)\in Y_{n}\). With this notation the compatibility conditions are given by \[d_{i}f_{\sigma}=f_{d_{i}\sigma}\quad\text{and}\quad s_{j}f_{\sigma}=f_{s_{j} \sigma}.\] A simplicial set map \(f:\Delta^{2}\to Y\) is determined by the image of the generating simplex, that is, by an arbitrary \(2\)-simplex \(f_{\sigma^{012}}\in Y_{2}\). 
Therefore these maps are in bijective correspondence with the elements of \(Y_{2}\). In the case of the diamond space a simplicial set map \(f:D_{ij}\to Y\) is determined by \(f_{\sigma^{012}_{A}}\) and \(f_{\sigma^{012}_{B}}\) satisfying \[d_{i}f_{\sigma^{012}_{A}}=f_{d_{i}\sigma^{012}_{A}}=f_{d_{j}\sigma^{012}_{B}}= d_{j}f_{\sigma^{012}_{B}}.\] Given a simplicial set \(Y\) we will construct another simplicial set that represents the space of distributions on \(Y\). For this we need the distribution monad \(D_{R}\) defined for a commutative semiring \(R\) [16]. Throughout the paper we will take \(R\) to be \(\mathbb{R}_{\geq 0}\). A distribution on a set \(U\) is defined to be a function \(p:U\to R\) of finite support such that \(\sum_{u\in U}p(u)=1\). The delta distribution at \(u\in U\) is defined by \[\delta^{u}(u^{\prime})=\left\{\begin{array}{ll}1&u^{\prime}=u\\ 0&\text{otherwise.}\end{array}\right.\] Any distribution can be expressed as a sum of delta distributions: \(p=\sum_{u\in U}p(u)\delta^{u}\). For a function \(f:U\to V\) we will write \(D_{R}f\) for the function \(D_{R}(U)\to D_{R}(V)\) defined by \[D_{R}f(p)(v)=\sum_{u\in f^{-1}(v)}p(u).\] The space of distributions on \(Y\) is represented by the simplicial set \(D_{R}(Y)\) whose \(n\)-simplices are given by \(D_{R}(Y_{n})\). The face and the degeneracy maps are given by \(D_{R}d_{i}\) and \(D_{R}s_{j}\). There is a canonical simplicial set map \(\delta:Y\to D_{R}(Y)\) defined by sending a simplex \(\sigma\) to the delta distribution \(\delta^{\sigma}\). **Definition 2.3**.: A _simplicial scenario_ consists of a pair \((X,Y)\) of simplicial sets where \(X\) represents the space of measurements and \(Y\) represents the space of outcomes. A _simplicial distribution_ on \((X,Y)\) is a simplicial set map \(p:X\to D_{R}(Y)\). A simplicial set map of the form \(s:X\to Y\) is called an _outcome assignment_. The associated distribution \(\delta^{s}:X\to D_{R}(Y)\), defined to be the composite \(\delta\circ s\), is called a _deterministic distribution_. We will write \(\mathsf{sDist}(X,Y)\) and \(\mathsf{dDist}(X,Y)\) for the set of simplicial and deterministic distributions. There is a canonical map \[\Theta:D_{R}(\mathsf{dDist}(X,Y))\to\mathsf{sDist}(X,Y)\] defined by sending \(d=\sum_{s}d(s)\delta^{s}\) to the simplicial distribution \(\Theta(d)\) defined by \[\Theta(d)_{\sigma}=\sum_{s}d(s)\delta^{s_{\sigma}}.\] **Definition 2.4**.: A simplicial distribution \(p:X\to D_{R}Y\) is called _non-contextual_ if \(p\) is in the image of \(\Theta\). Otherwise, it is called _contextual_. There is a stronger version of contextuality whose definition relies on the notion of support. The _support_ of a simplicial distribution \(p:X\to D_{R}Y\) is defined by \[\mathrm{supp}(p)=\{s:X\to Y\,:\,p_{\sigma}(s_{\sigma})>0\;\forall\sigma\in X _{n},\;n\geq 0\}. \tag{1}\] **Definition 2.5**.: A simplicial distribution \(p\) on \((X,Y)\) is called _strongly contextual_ if its support \(\mathrm{supp}(p)\) is empty. ### Two-dimensional distributions with binary outcomes Throughout the paper we will work concretely with binary outcome measurements in \(\mathbb{Z}_{2}\). In effect this means that our outcome space will be the _nerve space_ of \(\mathbb{Z}_{2}\). 
This simplicial set is denoted by \(N\mathbb{Z}_{2}\) and is defined as follows: * The set of \(n\)-simplices is \(\mathbb{Z}_{2}^{n}\), * The face maps are given by \[d_{i}(a_{1},\cdots,a_{n})=\left\{\begin{array}{ll}(a_{2},\cdots,a_{n})&i=0 \\ (a_{1},\cdots,a_{i}+a_{i+1},\cdots,a_{n})&0<i<n\\ (a_{1},\cdots,a_{n-1})&i=n\end{array}\right.\] and the degeneracy maps are given by \[s_{j}(a_{1},\cdots,a_{n})=(a_{1},\cdots,a_{j},0,a_{j+1},\cdots,a_{n}).\] Our measurement spaces will be obtained by gluing triangles. A simplicial set is _d-dimensional_ if all its non-degenerate simplices are in dimension \(n\leq d\). In this paper we will restrict ourselves to simplicial scenarios of the form \((X,N\mathbb{Z}_{2})\) where \(X\) is 2-dimensional. We will study simplicial distributions on such scenarios. For simplicity of notation we will write \(\mathsf{sDist}(X)\) omitting the outcome space when it is fixed to \(N\mathbb{Z}_{2}\), and denote the simplicial scenario only by the measurement space \(X\). Let us look more closely to simplicial distributions on the triangle. Consider a triangle \(\Delta^{2}\) with the generating 2-simplex \(\sigma=\sigma^{012}\). A simplicial distribution is given by a simplicial set map \[p:\Delta^{2}\to D_{R}(N\mathbb{Z}_{2})\] which is determined by the distribution \(p_{\sigma}\) on \(\mathbb{Z}_{2}^{2}\). We will write \(p_{\sigma}^{ab}\) for the probability of obtaining the outcome \((a,b)\in\mathbb{Z}_{2}^{2}\) when we measure \(\sigma\). The three edges bounding \(\sigma\) are given by the face maps as follows: \(\sigma^{01}=d_{2}\sigma^{012}\), \(\sigma^{02}=d_{1}\sigma^{012}\), \(\sigma^{12}=d_{0}\sigma^{012}\). For simplicity of notation we will write \(x=\sigma^{01}\), \(y=\sigma^{12}\) and \(z=\sigma^{02}\). The corresponding marginal distribution \(p_{x}:\mathbb{Z}_{2}\to\mathbb{R}_{\geq 0}\) at edge \(x\) can be identified with \((p_{x}^{0},p_{x}^{1})\). Since \(p_{x}^{0}+p_{x}^{1}=1\), it suffices just to keep \(p_{x}^{0}\). Similarly for edges \(y\) and \(z\). Compatibility with face maps requires that: \[p_{x}^{0} = p_{\sigma}^{00}+p_{\sigma}^{01},\] \[p_{y}^{0} = p_{\sigma}^{00}+p_{\sigma}^{10},\] \[p_{z}^{0} = p_{\sigma}^{00}+p_{\sigma}^{11}.\] Since \(p_{\sigma}^{ab}\) is also normalized, it can be expressed by three parameters. Without loss of generality we can take these three parameters to be the marginal distributions corresponding to the edges on the boundary. Conversely, given the marginals on the edges we have that \[p_{\sigma}^{ab} = \frac{1}{2}\left(p_{x}^{a}+p_{y}^{b}-p_{z}^{a+b+1}\right). \tag{2}\] Therefore a simplicial distribution on the triangle is determined by its restriction to the boundary. This observation generalizes to every 2-dimensional simplicial set. As we will observe in Proposition 2.15 for such measurement spaces restriction of a simplicial distribution to the 1-dimensional simplicial subset consisting of all the edges determines the distribution. Alternatively, we can use the expectation coordinates instead of the probability coordinates. For an edge \(\tau\in X_{1}\), let us define its expectation value by \[\bar{\tau}=p_{\tau}^{0}-p_{\tau}^{1}. \tag{3}\] Using this we can rewrite \(p_{\sigma}^{ab}\), which takes the form \[p_{\sigma}^{ab}=\frac{1}{4}\left(1+(-1)^{a}\bar{x}+(-1)^{b}\bar{y}+(-1)^{a+b} \bar{z}\right). \tag{4}\] Next we describe non-contextual distributions on \(\Delta^{2}\). Let us start with outcome assignments. 
An outcome assignment \(s:\Delta^{2}\to N\mathbb{Z}_{2}\) is determined by a pair of bits \(s_{\sigma}\in\mathbb{Z}_{2}^{2}\). The corresponding deterministic distribution is \(\delta^{s}\). For simplicity of notation we will write \(\delta^{ab}\) for the deterministic distribution corresponding to the outcome assignment \(s_{\sigma}=(a,b)\). **Proposition 2.6**.: _Every simplicial distribution on \(\Delta^{2}\) is non-contextual._ Proof.: Given a simplicial distribution \(p:\Delta^{2}\to D(N\mathbb{Z}_{2})\) described by \(\{p_{\sigma}^{ab}\}_{a,b\in\mathbb{Z}_{2}}\). Then the classical distribution \[d=\sum_{a,b}d(ab)\delta^{ab}\ \ \mbox{with}\ d(ab)=p_{\sigma}^{ab}\] satisfies \(\Theta(d)=p\). In this paper we are interested in cones of \(1\)-dimensional simplicial sets. For instance, the \(N\)-cycle scenario (Definition 4.1) is of this form. Given a simplicial set \(X\) we will construct a new simplicial set denoted by \(\mathcal{C}(X)\) which represents the topological construction of adding a new vertex and joining every \(n\)-simplex of \(X\) to this vertex to create an \((n+1)\)-simplex. The new vertex is represented by \(\Delta^{0}\), the simplicial set representing a point. This simplicial set is defined by * \((\Delta^{0})_{n}=\{\sigma^{0\cdots 0}\}\), * the face and the degeneracy maps are given by deleting and copying; see Example 2.1. For notational convenience we will write \(c_{n}\) for the simplex \(\sigma^{0\cdots 0}\) in dimension \(n\). With this notation a face map sends \(c_{n}\mapsto c_{n-1}\) and a degeneracy map send \(c_{n}\mapsto c_{n+1}\). **Definition 2.7**.: The _cone_\(\mathcal{C}(X)\) is the simplicial set given as follows: * \((\mathcal{C}(X))_{n}=\{c_{n}\}\sqcup X_{n}\sqcup(\sqcup_{k+1+l=n}\{c_{k}\} \times X_{l})\). * For \((c_{k},\sigma)\in\{c_{k}\}\times X_{l}\) \[d_{i}(c_{k},\sigma)=\left\{\begin{array}{ll}(c_{k-1},\sigma)&i\leq k\\ (c_{k},d_{i-1-k}\sigma)&i>k\end{array}\right.\] and \[s_{j}(c_{k},\sigma)=\left\{\begin{array}{ll}(c_{k+1},\sigma)&j\leq k\\ (c_{k},s_{j-1-k}\sigma)&j>k.\end{array}\right.\] Otherwise, the face and the degeneracy maps on the \(\{c_{n}\}\) and \(X_{n}\) factors act the same as in \(\Delta^{0}\) and \(X\). This construction is a special case of the join construction \(Z*X\) defined for a pair of simplicial sets [17, Chapter 17.1]. In the cone construction \(Z=\Delta^{0}\). We will use the cone construction to obtain \(2\)-dimensional measurement spaces. **Remark 2.8**.: For \(n\geq 1\), the non-degenerate \(n\)-simplices of \(\mathcal{C}(X)\) are of the form \((c_{0},\sigma)\) where \(\sigma\) is a non-degenerate \((n-1)\)-simplex of \(X\). We will usually write \(c=c_{0}\). Figure 2: (a) A triangle can be considered as the cone of an edge. The generating \(2\)-simplex is given by \((c,\sigma^{01})\) whose faces are \((c,\sigma^{0})\), \((c,\sigma^{1})\) and \(\sigma^{01}\), where \(c=c_{0}\). (b) A simplicial distribution on the triangle. ### Gluing and extending distributions Fundamental tools in the study of simplicial distributions are the extension and the gluing lemmas. They will be crucial for the proof of Fine's theorem for the \(N\)-cycle and the flower scenarios in Section 4.1. Given a simplicial set map \(f:Z\to X\) we will write \[f^{*}:\mathsf{sDist}(X)\to\mathsf{sDist}(Z)\] for the map that sends a simplicial distribution \(p\) on \(X\) to the simplicial distribution defined by the composition \(f^{*}p:Z\xrightarrow{f}X\xrightarrow{p}D_{R}(N\mathbb{Z}_{2})\). 
Similarly there is a map between the deterministic distributions, which is also denoted by \(f^{*}:\mathsf{dDist}(X)\to\mathsf{dDist}(Z)\). In this case a deterministic distribution \(\delta^{s}\) is sent to \(\delta^{\mathsf{s}\circ f}\). There is a commutative diagram (5) **Proposition 2.9**.: _If \(q\in\mathsf{sDist}(X)\) is non-contextual then \(p=f^{*}(q)\) is also non-contextual._ Proof.: Let \(d\in D_{R}(\mathsf{dDist}(X))\) such that \(\Theta(d)=q\). Then \(e=D_{R}f^{*}(d)\) satisfies \(\Theta(e)=p\) by the commutativity of Diagram (5). Let \(A\) be a simplicial subset of \(X\) and let us write \(i:A\to X\) for the inclusion map. This means that each \(A_{n}\) is a subset of \(X_{n}\) and the simplicial structure of \(A\) is compatible with that of \(X\). Given \(p\in\mathsf{sDist}(X)\) we will write \(p|_{A}\) for the distribution \(i^{*}p\). For a deterministic distribution \(\delta^{s}\) on \(X\) the distribution \(i^{*}\delta^{s}\) will be denoted by \(\delta^{s}|_{A}\). Note that \(\delta^{s}|_{A}=\delta^{s|_{A}}\) where \(s|_{A}\) stands for the composition \(s|_{A}:A\xrightarrow{i}X\xrightarrow{s}N\mathbb{Z}_{2}\). An important special case of Proposition 2.9 is the following result we will need later in the paper. **Corollary 2.10**.: _Let \(A\) be a simplicial subset of \(X\). If \(q\in\mathsf{sDist}(X)\) is non-contextual then \(q|_{A}\) is also non-contextual._ Another important result is the following Gluing Lemma. Using this result one can reduce the study of distributions on a measurement space to its smaller constituents in some cases. **Lemma 2.11**.: _Suppose that \(X=A\cup B\) with \(A\cap B=\Delta^{n}\) for some \(n\geq 0\). Then \(p\in\mathsf{sDist}(X)\) is non-contextual if and only if both \(p|_{A}\in\mathsf{sDist}(A)\) and \(p|_{B}\in\mathsf{sDist}(B)\) are non-contextual._ Proof.: See Corollary 4.6 in [7]. ### Polytope of simplicial distributions Recall that the triangle \(\Delta^{2}\) has a single generating simplex \(\sigma\). The boundary \(\partial\sigma\) consists of three non-degenerate \(1\)-simplices denoted by \(x,y,z\). Using Eq. (2) the polytope of simplicial distributions \(\mathsf{sDist}(\Delta^{2})\) can be described as the space consisting of triples \((p^{0}_{x},p^{0}_{y},p^{0}_{z})\in\mathbb{R}^{3}\) satisfying \[p^{0}_{x}+p^{0}_{y}+p^{0}_{z} \geq 1 \tag{6}\] \[p^{0}_{x}-p^{0}_{y}-p^{0}_{z} \geq-1\] \[-p^{0}_{x}+p^{0}_{y}-p^{0}_{z} \geq-1\] \[-p^{0}_{x}-p^{0}_{y}+p^{0}_{z} \geq-1.\] This set of inequalities is an example of \(N\)-circle inequalities introduced in Definition 3.5 (\(N=3\)). They imply that \(\mathsf{sDist}(\Delta^{2})\) is a tetrahedron in \(\mathbb{R}^{3}\). Proposition 2.6 can be used to observe that its vertices are given by \((a,b,c)\) where \(a,b,c\in\{0,1\}\) and \(c+1=a+b\mod 2\). In general, we will show that \(\mathsf{sDist}(X)\) is described as the intersection of finitely many half-space inequalities corresponding to the non-negativity of the parameters \(p_{\sigma}^{ab}\). Such a description of a polytope is called the \(H\)-representation. Our goal is to characterize the geometric structure of \(\mathsf{sDist}(X)\) including the vertices (extreme distributions) and the Bell inequalities bounding the non-contextual distributions. **Definition 2.12**.: A \(1\)-simplex \(\tau\) of \(X\) is called a _deterministic edge_ (with respect to \(p\)) if \(p_{\tau}\) is a deterministic distribution on \(\mathbb{Z}_{2}\). 
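To make the edge coordinates concrete, the passage from the boundary marginals to the distribution on the triangle is easy to automate. The following Python sketch is our own illustration (the function and variable names are not from the paper): it implements Eq. (2) and checks that at the four vertices described above all entries \(p_{\sigma}^{ab}\) are non-negative, i.e., that the inequalities in Eq. (6) hold.

```python
from itertools import product

def triangle_distribution(px0, py0, pz0):
    """Recover p_sigma^{ab} from the edge coordinates (p_x^0, p_y^0, p_z^0)
    via Eq. (2); all four values are non-negative exactly when Eq. (6) holds."""
    marg = {"x": (px0, 1 - px0), "y": (py0, 1 - py0), "z": (pz0, 1 - pz0)}
    return {(a, b): 0.5 * (marg["x"][a] + marg["y"][b] - marg["z"][(a + b + 1) % 2])
            for a, b in product((0, 1), repeat=2)}

# The vertices of the tetrahedron are the deterministic distributions delta^{ab},
# whose edge coordinates are (1-a, 1-b, 1-(a+b mod 2)).
for a, b in product((0, 1), repeat=2):
    p = triangle_distribution(1 - a, 1 - b, 1 - (a + b) % 2)
    assert abs(p[(a, b)] - 1) < 1e-12 and all(v >= -1e-12 for v in p.values())
```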
**Proposition 2.13**.: _If two of the edges of a triangle are deterministic then the third edge is also deterministic._ Proof.: Assume that \(p_{x}^{0}=1\) and \(p_{y}^{0}=1\), the other cases follow similarly. Then the last inequality in Eq. (6) implies that \(p_{z}^{0}=1\). Next, we recall some basic facts from polytope theory [11, 18]. In the \(H\)-representation a (convex) polytope is specified by a set of inequalities: \[P(A,b)=\{x\in\mathbb{R}^{d}:\,Ax\geq b\}\] where \(A\) is a \(m\times d\) matrix and \(b\) is a column vector of size \(m\). We will assume that \(P\subset\mathbb{R}^{d}\) is full-dimensional, that is, the dimension of the polytope is given by \(d\). **Lemma 2.14**.: _Let \(X\) be a \(2\)-dimensional simplicial set with a single generating \(2\)-simplex \(\sigma\) whose boundary \(\partial\sigma\) consists of the \(1\)-simplices \(x,y,z\) all of them are non-degenerate simplices. Consider the injective map_ \[f_{\sigma}:\mathsf{sDist}(X)\to[0,1]^{|\partial\sigma|}\] _that sends \(p\) to the tuple \((p_{\tau}^{0})_{\tau\in\partial\sigma}\). Then the image of \(f\) is a polytope of dimension \(|\partial\sigma|\)._ Proof.: Let \(P\) denote the image of \(f\). First consider the case where \(|\partial\sigma|=3\). \(P\) is defined by the set of inequalities in Eq. (6). This is a tetrahedron in \(\mathbb{R}^{3}\) with vertices \((\delta^{a},\delta^{b},\delta^{c})\) where \(a,b,c\in\{0,1\}\) and \(c=a+b\mod 2\). Therefore the dimension of \(P\) is \(3\). Next consider \(|\partial\sigma|=2\). We can assume that \(x\) and \(y\) identified. Then the polytope is obtained by intersecting the tetrahedron by the hyperplane \(p_{x}^{0}=p_{y}^{0}\). This gives a \(2\)-dimensional polytope. Finally, if \(|\partial\sigma|=1\) then all the edges are identified. The polytope is obtained by intersecting the previous one with \(p_{y}^{0}=p_{z}^{0}\) producing a polytope of dimension \(1\). A polytope \(P(A,b)\subset\mathbb{R}^{d}\) is called full-dimensional if the dimension of the polytope is \(d\). For a simplicial set \(X\) we will write \(X_{n}^{\circ}\) for the set of non-degenerate simplices. Let \(X^{(n)}\) denote the simplicial subset of \(X\) generated by \(X_{n}^{\circ}\). For example, \(X^{(1)}\) is generated by non-degenerate \(1\)-simplices together with the face relations coming from \(X\). **Proposition 2.15**.: _Let \(X\) be a simplicial set generated by the \(2\)-simplices \(\sigma_{1},\cdots,\sigma_{k}\) such that each \(\partial\sigma_{i}\) does not contain non-degenerate edges. The map_ \[f:\mathsf{sDist}(X)\to\mathsf{sDist}(X^{(1)})=[0,1]^{|X_{1}^{\circ}|} \tag{7}\] _that sends \(p\) to the tuple \((p_{\tau}^{0})_{\tau\in X_{1}^{\circ}}\) is a convex injective map. Moreover, \(P_{X}\subset\mathbb{R}^{|X_{1}^{\circ}|}\) is a full-dimensional convex polytope._ Proof.: This follows from Lemma 2.14: For each \(\sigma_{i}\) the restriction of \(f\) to the simplicial set \(X_{i}\) generated by \(\sigma_{i}\) gives a map \[f_{\sigma}:\mathsf{sDist}(X_{i})\to[0,1]^{|\partial\sigma_{i}|}.\] Thus \(P_{X_{i}}\) is a full-dimensional polytope. Consider the projection map \(\mathbb{R}^{|X_{1}^{\circ}|}\to\mathbb{R}^{|\partial\sigma_{i}|}\) onto the coordinates of the boundary. Combining these projections we can obtain a linear embedding \(i:\mathbb{R}^{|X_{1}^{\circ}|}\to\prod_{i=1}^{k}\mathbb{R}^{|\partial\sigma_{ i}|}\). Then \(P_{X}\) is given by the intersection of the image of \(i\) and the product of the polytopes \(\prod_{i=1}^{k}P_{X_{i}}\). 
This intersection remains full-dimensional in the linear subspace. 

In practice, this result implies that a simplicial distribution on a \(2\)-dimensional simplicial set is determined by its restriction to the edges. This description of \(p\) will be referred to as the edge coordinates. With this result at hand, it is straight-forward to give the \(H\)-description of \(P_{X}\).

**Corollary 2.16**.: _Let \(d_{X}=|X_{1}^{\circ}|\) and \(m_{X}=|X_{2}^{\circ}\times\mathbb{Z}_{2}^{2}|\). We define an \(m_{X}\times d_{X}\) matrix:_ \[A_{(\sigma;ab),\tau}=\left\{\begin{array}{ll}(-1)^{a}&\tau=x\\ (-1)^{b}&\tau=y\\ (-1)^{a+b}&\tau=z\\ 0&\text{otherwise,}\end{array}\right.\] _where \(x=d_{2}\sigma\), \(y=d_{0}\sigma\) and \(z=d_{1}\sigma\) denote the faces of \(\sigma\), and a column vector \(b\) of size \(m_{X}\):_ \[b_{(\sigma;ab)}=\frac{\left((-1)^{a}+(-1)^{b}+(-1)^{a+b}\right)-1}{2}.\] _Then \(P_{X}\) is described as \(P(A,b)\)._

We adopt a notation where if \(\mathcal{Z}\subseteq\{1,\cdots,m\}\) then \(A[\mathcal{Z}]\) is the matrix obtained by keeping only those rows indexed by \(\mathcal{Z}\) and discarding the rest, and similarly for \(b[\mathcal{Z}]\). Let \(i\in\{1,\cdots,m\}\) index a single inequality and let \(x\in P\); we call the inequality \(i\) _tight_ at \(x\) if it is satisfied with equality, i.e., \(A_{i}x=b_{i}\). For a point \(x\in P\) we write \(\mathcal{Z}_{x}\) for the set of tight inequalities at \(x\).

**Definition 2.17**.: The _rank_ \(\text{rank}(p)\) of a simplicial distribution \(p\in\mathsf{sDist}(X)\) is defined to be the rank of the matrix \(A[\mathcal{Z}_{p}]\).

**Corollary 2.18**.: _A simplicial distribution \(p\in\mathsf{sDist}(X)\) is a vertex if and only if \(\text{rank}(p)=|X_{1}^{\circ}|\)._

Proof.: For a full-dimensional polytope \(P\subset\mathbb{R}^{d}\), a point \(v\in P\) is a vertex if and only if it is the unique solution to \(d\) tight inequalities. More explicitly, if \(\mathcal{Z}\subset\{1,\cdots,m\}\) indexes \(d\) inequalities such that \(A[\mathcal{Z}]\) has full rank, then a vertex is given by \[v=A[\mathcal{Z}]^{-1}b[\mathcal{Z}].\] This basic fact applied to \(P_{X}\), where \(d=|X_{1}^{\circ}|\), combined with Proposition 2.15 gives the result.

### Monoid structure on simplicial distributions

An additional algebraic feature that comes for free in the theory of simplicial distributions is the monoid structure on \(\mathsf{sDist}(X,Y)\) when \(Y\) is a simplicial set which also has the structure of a group. Such a group-like simplicial set is called a simplicial group. Our outcome space \(N\mathbb{Z}_{2}\) has this additional algebraic feature, which comes from the following simplicial set map: \[\cdot:N\mathbb{Z}_{2}\times N\mathbb{Z}_{2}\to N\mathbb{Z}_{2}\] defined by \[(a_{1},\cdots,a_{n})\cdot(b_{1},\cdots,b_{n})=(a_{1}+b_{1},\cdots,a_{n}+b_{n}). \tag{8}\] It is straight-forward to verify that this assignment respects the face and the degeneracy maps. This product gives the set \(\mathsf{dDist}(X)\) of deterministic distributions the structure of a group. Given two such distributions \(\delta^{s}\) and \(\delta^{r}\) their product is given by \(\delta^{s\cdot r}\) where \[s\cdot r:X\xrightarrow{(s,r)}N\mathbb{Z}_{2}\times N\mathbb{Z}_{2}\xrightarrow{\cdot}N\mathbb{Z}_{2}.\] We will write \(\delta^{s}\cdot\delta^{r}\) to denote this product of deterministic distributions.

**Lemma 2.19**.: 1. _The product on_ \(\mathsf{dDist}(\Delta^{1})\) _is given by_ \[\delta^{a}\cdot\delta^{b}=\delta^{a+b}.\] 2.
_The product on_ \(\mathsf{dDist}(\Delta^{2})\) _is given by_ \[\delta^{ab}\cdot\delta^{cd}=\delta^{(a+c)(b+d)}.\] Proof.: Let \(\tau=\sigma^{01}\) denote the generating simplex of \(\Delta^{1}\). Consider two deterministic distributions \(\delta^{s}\) and \(\delta^{r}\) such that \(s_{\tau}=a\) and \(r_{\tau}=b\). The product \(s\cdot r\) is determined by its value at \(\tau\). Using Eq. (8) we have \[(s\cdot r)_{\tau}=a\cdot b=a+b.\] For \(\Delta^{2}\), we will consider the generating simplex \(\sigma=\sigma^{012}\). By a similar argument applied to \(s_{\sigma}=(a,b)\) and \(r=(c,d)\) we observe that \[(s\cdot r)_{\sigma}=(a,b)\cdot(c,d)=(a+c,b+d).\] Lemma 2.19 can be used to describe the product on \(\mathsf{dDist}(X)\) when \(X\) is \(2\)-dimensional. This product can be extended to \(D(\mathsf{dDist})\). Given \(d,e\in D(\mathsf{dDist}(X))\) we define \[(d\cdot e)(s)=\sum_{r\cdot t=s}d(r)d(t)\] where the summation runs over \((\delta^{r},\delta^{t})\in(\mathsf{dDist}(X))^{2}\) satisfying \(r\cdot t=s\). With this product \(D(\mathsf{dDist}(X))\) is a monoid. Next, we turn to the monoid structure on \(\mathsf{sDist}(X)\). Given two simplicial distributions \(p,q\) on \(X\) the product \(p\cdot q\) is defined by \[(p\cdot q)_{\sigma}^{a}=\sum_{b+c=a}p_{\sigma}^{b}q_{\sigma}^{c} \tag{9}\] where the summation runs over \((b,c)\in(\mathbb{Z}_{2}^{n})^{2}\) satisfying \(b+c=a\). This formula works for an \(n\)-simplex \(\sigma\). For us the main interest is the cases \(n=1,2\). **Lemma 2.20**.: _Let \(X\) be a simplicial set and \(p,q\in\mathsf{sDist}(X)\)._ 1. _For_ \(\tau\in X_{1}\)_, we have_ \[(p\cdot q)^{0}_{\tau}=p^{0}_{\tau}\cdot q^{0}_{\tau}+p^{1}_{\tau}\cdot q^{1}_{\tau },\quad(p\cdot q)^{1}_{\tau}=p^{0}_{\tau}\cdot q^{1}_{\tau}+p^{1}_{\tau}\cdot q^{ 0}_{\tau}\] 2. _For_ \(\sigma\in X_{2}\)_, we have_ \[(p\cdot q)^{00}_{\sigma}=p^{00}_{\sigma}\cdot q^{00}_{\sigma}+p^{01}_{ \sigma}\cdot q^{01}_{\sigma}+p^{10}_{\sigma}\cdot q^{10}_{\sigma}+p^{11}_{ \sigma}\cdot q^{11}_{\sigma},\quad(p\cdot q)^{01}_{\sigma}=p^{00}_{\sigma} \cdot q^{01}_{\sigma}+p^{01}_{\sigma}\cdot q^{00}_{\sigma}+p^{10}_{\sigma} \cdot q^{11}_{\sigma}+p^{11}_{\sigma}\cdot q^{10}_{\sigma}\] \[(p\cdot q)^{10}_{\sigma}=p^{00}_{\sigma}\cdot q^{10}_{\sigma}+p^{01 }_{\sigma}\cdot q^{11}_{\sigma}+p^{10}_{\sigma}\cdot q^{00}_{\sigma}+p^{11}_ {\sigma}\cdot q^{01}_{\sigma},\quad(p\cdot q)^{11}_{\sigma}=p^{00}_{\sigma} \cdot q^{11}_{\sigma}+p^{01}_{\sigma}\cdot q^{10}_{\sigma}+p^{10}_{\sigma} \cdot q^{01}_{\sigma}+p^{11}_{\sigma}\cdot q^{00}_{\sigma}\] Proof.: Follows directly from Eq. (9). Moreover, the map \(\Theta:D(\mathsf{dDist}(X))\to\mathsf{sDist}(X)\) is a homomorphism of monoids. For more on the monoid structure and its interaction with convexity see [14]. We will use the action of the group \(\mathsf{dDist}(X)\) on the monoid \(\mathsf{sDist}(X)\) that comes from the product in Eq. (9). Explicitly, for \(\sigma\in X_{2}\) and \(\tau\in X_{1}\) this action is described as follows: \[(\delta^{a}\cdot q)^{c}_{\tau}=q^{c+a}_{\tau},\quad(\delta^{ab}\cdot q)^{cd}_ {\sigma}=q^{(c+a)(d+b)}_{\sigma}. \tag{10}\] Note that this action maps vertices of \(\mathsf{sDist}(X)\) to vertices. **Proposition 2.21**.: 1. _For two non-contextual simplicial distributions_ \(p\) _and_ \(q\) _in_ \(\mathsf{sDist}(X)\)_, the product_ \(p\cdot q\) _is a non-contextual distribution._ 2. _A simplicial distribution_ \(p\in\mathsf{sDist}(X)\) _is non-contextual if and only if_ \(\delta^{s}\cdot p\) _is non-contextual._ 3. 
_A simplicial distribution_ \(p\in\mathsf{sDist}(X)\) _is a vertex if and only if_ \(\delta^{s}\cdot p\) _is a vertex._

Proof.: Part 1 follows from the fact that the map \(\Theta:D(\mathsf{dDist}(X))\to\mathsf{sDist}(X)\) is a homomorphism of monoids [14, Lemma 5.1]. Part 2 follows from part 1. Part 3 follows from Eq. (10): the action of \(\delta^{s}\) maps vertices to vertices, and since \(\delta^{s}\cdot(\delta^{s}\cdot p)=p\) it also reflects them.

Parts (2) and (3) of this proposition imply that the action of \(\mathsf{dDist}(X)\) on \(\mathsf{sDist}(X)\) maps a (non)-contextual vertex to a (non)-contextual vertex. We describe the action in the case of the well-known CHSH scenario in Example 2.24 below. The following simplicial distributions on \(\Delta^{2}=\mathcal{C}(\Delta^{1})\) will play a distinguished role in later sections when we study 2-dimensional scenarios more closely: \[p^{ab}_{+}=\left\{\begin{array}{ll}1/2&b=0\\ 0&\text{otherwise},\end{array}\right.\quad p^{ab}_{-}=\left\{\begin{array}{ll}0&b=0\\ 1/2&\text{otherwise}.\end{array}\right. \tag{11}\] We follow the convention in Fig. (2b).

**Definition 2.22**.: Let \(X\) be a 1-dimensional simplicial set. We will write \(G_{\pm}(CX)\) for the subset of simplicial distributions \(p\in\mathsf{sDist}(CX)\) satisfying \(p|_{(c,\tau)}=p_{\pm}\) for every \(\tau\in X^{\circ}_{1}\). Next we show that this set is a group. We will denote the distribution in \(G_{\pm}(CX)\) with \(p|_{(c,\tau)}=p_{+}\) for every \(\tau\in X^{\circ}_{1}\) by \(e_{+}\).

**Proposition 2.23**.: \(G_{\pm}(CX)\) _is an abelian group with \(e_{+}\) as the identity. In addition, every element has order \(2\), that is,_ \[G_{\pm}(CX)\cong\mathbb{Z}_{2}^{X^{\circ}_{1}}.\]

Proof.: By part 2 of Lemma 2.20 we have \[p_{+}\cdot p_{+}=p_{+},\quad p_{+}\cdot p_{-}=p_{-},\quad p_{-}\cdot p_{-}=p_{+}.\] Therefore the statement holds for \(X=\Delta^{1}\), that is, we have \[G_{\pm}(\mathcal{C}(\Delta^{1}))\cong\mathbb{Z}_{2}.\] Now for arbitrary \(X\) and \(p,q\in G_{\pm}(CX)\) the product is computed triangle-wise, i.e., \((p\cdot q)_{\sigma}=p_{\sigma}\cdot q_{\sigma}\). Therefore the statement easily generalizes.

**Example 2.24**.: The CHSH scenario consists of four triangles organized into a disk with vertices \(v_{0},v_{1},w_{0},w_{1}\) and \(c\). For each pair \((v_{i},w_{j})\) there is an edge which we denote by \(\tau_{ij}\). These edges constitute the boundary of the disk. There are four non-degenerate triangles \(\sigma_{ij}=(c,\tau_{ij})\) as depicted in Fig. (3). The interior edges \((c,v_{i})\) and \((c,w_{j})\) will be denoted by \(x_{i}\) and \(y_{j}\), respectively. This scenario is a particular case of the \(N\)-cycle scenario in Definition 4.1. Here \(N\) is the number of edges on the boundary, hence in this case \(N=4\). Using the edge coordinates of Proposition 2.15 a simplicial distribution \(p\) on the CHSH scenario can be described by the tuple \((p_{x_{0}},p_{\tau_{00}},p_{y_{0}},p_{\tau_{10}},p_{x_{1}},p_{\tau_{11}},p_{y_{1}},p_{\tau_{01}})\). It is well-known that \(p\) is non-contextual if and only if it satisfies the CHSH inequalities [4]: \[\begin{split}0\leq p_{\tau_{00}}^{0}+p_{\tau_{10}}^{0}+p_{\tau_{11}}^{0}-p_{\tau_{01}}^{0}\leq 2\\ 0\leq p_{\tau_{00}}^{0}+p_{\tau_{10}}^{0}-p_{\tau_{11}}^{0}+p_{\tau_{01}}^{0}\leq 2\\ 0\leq p_{\tau_{00}}^{0}-p_{\tau_{10}}^{0}+p_{\tau_{11}}^{0}+p_{\tau_{01}}^{0}\leq 2\\ 0\leq-p_{\tau_{00}}^{0}+p_{\tau_{10}}^{0}+p_{\tau_{11}}^{0}+p_{\tau_{01}}^{0}\leq 2\end{split}\tag{12}\] Also the contextual vertices are known.
They are given by the Popescu-Rohrlich (PR) boxes [19]: A PR box is a simplicial distribution \(p\) such that \(p_{\sigma_{ij}}=p_{\pm}\) for \(i,j\in\{0,1\}\) with the further restriction that the number of \(p_{-}\)'s is odd. We begin with the action on the PR boxes. By Eq. (10) we see that \(\delta^{00}\) and \(\delta^{10}\) are the only deterministic distributions on the triangles that fix \(p_{+}\) and \(p_{-}\). From this observation we conclude that among the 16 deterministic distributions on the CHSH scenario the ones that fix a given PR box are \((\delta^{00}_{\sigma_{00}},\delta^{00}_{\sigma_{01}},\delta^{00}_{\sigma_{10}},\delta^{00}_{\sigma_{11}})\) and \((\delta^{10}_{\sigma_{00}},\delta^{10}_{\sigma_{01}},\delta^{10}_{\sigma_{10}},\delta^{10}_{\sigma_{11}})\). Thus the size of the orbit is \(16/2=8\), which gives all the PR boxes. To describe the action of \(\delta^{s}\) on the Bell inequalities we need to switch back to the edge coordinates. For notational convenience we will write \(p_{i}\) for the \(i\)-th entry of this tuple. Then the deterministic distribution \(\delta^{s}\) is given by \((\delta^{a_{0}},\delta^{a_{0}+b_{0}},\delta^{b_{0}},\delta^{a_{1}+b_{0}},\delta^ {a_{1}},\delta^{a_{1}+b_{1}},\delta^{b_{1}},\delta^{a_{0}+b_{1}})\). Using the notational convenience introduced above, in these coordinates the action of \(\delta^{s}\) on \(p\) is given by \[(\delta^{s}\cdot p)_{i}=(\delta^{s})_{i}\cdot p_{i}.\] Now, substituting these new values to the Bell inequality gives the action. For example, the action of \((\delta^{0},\delta^{1},\delta^{1},\delta^{0},\delta^{1},\delta^{1},\delta^{0}, \delta^{0})\) on the Bell inequality \[p^{0}_{\sigma_{00}}+p^{0}_{\sigma_{10}}+p^{0}_{\sigma_{11}}-p^{0}_{\sigma_{01 }}\leq 2 \tag{13}\] gives \(1-p^{0}_{\sigma_{00}}+p^{0}_{\sigma_{10}}+1-p^{0}_{\sigma_{11}}-p^{0}_{\sigma _{01}}\leq 2\), which can be put in a more familiar form \[p^{0}_{\sigma_{00}}-p^{0}_{\sigma_{10}}+p^{0}_{\sigma_{11}}+p^{0}_{\sigma_{01 }}\geq 0.\] We can compute the stabilizer of the Bell inequality in Eq. (13). The relevant edge coordinates are \(\sigma_{ij}\) that constitute the boundary of the CHSH scenario. The relevant coordinates of \(\delta^{s}\) that can change the inequality are \(\delta^{a_{i}+b_{j}}\) where \(i,j\in\{0,1\}\). Then the stabilizer consists of those deterministic distributions that satisfy \(a_{i}+b_{j}=0\mod 2\) for every \(i,j\in\{0,1\}\). The size of this group is \(2\) and therefore there are \(8=16/2\) elements in the orbit. This covers all \(8\) of the Bell inequalities. See Section 5.1 for more on the action on Bell inequalities. ## 3 Distributions on the classical \(N\)-disk The classical \(N\)-disk scenario has the measurement space given by a disk triangulated in a way that results in only non-contextual (or classical) distribution. **Definition 3.1**.: For \(N\geq 3\) let \(D_{N}\) denote the following simplicial set: * Generating \(2\)-simplices: \(\sigma_{1},\cdots,\sigma_{N-2}\). * Identifying relations: \[d_{j_{1}}(\sigma_{1})=d_{j_{2}}(\sigma_{2}),\;d_{j^{\prime}_{2}}(\sigma_{2})= d_{j_{3}}(\sigma_{3}),\;d_{j^{\prime}_{3}}(\sigma_{3})=d_{j_{4}}(\sigma_{4})\; \cdots\;d_{j^{\prime}_{N-3}}(\sigma_{N-3})=d_{j_{N-2}}(\sigma_{N-2})\] where \(j_{1},j_{2},j^{\prime}_{2},\ldots,j_{N-3},j^{\prime}_{N-3},j_{N-2}\in\{0,1,2\}\) and \(j_{k}\neq j^{\prime}_{k}\) for \(2\leq k\leq N-3\). The classical \(N\)-disk can be constructed by successive gluing. 
To see this, starting from an initial non-degenerate simplex \(\sigma_{1}\) we successively glue simplices along a single edge so that \(\sigma_{i+1}\) shares a single common edge with \(\sigma_{i}\), terminating with the simplex \(\sigma_{N-2}\). The simplices \(\sigma_{1}\) and \(\sigma_{N-2}\) in any classical \(N\)-disk will be referred to as the initial and terminal simplices, respectively. In particular, the gluing described by the face relations is such that the boundary of the disk has \(N\) edges and forms an \(N\)-circle in the sense of Definition 3.11. Letting \((\partial D_{N})^{\circ}_{1}\) be the set of non-degenerate edges on the boundary of the classical \(N\)-disk, the non-degenerate \(2\)-simplices \(\sigma_{i}\) are distinguished by \[|(\partial D_{N})^{\circ}_{1}\cap(\sigma_{i})^{\circ}_{1}|=\left\{\begin{array}{ll}2&i=1,N-2\\ 1&\text{otherwise.}\end{array}\right. \tag{14}\] Edges lying on the boundary of the classical \(N\)-disk are called boundary edges; the remaining edges are called interior edges. The classical \(3\)-disk is \(\Delta^{2}\), while the diamond space \(D\) is an example of a classical \(4\)-disk. See Fig. (4) for an example of a classical \(6\)-disk.

**Proposition 3.2**.: _Any simplicial distribution on the classical \(N\)-disk scenario is non-contextual._

Proof.: This follows from (Gluing) Lemma 2.11 since \(D_{N}\) is constructed by gluing \(N-2\) triangles, each successive pair sharing a single \(\Delta^{1}\). At each step we can apply the Gluing Lemma.

### Fourier-Motzkin elimination

As is well-known, systems of linear equations can be solved using Gaussian elimination. For systems of linear inequalities there exists a related technique known as Fourier-Motzkin (FM) elimination; see e.g., [18]. A linear inequality in \(d\) variables can be written as \(a^{T}x\geq b\), where \(a,x\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\). For \(m\) such linear inequalities we have \(a_{i}^{T}x\geq b_{i}\) (\(i=1,\cdots,m\)). Taking each vector \(a_{i}^{T}\) to be a row of a matrix \(A\), this set of \(m\) inequalities can be compactly written as \(Ax\geq B\), where \(A\in\mathbb{R}^{m\times d}\) and \(B\in\mathbb{R}^{m}\). The feasible region defined by \(Ax\geq B\) (if one exists) forms a polyhedron. To perform FM elimination of a variable \(x_{j}\), let us first index all inequalities where \(x_{j}\) appears with positive, negative, or zero coefficients as \(\mathcal{I}_{j}^{+}\), \(\mathcal{I}_{j}^{-}\), and \(\mathcal{I}_{j}^{0}\), respectively. We then solve for \(x_{j}\): \[x_{j}\geq\frac{B_{i}}{a_{ij}}-\sum_{k\neq j}\frac{a_{ik}}{a_{ij}}x_{k},\quad\forall i\in\mathcal{I}_{j}^{+},\] \[x_{j}\leq-\frac{B_{i}}{|a_{ij}|}+\sum_{k\neq j}\frac{a_{ik}}{|a_{ij}|}x_{k},\quad\forall i\in\mathcal{I}_{j}^{-}.\] Then for every \((i,i^{\prime})\in\mathcal{I}_{j}^{+}\times\mathcal{I}_{j}^{-}\) we have that such an \(x_{j}\) exists so long as \[\frac{B_{i}}{a_{ij}}-\sum_{k\neq j}\frac{a_{ik}}{a_{ij}}x_{k}\leq x_{j}\leq-\frac{B_{i^{\prime}}}{|a_{i^{\prime}j}|}+\sum_{k\neq j}\frac{a_{i^{\prime}k}}{|a_{i^{\prime}j}|}x_{k},\] which is equivalent to \[\frac{B_{i}}{a_{ij}}-\sum_{k\neq j}\frac{a_{ik}}{a_{ij}}x_{k}\leq-\frac{B_{i^{\prime}}}{|a_{i^{\prime}j}|}+\sum_{k\neq j}\frac{a_{i^{\prime}k}}{|a_{i^{\prime}j}|}x_{k}. \tag{15}\] This can be rearranged to give a new set of inequalities in \(d-1\) variables whose solution, should it exist, is the same as the original set of inequalities.

Figure 4: Classical 6-disk with initial simplex \(\sigma_{1}\) and terminal simplex \(\sigma_{4}\).
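The elimination step in Eq. (15) is entirely mechanical, so it can be prototyped directly. The following Python sketch is an illustration on our part (it is not the implementation behind any of the results in this paper); it performs one round of FM elimination of the variable \(x_{j}\) from a system \(Ax\geq b\).

```python
import numpy as np

def fm_eliminate(A, b, j):
    """Eliminate variable x_j from the system A x >= b by Fourier-Motzkin.
    Returns an equivalent system in the remaining variables (column j removed)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    pos = [i for i in range(len(b)) if A[i, j] > 0]    # I_j^+
    neg = [i for i in range(len(b)) if A[i, j] < 0]    # I_j^-
    zero = [i for i in range(len(b)) if A[i, j] == 0]  # I_j^0
    rows = [np.delete(A[i], j) for i in zero]          # kept unchanged
    rhs = [b[i] for i in zero]
    # each pair (i, i') in I_j^+ x I_j^- contributes one new inequality, cf. Eq. (15)
    for i in pos:
        for k in neg:
            rows.append(np.delete(A[i] / A[i, j] + A[k] / (-A[k, j]), j))
            rhs.append(b[i] / A[i, j] + b[k] / (-A[k, j]))
    return np.array(rows), np.array(rhs)
```

For instance, eliminating the column of \(\bar{\sigma}^{02}\) from the eight inequalities in Eqs. (16)-(17) below yields the CHSH inequalities of Proposition 3.3 together with trivial inequalities.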
#### 3.1.1 Application to the Diamond scenario

As a warm up we begin by considering the diamond scenario \(D\) described in Example 2.2. We will adopt a more convenient notation for the generating simplices of the two triangles \(A\) and \(B\). The first one will be denoted by \(\sigma^{012}\) and the other one by \(\sigma^{01^{\prime}2}\). The diamond \(D\) is obtained by gluing \(A\) and \(B\) along the \(d_{1}\) face, i.e., the simplex \(\sigma^{02}\). Again for ease of notation the probabilities \(p^{ab}_{\sigma^{012}}\) and \(p^{a^{\prime}b^{\prime}}_{\sigma^{01^{\prime}2}}\) will be denoted by \(p^{ab}_{012}\) and \(p^{a^{\prime}b^{\prime}}_{01^{\prime}2}\), respectively. In this section we will use the expectation coordinates introduced in Eq. (3) and (4). The requirement that these eight probabilities be non-negative is equivalent (up to an overall constant factor) to the inequalities \[1+(-1)^{a}\bar{\sigma}^{01}+(-1)^{b}\bar{\sigma}^{12}+(-1)^{a+b}\bar{\sigma}^{02}\geq 0, \tag{16}\] \[1+(-1)^{a^{\prime}}\bar{\sigma}^{01^{\prime}}+(-1)^{b^{\prime}}\bar{\sigma}^{1^{\prime}2}+(-1)^{a^{\prime}+b^{\prime}}\bar{\sigma}^{02}\geq 0, \tag{17}\] for all \(a,b,a^{\prime},b^{\prime}\in\mathbb{Z}_{2}\).

**Proposition 3.3**.: _Let \(D\) be a diamond and \(\partial D\) denote its boundary. Then a distribution \(p\in\mathsf{sDist}(\partial D)\) extends to \(\tilde{p}\in\mathsf{sDist}(D)\) if and only if the CHSH inequalities are satisfied:_ \[2+(-1)^{a}\bar{\sigma}^{01}+(-1)^{b}\bar{\sigma}^{12}+(-1)^{a^{\prime}}\bar{\sigma}^{01^{\prime}}+(-1)^{b^{\prime}}\bar{\sigma}^{1^{\prime}2}\geq 0 \tag{18}\] _where \(a,b,a^{\prime},b^{\prime}\in\mathbb{Z}_{2}\) satisfy \(a+b+a^{\prime}+b^{\prime}=1\) mod \(2\)._

Proof.: Proof of this result is given in [7, Proposition 4.10]. We provide an exposition here for completeness. All of the coefficients that appear in Eqns. (16-17) are just \(\pm 1\), so to perform FM elimination it suffices to sum up the inequalities where \(\sigma^{02}\) has positive and negative coefficient. For inequalities coming from the same triangle this just yields that \(-1\leq\bar{\sigma}^{ij}\leq 1\) -- we call such inequalities _trivial_. When we combine inequalities from different triangles we obtain the inequalities in Eq. (18).

**Remark 3.4**.: We can interpret FM elimination geometrically as deleting an edge from a topological space; see Fig. (5). In an abuse of terminology, we will sometimes say that we eliminate an edge \(\sigma\), when what we actually mean is that we perform FM elimination on the corresponding expectation value \(\bar{\sigma}\) that appears in the inequalities.

Figure 5: (a) The classical 4-disk is the diamond scenario. (b) FM elimination is interpreted geometrically as deleting an edge from a topological space.

### Extending to the classical \(N\)-disk

**Definition 3.5**.: _Let \(\tau_{1},\cdots,\tau_{N}\) denote the generating edges on the boundary of \(D_{N}\). We define the \(N\)-circle inequalities by_ \[0\leq N-2+\sum_{i=1}^{N}(-1)^{a_{i}}\bar{\tau}_{i} \tag{19}\] _where \(\sum_{i=1}^{N}a_{i}=N+1\text{ mod }2\)._

**Example 3.6**.: Clearly a triangle \(\Delta^{2}\) is just a classical \(3\)-disk and the \(3\)-circle inequalities come from Eq. (4): \[p_{x}^{a}+p_{y}^{b}-p_{z}^{a+b+1}\geq 0.\] Note also that the diamond space is an example of a classical \(4\)-disk and the CHSH inequalities correspond to the \(4\)-circle inequalities.

**Lemma 3.7**.: _Consider a set of \(N\)-circle inequalities with a common coordinate \(\bar{z}\).
Applying FM elimination to \(\bar{z}\) the resulting inequalities are satisfied if the remaining coordinates \(\bar{\tau}_{i}\) each satisfies \(-1\leq\bar{\tau}_{i}\leq 1\)._ Proof.: Consider two inequalities where \(\bar{z}\) appears with opposite signs \[0 \leq n-2+\bar{z}+\sum_{i=1}^{n-1}(-1)^{a_{i}}\bar{\tau}_{i}\] \[0 \leq n-2-\bar{z}+\sum_{i=1}^{n-1}(-1)^{b_{i}}\bar{\tau}_{i}\] where \(\sum_{i=1}^{n-1}a_{i}=n+1\text{ mod }2\) and \(\sum_{i=1}^{n-1}b_{i}=n\text{ mod }2\). To perform FM elimination of \(\bar{z}\) we add these inequalities together and observe that due to the conditions on \(a_{i}\) and \(b_{j}\) that at least one other variable will cancel after summing. Let \(N\subset\{1,\cdots,n-1\}\) index all variables that do _not_ cancel. (Note that \(|N|\leq n-2\).) The inequality after summing becomes \[0\leq 2\left(n-2+\sum_{i\in N}(-1)^{a_{i}}\bar{\tau}_{i}\right), \tag{20}\] or equivalently \[0\leq(n-2-|N|)+\sum_{i\in N}\left(1+(-1)^{a_{i}}\bar{\tau}_{i}\right).\] Using Eq. (3) we see that these inequalities are satisfied if \(-1\leq\bar{\tau}_{i}\leq 1\) for all \(i\in N\): This condition gives us \[0\leq\frac{n-2-|N|}{2}+\sum_{i\in N}p_{\tau_{i}}^{a_{i}},\] where each term is non-negative since \(|N|\leq n-2\) and \(0\leq p_{\tau_{i}}^{a_{i}}\leq 1\). Thus the inequalities in Eq. (20) are satisfied. **Lemma 3.8**.: _Suppose we have a set of \(N\)-circle and \(M\)-circle inequalities that overlap on only a single variable \(\bar{z}\). FM elimination of \(\bar{z}\) yields a set of \((N+M-2)\)-circle inequalities (plus trivial inequalities)._ Proof.: We begin by noting that if we sum up inequalities coming from the same set of circle inequalities then by Lemma 3.7 we get trivial inequalities. Let us consider the other case where \(\bar{z}\) comes from two different sets; see Fig. (6). First note that there are \(2^{K-1}\) (\(K\geq 1\)) inequalities in a set of \(K\)-circle inequalities. Let \(\mathcal{I}_{M}^{\pm}\) index the \(M\)-circle inequalities where \(\bar{z}\) has a positive (or negative) coefficient and observe that \(|\mathcal{I}_{M}^{\pm}|=2^{M-2}\). Similarly for \(\mathcal{I}_{N}^{\pm}\). FM elimination proceeds by summing up inequalities indexed by \((i,i^{\prime})\in\mathcal{I}_{M}^{+}\times\mathcal{I}_{N}^{-}\) and \((j,j^{\prime})\in\mathcal{I}_{M}^{-}\times\mathcal{I}_{N}^{+}\). This amounts to \(2\times 2^{(M-2)+(N-2)}=2^{(N+M-2)-1}\) new inequalities, which is precisely the amount needed for a set of \((N+M-2)\)-circle inequalities. To find the precise form of these inequalities, let us consider explicitly two inequalities indexed by \((i,i^{\prime})\in\mathcal{I}_{M}^{-}\times\mathcal{I}_{N}^{+}\). We denote the variables appearing in the \(M\)-circle and \(N\)-circle inequalities as \(\tau_{j}\) (\(j=1,\cdots,M\)) and \(\tau_{k}^{\prime}\) (\(k=1,\cdots,N\)), respectively, and denote \(\bar{z}=\tau_{M}=\tau_{N}^{\prime}\). Summing the two inequalities we obtain \[0 \leq N-2+\bar{z}+\sum_{k=1}^{N-1}(-1)^{a_{k}}\tau_{k}^{\prime}+\left(M-2- \bar{z}+\sum_{j=1}^{M-1}(-1)^{b_{j}}\tau_{j}\right),\] where \(\sum_{k=1}^{N-1}a_{k}=N+1\) mod \(2\) and \(\sum_{j=1}^{M-1}b_{j}=M\) mod \(2\). This is equivalent to \[0 \leq (N+M-2)-2+\sum_{k=1}^{N-1}(-1)^{a_{k}}\tau_{k}^{\prime}+\sum_{j=1 }^{M-1}(-1)^{b_{j}}\tau_{j} \tag{21}\] where \(\sum_{k=1}^{n-1}a_{k}+\sum_{j=1}^{M-1}b_{j}=N+M+1\) mod \(2\). Noting that \(N+M+1\) mod \(2=(N+M-2)+1\) mod \(2\), this is precisely an \((N+M-2)\)-circle inequality. 
A similar argument holds for \(\mathcal{I}_{M}^{+}\times\mathcal{I}_{N}^{-}\) and this proves the result. We have the following corollary of Lemma 3.8: **Corollary 3.9**.: _Suppose we have a set of \(N\)-circle and \(3\)-circle inequalities that overlap on only a single variable \(\bar{z}\). FM elimination of \(\bar{z}\) yields a set of \((N+1)\)-circle inequalities (plus trivial inequalities). See Fig. (6)._ Figure 6: FM elimination of an edge (black) common to an \(N\)-circle and \(M\)-circle in (a) yields an inequality for the \(N+M-2\)-circle in (b). Next we apply these preliminary results to the classical \(N\)-disk scenario. **Proposition 3.10**.: _A distribution \(p\in\mathsf{sDist}(\partial D_{N})\) extends to a distribution \(\tilde{p}\) on \(D_{N}\) if and only if the \(N\)-circle inequalities (and the trivial inequalities \(-1\leq\bar{\tau}_{i}\leq 1\)) are satisfied._ Proof.: We consider a classical \(N\)-disk (e.g., see Fig. (4)) such that the edges on the boundary are labeled by \(\tau_{i}\) (\(i=1,\cdots,n\)) and those on the interior are denoted \(z_{j}\) (\(j=1,\cdots,n-3\)). For the first part of our proof, our strategy is to perform FM elimination successively on the interior edges \(z_{j}\) beginning2 with \(z_{1}\) and ending in \(z_{n-3}\). Consider the two classical \(3\)-disks bounded by \(\{\tau_{1},\tau_{n},z_{1}\}\) and \(\{z_{1},\tau_{2},z_{2}\}\), respectively. By Corollary 3.9, FM elimination of \(\bar{z}_{1}\) yields a set of \(4\)-circle inequalities (plus trivial inequalities) together with the remaining inequalities in which \(\bar{z}_{1}\) does not appear. For each successive application of FM for the edges \(z_{j}\) we can apply Corollary 3.9. After \(n-3\) iterations we are left with an \(n\)-circle inequality, as well as trivial inequalities. This proves one direction. On the other hand, FM elimination guarantees that we can find a set of \(\{z_{i}\colon i=1,\cdots,n-3\}\) such that we can reverse this process and extend from the boundary to the \(n\)-order disk. Footnote 2: It is well known that the order in which FM elimination is performed does not affect the final result, however, a “bad” ordering can lead to an explosion in intermediate inequalities to keep track of; see e.g., [20]. ### Bouquet of classical \(N\)-disks It is possible to extend Proposition 3.10 slightly by considering the union of \(N\) disks of varying size. **Definition 3.11**.: Let \(C_{1}\) denote the simplicial set with a single generating \(1\)-simplex \(\tau\) with the relation \[d_{0}\tau=d_{1}\tau.\] For \(N\geq 2\), let \(C_{N}\) denote the \(1\)-dimensional simplicial set consisting of the generating \(1\)-simplices \(\tau_{1},\cdots,\tau_{N}\) together with the identifying relations \[d_{i^{\prime}_{1}}\tau_{1}=d_{i_{2}}\tau_{2},\;d_{i^{\prime}_{2}}\tau_{2}=d_{ i_{3}}\tau_{3}\;\cdots\;d_{i^{\prime}_{N}}\tau_{N}=d_{i_{1}}\tau_{1}\] where \(i_{k}\neq i^{\prime}_{k}\in\{0,1\}\) for \(1\leq k\leq N\). We call \(C_{N}\) the \(N\)_-circle_ space, or simply the circle space when \(N=1\); see Fig. (9b). A circle of length \(N\) on a simplicial set \(X\) is given by an injective simplicial set map \(C_{N}\to X\). We will also write \(C_{N}\) for the image of this map. Figure 7: (a) A bouquet of classical \(N\)-disks is constructed by gluing each \(D_{N_{i}}\) along a common edge (purple) that is a boundary edge of its initial triangle. (b) For each disk we perform FM elimination beginning with the interior edge of the terminal triangle. 
(c) We continue until only the common edge (purple) remains. **Corollary 3.12**.: _Let \(X\) be a \(2\)-dimensional simplicial set obtained by gluing \(D_{N_{1}},\cdots,D_{N_{k}}\) along a common edge, and let \(\partial X\) be the \(1\)-dimensional given by the boundary of \(X\). Then \(p\in\mathsf{sDist}(\partial X)\) extends to a distribution \(\tilde{p}\) on \(X\) if and only if for every circle \(C_{M_{j}}\subset\partial X\), where \(j=1,\cdots,{k\choose 2}\), we have that \(p|_{C_{M_{j}}}\) satisfies the corresponding \(M_{j}\)-circle inequality._ Proof.: For each \(D_{N_{i}}\) its initial and terminal triangles, which we denote by \(\sigma_{1}^{(i)}\) and \(\sigma_{N_{i}-2}^{(i)}\), respectively, are distinguished via Eq. (14). A bouquet \(X\) of classical \(N\)-disks is then constructed by gluing each \(D_{N_{i}}\) along the single common edge \(\tau\), which we take to be either boundary edge of the initial triangle \(\sigma_{1}^{(i)}\); see Fig. (7a). For each disk \(D_{N_{i}}\) in \(X\) we perform FM elimination on all interior edges, beginning with the interior edge of the terminal triangle and concluding with the interior edge of the initial triangle; see Fig. (7b). The ordering of which disks FM elimination is applied to is arbitrary and does not affect the calculations. For each disk we stop before eliminating the edge \(\tau\); see Fig. (7c). By Proposition 3.10 this will result in \(N\)-circle inequalities (plus trivial inequalities), each corresponding to a circle of length \(N_{i}\). Since \(\tau\) appears in all \(k\) sets of circle inequalities, and it is the only edge in the intersection of these circles, then Lemma 3.8 applies. We will have \({k\choose 2}\) circles \(C_{M_{j}}\), where \(j=1,\cdots,{k\choose 2}\). The extension result of Corollary 3.12 will be useful in proving Fine's theorem in Section 4.1 for various types of scenarios. **Example 3.13**.: Bipartite \((m_{A},m_{B},d_{A},d_{B})\) Bell scenarios consist of parties Alice and Bob performing one of \(m_{A}\), \(m_{B}\) measurements with one of \(d_{A}\), \(d_{B}\) outcomes, respectively. In [12] it was shown that, by generalizing an argument due to Fine [2, 3], that the CHSH inequalities are also necessary and sufficient for this more general scenario. Figure 8: (a) Topological realization of the \((2,m,2,2)\) Bell scenario. (b) Measurement space for the \((2,3,2,2)\) Bell scenario. (c) The bouquet of classical \(N\)-disks (here, all \(3\)-disks) used in proof of Fine’s theorem for the \((2,3,2,2)\) Bell scenario. (d) A distribution on the \((2,3,2,2)\) Bell scenario is classical if and only if \(6={4\choose 2}\) sets of CHSH inequalities are satisfied, corresponding to the \(6\) possible circles. A topological realization for \((2,m,2,2)\) Bell scenario is given in Fig. (8a). Note that this scenario is a special case of the flower scenario depicted in Fig. (1). In Theorem 4.9 we will generalize Fine's characterization of non-contextual distributions to to flower scenarios. The basic idea of our approach can be sketched in the case of \((2,3,2,2)\) Bell scenario; see Fig. (8b). In this case we use the bouquet of \(3\)-disks depicted in Fig. (8c). By Corollary 3.12 a distribution on the boundary Fig. (8d) extends to the whole space if and only if the \(6=\binom{4}{2}\) sets of \(4\)-circle inequalities are satisfied. ## 4 Distributions on the \(N\)-cycle scenario and beyond The measurement space of the \(N\)-cycle scenario is a disk triangulated into \(N\) triangles as in Fig. (9a). 
**Definition 4.1**.: Let \(\tilde{C}_{N}\) denote the following simplicial set: * Generating \(2\)-simplices: \(\sigma_{1},\cdots,\sigma_{N}\). * Identifying relations: \[d_{i^{\prime}_{1}}\sigma_{1}=d_{i_{2}}\sigma_{2},\ d_{i^{\prime}_{2}}\sigma_{2 }=d_{i_{3}}\sigma_{3}\ \cdots\ d_{i^{\prime}_{N}}\sigma_{N}=d_{i_{1}}\sigma_{1}\] where \(i_{k}\neq i^{\prime}_{k}\in\{1,2\}\) for \(1\leq k\leq N\). \(\tilde{C}_{1}\) has a single generating simplex \(\sigma\) with the identifying relation \(d_{1}\sigma=d_{2}\sigma\). Topologically \(\tilde{C}_{N}\) is obtained from its boundary, which is circle consisting of \(N\) edges, by introducing a new point, the vertex in the middle, and coning off the boundary. This construction will be very useful in our analysis of simplicial distributions on \(2\)-dimensional measurement spaces. Observe that \(\tilde{C}_{N}\) is precisely the cone of \(C_{N}\). In later sections, we will study the vertices of the polytope of simplicial distributions on this scenario and describe the Bell inequalities bounding the non-contextual distributions. Note that \(\tilde{C}_{1}\) is a new scenario in the sense that it cannot be realized in the conventional picture of non-signaling distributions, such as in the language of sheaf theory [5]. This is the smallest space on which a contextual simplicial distribution is defined. Figure 9: The \(N\)-cycle scenario (\(N=8\)) depicted in (a) is the cone of the \(N\)-circle scenario depicted in (b) consisting of the edges \(\tau_{1},\cdots,\tau_{8}\) on the boundary. ### Topological proof of Fine's theorem The proof of Fine's theorem for the CHSH scenario given in [7, Theorem 4.13] relies on topological methods. Here we show that these methods can be generalized to other interesting scenarios including the \(N\)-cycle scenario and the flower scenario obtained by gluing cycle scenarios as in Fig. (1). **Lemma 4.2**.: _Let \(X\) be a simplicial set. The map_ \[\mathsf{dDist}(\mathcal{C}(X))\to\mathbb{Z}_{2}^{|X_{0}|}\] _that send \(\delta^{s}\) to \((s(c,v))_{v\in X_{0}}\) is a bijection._ Proof.: A deterministic distribution on \(\Delta^{2}\) is given by an assignment \((x,y,z)\mapsto(a,b,c)\) such that \(a+b+c=0\mod 2\). Therefore in \(\mathcal{C}(X)\) once the edges \((c,v)\) are assigned an outcome the remaining edges will be determined. **Lemma 4.3**.: _Let \(X\) be a simplicial set. Given a non-contextual distribution \(p\in\mathsf{sDist}(\mathcal{C}(X))\), the restriction \(p|_{C_{N}}\) to an \(N\)-circle \(C_{N}\subset X\) satisfies the \(N\)-circle inequalities._ Proof.: Let \(D_{N}\) be a classical \(N\)-disc with \(C_{N}\) as the boundary. Recall that we can think of \(\tilde{C}_{N}\) as the cone of \(C_{N}\). Note that \((C_{N})_{0}=(D_{N})_{0}\), thus using Lemma 4.2 we obtain that the map \(\mathsf{dDist}(\mathcal{C}(D_{N})))\to\mathsf{dDist}(\tilde{C}_{N})\) induced by the inclusion \(C_{N}{\to}D_{N}\) is an isomorphism. We have the commutative diagram (22) The simplicial distribution \(p\) is non-contextual, thus by Corollary 2.10\(p|_{\tilde{C}_{N}}\) is also non-contextual. Therefore by Diagram (22) the distribution \(p|_{\tilde{C}_{N}}\) can be extended to a distribution on \(\mathcal{C}(D_{N})\). In particular, \(p|_{C_{N}}\) extended to a distribution on \(D_{N}\). By Proposition 3.10 we obtain the result. 
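Proposition 4.4 below, and later Theorem 4.9, reduce non-contextuality to the circle inequalities of Definition 3.5, which are straightforward to test numerically. The following Python sketch is our own illustration, written in the expectation coordinates of Eq. (3); the boundary expectations used in the two checks are the familiar deterministic and PR-box values.

```python
from itertools import product

def satisfies_circle_inequalities(tau_bar, tol=1e-9):
    """Check 0 <= N - 2 + sum_i (-1)^{a_i} * tau_bar[i] over all sign vectors
    (a_1, ..., a_N) with sum_i a_i = N + 1 mod 2, cf. Definition 3.5."""
    n = len(tau_bar)
    return all(
        n - 2 + sum((-1) ** a * t for a, t in zip(signs, tau_bar)) >= -tol
        for signs in product((0, 1), repeat=n)
        if sum(signs) % 2 == (n + 1) % 2
    )

# boundary expectations of a deterministic distribution pass the test ...
assert satisfies_circle_inequalities([1, 1, 1, 1])
# ... while those of a PR box (odd number of -1 entries) violate a 4-circle inequality
assert not satisfies_circle_inequalities([1, 1, 1, -1])
```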
**Proposition 4.4**.: _A distribution \(p\in\mathsf{sDist}(\tilde{C}_{N})\) is non-contextual if and only if \(p|_{C_{N}}\) satisfies the \(N\)-circle inequalities._ Proof.: Forward direction is proved in Lemma 4.3. For the converse, we will need the following simplicial sets: Figure 10: (a) \(Z\) is obtained from the \(N\)-circle by attaching two edges. (b) \(W\) is obtained from a classical \(N\)-disk by attaching a triangle. (c) \(V\) is obtained from \(W\) by attaching an \(N\)-cycle space. * \(Z\subset\tilde{C}_{N}\) denotes the \(1\)-dimensional simplicial subset obtained by gluing two edges to \(C_{N}\) as depicted in Fig. (10a). * \(W\) is obtained by gluing a triangle \(\Delta^{2}\) to \(D_{N}\) as in Fig. (10b). * \(V\) is the simplicial set obtained by gluing \(\tilde{C}_{N}\) and \(W\) along \(Z\) as in Fig. (10c): \[V=\tilde{C}_{N}\cup_{Z}W.\] (23) Let \(p\) be a simplicial distribution on \(\tilde{C}_{N}\) such that \(p|_{C_{N}}\) satisfies the \(N\)-circle inequalities. By Proposition 3.10 we conclude that the restriction of \(p\) on the other two circles on \(Z\) also satisfies the circle inequalities. Therefore Corollary 3.12 implies that \(p|_{Z}\) extends to a simplicial distribution \(q\) on \(W\). Since \(p\) and \(q\) match on \(Z\) there is a simplicial distribution \(P\) on \(V\) such that \(P|_{\tilde{C}_{N}}=p\) and \(P|_{W}=q\). Now, consider the following decomposition given in Fig. (11): \[V=L\cup_{M}R\] where \(L=\partial\Delta^{3}\) is the boundary of the tetrahedron, \(M=\Delta^{2}\) a triangle. Let \(\tilde{R}\) denote the simplicial subset of \(R\) obtained by omitting the bottom triangles. Note that \(\tilde{R}\) is an \((N-1)\)-cycle scenario. Let \(Q\) be the restriction of \(P\) on \(L\cup_{M}\tilde{R}\). The simplicial distribution \(Q\) is non-contextual if and only if \(Q|_{L}\) and \(Q|_{\tilde{R}}\) are both non-contextual by (Gluing) Lemma 2.11. By [7, Proposition 4.12] every simplicial distribution on \(\partial\Delta^{3}\) is non-contextual. Therefore it suffices to have \(Q|_{\tilde{R}}\) non-contextual to guarantee that \(Q\) is non-contextual. In fact, \(Q|_{\tilde{R}}\) is the restriction of \(P|_{R}\) on \(\tilde{R}\), thus by Proposition 3.10 we conclude that \(Q\) satisfies the circle inequalities on the lower circle of \(\tilde{R}\). By induction on \(N\), we conclude that \(Q|_{\tilde{R}}\) is non-contextual. Finally, by Corollary 2.10 this implies that \(p=Q|_{\tilde{C}_{N}}\) is non-contextual. Combining this result with Proposition 3.10 gives the following result which will be used in the generalization of Proposition 4.4 to the flower scenario. **Corollary 4.5**.: _A distribution \(p\) on \(\tilde{C}_{N}\) is non-contextual if and only if it extends to a distribution on \(\tilde{C}_{N}\cup_{C_{N}}D_{N}\)._ Taking \(N=3\) in Corollary 4.5 gives us a sufficient and necessary condition when a distribution on the boundary of a triangle can be extended to a distribution on the triangle. Thus we obtain a useful result that characterizes the image of the map \(f:\mathsf{sDist}(X)\to\mathsf{sDist}(X^{(1)})\) introduced in Figure 11: The space \(V\) partitioned into three simplicial subsets \(L\), \(M\) and \(R\). (7). (We remark that the following result still holds when the restriction that \(\partial\sigma_{i}\) does not contain non-degenerate edges is removed in Proposition 2.15.) **Corollary 4.6**.: _Let \(X\) be a \(2\)-dimensional simplicial set. 
A distribution \(q\in\mathsf{sDist}(X^{(1)})\) is in the image of the map \(f\) in (7) if and only if \((q^{0}_{d_{0}\sigma},q^{0}_{d_{1}\sigma},q^{0}_{d_{2}\sigma})\) satisfies the \(3\)-circle inequality for all \(\sigma\in X_{2}\)._ To generalize Proposition 4.4 to the flower scenario we need a stronger version of the Gluing lemma. **Lemma 4.7**.: _Let \(X=\cup_{i=1}^{m}A_{i}\) such that \(A_{i}\cap A_{j}=\Delta^{n}\) for every \(i\neq j\). Then \(p\in\mathsf{sDist}(X)\) is non-contextual if and only if \(p|_{A_{i}}\) is non-contextual for every \(1\leq i\leq m\)._ Proof.: Follows by induction and Lemma 2.11. The flower scenario is obtained by gluing lines at their end points. Let \(L_{N}\) denote the simplicial set consisting of the generating \(1\)-simplices \(\tau_{1},\cdots,\tau_{N}\) together with the identifying relations \[d_{i^{\prime}_{1}}\tau_{1}=d_{i_{2}}\tau_{2},\;d_{i^{\prime}_{2}}\tau_{2}=d_ {i_{3}}\tau_{3}\;\cdots\;d_{i^{\prime}_{N-1}}\tau_{N-1}=d_{i_{N}}\tau_{N}\] where \(i_{k}\neq i^{\prime}_{k}\in\{0,1\}\). Topologically this simplicial set represents a line of length \(N\). **Definition 4.8**.: Let \(X(N_{1},\cdots,N_{k})\) denote the simplicial set obtained by gluing the lines \(L_{N_{1}},\cdots,L_{N_{k}}\) at their boundary, i.e., the two terminal points \(v\) and \(w\); see Fig. (12a). We will call the cone of \(X(N_{1},\cdots,N_{k})\) a _flower scenario_. **Theorem 4.9**.: _Let \(\mathcal{C}(X)\) denote the flower scenario where \(X=X(N_{1},\cdots,N_{k})\). A distribution \(p\in\mathsf{sDist}(\mathcal{C}(X))\) is non-contextual if and only if for every circle \(C_{N}\) on \(X\) the restriction \(p|_{C_{N}}\) satisfies the \(N\)-circle inequalities._ Proof.: Forward direction follows from Lemma 4.3. For the converse, we introduce the following simplicial sets: * \(Z\subset\mathcal{C}(X)\) denotes the \(1\)-dimensional simplicial set obtained by gluing two edges \(\tau_{1}\) and \(\tau_{2}\) to \(X\) as depicted in Fig. (12b). Figure 12: (a) \(X\) is obtained by gluing lines at their end points. (b) \(Z\) comes with an additional piece consisting of two edges. (c) \(W\) is obtained by filling the circles in \(Z\) with classical disks. * \(W\) is obtained by filling in the circles in \(Z\) by classical disks as in Fig. (12c). * Gluing \(\mathcal{C}(X)\) with \(W\) along the intersection \(Z\) we obtain \[V=\mathcal{C}(X)\cup_{Z}W.\] Let \(p\) be a simplicial distribution on \(\mathcal{C}(X)\) satisfying the circle inequalities for every circle in \(X\). Moreover, by Proposition 3.10 the distribution \(p\) also satisfies the circle inequalities for the remaining circles in the larger space \(Z\) since on these circles the distribution extends to classical disks contained in \(\mathcal{C}(X)\). Then Corollary 3.12 implies that \(p|_{Z}\) extends to a simplicial distribution \(q\) on \(W\). The two distributions \(p\) and \(q\) give a distribution \(P\) on \(V\). Now, we define the following simplicial subsets of \(V\): * \(M\) denotes the triangle in Fig. (12c) with two of the edges given by \(\tau_{1}\) and \(\tau_{2}\). * \(\tilde{V}_{i}\) is obtained by gluing \(\mathcal{C}(L_{N_{i}})\) and \(M\) along \(\tau_{1}\) and \(\tau_{2}\). * \(V_{i}\) is obtained by gluing \(\tilde{V}_{i}\) and the classical disk contained in \(W\) whose boundary coincides with \(\partial\tilde{V}_{i}\); see Fig. (13). Note that \(M=\tilde{V}_{i}\cap\tilde{V}_{j}=V_{i}\cap V_{j}\) for distinct \(i,j\). In addition, we obtain another decomposition of \(V\) as given in Fig. 
(13): \[V=V_{1}\cup_{M}V_{1}\cup_{M}\cdots\cup_{M}V_{k}.\] By Corollary 4.5 the simplicial distribution \(P|_{\tilde{V}_{i}}\) is non-contextual since it is the restriction of \(P|_{V_{i}}\). Therefore by Lemma 4.7 the distribution \(P|_{\tilde{V}_{1}\cup_{M}\tilde{V}_{2}\cup_{M}\cdots\cup_{M}\tilde{V}_{k}.}\) is non-contextual, so by Corollary 2.10 the restriction \(p=P|_{\mathcal{C}(X)}\) is non-contextual. ## 5 Collapsing measurement spaces In this section we study the effect of collapsing simplices in the measurement space. This method is very effective in describing the vertices of the polytope of simplicial distributions. Let us begin with the simplest case of collapsing a single edge to a point. Recall that \(\Delta^{1}\) is the simplicial set representing an edge. It has a single generating simplex in dimension \(1\) denoted by \(\sigma^{01}\). A point is represented by the simplicial set \(\Delta^{0}\). Its \(n\)-simplices are given by \(c_{n}\) obtained by applying the \(s_{0}\) degeneracy map \(n\)-times: \(s_{0}\cdots s_{0}(\sigma^{0})=\sigma^{0\cdots 0}\). Collapsing an edge to a point can be represented by a simplicial set map \[\pi:\Delta^{1}\to\Delta^{0}\] that sends the generating simplex \(\sigma^{01}\) to the degenerate simplex \(\sigma^{00}\). Now, applying the cone construction to this map we obtain a simplicial set map \[\mathcal{C}\pi:\mathcal{C}(\Delta^{1})\to\mathcal{C}(\Delta^{0})\] Recall from Fig. (2a) that \(\mathcal{C}(\Delta^{1})\) can be identified with a triangle whose generating simplex is given by \((c,\sigma^{01})\). A similar topological intuition works for \(\mathcal{C}(\Delta^{0})\). It represents an edge whose generating simplex is \((c,\sigma^{0})\). From this we can work out the map \(\mathcal{C}\pi\) as follows: The generating simplex \((c,\sigma^{01})\) is mapped to \((c,s_{0}\sigma^{0})\) since \(\pi\) sends \(\sigma^{01}\) to the degenerate simplex \(s_{0}\sigma^{0}\). By the simplicial structure of the cone described in Definition 2.7 we have \[(c,s_{0}\sigma^{0})=s_{1}(c,\sigma_{0}). \tag{24}\] As we have seen in Section 2.2 the map \(\mathcal{C}\pi\) between the cone spaces induces a map between the associated simplicial distributions \[(\mathcal{C}\pi)^{*}:\mathsf{sDist}(\mathcal{C}(\Delta^{0}))\to\mathsf{sDist}( \mathcal{C}(\Delta^{1}))\] A simplicial distribution \(p\in\mathsf{sDist}(\mathcal{C}(\Delta^{0}))\) is determined by \(p_{(c,\sigma^{0})}\in D_{R}(\mathbb{Z}_{2})\). Let \(q\) denote the image of \(p\), i.e., \(q=(\mathcal{C}\pi)^{*}(p)\). Then \(q\) will be determined by \(q_{(c,\sigma^{01})}\), a distribution on \(\mathbb{Z}_{2}^{2}\). It is given as follows; see Figure (14): \[q^{ab}_{(c,\sigma^{01})} =p^{ab}_{\mathcal{C}\pi(c,\sigma^{01})} \tag{25}\] \[=p^{ab}_{(c,s_{0}\sigma^{0})}\] \[=p^{ab}_{s_{1}(c,\sigma^{0})}\] \[=\left(D_{R}(s_{1})(p_{(c,\sigma^{0})})\right)^{ab}\] \[=\left\{\begin{array}{ll}p^{0}_{(c,\sigma^{0})}&(a,b)=(0,0)\\ 1-p^{0}_{(c,\sigma^{0})}&(a,b)=(1,0)\\ 0&\text{otherwise},\end{array}\right.\] where in the first line we use \(q=p\circ\mathcal{C}\pi\), in the second line the definition of \(\mathcal{C}\pi\), in the third line Eq. (24), in the fourth line compatibility of \(p\) with the simplicial structure, and in the fifth line the definition of \(D_{R}(s_{1})\). We will refer to this distribution as a _collapsed distribution_ on the triangle. Next we consider the general case. 
Figure 14: Collapsed distribution on the triangle.

Let \(X\) be a 1-dimensional simplicial set and \(\sigma\) denote a non-degenerate 1-simplex such that \[d_{0}(\sigma)\neq d_{1}(\sigma). \tag{26}\] We will write \(X/\sigma\) for the simplicial set obtained by collapsing this edge. More formally, \(X/\sigma\) consists of the same generating simplices as \(X\) except \(\sigma\), and the simplicial relations are inherited from \(X\). We will write \(\pi:X\to X/\sigma\) for the collapsing map as before. We also have \(\mathcal{C}\pi:\mathcal{C}(X)\to\mathcal{C}(X/\sigma)\) that collapses the triangle obtained as the cone of \(\sigma\). In Fig. (15) we represent the collapsing map \(\pi:C_{4}\to C_{3}\) between two circle scenarios.

Footnote 3: The simplicial set \(X/\sigma\) is the quotient of \(X\) by the simplicial subset generated by \(\sigma\) in the sense of [7, Section 5.3].

**Lemma 5.1**.: _For a collapsing map \(\pi:X\to X/\sigma\), the following properties hold._ 1. _The map_ \[(\mathcal{C}\pi)^{*}:\mathsf{sDist}(\mathcal{C}(X/\sigma))\to\mathsf{sDist}(\mathcal{C}(X))\] _is injective. Moreover, a distribution \(q\in\mathsf{sDist}(\mathcal{C}(X))\) lies in the image of \((\mathcal{C}\pi)^{*}\) if and only if \(q_{(c,\sigma)}\) is a collapsed distribution._ 2. _The map_ \[(\mathcal{C}\pi)^{*}:\mathsf{dDist}(\mathcal{C}(X/\sigma))\to\mathsf{dDist}(\mathcal{C}(X))\] _is injective. Moreover, a deterministic distribution \(\delta^{t}\in\mathsf{dDist}(\mathcal{C}(X))\) lies in the image of \((\mathcal{C}\pi)^{*}\) if and only if \(t_{(c,\sigma)}\in\{(0,0),(1,0)\}\)._

Proof.: The surjectivity of \((\mathcal{C}\pi)_{n}\) for every \(n\geq 0\) implies the injectivity of \((\mathcal{C}\pi)^{*}\) in both cases. For \(p\in\mathsf{sDist}(\mathcal{C}(X/\sigma))\) the definition of the collapsing map implies that for every generating \(1\)-simplex \(\tau\neq\sigma\) in \(X\) we have \((\mathcal{C}\pi)^{*}(p)_{(c,\tau)}=p_{(c,\tau)}\). Using Eq. (25) we obtain that \((\mathcal{C}\pi)^{*}(p)_{(c,\sigma)}\) is a collapsed distribution. By part (1), a deterministic distribution \(\delta^{t}\in\mathsf{dDist}(\mathcal{C}(X))\) lies in the image of \((\mathcal{C}\pi)^{*}\) if and only if \(\delta^{t_{(c,\sigma)}}\) is a collapsed distribution. This is equivalent to \(t_{(c,\sigma)}\in\{(0,0),(1,0)\}\).

**Theorem 5.2**.: _Let \(X\) be a \(1\)-dimensional simplicial set, and \(\pi:X\to X/\sigma\) denote a collapsing map. For \(p\in\mathsf{sDist}(\mathcal{C}(X/\sigma))\) and \(q=(\mathcal{C}\pi)^{*}(p)\), the following hold._ 1. _\(p\) is contextual if and only if \(q\) is contextual._ 2. _\(p\) is strongly contextual if and only if \(q\) is strongly contextual._ 3. _\(p\) is a vertex if and only if \(q\) is a vertex._ 4. _\(p\) is a deterministic distribution if and only if \(q\) is a deterministic distribution._

Proof.: Part (1): Proposition 2.9 implies that if \(q\) is contextual then \(p\) is contextual. For the converse assume that \(q\) is non-contextual. Then there exists \(d=\sum_{i=1}^{n}d(s^{i})\delta^{s^{i}}\in D_{R}\left(\mathsf{dDist}(\mathcal{C}(X))\right)\), where \(d(s^{i})\neq 0\), such that for every simplex \(\tau\) in \(\mathcal{C}(X)\) we have \[q_{\tau}=\sum_{i=1}^{n}d(s^{i})\delta^{s^{i}_{\tau}}. \tag{27}\] By part (1) of Lemma 5.1 the distribution \(q_{(c,\sigma)}\) is collapsed. By Eq. (27) we conclude that \(s^{i}_{(c,\sigma)}\in\{(0,0),(1,0)\}\) for every \(i=1,\cdots,n\). Therefore, by part (2) of Lemma 5.1 we can find \(\delta^{r^{i}}\in\)
\(\mathsf{dDist}(\mathcal{C}(X/\sigma))\) such that \((\mathcal{C}\pi)^{*}(\delta^{r^{i}})=\delta^{s^{i}}\). We define \(\tilde{d}\in D_{R}\left(\mathsf{sDist}(\mathcal{C}(X/\sigma))\right)\) by \(\sum_{i=1}^{n}d(r^{i})\delta^{r^{i}}\). Then \(D_{R}((\mathcal{C}\pi)^{*})(\tilde{d})=d\). Therefore, using the commutativity of Diagram (5) for \(f=\mathcal{C}\pi\), we obtain that \[(\mathcal{C}\pi)^{*}(\Theta(\tilde{d}))=\Theta(D_{R}\left((\mathcal{C}\pi)^{* }\right)(\tilde{d}))=\Theta(d)=(\mathcal{C}\pi)^{*}(p)\] Since \((\mathcal{C}\pi)^{*}\) is injective \(\Theta(\tilde{d})=p\), which means that \(p\) is non-contextual. Part (2): If \(q\) is strongly contextual then \(p\) is strongly contextual by [14, Lemma 5.19, part (1)]. For the converse assume that \(s\in\operatorname{supp}(q)\). Then \(q_{(c,\sigma)}\left(s(c,\sigma)\right)\neq 0\). By part (1) of Lemma 5.1 the distribution \(q_{(c,\sigma)}\) is collapsed. We conclude that \(s_{(c,\sigma)}\in\{(0,0),(1,0)\}\). By part (2) of Lemma 5.1 there exists \(\delta^{r}\in\mathsf{dDist}(X/\sigma)\) such that \((\mathcal{C}\pi)^{*}(\delta^{r})=\delta^{s}\). To show that \(r\in\operatorname{supp}(p)\) it is enough to prove that for every non-degenerate simplex \(\tau\in(X/\sigma)_{1}\) we have \(p_{(c,\tau)}(r_{(c,\tau)})\neq 0\). Note that since \(\tau\neq\sigma\) we have \[p_{(c,\tau)}=q_{(c,\tau)}\;\;\text{and}\;\;\delta^{r}_{(c,\tau)}=(\mathcal{C} \pi)^{*}(\delta^{r})_{(c,\tau)}=\delta^{s}_{(c,\tau)}.\] Therefore \(p_{(c,\tau)}(r_{(c,\tau)})\neq 0\) since \(s\in\operatorname{supp}(q)\). Part (3): According to [14, Corollary 5.16] every vertex in the preimage of \(q\) under \((\mathcal{C}\pi)^{*}\) is a vertex in \(\mathsf{sDist}(\mathcal{C}X/\sigma)\). Because of the injectivity of \((\mathcal{C}\pi)^{*}\) this preimage contains just \(p\), thus \(p\) is a vertex. For the converse, suppose we have distributions \(q_{1},q_{2}\in\mathsf{sDist}(\mathcal{C}X)\) and \(0<\alpha<1\) such that \[q=\alpha q^{1}+(1-\alpha)q^{2}\] By part (1) of Lemma 5.1 the distribution \(q_{(c,\sigma)}\) is a collapsed distribution, thus \(q_{(c,\sigma)}^{1}\) and \(q_{(c,\sigma)}^{2}\) are also collapsed. Again, by part (1) of Lemma 5.1 there exists \(\tilde{q}^{1},\tilde{q}^{2}\in\mathsf{sDist}(\mathcal{C}(X/\sigma))\) such that \((\mathcal{C}\pi)^{*}(\tilde{q}^{i})=q^{i}\), which implies that \[q=\alpha(\mathcal{C}\pi)^{*}(\tilde{q}^{1})+(1-\alpha)(\mathcal{C}\pi)^{*}( \tilde{q}^{2})=(\mathcal{C}\pi)^{*}(\alpha\tilde{q}^{1}+(1-\alpha)\tilde{q}^ {2})\] The map \((\mathcal{C}\pi)^{*}\) is injective, therefore \(p=\alpha\tilde{q}^{1}+(1-\alpha)\tilde{q}^{2}\). Since \(p\) is a vertex and \(0<\alpha<1\), we conclude that \(\tilde{q}^{1}=\tilde{q}^{2}\). Therefore \(q^{1}=q^{2}\). Part (4): By [14, Proposition 5.14] every deterministic distribution is a vertex in the polytope of simplicial distributions. Thus we can characterize deterministic distributions as the only non-contextual vertices. Then we obtain this result from parts (1) and (3). **Corollary 5.3**.: _A distribution \(p\in\mathsf{sDist}(\mathcal{C}(X/\sigma))\) is a contextual vertex if and only if \((\mathcal{C}\pi)^{*}(p)\) is a contextual vertex._ Proof.: This follows directly from parts (1) and (3) of Theorem 5.2. **Remark 5.4**.: The conclusions of Theorem 5.2 and Corollary 5.3 hold for more general kinds of collapsing maps obtained by collapsing a set of edges in sequence. 
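In edge coordinates the pullback \((\mathcal{C}\pi)^{*}\) of Lemma 5.1 amounts to a simple substitution: the collapsed edge becomes deterministic and the two cone edges over the identified vertices share a common value. The following Python sketch is our own encoding (the dictionary representation and edge names are illustrative); it is the same substitution that drives the computation in the next subsection.

```python
def pullback_along_collapse(p, sigma, v, w, u):
    """Edge coordinates (probabilities of outcome 0) of q = (C pi)^*(p), where
    pi collapses the edge sigma of X, with endpoints v and w, to the vertex u
    of X/sigma.  The input p is a distribution on the collapsed cone scenario."""
    q = dict(p)
    q[("c", v)] = q[("c", w)] = q.pop(("c", u))  # cone edges over v, w are identified
    q[sigma] = 1.0                               # the collapsed edge is deterministic
    return q

# collapsing the edge tau_11 of the 4-circle (Fig. 15): a distribution on the
# 3-cycle scenario pulls back to one on the CHSH scenario
p = {("c", "v0"): 0.5, "tau00": 0.5, ("c", "w0"): 0.5,
     "tau10": 0.5, ("c", "u"): 0.5, "tau01": 0.5}
q = pullback_along_collapse(p, "tau11", "v1", "w1", "u")
```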
### Application to Bell inequalities

Consider the collapsing map \(\pi:X\to X/\sigma\) and suppose that the Bell inequalities for the scenario \(\mathcal{C}(X)\) are known. By part (1) of Theorem 5.2 a simplicial distribution \(p\in\mathsf{sDist}(\mathcal{C}(X/\sigma))\) is non-contextual if and only if \(q=(\mathcal{C}\pi)^{*}(p)\in\mathsf{sDist}(\mathcal{C}(X))\) is non-contextual. This is equivalent to the condition that \(q\) satisfies the Bell inequalities for the scenario \(\mathcal{C}(X)\). From these Bell inequalities we can extract those for the collapsed scenario \(\mathcal{C}(X/\sigma)\). Let us illustrate how the collapsing technique can be applied to cycle scenarios. Let \(\pi:C_{4}\to C_{3}\) denote the map that collapses one of the edges in the 4-circle space as in Fig. (15). By Proposition 2.15 a simplicial distribution \(p\in\mathsf{sDist}(\tilde{C}_{3})\) is specified by a tuple \[p=(p_{(c,v_{0})},p_{\tau_{00}},p_{(c,w_{0})},p_{\tau_{10}},p_{(c,u)},p_{\tau_{01}})\] where each entry is a distribution on \(\mathbb{Z}_{2}\). On the other hand, a simplicial distribution \(q\) on \(\tilde{C}_{4}\) is specified by a tuple \[(q_{(c,v_{0})},q_{\tau_{00}},q_{(c,w_{0})},q_{\tau_{10}},q_{(c,v_{1})},q_{\tau_{11}},q_{(c,w_{1})},q_{\tau_{01}})\] Then the image of \(p\) under the map \((\mathcal{C}\pi)^{*}\) gives us \(q=(p_{(c,v_{0})},p_{\tau_{00}},p_{(c,w_{0})},p_{\tau_{10}},p_{(c,u)},1,p_{(c,u)},p_{\tau_{01}})\). This latter simplicial distribution is non-contextual if and only if it satisfies the 4-circle inequalities (see Eq. (12)): \[0\leq p_{\tau_{00}}+p_{\tau_{10}}+1-p_{\tau_{01}}\leq 2\] \[0\leq p_{\tau_{00}}+p_{\tau_{10}}-1+p_{\tau_{01}}\leq 2\] \[0\leq p_{\tau_{00}}-p_{\tau_{10}}+1+p_{\tau_{01}}\leq 2\] \[0\leq-p_{\tau_{00}}+p_{\tau_{10}}+1+p_{\tau_{01}}\leq 2.\] Half of these inequalities are trivial, so this set of inequalities is equivalent to \[p_{\tau_{00}}+p_{\tau_{10}}-p_{\tau_{01}}\leq 1\] \[p_{\tau_{00}}+p_{\tau_{10}}+p_{\tau_{01}}\geq 1\] \[p_{\tau_{00}}-p_{\tau_{10}}+p_{\tau_{01}}\leq 1\] \[-p_{\tau_{00}}+p_{\tau_{10}}+p_{\tau_{01}}\leq 1\] which constitute the non-trivial 3-circle inequalities. Now, we will apply this technique to find the Bell inequalities for the cone of the 1-dimensional space given in Fig. (16b). This space will be our collapsed space \(X/\sigma\). The 1-dimensional simplicial set \(X\) is the complete bipartite graph \(K_{3,3}\) given in Fig. (16a). We denote the edge from \(v_{i}\) to \(w_{j}\) by \(\tau_{ij}\). Note that the measurement space \(\mathcal{C}(X)\) represents the \((3,3,2,2)\) Bell scenario. It has three kinds of Bell inequalities: (1) trivial, (2) circle inequalities, and (3) Froissart inequalities [13]. We are interested in the latter type. An example of Froissart inequalities is the following (see [21, Eq. (21)]): \[p^{0}_{(c,v_{0})}+p^{0}_{(c,w_{0})}-p^{00}_{(c,\tau_{00})}-p^{00}_{(c,\tau_{01})}-p^{00}_{(c,\tau_{02})}-p^{00}_{(c,\tau_{10})}-p^{00}_{(c,\tau_{20})}-p^{00}_{(c,\tau_{11})}+p^{00}_{(c,\tau_{12})}+p^{00}_{(c,\tau_{21})}\geq-1\] It will be convenient for us to convert this inequality to one that only contains distributions on edges. For this we will use Eq. (2).
This substitution gives us the following inequality: \[\begin{split} p^{0}_{\tau_{12}}+p^{0}_{\tau_{21}}-p^{0}_{\tau_{0 0}}-p^{0}_{\tau_{01}}&-p^{0}_{\tau_{02}}-p^{0}_{\tau_{10}}-p^{0} _{\tau_{20}}-p^{0}_{\tau_{11}}\\ &-p^{0}_{(c,v_{0})}-p^{0}_{(c,w_{0})}-p^{0}_{(c,v_{1})}-p^{0}_{(c, w_{1})}\geq-6\end{split} \tag{28}\] Figure 15: The edge \(\tau_{11}\) (blue) is collapsed. We observe that the only edges that do not appear in this inequality are \((c,v_{2}),(c,w_{2}),\tau_{22}\). Applying the symmetries of \(X\), more precisely the automorphism group of the graph \(K_{3,3}\), we can obtain 9 distinct such inequalities in which the edges \((c,v_{i}),(c,w_{j}),\tau_{ij}\) do not appear where \(i,j\in\{0,1,2\}\). For example, for \(i=1,j=2\) we have \[\begin{split} p^{0}_{\tau_{02}}+p^{0}_{\tau_{10}}-p^{0}_{\tau_{0 0}}-p^{0}_{\tau_{01}}&-p^{0}_{\tau_{22}}-p^{0}_{\tau_{21}}-p^{0} _{\tau_{20}}-p^{0}_{\tau_{11}}\\ &-p^{0}_{(c,v_{0})}-p^{0}_{(c,w_{0})}-p^{0}_{(c,v_{2})}-p^{0}_{(c,w_{1})}\geq-6\end{split} \tag{29}\] Note that these 9 inequalities are in distinct orbits under the action of \(\mathsf{dDist}(\mathcal{C}(X))\) since different edges appear in every one of them (see Example 2.24). We can find the number of the Froissart inequalities in every orbit. A deterministic distribution \(\delta^{s}\) fixes the inequality (28) if and only if \((\delta^{s}_{\tau})^{0}=1\) for every edge \(\tau\) that appears in the inequality. In this case, \(\delta^{s}\) is the identity, i.e., \((\delta^{s}_{\sigma})^{00}=1\) for every triangle \(\sigma\) in \(\mathcal{C}(X)\). This implies that the size of the orbit of this inequality is equal to \(|\mathsf{dDist}(\mathcal{C}(X))|=2^{6}=64\). Same counting argument works for the rest of the 9 inequalities. Therefore there are \(9\cdot 64=576\) Bell-inequalities of this type. Next, we apply our collapsing technique to generate a new Bell inequality, i.e., one that is not a circle inequality for the cone of the scenario given in Fig. (16b). We will use the following collapsing map \(\pi:X\to X/\sigma\) where \(\sigma=\tau_{22}\). For a given circle on \(X\) there is a corresponding circle inequality. Every such inequality will appear as a Bell inequality in the collapsed scenario \(X/\sigma\) if the corresponding circle does not contain the collapsed edge \(\tau_{22}\). If the circle contains \(\tau_{22}\) the resulting Bell inequality will be a circle inequality of size one less as in Definition 3.5. Given a simplicial distribution \(p\in\mathsf{sDist}(X/\sigma)\) the image \(q=(\mathcal{C}\pi)^{*}(p)\) satisfies \[q^{0}_{\tau_{22}}=1\;\;\text{and}\;\;q^{0}_{(c,w_{2})}=q^{0}_{(c,v_{2})}=p^{0} _{(c,u)}.\] Substituting this in (29) we obtain the following Bell inequality of the scenario \(X/\sigma\): \[p^{0}_{\tau_{02}}+p^{0}_{\tau_{10}}-p^{0}_{\tau_{00}}-p^{0}_{\tau_{01}}-p^{0} _{\tau_{21}}-p^{0}_{\tau_{20}}-p^{0}_{\tau_{11}}-p^{0}_{(c,v_{0})}-p^{0}_{(c,w _{0})}-p^{0}_{(c,u)}-p^{0}_{(c,w_{1})}\geq-5 \tag{30}\] This inequality is a new Bell inequality, that is, it is not a circle inequality, and it belongs to a scenario that is not a Bell scenario. The latter observation implies that going beyond Bell scenarios can produce simpler Bell inequalities that are not circle inequalities; see [22]. **Remark 5.5**.: Let \(X\) be a 1-dimensional simplicial set. Consider a Bell inequality for the cone scenario \(\mathcal{C}(X)\) expressed in the edge coordinates (see Proposition 2.15). Then in the known examples Figure 16: (a) The complete bipartite graph \(K_{3,3}\). 
(b) The graph obtained by collapsing the edge \(\tau_{22}\) to the vertex \(u\). the edges that appear with non-trivial coefficients in this Bell inequality form a loop (i.e., a circle with possible self intersections) on \(X\). It is a curious question whether this observation holds for every \(1\)-dimensional \(X\). If so, it gives a topological restriction on the form of possible Bell inequalities hence a nice structural result in contextuality. ### Detecting contextual vertices In this section, the \(1\)-circle \(C_{1}\) will play a fundamental role to detect contextual vertices in the scenarios of interest. Let \(\tau\) denote the generating \(1\)-simplex of \(C_{1}\). A simplicial distribution \(p\in\mathsf{sDist}(\tilde{C}_{1})\) is specified by \[(p_{(c,\tau)}^{00},p_{(c,\tau)}^{01},p_{(c,\tau)}^{10},p_{(c,\tau)}^{11})\] where \[p_{(c,\tau)}^{ab}\in[0,1],\ \ \sum_{a,b}p_{(c,\tau)}^{ab}=1\ \ \text{ and }\ \ p_{(c,\tau)}^{00}+p_{(c,\tau)}^{01}=p_{(c,\tau)}^{00}+p_{(c,\tau)}^{11}.\] This implies that \(p_{(c,\tau)}^{01}=p_{(c,\tau)}^{11}\). Therefore the polytope \(\mathsf{sDist}(\tilde{C}_{1})\subset\mathbb{R}^{3}\) is a triangle with two deterministic vertices and a unique contextual vertex \(p_{-}\) given in Fig. (17). The following example shows how to obtain a contextual vertex in an arbitrary cycle scenario from the contextual vertex in Fig. (17b) using the collapsing technique. **Example 5.6**.: Let \(\pi:C_{N}\to C_{1}\) denote the map that collapses \(\tau_{2},\cdots,\tau_{N}\) in the \(N\)-circle scenario. We will write \(\tau=\tau_{1}\) for notational simplicity. Let \(q=(\mathcal{C}\pi)^{*}(p_{-})\) where \(p-\) is the unique contextual vertex of \(\mathsf{sDist}(\tilde{C}_{1})\). We have \[q_{(c,d_{i}(\tau))}^{0}=(p_{-})_{(c,d_{i}(\tau))}^{0}=1/2\] (see Fig. (17b)). Then by Eq. (25) for \(\tau^{\prime}=\tau_{2}\) and \(\tau_{N}\) we have \[q_{(c,\tau^{\prime})}=p_{+}.\] We can continue this way using Eq. (25) to obtain that for an edge \(\tau^{\prime}\) of the \(N\)-cycle scenario we have \[q_{(c,\tau^{\prime})}=\left\{\begin{array}{ll}p_{-}&\tau^{\prime}=\tau\\ p_{+}&\text{otherwise}.\end{array}\right.\] According to Corollary 5.3, \(q\) is a contextual vertex in the \(N\)-cycle scenario; see Fig. (18). The vertex detected in Example 5.6 generalizes the PR boxes defined in Example 2.24. Figure 17: The \(1\)-cycle scenario is obtained by identifying the edges \((0,1)\) and \((1,2)\) of a triangle. Deterministic (a) and contextual (b) vertices of the \(1\)-cycle scenario. **Definition 5.7**.: A _PR box_ on an \(N\)-cycle scenario is a simplicial distribution \(p\in\mathsf{sDist}(\tilde{C}_{N})\) such that \(p_{\sigma}=p_{\pm}\) for \(\sigma\in\{\sigma_{i}:\,i=1\cdots,N\}\) with the further restriction that the number of \(p_{-}\)'s is odd. It is clear from the definition that there are \(2^{N-1}\) PR boxes on the \(N\)-cycle scenario. All of them can be obtained from the one given in Example 5.6 by the action of \(\mathsf{dDist}(\tilde{C}_{N})\) in a way similar to the CHSH scenario discussed in Example 2.24. Therefore by Proposition 2.21 the PR boxes are contextual vertices in the cycle scenario. Next we describe a \(1\)-dimensional simplicial set obtained by gluing \(n\) copies of \(C_{1}\) at their vertex. More explicitly, this simplicial set, denoted by \(\vee_{i=1}^{n}C_{1}\), consists of the generating \(1\)-simplices \(\tau_{1},\cdots,\tau_{n}\) with the identifying relations \[d_{0}\tau_{i}=v=d_{1}\tau_{i},\ \ i=1,\cdots,n,\] where \(v\) is the unique vertex. 
The cone space \(\mathcal{C}(\vee_{i=1}^{n}C_{1})\) consists of \(n\) triangles given by \((c,\tau_{i})\) where \(i=1,\cdots,n\). We consider simplicial distributions on the cone of \(\vee_{i=1}^{n}C_{1}\). Such a distribution \(p\) is determined by the \(n\) distributions \(p_{(c,\tau_{i})}\) on \(\mathbb{Z}_{2}^{2}\). For convenience of notation we will write \((p_{1},\cdots,p_{n})\) for this tuple of distribution, i.e., \(p_{i}=p_{(c,\tau_{i})}\). **Proposition 5.8**.: _The polytope \(\mathsf{sDist}(\mathcal{C}(\vee_{i=1}^{n}C_{1}))\) can be identified with the following subpolytope of the \((n+1)\)-cube_ \[\{(a_{1},\ldots,a_{n},b)\in[0,1]^{n}:\,\frac{1-a_{i}}{2}\leq b\leq\frac{1+a_{i }}{2},\;\forall 1\leq i\leq n\} \tag{31}\] Proof.: The non-degenerate edges in \(\mathcal{C}(\vee_{i=1}^{n}C_{1})\) are \(\tau_{1},\ldots,\tau_{n},(c,v)\), and for every \(1\leq i\leq n\) the non-degenerate triangle \((c,\tau_{i})\) has the edges \((c,v),\tau_{i},(c,v)\). By Proposition 2.15 and Corollary 4.6 we see that \(\mathsf{sDist}(\mathcal{C}(\vee_{i=1}^{n}C_{1}))\) can be identified with the set of \((a_{1},\ldots,a_{n},b)\in[0,1]^{n}\) satisfying the following inequalities: \[\begin{array}{c}b+a_{i}+b\geq 1\\ b+a_{i}-b\leq 1\\ b-a_{i}+b\leq 1\\ -b+a_{i}+b\leq 1\end{array} \tag{32}\] for every \(1\leq i\leq n\). The set of inequalities in (32) is equivalent to \(\frac{1-a_{i}}{2}\leq b\leq\frac{1+a_{i}}{2}\), \(1\leq i\leq n\). **Proposition 5.9**.: _The polytope \(\mathsf{sDist}(\mathcal{C}(\vee_{i=1}^{n}C_{1}))\) has \(2^{n}+1\) vertices:_ Figure 18: 1. _There are two deterministic vertices given by_ \((\delta^{00},\cdots,\delta^{00})\) _and_ \((\delta^{11},\cdots,\delta^{11})\)_._ 2. _The contextual vertices are of the form_ \((p_{1},\cdots,p_{n})\) _where_ \(p_{i}\in\{p_{+},p_{-}\}\) _for every_ \(1\leq i\leq n\) _with at least one_ \(j\) _satisfying_ \(p_{j}=p_{-}\)_._ Proof.: The edge \((c,v)\) appears twice in every non-degenerate triangle of \(\mathcal{C}(\vee_{i=1}^{n}C_{1}))\), thus every outcome assignment \(s\) on this measurement space is determined by \(s_{(c,v)}\in\{0,1\}\). Therefore we have only \((\delta^{00},\cdots,\delta^{00})\) and \((\delta^{11},\cdots,\delta^{11})\) as deterministic distributions. Now, let us denote the polytope in (31) by \(P\) and find its vertices. Given an element \((a_{1},\ldots,a_{n},b)\in P\) such that \(a_{j}\notin\{0,1\}\) for some \(1\leq j\leq n\), there exists distinct \(a_{j}^{\prime},a_{j}^{\prime\prime}\in[0,1]\), and \(0<\alpha<1\) such that \[\frac{1-a_{j}^{\prime}}{2}\leq q\leq\frac{1+a_{j}^{\prime}}{2},\ \ \frac{1-a_{j}^{\prime\prime}}{2}\leq q\leq\frac{1+a_{j}^{\prime\prime}}{2},\ \ \text{and}\ \ \alpha a_{j}^{\prime}+(1-\alpha)a_{j}^{\prime\prime}=a_{j}\] Therefore we have \[(a_{1},\ldots,a_{n},q)=\alpha(a_{1},\ldots,a_{j}^{\prime},\ldots,a_{n},q)+(1- \alpha)(a_{1},\ldots,a_{j}^{\prime\prime},\ldots,a_{n},q)\] We conclude that if \((a_{1},\ldots,a_{n},q)\) is a vertex in \(P\), then \(a_{1},\ldots,a_{n}\in\{0,1\}\). In the case that \(a_{1}=\cdots=a_{n}=1\), we have two vertices \((1,\ldots,1,0)\) and \((1,\ldots,1,1)\). Let \(f:\mathcal{C}(\vee_{i=1}^{n}C_{1})\to P\) denote the bijection given in Proposition 5.8. One can see that by applying the inverse of \(f\) we obtain the two deterministic vertices \((\delta^{00},\cdots,\delta^{00})\) and \((\delta^{11},\cdots,\delta^{11})\). 
On the other hand, if \(a_{j}=0\) for some \(1\leq j\leq n\), then \[\frac{1}{2}=\frac{1-0}{2}\leq q\leq\frac{1+0}{2}=\frac{1}{2}.\] We obtain that \(q=\frac{1}{2}\). Therefore the rest of the vertices are of the form \((a_{1},\ldots,a_{n},\frac{1}{2})\) where \(a_{i}\in\{0,1\}\) for every \(i\) and \(a_{j}=0\) for at least one \(j\). By applying the inverse of \(f\) we obtain the desired contextual vertices. Our main result in this section relates a topological invariant, the fundamental group, to the number of contextual vertices. Given a \(1\)-dimensional simplicial set \(X\) regarded as a graph, consider a maximal tree \(T\subset X\). The collapsing map can be applied to the edges in \(T\) to obtain a map \(\pi:X\to X/T\) where \(X/T\) is of the form \(\vee_{i=1}^{n_{X}}C_{1}\). The number \(n_{X}\) is a topological invariant of the graph that counts the independent non-contractible circles. This number is independent of the chosen maximal tree. The fundamental group \(\pi_{1}(X)\) is defined to be the free group on the set of \(n_{X}\) edges in \(X_{1}^{\circ}-T_{1}^{\circ}\); see [23, Section 1.A]. **Theorem 5.10**.: _Let \(X\) be a connected \(1\)-dimensional measurement space and \(n_{X}\) denote the number of generators of the fundamental group \(\pi_{1}(X)\). Then there exist at least \((2^{n_{X}}-1)2^{|X_{0}|-1}\) contextual vertices in \(\mathsf{sDist}(\mathcal{C}(X))\)._ Proof.: For simplicity we will write \(n=n_{X}\). Let \(T\) be a maximal tree in \(X\). We have the collapsing map \(\pi:X\to X/T=\vee_{i=1}^{n}C_{1}\). According to Corollary 5.3, applying \((\mathcal{C}\pi)^{*}\) to the contextual vertices described in Proposition 5.9 we obtain contextual vertices of \(\mathsf{sDist}(\mathcal{C}(X))\). First, we will show that these vertices are in different orbits under the action of \(\mathsf{sDist}(\mathcal{C}(X))\). Given two different contextual vertices \((p_{1},\ldots,p_{n})\) and \((q_{1},\ldots,q_{n})\) of \(\mathsf{sDist}(\mathcal{C}(\vee_{i=1}^{n_{X}}C_{1}))\), there exists \(1\leq j\leq n\) such that \(p_{j}=p_{-}\) and \(q_{j}=p_{+}\). Fix one circle \(C\) in \(X\) such that the image of \(\pi|_{C}\) contains only \(\tau_{j}\) as a non-degenerate \(1\)-simplex (i.e., \(C\) is collapsed to the circle generated by \(\tau_{j}\)). We have \[(\mathcal{C}\pi)^{*}(p_{1},\ldots,p_{n})|_{\mathcal{C}(C)}=\mathcal{C}(\pi|_{C})^{*}(p_{j})=\mathcal{C}(\pi|_{C})^{*}(p_{-})\] which is the PR box of Example 5.6. On the other hand, one can see using the same technique of Example 5.6 that the restriction of \((\mathcal{C}\pi)^{*}(q_{1},\dots,q_{n})\) to \(\mathcal{C}(C)\) is the non-contextual distribution \(e_{+}\), the identity element of \(G_{\pm}(\mathcal{C}(X))\) (see Definition 2.22). Therefore these two restrictions are not in the same orbits under the action of \(\mathsf{sDist}(\mathcal{C}(C))\). We conclude that \((\mathcal{C}\pi)^{*}(p_{1},\dots,p_{n})\) and \((\mathcal{C}\pi)^{*}(q_{1},\dots,q_{n})\) are not in the same orbit under the action of \(\mathsf{sDist}(\mathcal{C}(X))\). So far we have proved that there are \(2^{n}-1\) contextual vertices in \(\mathsf{sDist}(\mathcal{C}(X))\) that lie in different orbits. Observe that every such vertex has \(p_{\pm}\) on every non-degenerate triangle of \(\mathcal{C}(X)\), thus the only two outcome assignments that fix this vertex are those that restrict to \(\delta^{00}\) on every non-degenerate triangle or \(\delta^{11}\) on every non-degenerate triangle.
We conclude that the orbit of such a vertex has \(\frac{|\mathsf{dDist}(\mathcal{C}(X))|}{2}=\frac{2^{|X_{0}|}}{2}=2^{|X_{0}|-1}\) elements. By Proposition 2.21 all these distributions are contextual vertices. **Corollary 5.11**.: _A simplicial distribution in the group \(G_{\pm}(CX)\) (Proposition 2.23) is non-contextual if and only if it belongs to the subgroup_ \[e_{+}\cdot\mathsf{dDist}(\mathcal{C}(X))=\{e_{+}\cdot\delta^{s}:\,\delta^{s} \in\mathsf{dDist}(\mathcal{C}(X))\}.\] Proof.: The element \(e_{+}\) is non-contextual since we have \[e_{+}=\frac{1}{2}\delta^{t}+\frac{1}{2}\delta^{r}\] where \(t_{(c,\tau)}=(0,0)\) and \(r_{(c,\tau)}=(1,0)\) for every \(\tau\in X_{1}^{\circ}\), and \((e_{+}\cdot\delta^{s})_{\sigma}=p_{\pm}\) for every non-degenerate \(2\)-simplex \(\sigma\) of \(\mathcal{C}(X)\). Therefore the coset \(e_{+}\cdot\mathsf{dDist}(\mathcal{C}(X))\) is a subset of \(G_{\pm}(CX)\) and all its elements are non-contextual. Moreover, since \(\mathsf{sDist}(\mathcal{C}(X))\) is a commutative monoid and \(e_{+}\cdot e_{+}=e_{+}\), the subset \(e_{+}\cdot\mathsf{dDist}(\mathcal{C}(X))\) is in fact a subgroup of \(G_{\pm}(CX)\). To conclude that the remaining distributions are all contextual we will use Theorem 5.10. For a \(1\)-dimensional (connected) simplicial set \(X\), the Euler characteristic [23] is given by \(\chi(X)=1-n_{X}\). Alternatively, it can be computed using the formula \[\chi(X)=|X_{0}|-|X_{1}^{\circ}|.\] Therefore we have \(n_{X}=|X_{1}^{\circ}|-|X_{0}|+1\). Using this we find that the number of contextual vertices detected in Theorem 5.10 is equal to \(2^{|X_{1}^{\circ}|}-2^{|X_{0}|-1}\). On the other hand, we have \[|G_{\pm}(\mathcal{C}X)|-|e_{+}\cdot\mathsf{dDist}(\mathcal{C}X)|=2^{|X_{1}^{ \circ}|}-2^{|X_{0}|-1}\] where we used the fact that the size of the coset is half the size of \(\mathsf{dDist}(\mathcal{C}(X))\) since it is the orbit of \(e_{+}\). Therefore the contextual vertices detected in Theorem 5.10 are precisely those distribution in \(G_{\pm}(\mathcal{C}X)-(e_{+}\cdot\mathsf{dDist}(\mathcal{C}X))\). **Example 5.12**.: The Bell scenario \((m,n,2,2)\) represented by the complete bipartite graph \(K_{m,n}\). This graph has \(m+n\) vertices and \(m\cdot n\) edges. Therefore, by Theorem 5.10 we have at least \(2^{mn}-2^{m+n-1}\) contextual vertices in the scenario \((m,n,2,2)\). ### Contextual vertices of the cycle scenario We conclude this section by showing that PR boxes constitute all the contextual vertices in the cycle scenario using the collapsing method. Let us set \(X=\tilde{C}_{N}\). The polytope \(P_{X}\) associated to the cycle scenario has dimension \(\mathbb{R}^{2N}\). This is a consequence of Proposition 2.15 since the number of non-degenerate edges of the \(N\)-cycle space is \(2N\). **Lemma 5.13**.: _Let \(p\) be a simplicial distribution on the \(N\)-cycle scenario such that \(p_{x_{i}}\) is deterministic for some \(1\leq i\leq N\). Then \(p\) is non-contextual._ Proof.: Assume that \(p_{x_{i}}=\delta^{a}\) for some \(a\). Let \(q\) be the deterministic distribution given by \(q_{\sigma_{j}}=\delta^{00}\) for all \(1\leq j\leq N\) distinct from \(i\) and \(q_{\sigma_{i}}=\delta^{11}\). Since \((q\cdot p)_{x_{i}}=\delta^{0}\) and \(p\) is non-contextual if and only if \(q\cdot p\) is non-contextual, we can assume that \(a=0\). Let \(\bar{X}\) denote the quotient \(X\) obtained by collapsing \(x_{i}\). The resulting space \(\bar{X}\) is a classical \(N\)-disk. 
Consider the map \[\pi^{*}:\mathsf{sDist}(\bar{X})\to\mathsf{sDist}(X)\] induced by the quotient map \(\pi:X\to\bar{X}\). There exists a simplicial distribution \(\tilde{p}\) on the classical \(N\)-disk such that \(\pi^{*}(\tilde{p})=p\). Since every distribution on the \(N\)-disk is non-contextual \(p\) is non-contextual. **Proposition 5.14**.: _Contextual vertices of the polytope of simplicial distributions on the \(N\)-cycle scenario are given by the PR boxes._ Proof.: By Lemma 5.13 for a vertex \(p\) there cannot be a deterministic edge on any of the \(x_{i}\)'s. By Corollary 2.18\(p\) is a vertex if and only if \(\operatorname{rank}(p)=2N\). Therefore there are precisely \(N\) deterministic edges \(z_{1},\cdots,z_{N}\), all of which lie on the boundary. The distribution \(p_{\sigma_{i}}\), which is given by \(p_{\pm}\), for each triangle has rank \(2\). Let \((A,b)\) be as in Corollary 2.16 so that \(P_{X}=P(A,b)\). Let \(p\) be the distribution such that \(p|_{\sigma_{i}}=p_{\pm}\) for every \(\sigma_{i}\) and let \(\mathcal{Z}_{p}\) index the inequalities tight at \(p\). There are \(2N\) such tight inequalities since there are \(N\) non-degenerate simplices \(\sigma_{i}\) and each corresponding distribution \(p_{\sigma_{i}}\) has two zeros; see Fig. (18). Denoting \(A[\mathcal{Z}_{p}]:=A_{p}\), we order the columns of \(A_{p}\) by \(x_{1}\cdots,x_{N},z_{1},\cdots,z_{N}\). Up to elementary row operations we have \[A_{p}=\begin{pmatrix}E&\mathbb{0}\\ \mathbb{0}&I\end{pmatrix}\] Then \(\operatorname{rank}(A_{p})=N+\operatorname{rank}(E)\). Multiplying each row of \(E\) by \(-1\) if necessary we can write \[E=\begin{pmatrix}1&(-1)^{c_{1}+1}&0&\cdots&0\\ 0&1&(-1)^{c_{2}+1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&\cdots&0&1&(-1)^{c_{N}+1}\\ (-1)^{c_{1}+1}&0&\cdots&0&1\end{pmatrix}\] Let us define \(c=\sum_{i=1}^{N}c_{i}\mod 2\). Then \(\operatorname{rank}(E)=N\) if \(c=1\), otherwise \(\operatorname{rank}(E)=N-1\). In the former case we have \(p_{x_{i}}^{0}=1-p_{x_{i}}^{0}\), which implies that \(p_{x_{i}}^{0}=1/2\). Hence \(p_{\sigma_{i}}\)'s are all given by \(p_{\pm}\). The condition that \(c=1\) implies that the number of \(\sigma_{i}\)'s with \(p_{\sigma_{i}}=p_{-}\) is odd. Two vertices \(v,v^{\prime}\) of a full-dimensional polytope \(P\subset\mathbb{R}^{d}\) are called neighbors if \(\operatorname{rank}(A[\mathcal{Z}_{v}\cap\mathcal{Z}_{v^{\prime}}])=d-1\). **Corollary 5.15**.: _All neighbors of a PR box are deterministic distributions._ Proof.: A PR box \(p\) corresponds to a non-degenerate vertex, meaning that the number \(2N\) of tight inequalities is precisely the dimension of the polytope. One property of non-degenerate vertices is that, if \(\mathcal{Z}_{p}\) indexes the tight inequalities of a PR box \(p\), and \(\mathcal{Z}\) is a set differing from \(\mathcal{Z}_{p}\) by one element, then \(p^{\prime}=A[\mathcal{Z}]^{-1}b\) is also a vertex so long as it satisfies the remaining inequalities. In this case \(p\) and \(p^{\prime}\) are neighbors. A neighbor is obtained by replacing a tight inequality with another, which amounts to replacing one zero with another. Doing so will make one of \(x_{i}\)'s a deterministic edge. Lemma 5.13 implies that \(p^{\prime}\), if it is a vertex of \(P_{X}\), is non-contextual, thus a deterministic distribution. ### Conclusion In this paper, we demonstrate novel techniques from the theory of simplicial distributions introduced in [7]. 
We present topological proofs for the sufficiency of the circle inequalities for the non-contextuality of distributions on the cycle scenario. This proof extends the topological proof of the CHSH scenario in [7]. We go beyond the cycle scenarios and study the flower scenario depicted in Fig. (1), which generalizes the bipartite Bell scenarios consisting of 2 measurements for Alice and \(m\) measurements for Bob. Our main insight in the proof is the topological interpretation of Fourier-Motzkin elimination and the gluing and extension methods of distributions on spaces. We also explore two new features of scenarios available in the simplicial setting: (1) collapsing measurement spaces to detect contextual vertices and (2) applying the monoid structure of simplicial distributions to generate vertices. An appealing feature of the collapsing technique presented here is that previously unknown types of Bell inequalities can be discovered from those that are known; see Section 5.1. These Bell inequalities may have desirable properties, such as having quantum violations that are more robust to noise, which may be of both theoretical and practical interest.

Acknowledgments. This work is supported by the US Air Force Office of Scientific Research under award number FA9550-21-1-0002.
2303.03490
Computational study of optical absorption spectra of helicenes as applied to strain sensing
Helicenes, a class of organic molecules consisting of ortho-fused benzene rings in a spring-like configuration have found several interesting applications in nonlinear optical materials and opto-electronic devices. Under the action of strain, i.e., via mechanical stretching or compression, the optical absorption spectra of helicenes change which can be employed for strain sensing. The present study presents a detailed investigation of the optical absorption spectra of helicenes using density functional theory along with calculations of the changes in the spectra during mechanical axial stretching or compression of helicenes. The electronic band gap followed a non-symmetric parabolic form with the amount of applied strain. A lowering of the gap in stretched or compressed helicenes compared to the pristine helicene was observed. The compressed state shows a smaller energy gap compared to tension for the same strain magnitude. A detailed inspection of the optical absorption spectra shows that compressive states show significantly lower absorption at higher optical energies (shorter wavelengths) which can provide greater sensitivity to the strain measurement.
Veera Sundararaghavan, Vikas Varshney, Davide Simone
2023-03-06T20:45:02Z
http://arxiv.org/abs/2303.03490v1
# Computational study of optical absorption spectra of helicenes as applied to strain sensing ###### Abstract Helicenes, a class of organic molecules consisting of ortho-fused benzene rings in a spring-like configuration have found several interesting applications in nonlinear optical materials and opto-electronic devices. Under the action of strain, i.e., via mechanical stretching or compression, the optical absorption spectra of helicenes change which can be employed for strain sensing. The present study presents a detailed investigation of the optical absorption spectra of helicenes using density functional theory along with calculations of the changes in the spectra during mechanical axial stretching or compression of helicenes. The electronic band gap followed a non-symmetric parabolic form with the amount of applied strain. A lowering of the gap in stretched or compressed helicenes compared to the pristine helicene was observed. The compressed state shows a smaller energy gap compared to tension for the same strain magnitude. A detailed inspection of the optical absorption spectra shows that compressive states show significantly lower absorption at higher optical energies (shorter wavelengths) which can provide greater sensitivity to the strain measurement. ## Introduction Helicenes comprise a class of organic molecules in which planar cyclic rings, such as benzene, thiophene, and their combination thereof, etc., are fused at their _ortho_-positions, creating a helical structure [1]. Even though these molecules are devoid of 'classical' chiral centers, they exhibit chiral characteristics that arise from their intrinsic molecular architecture (i.e., handedness of the helix), leading to their unique optical properties, such as exceptionally large optically rotatory dispersion (ORD) [2]. Due to their unique properties, helicenes and their functional derivatives have been employed in a wide variety of chemical disciplines such as catalysis [3, 4], polymer chemistry [5, 6], biological sciences [7], non-linear optics [8, 9], and optoelectronics [10, 11, 12]. In addition to their ability to greatly influence polarized light, helicenes present themselves as potential sensing entities due to their intrinsic spring-like characteristics, where a change in either electronic, optical, or piezoelectric response, could be examined while deforming the helicene either under compression or elongation. To date, there has been very limited non-theoretical research towards investigating the different types of responses due to external mechanical perturbations or stimuli in helicenes. Investigation of charge transport in helicenes was carried out [13, 14], indicating that current along the helicene molecule depends on its length (or number of fused rings) and degree of deformation. Specifically, under stretching or compression along the helical axis, a non-monotonic (U-shaped curve) current-strain relationship was observed by Guo et al. [13]. In a different study, Vacek and co-workers computed several orders of modulation in on-off ratios in conductance, when helicenes were utilized as a molecular switch [14]. In the context of modulating optical features, while experimental optical absorption and circular dichroism spectra of [5, 6, 7, 8, 9] helicenes have been reported, the effect of tensile and compressive deformation of the helicenes on their optical absorption is yet to be studied. 
In principle, while it is possible to isolate and strain individual helicene molecules via optical tweezers [15, 16], it is challenging to systematically investigate the effect of strain on opto-spectral response for individual helicene molecules via experiment. Theoretical investigation of such responses and identifying helicene characteristics that maximize the response sensitivity is key for their successful incorporation as _in-situ_, non-invasive sensing components in various applications. Computational chemistry provides a powerful and complementary tool for assessing the physical and chemical properties of organic molecules, often allowing direct comparison of predictions with experimental observations. In this work, we focus on modeling the electronic characteristics via density functional theory (DFT) and computation of excited states in the form of UV-Vis and electronic circular dichroism (EDC) spectra of pristine and strained helicenes using time-dependent density functional theory (TDDFT) to explore their feasibility as strain sensing entities. ## Results ### Electronic gap Prediction Fig. 1 shows the predicted gap (from local density approximation (LDA)) between the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) for helicenes and its planar analogues (phenacenes and acenes) as a function of the number of benzene units within the modeled structures. Although LDA under-predicted the true HOMO-LUMO gap (i.e., HF predicted gap), the observed trends are in good agreement with what has been observed in experimental data as aggregated in Ref. [17]. It should be noted that the similar HOMO-LUMO gaps of helicenes and phenacenes are due to their cross-conjugated nature of the sp\({}^{2}\) bonds in the aromatic rings relative to the conjugated structure of acenes. We find that the gap of helicenes is marginally lower than that of phenacenes and is primarily attributed to the non-planar nature of helicenes and the deviation of bond lengths associated with external aromatic bonds of the helix versus internal aromatic bonds [18]. On the contrary, no cross-conjugation occurs in acenes, leading to sharper decay in the gap with increasing acene length. Interestingly, the HOMO-LUMO gap for all the structures was predicted to decrease with increasing length. Due to the conjugated nature of studied structures, it is imperative that the \(\pi\)-electrons are involved in electronic transitions between the HOMO and LUMO. As the length of the helicene and its planar analogue increases, the conjugated network on the molecule becomes increasingly delocalized, and hence, the transition from HOMO to LUMO is expected to require progressively lower energy, as can be seen from the computed gap. The decrease of energy gap as a function of length for conjugated organic chains is embodied in Kuhn's equation [19] as follows: \[\Delta E=\frac{h^{2}}{8mL^{2}}(N+1)+V_{o}(1-\frac{1}{N}) \tag{1}\] where \(N=4n+2\) is the number of \(\pi\) electrons, \(h\) is the Planck's constant, \(m\) is the electron mass, \(L\) is the zig-zag length (as described next), and \(V_{0}\) is the HOMO-LUMO gap of an infinitely long helicene, which is a fitting parameter. The zig-zag length, \(L\), of a representative even helicene is shown in yellow in the inset in Fig. 2(a) and is equal to 3\(n\)+3 times the C-C bond length of 1.4 A and follows the bond conjugation. 
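As a minimal worked evaluation of Kuhn's relation (Eq. 1), the gap of an even [n]-helicene can be computed directly from the number of \(\pi\) electrons and the zig-zag length just defined; the only fitted quantity is \(V_{0}\), for which the 1.6 eV value used in the fit described below is adopted here. The snippet is illustrative and uses standard physical constants.

```python
# Sketch: evaluating Kuhn's equation (Eq. 1) for even [n]-helicenes.
import math

H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # J per eV
CC_BOND = 1.4e-10       # C-C bond length, m

def kuhn_gap_ev(n, v0_ev=1.6):
    """HOMO-LUMO gap of an even [n]-helicene from Kuhn's equation."""
    n_pi = 4 * n + 2                  # number of pi electrons
    length = (3 * n + 3) * CC_BOND    # zig-zag conjugation length
    box = H**2 / (8 * M_E * length**2) * (n_pi + 1) / EV
    return box + v0_ev * (1 - 1 / n_pi)

for n in (4, 6, 8, 10, 12):
    print(f"[{n}]-helicene: {kuhn_gap_ev(n):.2f} eV")
```

The particle-in-a-box term decays with the zig-zag length while the \(V_{0}\) term saturates, which is what produces the monotonic decrease of the gap toward the infinite-length limit.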
Kuhn's equation is a single parameter fit based on an appropriate choice of \(V_{0}\), the HOMO-LUMO gap of an infinitely long helicene. The value of \(V_{0}\) is taken to be 0.06 Ha (1.6 eV) in the fit shown in Fig. 2(a). Band gaps of planar analogs have been studied in the past in the form of zero-dimensional (finite length) and one-dimensional (infinite length) graphene nanoribbons [20]. Among these 1D ribbons, armchair ribbons are calculated to be either semiconducting or metallic depending on their width, while zigzag ribbons are found to be metallic for all widths [21]. 1D armchair ribbons follow a rule that the nanoribbon is metallic if the number of rows of carbon (width-wise) is 3\(M\)-1, where \(M\) is an integer. The phenacene and acene analogs studied here can be treated as finite-length armchair (4 rows of carbon atoms) and zigzag nanoribbons, respectively. It can be seen that acene analogs decay toward a zero HOMO-LUMO gap with increasing length, corroborating the metallic nature of infinitely long zigzag nanoribbons. Similarly, phenacenes, which have four rows of carbon atoms along the ribbon width, are expected to have a non-zero band gap at infinite lengths. The reported DFT band gap for phenacenes of infinite length is 2.6 eV [20]. Helicenes are expected to have a lower HOMO-LUMO gap, and the estimated value of 1.6 eV appears reasonable. Note that Kuhn's equation does not account for features such as non-planarity of molecules other than through the chosen parameter \(V_{0}\). Fig. 2(b) shows that odd numbered \(n\)-helicenes have similar energy gaps as (\(n\)+1) even helicenes, although one would expect the gap to be slightly higher owing to the shorter zig-zag length of (3\(n\)+2) times the C-C bond length for the same \(V_{0}\).

Figure 1: Comparison of the HOMO–LUMO gap of helicenes with its planar phenacene and acene counterparts (shown on the left) using the DFT-LDA method. The x-axis indicates the number of aromatic rings (the 'n' in [n]-helicene).

Figure 2: Decrease in HOMO–LUMO gap with an increase in helicene length. (a) The trend follows Kuhn's equation (shown as inset), plotted here for even numbered helicenes. (b) Trend zoomed for shorter helicenes. Inset in (a) shows the zigzag length of even helicenes used in the formula as highlighted in yellow.

### Prediction of Optical Absorption Spectra

Next, we discuss and compare the predicted optical absorption with the experimental literature in Figs. 3-4. The vertical bars in these plots indicate the absorption intensity computed at the first 25 excitation frequencies, and the data is directly compared against published experimental spectra for helicene molecules of various lengths reported in Refs. [22, 23, 24, 25]. Figs. 3(a) & 3(b) compare our predictions against the experimental data of Clar and Stewart (1952) [22] for [4]-helicene and [5]-helicene, respectively. The figures also contain the experimental spectra of the planar phenacene counterpart (solid line) which, as previously shown in Fig. 1, shows an optical gap at a marginally lower wavelength (higher energy) compared to the helicene. The [6]-helicene optical absorption data is compared against another experiment by Newman et al. [23] in Fig. 3(c).

Figure 3: Comparison of predicted optical absorption with experimentally observed values. The bars indicate the absorption computed at the first 25 excitation frequencies. (a) [4]-helicene (b) [5]-helicene (c) [6]-helicene. Experimental data from Ref. [22, 23].
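The smooth curves compared with the experimental spectra are obtained by broadening the 25 discrete excitations. The sketch below assumes a Gaussian line shape with the 0.233 eV half-width at half-height quoted in the Methods; it is illustrative rather than the exact post-processing used for the published figures.

```python
# Sketch: turning discrete TDDFT excitations (energy, oscillator strength)
# into a smoothed absorption spectrum.  Gaussian line shape is an assumption;
# only the 0.233 eV half-width at half-height is taken from the Methods.
import numpy as np

def broadened_spectrum(energies_ev, strengths, hwhm_ev=0.233,
                       grid_ev=np.linspace(2.0, 6.0, 1000)):
    """Sum of Gaussians centred at each excitation, weighted by its strength."""
    sigma = hwhm_ev / np.sqrt(2.0 * np.log(2.0))
    spectrum = np.zeros_like(grid_ev)
    for e, f in zip(energies_ev, strengths):
        spectrum += f * np.exp(-0.5 * ((grid_ev - e) / sigma) ** 2)
    return grid_ev, spectrum

def nm_to_ev(wavelength_nm):
    # conversion used when plotting computed wavelengths against experiment
    return 1239.84193 / wavelength_nm
```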
The optical absorption spectra of helicenes contain two prominent peaks near the HOMO-LUMO gap called the para (\(p\)) band and the \(\beta\) band. The \(p\)-bands are related to fused benzene rings, while \(\beta\) peaks are associated with higher energy, high intensity B type excitations [26, 27, 28]. Both peak wavelengths are well captured by our simulations. The experimental data also shows that helicene \(\beta\) peak is shifting to lower energy with respect to its phenacene analogue. This shift has been previously explained to be the result of greatly increased overlapping of H atoms due to the twisting and the subsequent loss in resonance energy [22]. With [4]-, [5]- and [6]-helicenes, coplanar overlap of end benzene rings is not possible. Optical spectra has been analyzed in terms of electronic transitions in Table 1 below. Only the three states with the lowest energy (longest wavelengths) are analyzed. The configurations with highest contributions to the excited state are given in a notation having commonality across molecules. The HOMO-LUMO transition is always considered 1-1* regardless of system size. HOMO-1 to LUMO+1 is 2-2* etc. The percentage contribution of these orbitals to the excited state are indicated. The oscillator and rotatory strengths (in \(10^{-40}\) cgs) are provided. The electronic excitations of \(C_{2}\)-symmetric helicenes can be classified according to the two types of polarized transitions: 1-1* and 2-2* are polarized along the short \(C_{2}\)-axis and are of the 'a' type, while 1-2* and 2-1* are 'b'-type excitations are polarized along the long axis (orthogonal to \(C_{2}\)). In even helicenes, the lowest energy excitations tend to be of the 'b' type and for odd helicenes, these are of the 'a' type, which is consistent with past calculations [26, 29]. In the context of helicenes, the three excited states indicated correspond to \(\alpha\), p and \(\beta\) bands respectively. The \(\alpha\) peak always consists of the low energy low intensity ('L' type) CI states. The \(\beta\) bands are of the high energy high intensity ('B' type) and is prominent compared to the other two bands in helicenes of lower lengths. The prominence of the \(\beta\) peak reduces with higher helicenes as described in Ref. [27], which we postulate arises from end benzene rings axially overlapping due to a complete pitch of rotation. The overlap gives rise to an enhanced probability of intra-molecular pi-stacking interactions leading to lower energy electronic transitions. Fig. 4(a) compares the optical gap and absorption spectra of [7]-, [8]-, and [9]- helicenes, which involve a direct and increasing overlap of benzene rings (due to complete pitch), with experiments reported in Refs [24, 25]. The figure shows excellent agreement with experimental spectra and highlights the decreasing trend of lower energy optical gap with increasing helicene length. The trend is also consistent with the Kuhn's equation which predicts lower energy gaps due to extended delocalization as a function of helicene length. The optical absorption spectra for [7]-, [8]-, and [9]- helicenes are aggregated together along with experimental data from Refs [24, 25] in Fig. 5(a). In addition to optical absorption spectra, we also calculated the ECD spectra for [6]- to [8]-helicene and compare the findings against experimental data in Ref. [30]. The predicted as well as experimental ECD spectra are shown in Fig. 5(b). We find that the predicted ECD spectra compare favorably with experiments. 
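The organization of Table 1 can be mimicked with a small helper that keeps the three longest-wavelength excitations and labels them \(\alpha\), \(p\) and \(\beta\) in that order. The helper below is a hypothetical sketch, not part of the published workflow; the example entries are the [6]-helicene rows reproduced from Table 1.

```python
# Hypothetical helper: label the three lowest-energy TDDFT states as the
# alpha, p and beta bands, as done for Table 1.
from dataclasses import dataclass

@dataclass
class Excitation:
    wavelength_nm: float
    oscillator: float
    rotatory: float          # in 10^-40 cgs
    configurations: str      # dominant CI parentage, e.g. "2-1*,1-2*"

def label_low_energy_bands(excitations):
    lowest = sorted(excitations, key=lambda x: x.wavelength_nm, reverse=True)[:3]
    return dict(zip(("alpha", "p", "beta"), lowest))

# [6]-helicene entries from Table 1.
helicene6 = [
    Excitation(358.35, 0.00, 2.0,  "2-1*,1-2*"),
    Excitation(343.64, 0.00, -0.6, "2-2*,1-1*"),
    Excitation(315.59, 0.42, 686,  "1-2*,2-1*"),
]
print(label_low_energy_bands(helicene6))
```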
It should be noted that due to the calculation of only 25 excited states, the higher wavelength (low energy) results can be more accurately compared against the experiments. Helicenes with even-numbered rings typically show a higher peak differential absorption of left and right circularly polarized light as compared to odd-helicenes as seen in both experiments and DFT simulations. The good agreement between measured and predicted optical spectra (as discussed in Figs 3 and 4) validated the level of accuracy/fidelity of first-principle calculations necessary to carry out helicene deformation (compression and elongation) calculations, which we discuss next. Figure 4: Comparison of predicted optical absorption with experimentally observed values for (a) [7]-helicene (b) [8]-helicene (c) [9]-helicene. Experimental data from [24, 25] ### Effect of Applied Strain on HOMO-LUMO gap & Spectra Modulation In this section, we discuss how the tensile and compressive axial deformation affects the optical absorption spectra of helicene molecules towards exploring their potential as embedded sensing modalities using the representative example of [9]-helicene. In this context, we start our discussion with the visualization of different strained states of [9]-helicene as well as the estimation of change in respective HOMO-LUMO gap with respect to an undeformed helicene as shown in Fig(s). 6(a) & (b). It can be seen from the figure that the change in gap follows a non-monotonic, asymmetric, inverted parabola-like form with respect to the amount of applied strain. For both large compressive and tensile deformation, a lower band gap is observed as compared to the pristine, un-deformed helicene. The parabolic form is unique to helical graphene structure and its analogs. For example, under bending as shown in Ref [31], bandgap of semiconducting nanotubes monotonically decreases with bending, while that of metallic nanotubes would increase with bending. In helicenes, the LUMO energies are sensitive to straining and follow a similar parabolic form to strain as the HOMO-LUMO gap itself. This unique trend in LUMO energies can be associated with \begin{table} \begin{tabular}{||c c c c c||} \hline [n] & \%contribution & CI parentage & Wavelength (nm) & Oscillator strength & Rotatory strength \\ \hline \hline 4 & 52,47 & 2-1*,1-2* & 339.5 & 0 & -0.1 \\ & 88,10 & 2-2*,1-1* & 330.59 & 0.02 & -19 \\ & 48,45 & 1-2*,2-1* & 279.82 & 0.78 & 112 \\ 5 & 40,22 & 1-1*,2-2* & 342.13 & 0 & -3 \\ & 53,32 & 2-1*,1-1* & 326.34 & 0.03 & 88 \\ & 49,20 & 1-2*,2-2* & 305.19 & 0.17 & 197 \\ 6 & 57,40 & 2-1*,1-2* & 358.35 & 0 & 2 \\ & 22,77 & 2-2*,1-1* & 343.64 & 0 & -0.6 \\ & 55,39 & 1-2*,2-1* & 315.59 & 0.42 & 686 \\ 7 & 62,36 & 1-1*,2-2* & 377.41 & 0 & 3 \\ & 95,2.7 & 2-1*,1-2* & 361.79 & 0.03 & 274 \\ & 78,17 & 1-2*,3-1* & 349.44 & 0.05 & 291 \\ 8 & 60,28 & 1-2*,2-1* & 392.47 & 0 & 25 \\ & 62,21 & 1-1*,2-2* & 373.21 & 0.01 & 112 \\ & 66,21 & 2-1*,1-2* & 368.05 & 0.08 & 702 \\ \hline \end{tabular} \end{table} Table 1: Calculated spectral data (%contribution corresponding to CI parentage, corresponding wavelength, oscillator and rotatory strengths (in \(10^{-40}\) cgs) are indicated. Only top three long wavelength transitions are considered. Figure 5: Comparison of predicted spectra against data from [24, 25, 30] (a) UV-Vis spectra (b) ECD spectra. the twisted structure of helicenes. On compression, the gap decreases due to the higher overlap of orbitals and subsequent loss of resonance energy. Fig. 
6(c)(top) depicts the HOMO and LUMO orbitals for helicene axially strained under compression (40% strain) depicting overlap of orbitals from the center towards the benzene units at the free ends. The behavior on stretching is more interesting, as one would expect the gap to increase towards those what is observed in planar phenacenes. This is indeed the case until a critical strain of 25%. Beyond this strain, although HOMO energies continue to increase upon tensile straining as expected, the LUMO energies show an opposing trend, i.e., decrease upon further tensile straining. In a previous study by Guo et al. [13] on [12]-helicene, a critical strain of 23.53% was reported where the inflection occurs towards a decreasing gap under increased tension (curve depicted in Fig. 6(b)). In Fig. 6(c), we show the behavior of orbital wave functions at the critical strain of 25% where the highest gap is realized. At this strain, overlap of orbitals out-of-plane of the benzene rings as seen at 40% compression is completely removed, leading to an increase in the HOMO-LUMO gap. Beyond this critical strain, the HOMO-LUMO gap begins to decrease. The HOMO-LUMO orbital structure is illustrated for the case of 100% tensile strain in Fig. 6(c)(bottom). In Ref. [13], it was found that stretching beyond 23.53% tensile strain tends to localize the LUMO wave function toward the inner helix. However, that study enforced proportional elongation of all helicene atoms before optimization which could have biased the structure towards a geometry with a homogeneous strain distribution. In our study, we see that the geometry optimization after straining leads to inhomogeneous straining of helicenes (see Fig 6(a)). At 100% strain, the free ends take up significant deformation via bending of the chain, while the central helix is subject to a relatively smaller amount of stretching. Alternating inner helix bonds are highly deformed during tension, with a few bonds stretching towards the \(sp^{3}\) bond failure length of 1.54 A(at 100% strain) making conjugation along the inner helix unlikely. Fig 7 shows the inner helix bond lengths at 100% strain at which the bond failure length close to 1.54 A is reached. Bond lengths at 75% and 125% strains are also shown as comparison (also see Fig S1 in supplementary for additional bond lengths along inner helix). A different explanation of lower gap upon elongation compared to Ref. [13] can be provided in terms of the outer helix bond lengths in strained helicenes. An \(sp^{2}\) carbon in an alkene is 1.34 A in length, an \(sp^{3}\) carbon is 1.46 A in length; an \(sp^{2}\) benzene bond is 1.40 A in length. The inner helix of the helicene has bond lengths of 1.45 A similar to \(sp^{3}\) carbon and is thus not very aromatic or conjugated. The outer helix has alternating bond lengths of 1.36 A and 1.43 A and are more conjugated and aromatic than the inner helix [18]. A decrease in the HOMO-LUMO gap implies better conjugation of bonds. However, one would expect bond lengths to increase during tensile straining greater than 25% lowering the strength of bonds and affecting conjugation in the structure. On inspection of the structures, it is seen that length of longer bonds in the outer helix decreases towards 1.40 A with increasing strain. Figure 8 shows the decrease of certain outer helix bond lengths, indicated with arrows, towards 1.4 A with straining and the accompanying change in the distribution of the highest occupied orbital wavefunctions. Fig. 
6(c)(bottom) depicts the HOMO and LUMO orbitals for helicene under tension (at 100% strain) showing that orbitals continue to localize with improved conjugation along the outer helix compared to the 25% strain case. Equalization of these outer helix bond lengths towards the \(sp^{2}\) benzene bond length contributes to lower band gap through improved conjugation in the outer helix. Figure 6: (a) Axial straining of [9]–helicene. (b) The change in HOMO-LUMO gap as a function of strain for [9]-helicene, the data is plotted against data from Guo et al (2015) for [12]-helicene. (c) The HOMO and LUMO orbitals (plotted together) for 40% compression, 25% and 100% tension. Additional analysis of the change in electronic transitions upon straining has been included in Table 2. Although the change in the two lowest energy excitations is small across the large change in strains, the \(\beta\) peak (third entry) shows a larger sensitivity, with a red shift seen under compression and a blue shift under tension. This can be attributed to the improved out-of-plane conjugation observed under compression, lowering the excitation energy. The strength of the \(\beta\) peak is also seen to significantly increase under tension allowing differentiation of the strain state. ## Discussion The spring-like configuration of Helicene molecules leads to a smaller HOMO-LUMO gap than that of its planar phenacene analogs. As the length of the helicene chain increases, the conjugated network on the molecule becomes more spread out, and hence, the transition from HOMO to LUMO requires progressively lower energy, following Kuhn's equation. The computed optical absorption spectra of selected helicene molecules are found to compare well against published experimental data. The spectra contain two prominent peaks near the HOMO-LUMO gap called the para (\(p\)) band and the \(\beta\) band. The helicene peaks shift to lower energy with respect to its phenacene analogue as a result of increased overlapping of atoms due to the twisting. When helicenes are strained, the changes in the HOMO-LUMO gap followed a non-symmetric parabolic trend with the amount of straining. Compressed helicenes have a lower gap due to increased conjugation along the helical axis due to chain overlap. The inflection in the HOMO-LUMO gap with tensile strain can be explained as a transition from increasing Figure 8: Change of few outer helix bond lengths towards 1.4 Å, the \(sp^{2}\) benzene bond length, at higher strains tends to improve conjugation along these bonds. The bonds indicated with arrows. Strain levels indicated below each case. Figure 7: HOMO and LUMO orbitals (together) are indicated at three strain levels around failure strain. HOMO-LUMO gap due to decreased overlap to decreasing HOMO-LUMO gap due to improved conjugation as few outer helix bonds reach the \(sp^{2}\) benzene bond length during stretching. The observed parabolic trend poses an issue for strain sensing as the fact that for the same measured optical gap, there are two possible strain states of opposite manner (either compression or elongation). This can be seen in the optical absorption spectra plotted for different strain states in Fig. 9(a) where selected compressive and tensile strain states show similar spectra near the optical gap. However, over a larger range of wavelengths, compressive states show lower absorption at higher optical energies (lower wavelengths), resolving the issue of differentiating the sign of strains. Fig. 
9(b) and Fig. 10(a) compare the absorption spectra (linear scale) for tension versus compression states. It is seen that the para (\(p\)-band) and \(\beta\) peaks are decreased due to the overlap of orbitals in compression states. Similar behavior is also seen in the ECD spectra (Fig. 10b). Such a feature would allow for a higher sensitivity measurement of strain states, coupled with optical gap measurements. These insights provide promise for helicenes as a potential strain sensing molecule when used in composite materials. Additional investigations on how the optical gap as well as the optical spectra get modulated with respect to different helicene parameters such as helicene length, heteroatom functionalization (S, N, P atoms), ring substituents, and electronic doping, would be valuable in highlighting their capability for future strain sensing applications.

\begin{table}
\begin{tabular}{||c c c c c c||}
\hline Strain & \%contribution & CI parentage & Wavelength (nm) & Oscillator strength & Rotatory strength \\ \hline \hline
0 & 57,30 & 1-1*,2-2* & 404.4 & 0 & -0.4 \\
 & 88,8 & 2-1*,1-2* & 395.55 & 0.04 & 398 \\
 & 60,33 & 1-2*,3-1* & 381.2 & 0.02 & 271 \\
-40 & 74,20 & 1-1*,3-2* & 404.82 & 0 & -20 \\
 & 83,13 & 2-1*,1-2* & 403.64 & 0 & 22 \\
 & 92,2 & 3-1*,1-2* & 398.4 & 0.03 & 358 \\
50 & 54,37 & 1-1*,2-2* & 407.97 & 0 & -0.01 \\
 & 66,27 & 2-1*,1-2* & 387.7 & 0.03 & 116 \\
 & 64,27 & 1-2*,2-1* & 369.72 & 0.3 & 1197 \\ \hline
\end{tabular}
\end{table} Table 2: Calculated spectral data for [9]-helicene as a function of strain (other entries similar to Table 1). Only the top two main contributing configurations are indicated.

Figure 9: Optical absorption spectra for selected strain states. (a) Comparison at wavelengths near the optical gap. (b) Comparison over a larger set of wavelengths for 50% tension and 40% compression.

## Methods

All structures were generated and optimized using the COMPASS force-field in the Forcite code [32] to be used as starting structures for first-principle calculations. Preliminary band gap calculations for various helicenes (as a right handed helix) as well as their planar phenacene and acene analogs (see Fig. 1) were carried out using DFT calculations with the local density approximation with Vosko-Wilk-Nusair (LDA-VWN) exchange correlation [33]. In order to generate optimized geometries of various helicene structures for TDDFT calculations, Hartree-Fock (HF) theory was employed as it provides exact exchange. Furche et al. (2000) [29] presented the first TDDFT simulations of circular dichroism spectra of [4]-[7] carbohelicenes using geometries relaxed with Hartree-Fock theory. They found that DFT calculations with GGA and hybrid functionals lead to inferior results through comparison with experimental molecular structure. Double numeric basis sets with polarization functions were used with an energy convergence criterion of 1.0e\({}^{-4}\) Ha to obtain the optimized structures of the different helicenes studied here. Optical absorption simulations of helicenes were carried out using the TDDFT method in the Gaussian-09 code (the frequency space implementation is described in detail in Ref. [34]). The first 25 excitation frequencies were computed to model the optical spectra. The B3LYP exchange correlation functional has been used based on its past success in modeling polyaromatic hydrocarbons [35, 36, 37].
In the context of carbohelicenes, Nakai (2012) [30] performed a comparison of different exchange functionals and found that the amount of exact exchange in the DFT functionals plays an important role. B3LYP was found to reasonably reproduce the experimental CD spectrum of [6]-helicene, although slightly underestimating the experimental CD intensities. The correlation consistent polarized valence double zeta (cc-pVDZ) basis set was chosen, which has been successfully employed in the past for polyaromatic hydrocarbons [38], including carbohelicenes [39], in combination with the B3LYP exchange correlation functional. The smoothed response was computed using a peak broadening parameter (half width of the peak at half height) of 0.233 eV. To model the effect of helicene deformation on optical properties, the helicenes were strained along the helical axis direction by displacing the end aromatic carbon atoms of the helical structure by a displacement corresponding to the needed strain level (in both elongation and compression). Then, the structure was re-optimized using HF theory (as detailed above) but with the two end-atoms fixed. The optical absorption spectra of the strained structure were then computed using TDDFT.
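A minimal sketch of the strain-then-recompute loop described above is given below. The helper names are illustrative and not from any published code; the HF re-optimization with the two end atoms held fixed is carried out inside the electronic-structure package and is only indicated by comments here.

```python
# Sketch: displace the two end aromatic carbons along the helical axis to a
# target strain, then hand the geometry to the quantum-chemistry workflow.
import numpy as np

def apply_axial_strain(coords, end_indices, axis, strain):
    """coords: (n_atoms, 3) array in Angstrom; end_indices: (i_bottom, i_top);
    axis: helical-axis vector; strain: e.g. +0.5 for 50% tension, -0.4 for 40%
    compression of the end-to-end height."""
    i0, i1 = end_indices
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    height = float(np.dot(coords[i1] - coords[i0], axis))  # current end-to-end height
    shift = 0.5 * strain * height * axis                   # split equally over both ends
    strained = coords.copy()
    strained[i0] -= shift
    strained[i1] += shift
    return strained

def run_strained_point(coords, end_indices, axis, strain):
    strained = apply_axial_strain(coords, end_indices, axis, strain)
    # 1) re-optimize at the HF level with the two displaced end atoms frozen
    #    (done in the electronic-structure code, not shown here);
    # 2) run TDDFT on the relaxed geometry, e.g. a Gaussian-09 route such as
    #    "#p TD=(NStates=25) B3LYP/cc-pVDZ" to obtain the first 25 excitations.
    return strained
```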
2306.09005
Modularity Trumps Invariance for Compositional Robustness
By default neural networks are not robust to changes in data distribution. This has been demonstrated with simple image corruptions, such as blurring or adding noise, degrading image classification performance. Many methods have been proposed to mitigate these issues but for the most part models are evaluated on single corruptions. In reality, visual space is compositional in nature, that is, that as well as robustness to elemental corruptions, robustness to compositions of corruptions is also needed. In this work we develop a compositional image classification task where, given a few elemental corruptions, models are asked to generalize to compositions of these corruptions. That is, to achieve compositional robustness. We experimentally compare empirical risk minimization with an invariance building pairwise contrastive loss and, counter to common intuitions in domain generalization, achieve only marginal improvements in compositional robustness by encouraging invariance. To move beyond invariance, following previously proposed inductive biases that model architectures should reflect data structure, we introduce a modular architecture whose structure replicates the compositional nature of the task. We then show that this modular approach consistently achieves better compositional robustness than non-modular approaches. We additionally find empirical evidence that the degree of invariance between representations of 'in-distribution' elemental corruptions fails to correlate with robustness to 'out-of-distribution' compositions of corruptions.
Ian Mason, Anirban Sarkar, Tomotake Sasaki, Xavier Boix
2023-06-15T10:04:10Z
http://arxiv.org/abs/2306.09005v1
# Modularity Trumps Invariance ###### Abstract By default neural networks are not robust to changes in data distribution. This has been demonstrated with simple image corruptions, such as blurring or adding noise, degrading image classification performance. Many methods have been proposed to mitigate these issues but for the most part models are evaluated on single corruptions. In reality, visual space is compositional in nature, that is, that as well as robustness to elemental corruptions, robustness to compositions of corruptions is also needed. In this work we develop a compositional image classification task where, given a few elemental corruptions, models are asked to generalize to compositions of these corruptions. That is, to achieve _compositional robustness_. We experimentally compare empirical risk minimization with an invariance building pairwise contrastive loss and, counter to common intuitions in domain generalization, achieve only marginal improvements in compositional robustness by encouraging invariance. To move beyond invariance, following previously proposed inductive biases that model architectures should reflect data structure, we introduce a modular architecture whose structure replicates the compositional nature of the task. We then show that this modular approach consistently achieves better compositional robustness than non-modular approaches. We additionally find empirical evidence that the degree of invariance between representations of 'in-distribution' elemental corruptions fails to correlate with robustness to 'out-of-distribution' compositions of corruptions. ## 1 Introduction Biologically intelligent systems show a remarkable ability to generalize beyond their training stimuli, that is to learn new concepts from no, or few, examples by combining previously learned concepts [1; 2; 3; 4]. In contrast, artificial neural networks are surprisingly brittle, failing to recognize known categories when presented with images with fairly minor corruptions [5; 6; 7; 8; 9]. To improve robustness many methods have been proposed for learning more robust representations, including data augmented training techniques [9; 10; 11; 12; 13], and encouraging invariant representations or predictions [14; 15; 16; 17]. However, when the robustness of these methods is evaluated it tends to be on single corruptions of the type seen in ImageNet-C [8]. In reality, the space of possible corruptions is compositional. If we draw a loose correspondence between corruptions and real world weather conditions, with noise akin to rain on a windshield, blur as fog and a contrast change as a change in brightness, we see it is in fact possible to have rain, fog and bright sun simultaneously. In this work we extend the notion of robustness over corruptions to robustness over _compositions of corruptions_. We construct a compositional image classification task where a neural network is trained on single _elemental corruptions_ and evaluated on _compositions_ of these corruptions (Figure 1). Importantly, this is not an adversarial or no-free-lunch task, as we want the AI systems we develop to be capable of compositional generalization [18; 19; 20; 21; 22; 23; 24]. If natural visual data can be decomposed into a set of elemental functions (or mechanisms [25; 26]), we do not yet know how to find them. The compositional robustness task we create allows us to experiment with a compositional structure where the underlying elemental functions are known. 
By studying the behaviors of neural networks under this structure, we aim to gain insights into how we might develop methods for better compositional robustness. Such insights could be applied to create systems that generalize more robustly or allow for lower data collection costs, needing only to collect or synthesize the elemental corruptions instead of the exponentially large number of compositions. Finally, this task creates a new domain generalization task on which we can evaluate the generality of proposed methods for domain generalization. In domain generalization parlance, a system is trained on data from multiple training domains (the elemental corruptions), and then evaluated on data from a related set of test domains (the compositions), from which no data samples are seen during training. To better understand how neural networks behave on out-of-distribution compositional data we evaluate different methods for domain generalization on this task. Firstly, we explore empirical risk minimization (ERM), which has been shown to be a strong baseline when correctly tuned [27]. Secondly, we evaluate a setup where invariance between the same image under different corruptions is explicitly encouraged using the contrastive loss [28; 29; 30], since a central theme in domain generalization has been to encourage the learning of invariant representations [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. Finally, we introduce a modular architecture to better reflect the compositional structure of the task [42]. Here, rather than all parameters jointly modelling all corruptions, each elemental image corruption is 'undone' by a separate module in latent space. Counter to our initial expectations we find that training to encourage invariant representations with the contrastive loss offers only minor improvements in terms of out-of-distribution accuracy, whilst the modular architecture consistently outperforms other methods. Additionally, we find that the degree of invariance between representations of elemental corruptions fails to correlate with performance on out-of-distribution compositions of corruptions. At their narrowest interpretation, these results empirically show that for compositional robustness, when training domains consist only of the elemental components, modular approaches tend to outperform monolithic (non-modular) approaches. At their broadest interpretation our results question whether encouraging non-trivially1 invariant representations is sufficient to achieve compositional domain generalization. This indicates that there is still work to be done on understanding the additional properties required for compositional robustness and suggests more modular architectures as a promising candidate for one such property. Footnote 1: The trivial case with constant representations has maximal invariance but cannot achieve good generalization. ## 2 Related Work We now briefly recap related works from the areas of domain generalization, invariant representations, modularity, compositional generalization and robustness. Figure 1: The compositional robustness task. A model is trained jointly on images corrupted with elemental corruptions (left) and evaluated on images corrupted with compositions of these corruptions (right). Shown is _all_\(7\) elemental corruptions and a _subset_ of the \(160\) compositions of corruptions. **Domain Generalization and Invariant Representations.** The creation of models that are robust to unseen changes in data distribution is the work of domain generalization. 
Given certain training domains, the aim of domain generalization is to build models that can generalize to related unseen test domains. One common approach is to encourage the learning of invariant representations between training domains whilst achieving high performance [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41], with the idea that this will lead to invariant representations between training and test domains and hence good generalization performance. However, this relies on an implicit assumption that we have sufficient training domains that are reasonably representative samples from some meta-distribution of domains (this has been made explicit in some works [43; 44]). It is not clear that this will be true in general, and arguably replaces the problematic assumption of _i.i.d_ data with an equally problematic assumption of _i.i.d_ domains. What's more, such generalist approaches may be unable to take structure amongst training domains into account. It should be noted that there has also been substantial work on encouraging invariance for the related task of domain adaptation where (unlabelled) data from test domains is available [45; 46; 47; 48; 49; 50]. Despite being motivated by theoretical work [51; 52], the central role of invariance in domain adaptation and generalization has been questioned [53; 54; 55; 56; 57]. In Section 4.3 we discuss the limitations of encouraging invariance for compositional robustness. **Relational Inductive Biases and Modularity.** A closely related approach to learning robust representations aims to take advantage of explicit structure in data. These relational inductive biases [58] aim to include knowledge about entities and the relations between them into neural network architectures. For example, we can encode that entities should not change under certain transformations by building invariance to these transformations into our architectures. Work on equivariance beyond translation explicitly creates such robustness [59; 60; 61] but is usually formulated in terms of group actions [62] so is limited to invertible transformations. More general approaches aim to uncover structure by decomposing data into independent (causal) mechanisms [21; 25; 63; 64] or disentangled factors of variation [65; 66; 67; 68; 69; 70; 71; 72; 73]. Ways to explicitly model decomposable structures in data include pre-training on primitive components [1] and using modular architectures to encode structure [74; 75; 76; 77; 78; 79; 80; 81; 42]. In contrast, in this work we know how the data structure decomposes and explore the performance of modular and non-modular architectures on the recomposition of known elemental components. **Compositional Generalization.** The visual world is compositional [82; 83; 84; 85; 23]. Whilst much has been made of compositionality in language (linguistic compositionality) and reasoning (conceptual compositionality) [86; 87; 88; 89; 90; 22; 23; 24], compositional robustness has received relatively little attention. Recent AI systems still fail on compositional tasks [93; 94; 22] where the space of generalization grows exponentially with the number of elemental components. Whilst practically it is not possible to sample all combinations of elemental components, one interpretation of large models [95; 6; 96] is that they aim to sample densely enough to generalize to unseen combinations. 
However, for real world data, it is unclear how big the compositional space is and how densely we need to sample, with this being particularly pertinent if the data distribution is high-dimensional [97; 98] or fat tailed [99]. To that end, several works have analyzed controlled settings, aiming to understand the best settings for training in order to achieve the best generalization [31; 71; 100]. **Robustness Over Image Corruptions.** Whilst the aforementioned work aims to improve the robustness of neural networks, many have worked specifically on improving robustness for common image corruptions and adversarial examples [8; 9; 101; 11; 12; 13; 14; 15; 16; 17; 101]. However, the majority of previous works are evaluated only on single corruptions, ignoring the true compositional space formed by the corruptions. ## 3 Methods ### A Framework for Evaluating Compositional Robustness We design a framework for evaluating compositional robustness on any dataset for image classification. We first create elemental components by applying six different corruptions separately to all images. These corruptions along with the original, _Identity (ID)_, data create \(7\) training domains. We use the corruptions _Contrast (CO)_, _Gaussian Blur (GB)_, _Impulse Noise (IM)_, _Invert (IN)_, _Rotate 90\({}^{\circ}\) (R90)_ and _Swirl (SW)_, seen in Figure 1 (left). We choose these corruptions to include a mixture of long-range and local effects as well as invertible and non-invertible corruptions. A further exploration of the choice and parameter settings of corruptions is given in Appendix A. To test compositional robustness we create images from compositions of the elemental corruptions, see Figure 1 (right). We consider every possible permutation of compositions of two corruptions (excluding _Identity_) giving \({}^{6}\!P_{2}=30\) possible compositions. For compositions of more than two corruptions we sample the possible permutations to approximately balance the contributions of compositions containing different numbers of elemental corruptions (the sampling process is described in Appendix A). This creates \(40\) possible compositions of \(3\) corruptions and \(30\) possible compositions for each of \(4\), \(5\) and \(6\) corruptions. Altogether the compositions form \(160\) test domains. The task we then try to solve is to achieve the highest classification accuracy on images from the \(160\) compositional test domains whilst training only on the \(7\) elemental training domains. ### Monolithic Approaches A domain generalization task consists of data from related domains or environments \(\mathcal{D}_{e}=\{(\mathbf{x}_{e}^{(i)},y_{e}^{(i)})\}_{i=1}^{N_{e}}\), with \(e\in\mathcal{E}_{all}\), where \(\mathcal{E}_{all}\) is the set of all domains we wish to generalize to and \(N_{e}\) the number of datapoints in domain \(e\). However, during training we only have access to a subset of domains \(\mathcal{E}_{tr}\subset\mathcal{E}_{all}\). For our task, \(\mathcal{E}_{tr}\) is the set of elemental training domains, \(|\mathcal{E}_{tr}|=7\), and \(\mathcal{E}_{all}\) additionally includes the compositional test domains, \(|\mathcal{E}_{all}|=167\). As we use the same set of base images to create corrupted images, the number of datapoints, \(N_{e}\), is the same across all domains. 
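To make the domain construction concrete, the sketch below generates elemental training domains and ordered compositional test domains from a dictionary of corruption functions, mirroring the framework described above. Only four of the six corruptions are implemented (in plain numpy), and every parameter value, as well as the left-to-right application order, is an illustrative placeholder; the corruption settings actually used are those of Appendix A.

```python
import itertools
import numpy as np

# Illustrative elemental corruptions on images with values in [0, 1]; parameter
# values here are placeholders, not the settings of Appendix A.
def contrast(x):  return np.clip(0.5 + 0.3 * (x - 0.5), 0.0, 1.0)
def invert(x):    return 1.0 - x
def rotate90(x):  return np.rot90(x)
def impulse_noise(x, p=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) < p
    return np.where(mask, rng.integers(0, 2, x.shape).astype(float), x)
# Gaussian blur and swirl would be added analogously (e.g. via scipy / skimage).

ELEMENTALS = {"CO": contrast, "IN": invert, "R90": rotate90, "IM": impulse_noise}

def compose(order):
    """Return a corruption that applies the named elementals left to right."""
    def corrupted(x):
        for name in order:
            x = ELEMENTALS[name](x)
        return x
    return corrupted

# Elemental training domains: Identity plus each corruption on its own.
train_domains = [()] + [(name,) for name in ELEMENTALS]
# Compositional test domains of length 2: every ordered pair of distinct
# elementals (4P2 = 12 here; 6P2 = 30 with all six corruptions implemented).
test_domains = list(itertools.permutations(ELEMENTALS, 2))

img = np.random.default_rng(1).random((32, 32))
corrupted = compose(("IN", "R90"))(img)   # e.g. the composition IN∘R90
```

With all six corruptions implemented this yields the \(7\) training domains and the \(30\) two-fold compositions; longer compositions would be sampled as described in Appendix A.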
For a neural network \(f_{\mathbf{\theta}}\) parameterized by \(\mathbf{\theta}\), we aim to find parameters, \(\mathbf{\theta}^{\ast}\), from parameter space \(\mathbf{\Theta}\), that optimize loss function \(\mathcal{L}\), on training domains \(\mathcal{E}_{tr}\). The accuracy of \(f_{\mathbf{\theta}^{\ast}}\) is then evaluated on the test domains. Monolithic approaches share all parameters, \(\mathbf{\theta}^{\ast}\), over all domains where, \[\mathbf{\theta}^{\ast}=\operatorname*{argmin}_{\mathbf{\theta}\in\mathbf{\Theta}}\sum_{e \in\mathcal{E}_{tr}}\sum_{i=1}^{N_{e}}\mathcal{L}(f_{\mathbf{\theta}}(\mathbf{x}_{e}^ {(i)},y_{e}^{(i)})). \tag{1}\] The first approach we evaluate is Empirical Risk Minimization (ERM), training all parameters jointly to minimize some risk function over training domains. We set \(\mathcal{L}\) to be the mean cross entropy loss. The second approach we evaluate is contrastive training. A standard domain generalization approach is to encourage invariance between representations on the training domains [102] and since we have paired data between domains we can explicitly encourage invariance using the contrastive loss [28; 29; 30]. Note that the availability of paired data creates a best-case set up for the learning of invariant representations and that learning a representation that is invariant for paired images from different domains would satisfy the invariance encouraging criteria of previous works [33; 34; 35]. We follow the SimCLR contrastive training formulation [28], taking \(B\) datapoints from each elemental training domain (created from the same base images) to get a minibatch of size \(B|\mathcal{E}_{tr}|\). Applying an additional index to each of the domains in \(\mathcal{E}_{tr}\) to get \(\mathcal{E}_{tr}=\{e_{d}\}_{d=1}^{D}\), positive pairs come from pairs of the same image under different corruptions \((\mathbf{x}_{e_{r}}^{(i)},\mathbf{x}_{e_{s}}^{(i)}),r\neq s\), and negative pairs from all other pairs in the minibatch \((\mathbf{x}_{e_{r}}^{(i)},\mathbf{x}_{e_{s}}^{(j)}),i\neq j\). We apply the contrastive loss on representations from the penultimate layer of \(f_{\mathbf{\theta}}\), noting the representation for \(\mathbf{x}_{e}^{(i)}\) as \(\mathbf{z}_{e}^{(i)}\). Using cosine similarity, \(\text{sim}(\mathbf{u},\mathbf{v})=\mathbf{u}^{T}\mathbf{v}/\|\mathbf{u}\|\|\mathbf{v}\|\), to measure similarity between representations we define the loss for a positive pair in the minibatch as \[\ell(\mathbf{x}_{e_{r}}^{(i)},\mathbf{x}_{e_{s}}^{(i)})=-\log\frac{\exp(\text{sim}( \mathbf{z}_{e_{r}}^{(i)},\mathbf{z}_{e_{s}}^{(i)})/\tau)}{\sum_{d=1}^{D}\sum_{k=1}^{B} \mathbbm{1}[k\neq i]\exp(\text{sim}(\mathbf{z}_{e_{r}}^{(i)},\mathbf{z}_{e_{s}}^{(k)}) /\tau)}, \tag{2}\] where \(\tau\) is a temperature parameter and \(\mathbbm{1}[k\neq i]\) is an indicator function equal to \(1\) when \(k\neq i\) and \(0\) otherwise. We compute this loss across all positive pairs in the minibatch to encourage invariant representations. To learn to classify, we additionally include the cross entropy loss to arrive at, \[\mathcal{L}(f_{\mathbf{\theta}}(\mathbf{x}_{e_{r}}^{(i)},y_{e_{r}}^{(i)}))=-\sum_{c=1}^ {C}\mathbbm{1}[y_{e_{r}}^{(i)}=c]\log(\sigma(f_{\mathbf{\theta}}(\mathbf{x}_{e_{r}}^{( i)}))_{c}+\lambda\sum_{\begin{subarray}{c}e_{r}\in\mathcal{E}_{tr}\\ s\neq r\end{subarray}}\ell(\mathbf{x}_{e_{r}}^{(i)},\mathbf{x}_{e_{s}}^{(i)}). 
\tag{3}\] Here the first term is the cross entropy loss, with \(C\) the total number of categories, \(\mathbbm{1}[y=c]\) an indicator function that is \(1\) when \(y=c\) and \(0\) otherwise, \(\sigma\) the softmax operation, and, in a slight overloading of notation, subscript \(c\) represents the \(c^{th}\) entry of the log-softmax vector. \(\lambda\) is a hyper-parameter weighting the influence of the cross entropy and contrastive terms. Note also, as described above, Equation 3 is calculated on a minibatch rather than over all datapoints simultaneously. To evaluate the monolithic approaches on compositions of corruptions we simply calculate classification accuracy on the domains in \(\mathcal{E}_{all}\). ### A Modular Approach The final approach we evaluate is a modular architecture, as it has been argued that modularity is a key feature of robust, intelligent systems [21; 103]. For each elemental corruption we add one module to our network which aims to 'undo' the corruption in latent space. In practice these modules are intermediate layers that operate on hidden representations to map the representation of a corrupted image to the representation of the same image when uncorrupted. To make this possible modules are designed to have input and output features with the same shape. When classifying a test image corrupted with a composition of elemental corruptions we sequentially apply the modules for each corruption present in the composition. For example, if we are testing on the composition _IN\(\circ\)GB_ we apply both the module trained on the _Invert_ corruption and the module trained on the _Gaussian Blur_ corruption. Modules that are located in-between earlier layers of the network are applied first, if modules are in the same layer we apply the module which appears first in the permutation ordering (Section 3.1). To formalize this idea, we split network parameters \(\mathbf{\theta}\) into one set of parameters shared over all domains, \(\mathbf{\theta}_{shared}\), and an additional set of domain specific module parameters for each training domain \(\{\mathbf{\theta}_{e}\}_{e\in\mathcal{E}_{tr}}\), similar to residual adaptation [104; 105]. In practice \(\mathbf{\theta}_{shared}\) parameterizes a neural network and \(\mathbf{\theta}_{e}\) the intermediate layers that can be inserted when working with domain \(e\). To train this system we first train parameters \(\mathbf{\theta}_{shared}\) on _Identity_ data using the cross entropy loss. We then freeze \(\mathbf{\theta}_{shared}\) and train separate modules parameterized by \(\mathbf{\theta}_{e}\) on data from each elemental training domain \(e\in\mathcal{E}_{tr}\) along with paired _Identity_ data. Since we encourage the modules to 'undo' corruptions, we use the loss function from Equation 3 with minor modifications. Firstly, the set of domains for the contrastive loss is limited to only the relevant elemental training domain and the _Identity_ domain. Secondly, for the _Identity_ data, latent representation \(\mathbf{z}\) is from the layer at which the module is inserted and for the corrupted data from the output of the module, spatially flattening the feature map if required (as opposed to from the penultimate network layer as described when introducing Equation 2 in Section 3.2). Appendix B contains a graphical depiction of this process. 
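The pairwise term \(\ell\) of Eq. (2), which the module training above reuses with the domain set restricted to one elemental domain plus _Identity_, translates almost directly into code. The numpy sketch below operates on precomputed penultimate-layer representations; reading the denominator as a sum over all other base images across all domains is our interpretation of the double sum in Eq. (2), and the batch shapes are illustrative.

```python
import numpy as np

def cosine_sim(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def pair_loss(z, r, s, i, tau=0.15):
    """Contrastive loss of Eq. (2) for the positive pair (domain r, domain s, image i).

    z has shape (D, B, F): D training domains, B base images per domain and
    F features from the penultimate layer.
    """
    D, B, _ = z.shape
    num = np.exp(cosine_sim(z[r, i], z[s, i]) / tau)
    den = 0.0
    for d in range(D):          # negatives: every other base image, any domain
        for k in range(B):
            if k != i:
                den += np.exp(cosine_sim(z[r, i], z[d, k]) / tau)
    return -np.log(num / den)

# Toy minibatch: 7 domains, 4 base images per domain, 16-dimensional features.
rng = np.random.default_rng(0)
z = rng.normal(size=(7, 4, 16))
contrastive_term = np.mean([pair_loss(z, r, s, i)
                            for r in range(7) for s in range(7) if s != r
                            for i in range(4)])
```

In Eq. (3) this term is added to the cross entropy loss with weight \(\lambda\); the temperature \(\tau=0.15\) matches the value used in all experiments.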
An important design choice for any modular approach is how to choose where to locate the modules, with recent works observing that different domain changes should be dealt with in different neural network layers [106; 107; 108]. We take a very simple approach, training separate modules between each layer of the network parameterized by \(\mathbf{\theta}_{shared}\) for \(5\) epochs. We then select the module with the best in-distribution accuracy on a held-out validation set as the module to train to completion. This is similar to using adaptation speed [109; 110] as a proxy to discover modular decompositions, although in practice we find if we use adaptation times substantially smaller than \(5\) epochs we can erroneously select module locations that do not achieve optimal in-distribution performance. ### Measuring the Invariance of Learned Representations Since encouraging invariance is so prominent in the domain generalization literature [102] we also empirically investigate the role of invariant representations in generalizing to unseen compositions of corruptions. We create two invariance scores following the methods of Madan et al. [80], with full details along with an illustrative example in Appendix C. These per-neuron scores are calculated for every neuron in the penultimate layer of the network (after applying modules if applicable), and the median score over all neurons is reported. Loosely the _elemental invariance score_, is the maximum difference in neuron activation amongst the elemental corruptions normalized to lie between 0 and 1, with the idea that this score should be high when all elemental corruptions activate a neuron in a similar way (i.e. the neuron is invariant to the elemental corruptions). We additionally calculate the _composition invariance score_, which also lies between \(0\) and \(1\) and measures how similarly a neuron activates on a composition when compared to the closest elemental corruption in the composition. We choose the closest elemental corruption because, to achieve high accuracy, it should be sufficient for a neuron to activate similarly on the composition and one elemental corruption, even if the elemental corruptions as a whole do not activate invariantly. ### Datasets, Architectures and Training Procedure We evaluate each training approach on three different datasets for image classification: emnist[111], an extended mnist with \(47\) handwritten character classes; cifar-10[112], a simple object recognition dataset with \(10\) classes, and facescrub[113], a face-recognition dataset. For facescrub we follow [114] removing classes with fewer than 100 images, resulting in \(388\) classes, with each class representing an individual identity. We train using stochastic gradient descent with momentum \(0.9\) and weight decay \(5\times 10^{-4}\), learning rate is set using a grid search over \(\{1,10^{-1},10^{-2},10^{-3}\}\) and contrastive loss weighting, \(\lambda\), over \(\{10,1,10^{-1},10^{-2}\}\), with the best setting selected based on the performance on a validation set of the _training_ domains [27]. \(\tau\) from Equation 2 is set to \(0.15\) in all experiments. We use a batch size of \(256\) (or the nearest multiple of \(|\mathcal{E}_{tr}|\) for the contrastive loss) and train for a maximum of \(200\) epochs, using early stopping on the held out validation set. Each dataset is run over three seeds from which we select one seed to report the most pedagogical results. 
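The two per-neuron invariance scores introduced above can be sketched as follows. This is only one plausible reading of the verbal description: in particular, the normalization that maps the activation spread into \([0,1]\) is a placeholder, and the exact definitions, following Madan et al. [80], are the ones given in Appendix C.

```python
import numpy as np

def elemental_invariance(acts, eps=1e-8):
    """acts: (E, N) mean activations of N penultimate-layer neurons on the E
    elemental corruptions.  Returns one score per neuron in [0, 1]; a high
    score means the elementals activate the neuron similarly.  Normalizing by
    the largest absolute activation is a placeholder choice."""
    spread = acts.max(axis=0) - acts.min(axis=0)
    scale = np.abs(acts).max(axis=0) + eps
    return np.clip(1.0 - spread / scale, 0.0, 1.0)

def composition_invariance(comp_act, elem_acts, eps=1e-8):
    """comp_act: (N,) activations on one composition; elem_acts: (E_c, N)
    activations on the elementals present in that composition.  Scores how
    closely the composition matches its closest elemental."""
    gap = np.abs(elem_acts - comp_act[None, :]).min(axis=0)
    scale = np.abs(elem_acts).max(axis=0) + eps
    return np.clip(1.0 - gap / scale, 0.0, 1.0)

# The reported statistic is the median score over all neurons.
rng = np.random.default_rng(0)
elem_acts = rng.normal(size=(7, 128))
median_elemental_score = np.median(elemental_invariance(elem_acts))
```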
cifar-10 and facescrub images are augmented with random cropping and flipping, ensuring positively paired examples receive exactly the same augmentation. For emnist we use a simple convolutional network with a LeNet-like [115] architecture with modules made up from convolutional layers. For cifar-10 we use ResNet18 [116] without the first max pooling layer, wherever possible using ResNet blocks as modules. For facescrub we use Inception-v3 [117] without the auxiliary classifier. As with ResNet we use additional Inception-v3 layers as modules wherever possible. For full architectural details see Appendix D. ## 4 Results In this section we evaluate the compositional robustness of the different training approaches, first by examining the accuracy of different methods on unseen compositions of corruptions. We additionally explore the relationship between compositional robustness and invariance amongst representations of elemental corruptions. We end on the practical limitations of the approaches we consider in this study. ### Monolithic Approaches Show Limited Compositional Robustness Figure 2 shows the classification accuracies of each of the three approaches for each of the three datasets. The evaluation domains, \(\mathcal{E}_{all}\), are divided into groups depending on how many elemental corruptions are in the composition applied to images in a domain. Across all methods and datasets we see domains with \(1\) corruption achieve very good, near ceiling, performance. This is not surprising as this represents the accuracy on the elemental training domains. A granular view for each of the \(167\) domains for every method can be seen in heat maps in Appendix G. In Figure 2 the blue and orange box plots show the performance of ERM and contrastive training respectively, for which we can observe some general trends. Firstly, accuracy on compositions drops as the number of elemental corruptions in a composition increases, with compositions of \(5\) or \(6\) corruptions rarely performing above chance level. Intuitively, as each additional corruption makes the image harder to recognize (see Figure 1), it makes sense that this pattern emerges. Perhaps more surprisingly, both methods achieve accuracy far above chance for compositions of \(2\) corruptions and perform relatively well for compositions of \(3\) corruptions despite these domains being outside of the training distribution. We also see that the contrastive training approach makes only minor improvements over ERM, with the most improvement for cifar-10. This runs counter to our assumption that encouraging invariance amongst training domains would increase compositional robustness. Finally, we note that neither method optimally solves the task, some compositions of Figure 2: Evaluating compositional robustness on different datasets. Evaluation domains are divided into groups depending on the number of elemental corruptions making up a composition. Different colored boxes (left to right in each triple) show the performance of ERM, contrastive training, and the modular approach. Ceiling accuracy is determined by a model trained and tested on _Identity_ data. corruptions contain only invertible corruptions, yet neither method reaches ceiling performance for any composition of \(2\) corruptions. ### The Modular Approach Achieves the Best Compositional Robustness Comparing all three training approaches, we observe that the modular approach outperforms both ERM and contrastive training, with higher mean performance in almost all cases in Figure 2. 
The only exception is on compositions of \(2\) corruptions for facescrub, where the modular approach is marginally outperformed by contrastive training. These results demonstrate that the monolithic approaches are unable to learn to modularize the structure of the task in the same way as the modular approach, since they do not achieve the same performance levels. Additionally we can observe that explicitly modularizing the modelling of elemental corruptions outperforms the direct encouragement of invariance in terms of compositional robustness. ### In-Distribution Invariance Does Not Correlate With Compositional Robustness To investigate our findings further we examine the invariance scores for the different approaches. We again split test domains by the number of elemental corruptions they include and plot correlations for eminst in Figure 3. Figure 3, top row, plots the elemental invariance score against accuracy on compositions. Interestingly we observe no meaningful correlation between elemental invariance scores and accuracies on compositional test domains, with high p-values and low r-values. This runs counter to our initial expectations based on the ubiquity of invariant representation learning in the domain generalization literature. For our compositional task, these results indicate that encouraging invariance between representations on the training domains may be insufficient to achieve robustness. We even see some points for the modular approach (in the upper left of the plots) that achieve higher accuracy than ERM or contrastive training achieve on any domain yet have lower invariance scores. We also note that contrastive training only slightly increases the observed invariance between elemental corruptions, with a small rightward shift of points when compared to ERM. One possible reason for this smaller than expected increase may be because we set hyper-parameters on the training domains [27] and high contrastive weights take away from in-distribution performance. Alternatively, there has been some discussion on whether the contrastive loss improves performance because of increased invariance or by other mechanisms [40; 56]. Row three of Figure 3 shows strong positive correlations between the composition invariance score and accuracy on compositions. This is as expected, since a high composition invariance score indicates a similar representation between compositions and elemental corruptions (which all achieve good accuracy). However, in row two of Figure 3 we again see limited, or even negative, correlations between elemental and composition invariance scores. This demonstrates that invariance built on elemental training domains may fail to transfer to invariance on compositional test domains, so we cannot consistently improve the composition invariance score by encouraging elemental invariance. By and large these trends are consistent over datasets (Appendix E) and seeds (Appendix F). A notable exception is the negative correlation for the modular approach in row two of Figure 3 is not seen in other datasets. We also observe a positive correlation between elemental invariance score and accuracy for ERM on cifar-10. On cifar-10, the encouraging of invariance with contrastive training builds slightly more invariant representations but then correlation between elemental invariance and accuracy disappears. ### Practical Limitations The aim of this work is to provide greater understanding of the factors that influence compositional robustness in neural networks. 
In particular, it is not our aim to provide an oven-ready method for improving compositional robustness. Nevertheless we now show some additional experiments to briefly highlight some of the practical limitations of the modular approach taken in this study. Firstly, compared to the monolithic approaches, the modular approach has substantially higher variance over seeds. This is primarily due to variance in the selection of the module locations. Figure 4 shows results for the same methods as in Figure 2 trained with a different random seed. Although better in some cases (cifar-10, compositions of \(5\)), we see things can also be substantially worse (facescrub). Whilst module location may have little effect on the in-distribution accuracy, putting modules in the optimum location had a large impact on compositional robustness and should be a focus of future work on modularity. We also evaluate an alternative modular method where every corruption is handled in image space, that is, we train auto-encoders using mean squared error to transform a corrupted image into the corresponding _Identity_ image. To handle compositions we chain together auto-encoders for the relevant elemental corruptions, aiming to sequentially undo the corruptions to arrive at a clean image. We train two possible classifiers to use on the images outputted by this approach; the first minimizes cross entropy loss on clean data (AE-ID) and the second jointly on the outputs of the auto-encoders for all training corruptions (AE-Joint). The compositional robustness of these methods is shown in Figure 4. In general these methods perform relatively poorly on smaller numbers of corruptions (compositions of \(2\) or \(3\)), this is largely because, as with the modular approach, the auto-encoders are sensitive to the ordering in which they are applied on a composition. On the other hand, the Figure 3: Correlating invariance scores with compositional robustness for emnist. Row one shows the level of representational invariance amongst elemental corruptions fails to correlate with compositional robustness (accuracy). Row two shows the lack of dissemination of invariance between elemental corruptions and compositions. Row three plots composition invariance scores against compositional robustness. Columns show subsets of evaluation domains depending on the number of elemental corruptions making up a composition (as in Figure 2). auto-encoders can often outperform all methods for larger numbers of corruptions, indicating that there likely exist methods that can achieve better compositional robustness than the methods we evaluate in this work. Finally, apart from ERM all of the evaluated methods require paired data between domains which is an unrealistic expectation in practical applications. Additionally, for modular approaches we must know which corruptions are applied in a given test domain in order to apply the correct modules. Another interesting angle for future, more practically minded, solutions is to remove these assumptions. ## 5 Discussions We end with several discussions on different interpretations of this work and links to larger questions that may motivate future work. **What is the structure of natural data?** In our compositional robustness framework we see only the elemental factors of variation (elemental corruptions) during training. In reality, whilst it is likely not possible to see every composition, most real-world data will contain an unstructured sampling of the compositional space. 
This assumes however, that it is possible to decompose data from the environment into elemental factors of variation [65; 66; 67; 68] or independent (causal) mechanisms [25; 26; 63]. At present it remains unknown if there exists a practically sized set of elemental transformations from which all visual stimuli can be composed, but if such a set exists, the ideas presented in this work suggest that modular architectures may be able to model this space more efficiently than large monolithic models. **Learning to decompose from data.** If there exists a set of elemental transformations from which all visual stimuli can be composed, and we are to make use of modularity as an inductive bias to model them, we must _learn_ how to decompose datasets into their constituent factors and how to modularize knowledge in the appropriate semantic spaces [21]. In this work we have shown that modular approaches have the potential to surpass previous approaches if the decomposition is available and progress has been made on finding appropriate semantic spaces [106; 107]. The learning of decompositions remains an open problem [26; 64; 70; 109]. **How modular should neural networks be?** The modular approach taken in this study uses neural network layers as modules which are manually assigned to handle specific corruptions, yet we have also experimented with monolithic networks and with using entirely separate networks for each corruption (Section 4.4). Even if we are able to decompose data into constituent factors, there remains a question of what degree of modularity should be used to model these factors. There have been recent exciting empirical studies in this direction [118; 77; 80] but no consensus has yet been reached. **How far will invariance take us?** Our results, and the results of others [53; 54; 55; 56], raise questions about whether encouraging invariance alone is sufficient to achieve domain generalization in general. We know that invariance is a key factor for robust generalization but we do not yet know how far invariance will be able to take us. Perhaps we simply need to better understand and implement the Figure 4: Evaluating the compositional robustness of alternate training approaches and variance over seeds. The first three boxes in each quintuple are the same as in Figure 2 but trained with a different random seed. The remaining boxes (red and purple) show the performance of a modular approach using auto-encoders to undo each elemental corruption. neural mechanisms that allow invariances to build [21; 63; 119; 120], or we may need to further explore learning representations that are only invariant over certain dimensions [56; 121; 122]. ## 6 Conclusion Since the visual space containing all corruptions is compositional in nature, we have introduced a new framework to evaluate the compositional robustness of different models. We have observed that modular approaches outperform monolithic approaches on this task, even when invariant representations are encouraged. For domain generalization tasks with compositional structure our results raise questions about the efficacy of encouraging invariance without further inductive biases. This work represents only a first step in understanding how neural networks behave under compositional structures, further research is needed into developing methods that make fewer assumptions about the information available at test time and that can work with large unstructured datasets where factors of variation are unknown. 
#### Reproducibility statement The code to reproduce the results herein is publicly available at the following GitHub repository: [https://github.com/ianxmason/compositional-robustness](https://github.com/ianxmason/compositional-robustness). The experimental setup is described in Section 3. #### Acknowledgements We would like to thank members of the Sinha Lab for Developmental Research and Fujitsu Research for helpful comments and feedback during the development of this project. In particular, Amir Rahimi, Hojin Jang, Avi Cooper, Ece Ozkan Elsen, Hisanao Akima and Serban Georgescu. This work was supported by Fujitsu Limited (Contract No. 40009568).
2305.10004
Rate-Limited Quantum-to-Classical Optimal Transport in Finite and Continuous-Variable Quantum Systems
We consider the rate-limited quantum-to-classical optimal transport in terms of output-constrained rate-distortion coding for both finite-dimensional and continuous-variable quantum-to-classical systems with limited classical common randomness. The main coding theorem provides a single-letter characterization of the achievable rate region of a lossy quantum measurement source coding for an exact construction of the destination distribution (or the equivalent quantum state) while maintaining a threshold of distortion from the source state according to a generally defined distortion observable. The constraint on the output space fixes the output distribution to an IID predefined probability mass function. Therefore, this problem can also be viewed as information-constrained optimal transport which finds the optimal cost of transporting the source quantum state to the destination classical distribution via a quantum measurement with limited communication rate and common randomness. We develop a coding framework for continuous-variable quantum systems by employing a clipping projection and a dequantization block and using our finite-dimensional coding theorem. Moreover, for the Gaussian quantum systems, we derive an analytical solution for rate-limited Wasserstein distance of order 2, along with a Gaussian optimality theorem, showing that Gaussian measurement optimizes the rate in a system with Gaussian quantum source and Gaussian destination distribution. The results further show that in contrast to the classical Wasserstein distance of Gaussian distributions, which corresponds to an infinite transmission rate, in the Quantum Gaussian measurement system, the optimal transport is achieved with a finite transmission rate due to the inherent noise of the quantum measurement imposed by Heisenberg's uncertainty principle.
Hafez M. Garmaroudi, S. Sandeep Pradhan, Jun Chen
2023-05-17T07:16:20Z
http://arxiv.org/abs/2305.10004v2
# A Coding Theorem for Rate-Limited Quantum-Classical Optimal Transport ###### Abstract We establish a coding theorem for rate-limited quantum-classical optimal transport systems with limited classical common randomness. This theorem characterizes the rate region of measurement protocols on a product source state for faithful construction of a given destination state while maintaining the source-destination distortion below a prescribed threshold with respect to a general distortion observable. It also provides a solution to the problem of rate-limited optimal transport, which aims to find the optimal cost of transforming a source quantum state to a destination state via an entanglement-breaking channel with a limited communication rate. The coding theorem is further extended to cover Bosonic continuous-variable quantum systems. The analytical evaluation is performed for the case of a qubit measurement system with unlimited common randomness. _Keywords: quantum information; rate-distortion; optimal transport; quantum measurement; continuous quantum system_ ## 1 Introduction The lossy source-coding problem in information theory aims to determine the minimum rate needed for compressing a given source so that it can be reconstructed to meet a prescribed distortion constraint. The fundamental tradeoff between the compression rate and the reconstruction distortion is characterized by the _rate-distortion function_[1]. This subject has also drawn attention in the field of quantum information theory. In an early attempt, Barnum [2] provided a lower bound on the rate-distortion function for a quantum channel with entanglement fidelity as the distortion measure. His lower-bound was later proved to be not tight in [3], where authors established the quantum rate-distortion theorems for both entanglement assisted and unassisted systems. The proofs in [3] rely on the reverse Shannon theorem [4], which addresses the problem of simulating a noisy channel with the help of a noiseless channel, or more generally, simulating one noisy channel with another noisy channel. In a seminal paper [5], Winter introduced the notion of information in quantum measurement and established the _measurement compression theorem_, which delineates the required classical rate and common randomness to faithfully simulate a feedback measurement \(\Lambda\) for an input state \(\rho\). In [6], variants of this measurement compression theorem were studied for the case of non-feedback simulation and the case with the presence of quantum side information. Further, in [7], Datta et. al. invoked this measurement compression theorem to give a proof of the quantum-classical rate-distortion theorem. This idea of measurement simulation was further extended to distributed measurement simulation for composite quantum states in [8, 9], where the required classical rates and common randomness to faithfully simulate a bipartite state \(\rho_{AB}\) using distributed measurements are characterized; this distributed measurement compression theorem was then leveraged to establish inner and outer bounds of the rate region for the distributed quantum-classical rate-distortion problem. In the classical setting, Cuff introduced the notion of coordination capacity [10, 11] and the problem of _distributed channel synthesis_[12]. A closely related problem, known as output-constrained lossy source coding [13], has recently found many applications in different areas [14, 15, 16, 17, 18]. 
In contrast to distributed channel synthesis, which attempts to simulate a fixed channel, in output-constrained lossy source coding only the output distribution is fixed, rendering the problem intimately connected to optimal transport. The goal of optimal transport is to map a source probability measure into a destination one with the minimum possible cost [19]. Let \(X\) be a random variable in the source probability space \((\mathcal{X},\mathcal{F}_{\mathcal{X}},P_{X})\), where \(\mathcal{X}\) is the support, \(\mathcal{F}_{\mathcal{X}}\) is the event space defined by the \(\sigma\)-algebra of the Borel sets on \(\mathcal{X}\), and \(P_{X}\) is the probability distribution function. Let \(Y\) be a random variable from the target probability space \((\mathcal{Y},\mathcal{F}_{\mathcal{Y}},P_{Y})\). The optimal transport problem aims at finding an optimal mapping \(f:\mathcal{X}\to\mathcal{Y}\) that minimizes the expectation of the transportation cost \(c(x,y)\) [20]. However, as such deterministic mappings do not exist in many cases, one has to resort to stochastic channels to transform the source distribution to the target distribution. Thus the problem boils down to finding the optimal _coupling_ \(\pi^{*}\) of the marginal distributions \(P_{X}\) and \(P_{Y}\) that minimizes the transportation cost [21]: \[\pi^{*}=\operatorname*{argmin}_{\pi}\int_{\mathcal{X}\times\mathcal{Y}}c(x,y)\,\pi(dx,dy),\] subject to \[\int_{\mathcal{X}}\pi(dx,B_{\mathcal{Y}})=P_{Y}(B_{\mathcal{Y}}),\quad\int_{\mathcal{Y}}\pi(B_{\mathcal{X}},dy)=P_{X}(B_{\mathcal{X}}),\] for any \(B_{\mathcal{X}}\in\mathcal{F}_{\mathcal{X}}\) and \(B_{\mathcal{Y}}\in\mathcal{F}_{\mathcal{Y}}\). In [22], the authors introduced the problem of _information-constrained optimal transport_ by imposing an additional constraint on the coupling \(\pi\) in the form of a threshold on the mutual information between \(X\) and \(Y\), and established an upper bound on the information-constrained Wasserstein distance by generalizing Talagrand's transportation cost inequality. It is worth noting that the information-cost function in [22] is equivalent to the rate-distortion function of output-constrained lossy source coding with unlimited common randomness [13]. The quantum version of optimal transport has also been investigated in recent years [23, 24, 25, 26, 27]. In [25], the authors proposed a generalization of the quantum Wasserstein distance of order 2 and proved that it satisfies the triangle inequality. They further showed that the associated quantum optimal transport schemes are in one-to-one correspondence with the quantum channels, and that in the case of quantum thermal Gaussian states, the optimal transport schemes can be realized by quantum Gaussian attenuators/amplifiers. In [27], the quantum Wasserstein distance of order 1, together with the quantum counterparts of some classical inequalities, was introduced. The present paper focuses on rate-limited quantum-classical optimal transport systems. Specifically, we establish a single-letter characterization of the rate region of measurement protocols on a product source state for the construction of a prescribed destination state with the distortion below a given threshold (see Theorem 1), and further extend this result to Bosonic continuous-variable quantum systems via quantum clipping on the continuous source state (see Theorem 4). 
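For finite supports, the unconstrained coupling problem above is a linear program and can be solved directly; the sketch below shows only this classical baseline (no rate or mutual-information constraint, and nothing quantum), with arbitrary illustrative supports, probabilities, and a squared-distance cost.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(px, py, cost):
    """Optimal coupling between pmfs px (length n) and py (length m)
    for an n-by-m cost matrix, solved as a linear program."""
    n, m = cost.shape
    A_eq, b_eq = [], []
    for i in range(n):                    # first marginal:  sum_j pi[i, j] = px[i]
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(px[i])
    for j in range(m):                    # second marginal: sum_i pi[i, j] = py[j]
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(py[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, m)

# Squared-distance cost between two small distributions on the real line.
x = np.array([0.0, 1.0, 2.0]); px = np.array([0.5, 0.3, 0.2])
y = np.array([0.0, 2.0]);      py = np.array([0.6, 0.4])
w2_squared, coupling = discrete_ot(px, py, (x[:, None] - y[None, :]) ** 2)
```

The information-constrained variant of [22] additionally bounds the mutual information of the coupling, which breaks the linearity of the program; the rate-limited quantum-classical problem studied in this paper further replaces the coupling with a quantum measurement followed by classical post-processing.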
Our work enables the generalization of quantum optimal transport to the rate-limited setting as well as the generalization of classical information-constrained optimal transport to the quantum setting. In particular, we provide a detailed analysis of rate-limited quantum optimal transport for the case of qubit source state and entanglement fidelity distortion measure; the minimum transportation cost is explicitly characterized and is shown to be achievable with a finite transmission rate (see Theorem 8 and 9). We would also like to mention some key differences between our source coding theorem and other works. Specifically, in comparison to [7], our system has the additional constraint that the output system must follow a predetermined distribution in exact i.i.d. format. This provides a multi-letter protocol that governs the optimal transportation of a quantum source state to a target distribution, through a rate-limited classical channel with limited common randomness. In contrast to the conventional rate-distortion theorem for which the common randomness provides no performance improvements, in this problem, the common randomness can help reduce the communication rate by providing the extra randomness required to ensure the output has the exact desired i.i.d. distribution. Consider an example where we have a sequence of product quantum Gaussian states. Suppose we want to store these states' information in a classical memory system for later use. Our goal is to prepare quantum states that are as similar as possible to the original source states, with the constraint that these states also need to be product Gaussian states. In this scenario, our theorem can help by first estimating the amount of quantum data lost due to the entanglement-breaking channel in the form of the minimum distortion from the source. It also can calculate the required storage space in the classical memory system. The proof of the discrete coding theorem builds on analytical tools such as Winter's measurement compression theorem [5], the non-feedback measurement simulation [6] and the batch post-processing of [28]. In contrast to their work where a quantum-classical channel is being faithfully simulated in a nearly perfect sense, in this work, we ensure the output is following the exact desired distribution in the perfect i.i.d. format, while maintaining the distortion threshold. Moreover, the analysis of the continuous-variable quantum systems is also one of the key contributions of this paper as the proofs of the discrete theorems do not directly apply to the continuous case as was discussed in Section 3. The rest of the paper is organized as follows. Finite dimensional quantum-classical systems are addressed in Section 2 with the statement of the coding theorem, the proof of the achievability part, the proof of the converse part, and the proof of the cardinality bound given in Sections 2.2, 2.3, 2.4, and 2.5, respectively. We extend the coding theorem to cover infinite dimensional systems in Section 3. Specifically, we introduce the continuity theorems that are needed for generalizing the definitions of measurement systems to continuous Hilbert spaces in Section 3.1, state the coding theorem for continuous Hilbert spaces in Section 3.2, and prove the achievability part in Section 3.3 (the proof of the converse part is the same as that for finite dimensional systems). 
In Section 4, we consider the case of qubit measurement systems with unlimited common randomness, for which a detailed analysis of rate-limited optimal transport is provided. ## 2 Finite Dimensional Quantum Systems The system comprises of an n-letter memoryless source with its product state \(\rho^{\otimes n}\) as the input of an encoder on Alice's side, where \(\rho\) is a density operator defined on a Hilbert space \(\mathcal{H}_{A}\). On Bob's side, we have a reconstruction Hilbert space \(\mathcal{H}_{X}\) with an orthonormal basis indexed by a finite set \(\mathcal{X}\). We also let the quantum state \(R\) denote the reference of the source with the associated Hilbert space \(\mathcal{H}_{R}\) with \(\text{dim}(\mathcal{H}_{R})=\text{dim}(\mathcal{H}_{A})\). ### Distortion Measure The distortion measure between two systems \(R\) and \(X\) is defined in the general form using a distortion observable \(\Delta_{RX}>0\) defined on \(\mathcal{H}_{R}\otimes\mathcal{H}_{X}\) for the single-letter composite state \(\tau^{RX}\), as described in [7]: \[d(\Delta_{RX},\tau^{RX}):=\operatorname{Tr}\bigl{\{}\Delta_{RX}\tau^{RX} \bigr{\}}. \tag{1}\] Then, having an n-letter composite state \(\tau^{R^{n}X^{n}}\), and the distortion observable for each \(i\)-th system defined as \(\Delta_{R,X_{i}}\), the average n-letter distortion is defined as \[d_{n}(\Delta^{(n)},\tau^{R^{n}X^{n}}):=\mathrm{Tr}\Big{\{}\Delta^{(n)}\tau^{R^{ n}X^{n}}\Big{\}}=\frac{1}{n}\sum_{i=1}^{n}\mathrm{Tr}\big{\{}\Delta_{R_{i}X_{i}} \tau^{R_{i}X_{i}}\big{\}}, \tag{2}\] where \(\tau^{R_{i}X_{i}}=\mathrm{Tr}_{[n]\backslash\{\tau^{R^{n}X^{n}}\}}\) is the localized \(i\)-th composite state, and \(\Delta^{(n)}\) is the average n-letter distortion observable defined as \[\Delta^{(n)}:=\frac{1}{n}\sum_{i=1}^{n}\Delta_{R_{i}X_{i}}\otimes I_{RX}^{ \otimes[n]\backslash i}. \tag{3}\] In the case of a discrete quantum-classical system, the composite state has the form \(\tau^{RX}=\sum_{x}P_{X}(x)\rho_{x}\otimes|x\rangle\!\langle x|\), where \(\rho_{x}\) is the post-measurement reference state and \(P_{X}(x)\) is the pmf of outcomes. We further decompose distortion observable as \(\Delta_{RX}=\sum_{t=1}^{\mathcal{T}}\Delta_{R}^{t}\otimes\Delta_{X}^{t}\). Thus, we get \[\mathrm{Tr}\big{\{}\tau^{RX}\Delta_{RX}\big{\}} =\mathrm{Tr}_{R}\left[\mathrm{Tr}_{X}\left[\left(\sum_{x}P_{X}(x) \rho_{x}\otimes|x\rangle\!\langle x|\right)\left(\sum_{t=1}^{\mathcal{T}}\Delta _{R}^{t}\otimes\Delta_{X}^{t}\right)\right]\right]\] \[=\sum_{x}P_{X}(x)\,\mathrm{Tr}_{R}\left\{\rho_{x}\Delta_{R}(x) \right\}=\mathbb{E}_{X}\left[\mathrm{Tr}_{R}\left\{\rho_{x}\Delta_{R}(X) \right\}\right], \tag{4}\] where \(\{\Delta_{R}(x):x\in\mathcal{X}\}\) is a mapping from the outcome space \(\mathcal{X}\) to operators in Hilbert space \(\mathcal{H}_{R}\): \[\Delta_{R}(x):=\sum_{t=1}^{\mathcal{T}}\Delta_{R}^{t}\left\langle x|\Delta_{X} ^{t}|x\right\rangle.\] Next, for the continuous quantum-classical system, by defining \(\{\Delta_{R}(x)\}_{x\in\mathbb{R}}\) in the general form, the distortion measure for the continuous q-c system can be formulated as \[\int_{x\in\mathbb{R}}\mathrm{Tr}_{R}\left[\rho_{x}\,\Delta_{R}(x)\right]\mu( dx), \tag{5}\] with \(\mu(.)\) the probability measure of the outcome space. Note that in order to prove the achievability of the output i.i.d. in perfect realism, we strict the distortion observables to be uniformly integrable according to the following definition. 
**Definition 2.1**.: _Consider a quantum-classical system with a distortion observable \(\Delta_{RX}\) with operator mapping \(x\rightarrow\Delta_{R}(x),x\in\mathcal{X}\), an input quantum state \(\rho\) forming \((\Delta_{RX},\rho)\). The pair is called uniformly integrable if for any \(\epsilon>0\) there exists a \(\delta>0\) such that_ \[\sup_{\Pi}\sup_{M}\mathbb{E}_{X}\left[\mathrm{Tr}_{R}\left\{\Pi_{X}\rho_{X}^{ R}\Pi_{X}\Delta_{R}(X)\right\}\right]\leq\epsilon, \tag{6}\] _where the supremum is over all POVMs \(M\) with the outputs in \(\mathcal{X}\) and all projectors of the form \(\Pi=\sum_{x}\Pi_{x}\otimes|x\rangle\!\langle x|\) such that \(\mathbb{E}_{X}\left[\mathrm{Tr}(\rho_{X}\Pi_{X})\right]\leq\delta\), and \(\rho_{x}^{R}\) is the post-measurement reference state of \(\rho\) given the outcome \(x\) with respect to \(M\)._ Furthermore, in the case of continuous quantum systems, we assume that the distortion observable also satisfies the following condition: the operator mapping \(x\mapsto\Delta_{R}(x)\) is uniformly continuous with respect to the trace norm. ### Main Results: Achievable Rate Region for Discrete States The system is comprised of an n-letter source coding scheme defined below. **Definition 2.2**.: _(**Discrete Source Coding Scheme**) An \((n,R,R_{c})\) source-coding scheme for this quantum-classical system is comprised of an encoder \(\mathcal{E}_{n}\) on Alice's side and a decoder \(\mathcal{D}_{n}\) on Bob's side, with the following elements. The encoder is a set of \(|\mathcal{M}|=2^{nR_{c}}\) collective n-letter measurement POVMs \(\Upsilon^{(m)}\equiv\{\Upsilon_{l}^{(m)}\}_{l\in\mathcal{L}}\), each comprised of \(|\mathcal{L}|=2^{nR}\) POVM operators corresponding to \(|\mathcal{L}|\) outcomes and the randomly selected shared (with Bob) common randomness value \(m\) determines which POVM will be applied to the source state. Bob receives the outcome \(L\) of the measurement through a classical channel and applies a randomized decoder to this input pair \((L,M)\) to obtain the final sequence \(X^{n}\) stored in a quantum register. Thus, the composite state of the reference and output induced by this coding scheme is_ \[\tau_{\text{ind}}^{R^{n}X^{n}}=\sum_{x^{n}}\sum_{m,l}\frac{1}{| \mathcal{M}|}\operatorname{Tr}_{A^{n}}\Big{\{}(\text{id}\otimes\Upsilon_{l} ^{(m)})[\psi_{RA}^{\rho}]^{\otimes n})\Big{\}}\otimes\mathcal{D}_{n}(x^{n}|l, m)\,|x^{n}\rangle\,\langle x^{n}|\,. \tag{7}\] We define the average n-letter distortion for the source coding system with encoder/ decoder pair \(\mathcal{E}_{n}\) and \(\mathcal{D}_{n}\), distortion observable \(\Delta_{RX}\) and source product state \(\rho^{\otimes n}\) as \[d_{n}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})= \operatorname{Tr}\!\Big{\{}\Delta^{(n)}(\text{id}_{R^{n}}\otimes\mathcal{D}_{ n}\circ\mathcal{E}_{n})(\psi_{RA}^{\rho})^{\otimes n}\Big{\}}, \tag{8}\] where \(\psi_{RA}^{\rho}\) is a purification of the source state \(\rho\). Also, note that (8) can also be written in the form of average over localized distortions as in (2). The goal is to prepare the destination quantum ensemble on Bob's side while maintaining the distortion limit from the input reference state. Consequently, the following definition of achievability is used throughout this paper. **Definition 2.3**.: _(**Achievable pair**) A desired PMF \(Q_{X}\) on the output space \(\mathcal{X}\) and a maximum tolerable distortion level \(D\) are given. 
Assuming a product input state of \(\rho^{\otimes n}\), a rate pair \((R,R_{c})\) is defined achievable if for any sufficiently large \(n\) and any positive value \(\epsilon>0\), there exists an \((n,R,R_{c})\) coding scheme comprising of a measurement encoder \(\mathcal{E}_{n}\) and a decoder \(\mathcal{D}_{n}\) that satisfy:_ \[X^{n}\sim Q_{X}^{n},\quad d_{n}(\rho^{\otimes n},\mathcal{D}_{n }\circ\mathcal{E}_{n})\leq D+\epsilon. \tag{9}\] The expression (9) indicates that the output sequence must be i.i.d with fixed distribution \(Q_{X}\) and that the n-letter distortion between the input state and output state must be asymptotically less than a threshold \(D\). Then using the above definition of achievable pair, we further define the achievable rate region as: **Definition 2.4**.: _(**Achievable Rate Region**) Given the output PMF \(Q_{X}\), the input state \(\rho\) and a distortion threshold \(D\), the achievable rate region \(\mathcal{R}(D,\rho||Q_{X})\) is defined as the closure of all achievable rate pairs with respect to the given \(\rho\), \(Q_{X}\) and \(D\)._ We are specifically interested in finding the value of the minimum achievable rate as a function of the distortion level for any fixed rate of common randomness, which we define as follows: **Definition 2.5**.: _(**Output-Constrained Rate-Distortion Function**) For any coding system with the achievable rate region \(\mathcal{R}(D,\rho||Q_{X})\), and given a fixed common randomness rate \(R_{c}\), the output-constrained rate-distortion function is defined as_ \[R(D;R_{c},\rho||Q_{X})\equiv\inf\left\{R:(R,R_{c})\in\mathcal{R} (D,\rho||Q_{X})\right\}. \tag{10}\] _The inverse of this function which for any fixed \(R_{c}\), is a mapping from the communication rates to their corresponding minimum transportation cost, is called the Rate-Limited Optimal Transport Cost function and expressed by \(D(R;R_{c},\rho||Q_{X})\)._ Based on the above definitions, we establish the main theorem which provides the single-letter characterization of the achievable rate region as follows: **Theorem 1**.: _Given the distortion threshold \(D\), the output PMF \(Q_{X}\) and having a product input state \(\rho\), a rate pair \((R,R_{c})\) is inside the rate region \(\mathcal{R}(D,\rho||Q_{X})\), if and only if there exists an intermediate state \(W\) with a corresponding measurement POVM \(M_{w}^{A}\) and randomized post-processing transformation \(P_{X|W}\) which satisfies_ \[R \geq I(W;R)_{\tau}, \tag{11}\] \[R+R_{c} \geq I(W;X)_{\tau}, \tag{12}\] _where \(W\), with a Hilbert space \(\mathcal{H}_{W}\) along with an orthonormal basis indexed by a finite set \(\mathcal{W}\), constructs a quantum Markov chain \(R-W-X\) with the overall post-measured composite state_ \[\tau^{RWX}=\sum_{w,x}P_{X|W}(x|w)\sqrt{\rho}M_{w}\sqrt{\rho}\otimes\left|w \right\rangle\!\!\left\langle w\right|^{W}\otimes\left|x\right\rangle\!\! \left\langle x\right|^{X},\] _from the set_ \[\mathcal{M}(\mathcal{D})=\left\{\begin{array}{l}\tau^{RWX}\left|\begin{array} []{l}\sum_{w}P_{X|W}(x|w)\operatorname{Tr}\!\left\{M_{w}^{A}\rho\right\}=Q_{ X}(x)\quad\text{for }x\in\mathcal{X}\\ \mathbb{E}_{X}\left[\operatorname{Tr}\!\left\{\Delta_{RX}\tau^{RX}\right\} \right]\leq D\\ |\mathcal{W}|\leq(\dim\mathcal{H}_{A})^{2}+|\mathcal{X}|+1\end{array}\right. \end{array}\right\}. \tag{13}\] The overall state can also be formulated in terms of the post-measured states as \[\tau^{RWX}=\sum_{w,x}\hat{\rho}_{w}\otimes P_{W}(w)\left|w\right\rangle\!\! 
\left\langle w\right|^{W}\otimes P_{X|W}(x|w)\left|x\right\rangle\!\!\left\langle x\right|^{X},\] where \(\hat{\rho}_{w}:=\frac{1}{P_{W}(w)}\sqrt{\rho}M_{w}\sqrt{\rho}\), and \(P_{W}(w)=\operatorname{Tr}\!\left\{M_{w}\rho\right\}\), are the conditional post-measurement reference state given the outcome \(w\), and the probability of the outcome \(w\), respectively.

### Proof of Achievability for Finite Systems

In this section, we prove the achievability of Theorem 1, given that the pair \((Q_{X},D)\) is provided by the setting of the theorem.

#### 2.3.1 Codebook and Encoder Construction

We follow the method of POVM construction developed by Winter in [5]. Using the single-letter measurement \(M_{w}\), we generate the codebook in the intermediate space \(\mathcal{W}\). The probability that the measurement \(M\equiv\{M_{w}\}_{w\in\mathcal{W}}\) results in the outcome \(w\) is \(P_{W}(w)=\operatorname{Tr}\!\left\{M_{w}\rho\right\}\). A sequence of \(n\) independent outcomes \(W^{n}\) then has the i.i.d. distribution \(P_{W}^{n}(w^{n})\). The pruned distribution is defined by only selecting \(w^{n}\) from the typical set of \(W\), \[\bar{P}_{W^{n}}(w^{n})=\begin{cases}P_{W}^{n}(w^{n})/(1-\epsilon)&\text{if }w^{n}\in\mathcal{T}_{W}^{n,\delta}\\ 0&\text{o.w.}\end{cases}, \tag{14}\] where \(\epsilon:=\Pr\Bigl{\{}W^{n}\notin\mathcal{T}_{W}^{n,\delta}\Bigr{\}}\). Consequently, a total of \(|\mathcal{M}|\times|\mathcal{L}|\) random codewords \(w^{n}\) are generated from the pruned distribution \(\bar{P}_{W^{n}}\) and indexed with the \((m,l)\) pair, comprising a random codebook. We then repeat this process to generate \(|\mathcal{K}|\) codebook realizations. The codewords in each codebook are indexed as \(w^{n}(m,k,l)\). The random variable \(K\) is introduced as additional randomness for analytical purposes and will be de-randomized at the end. Also, for each sequence \(w^{n}\), the following typically projected post-measurement reference operators are defined \[\hat{\rho}^{\prime}_{w^{n}}:=\Pi^{n}_{\rho,\delta}\;\Pi^{n}_{\hat{\rho}_{w^{n}},\delta}\;\hat{\rho}_{w^{n}}\;\Pi^{n}_{\hat{\rho}_{w^{n}},\delta}\;\Pi^{n}_{\rho,\delta},\] where \(\Pi^{n}_{\rho,\delta}\) and \(\Pi^{n}_{\hat{\rho}_{w^{n}},\delta}\) are the typical set and conditional typical set projectors, respectively [6]. We are also interested in the expectation of the above operators, which we define as \[\hat{\rho}^{\prime n}:=\mathbb{E}_{W^{n}}\left[\hat{\rho}^{\prime}_{W^{n}}\right]=\sum_{w^{n}\in\mathcal{T}_{W}^{n,\delta}}\bar{P}_{W^{n}}(w^{n})\hat{\rho}^{\prime}_{w^{n}}.\] Further define a cut-off projector \(\Pi\), which projects to the subspace spanned by the eigenstates of \(\hat{\rho}^{\prime n}\) with eigenvalues larger than \(\epsilon\alpha\), where \(\alpha:=2^{-n[H(R)+\delta]}\). Then the cut-off version of the operators and the expected cut-off operator are given by \[\hat{\rho}^{\prime\prime}_{w^{n}}:=\Pi\;\hat{\rho}^{\prime}_{w^{n}}\;\Pi,\quad\hat{\rho}^{\prime\prime n}:=\Pi\;\hat{\rho}^{\prime n}\;\Pi. \tag{15}\] Consequently, similar to [6], for each \((k,m)\) we define the POVM as \[\Upsilon^{(k,m)}_{l}:=\frac{1-\epsilon}{1+\eta}\frac{1}{|\mathcal{L}|}\omega^{-1/2}\hat{\rho}^{\prime\prime}_{w^{n}(m,k,l)}\omega^{-1/2}, \tag{16}\] with \(\omega:=\rho^{\otimes n}\), and \(\eta\in(0,1)\) being a parameter to be determined.
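As a small illustration of this construction, the following Python sketch (a toy example with an assumed outcome alphabet, block length, and typicality tolerance, not taken from the paper) prunes the i.i.d. outcome distribution to its typical set as in (14) and samples a random codebook from the pruned law.

```python
import numpy as np
from itertools import product

# Toy illustration (assumed parameters): prune the i.i.d. outcome distribution
# P_W^n to its weakly typical set, as in Eq. (14), then sample a random codebook.
P_W = np.array([0.7, 0.3])            # single-letter pmf P_W(w) = Tr{M_w rho}
n, delta = 8, 0.15                    # block length and typicality tolerance
H_W = -np.sum(P_W * np.log2(P_W))     # single-letter entropy H(W)

sequences = np.array(list(product(range(len(P_W)), repeat=n)))
probs = np.prod(P_W[sequences], axis=1)                  # P_W^n(w^n)

# weakly typical set: | -(1/n) log2 P_W^n(w^n) - H(W) | <= delta
typical = np.abs(-np.log2(probs) / n - H_W) <= delta
eps = 1.0 - probs[typical].sum()                         # Pr{W^n not typical}
pruned = np.where(typical, probs, 0.0) / (1.0 - eps)     # pruned law of Eq. (14)

rng = np.random.default_rng(0)
codebook_size = 32                                       # stands in for |M| x |L|
idx = rng.choice(len(sequences), size=codebook_size, p=pruned)
codebook = sequences[idx]                                # rows are codewords w^n(m, l)
print(codebook[:4])
```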
One can alternatively define the POVM such that it directly outputs \(W^{n}\): \[\Gamma^{(m,k)}_{w^{n}}:=\sum_{l}\mathds{1}\{W^{n}(m,k,l)=w^{n}\}\Upsilon^{(k,m)}_{l}=\gamma^{(m,k)}_{w^{n}}\;\omega^{-1/2}\;\hat{\rho}^{\prime\prime}_{w^{n}}\;\omega^{-1/2}, \tag{17}\] where \[\gamma^{(k,m)}_{w^{n}}:=\frac{1}{|\mathcal{L}|}\sum_{l=1}^{|\mathcal{L}|}\frac{1-\epsilon}{1+\eta}\mathds{1}\left\{W^{n}(m,k,l)=w^{n}\right\}. \tag{18}\] Then the measurement operator for each \(m\in\mathcal{M}\) and \(k\in\mathcal{K}\) is \[\tilde{M}^{(m,k),n}_{\Gamma}=\{\Gamma^{(m,k)}_{w^{n}}:w^{n}\in\mathcal{T}_{W}^{n,\delta}\}.\] Examining each codebook \(k\in\mathcal{K}\) generated randomly from the pruned distribution \(\bar{P}_{W^{n}}(w^{n})\), and using the operator Chernoff bound similar to [6], we claim that as long as \(|\mathcal{L}|>2^{nI(W;R)}\), the following events \(E_{m,k}\) occur with probability close to one for an arbitrary value \(\eta\in(0,1)\): \[E_{m,k}:\quad\frac{1}{|\mathcal{L}|}\sum_{l}\hat{\rho}^{\prime\prime}_{w^{n}(m,k,l)}\in[(1\pm\eta)\hat{\rho}^{\prime\prime n}]\quad\forall m\in\mathcal{M}. \tag{19}\] Then, following the analysis in [6] on the validity of the POVM, we claim that the set \(\tilde{M}^{(m,k),n}_{\Gamma}\) forms a sub-POVM, i.e., \[\sum_{w^{n}\in\mathcal{T}_{W}^{n,\delta}}\Gamma_{w^{n}}^{(m,k)}\leq I\quad\forall m\in\mathcal{M},\ k\in\mathcal{K}.\] Thus, we can complete the sub-POVM by appending the following extra operator \[\Gamma_{w_{0}^{n}}^{(m,k)}:=I-\sum_{w^{n}}\Gamma_{w^{n}}^{(m,k)}.\] We define the new set of operators as \([\tilde{M}_{\Gamma}^{(m,k),n}]\), which is a valid POVM. Having the above construction, we create the intermediate POVM by randomly picking one of the \(m\in\mathcal{M}\) POVMs according to the common randomness. This way the intermediate POVM is expressed by \[\tilde{\Lambda}_{w^{n}}^{(k)^{A}}:=\frac{1}{|\mathcal{M}|}\sum_{m=1}^{|\mathcal{M}|}\Gamma_{w^{n}}^{(m,k)},\qquad\forall k\in\mathcal{K}.\] The decoder consists of applying the \(P_{X|W}\) classical memoryless channel to each element of \(w^{n}\) as \(P_{X|W}^{n}\). Therefore, the form of the encoder/decoder is \[\tilde{\tilde{\Lambda}}_{x^{n}}^{(k)}\equiv\sum_{w^{n}\in\mathcal{W}^{n}}P_{X|W}^{n}(x^{n}|w^{n})\tilde{\Lambda}_{w^{n}}^{(k)^{A}},\quad\forall x^{n}\in\mathcal{X}^{n}. \tag{20}\] It should be noted that this is not the final decoder. We will later modify this decoder to yield a non-product batch decoder in Section 2.3.5, which is required to ensure an i.i.d. output distribution. Using the above POVMs one can write the induced composite state of the reference and output for each random codebook realization \(k\in\mathcal{K}\) as \[\tau_{\text{ind},k}^{R^{n}X^{n}}=\sum_{x^{n}}\operatorname{Tr}_{A^{n}}\left\{(\text{id}_{R}^{\otimes n}\otimes\tilde{\tilde{\Lambda}}_{x^{n}}^{(k)})(\psi_{RA})^{\otimes n}\right\}\otimes|x^{n}\rangle\!\langle x^{n}|\,. \tag{21}\]

#### 2.3.2 Proof of Near I.I.D. Output Distribution

It turns out that the proof of the near i.i.d. output distribution does not depend on the codebook index \(k\in\mathcal{K}\). Therefore, we hereby remove the index \(k\) from all expressions of this subsection, which means the following formulations apply to any fixed \(k\in\mathcal{K}\). By tracing over the reference state in (21) we write the output state \[\sigma_{\text{ind}}^{X^{n}}=\sum_{x^{n}}\operatorname{Tr}\left\{(\text{id}_{R}^{\otimes n}\otimes\tilde{\tilde{\Lambda}}_{x^{n}})(\psi_{\rho}^{RA})^{\otimes n}\right\}|x^{n}\rangle\!\langle x^{n}|\,.
\tag{22}\] Also, from the conditions of the \(\mathcal{M}(\mathcal{D})\) feasible set (13), the desired output tensor state has the following form, \[\left(\sigma_{\text{des}}^{X}\right)^{\otimes n}=\sum_{x^{n}}Q_{X}^{n}(x^{n})\,|x^{n}\rangle\!\langle x^{n}|=\sum_{x^{n}}\prod_{i=1}^{n}\left(\sum_{w}P_{X|W}(x_{i}|w)\operatorname{Tr}\!\left\{M_{w}^{A}\rho\right\}\right)|x^{n}\rangle\!\langle x^{n}|\,. \tag{23}\] Consequently, the trace distance between the induced output state and the desired product output state can be written as \[\left\|\left(\sigma_{\text{des}}^{X}\right)^{\otimes n}-\sigma_{\text{ind}}^{X^{n}}\right\|_{1}=\sum_{x^{n}}\left|\prod_{i=1}^{n}\left(\sum_{w}P_{X|W}(x_{i}|w)\operatorname{Tr}\{M_{w}^{A}\rho\}\right)-\operatorname{Tr}\left\{(\operatorname{id}_{R}\otimes\tilde{\tilde{\Lambda}}_{x^{n}})(\psi_{\rho}^{RA})^{\otimes n}\right\}\right|.\] By substituting (20) into the above expression and using the post-measurement reference state, the distance further equals \[=\sum_{x^{n}}\left|\sum_{w^{n}}P_{X|W}^{n}(x^{n}|w^{n})P_{W}^{n}(w^{n})-\frac{1}{|\mathcal{M}|}\sum_{w^{n}}P_{X|W}^{n}(x^{n}|w^{n})\operatorname{Tr}\{\left(\sum_{m=1}^{|\mathcal{M}|}\Gamma_{w^{n}}^{(m)}\right)\omega\}\right|:=S.\] Then we split and bound the above term by \(S\leq S_{1}+S_{2}\), where we separate the extra operator from the rest of the POVM \[S_{1} \triangleq\sum_{x^{n}}\left|\sum_{w^{n}}P_{X|W}^{n}(x^{n}|w^{n})P_{W}^{n}(w^{n})-\sum_{w^{n}\neq w_{0}^{n}}P_{X|W}^{n}(x^{n}|w^{n})\operatorname{Tr}\{\left(\frac{1}{|\mathcal{M}|}\sum_{m=1}^{|\mathcal{M}|}\Gamma_{w^{n}}^{(m)}\right)\omega\}\right|, \tag{24}\] \[S_{2} \triangleq\sum_{x^{n}}\Bigg{|}P_{X|W}^{n}(x^{n}|w_{0}^{n})\operatorname{Tr}\{\frac{1}{|\mathcal{M}|}\sum_{m=1}^{|\mathcal{M}|}\left(I-\sum_{w^{n}\neq w_{0}^{n}}\Gamma_{w^{n}}^{(m)}\right)\omega\}\Bigg{|}. \tag{25}\] We further simplify \(S_{1}\) by substituting (17) into (24) and bound it as \(S_{1}\leq S_{11}+S_{12}\), adding and subtracting a suitable term and applying the triangle inequality: \[S_{11} \triangleq\sum_{x^{n}}\Bigg{|}\sum_{w^{n}}P_{X|W}^{n}(x^{n}|w^{n})P_{W}^{n}(w^{n})-\frac{1}{|\mathcal{M}||\mathcal{L}|}\frac{1-\epsilon}{1+\eta}\sum_{m,l}P_{X|W}^{n}(x^{n}|W^{n}(l,m))\Bigg{|}, \tag{26}\] \[S_{12} \triangleq\frac{1}{|\mathcal{M}||\mathcal{L}|}\frac{1-\epsilon}{1+\eta}\sum_{x^{n}}\left|\sum_{m,l}P_{X|W}^{n}(x^{n}|W^{n}(l,m))\left(1-\operatorname{Tr}\{\hat{\rho}_{W^{n}(l,m)}^{\prime\prime}\}\right)\right|\] \[=\frac{1}{|\mathcal{M}||\mathcal{L}|}\frac{1-\epsilon}{1+\eta}\sum_{x^{n}}\sum_{m,l}P_{X|W}^{n}(x^{n}|W^{n}(l,m))\left(1-\operatorname{Tr}\{\hat{\rho}_{W^{n}(l,m)}^{\prime\prime}\}\right)\] \[=\frac{1}{|\mathcal{M}||\mathcal{L}|}\frac{1-\epsilon}{1+\eta}\sum_{m,l}\left(1-\operatorname{Tr}\{\hat{\rho}_{W^{n}(l,m)}^{\prime\prime}\}\right). \tag{27}\] For \(S_{11}\), using the classical soft-covering lemma [Lemma 2 [12]] with the condition that \(R+R_{c}>I(X;W)\), one can provide a decaying upper bound for its expectation as \[\mathbb{E}\left[S_{11}\right]\leq\frac{3}{2}\exp\{-tn\}, \tag{28}\] for some \(t>0\). Also, by taking the expectation of \(S_{12}\) we have \[\mathbb{E}\left[S_{12}\right]=\frac{1-\epsilon}{1+\eta}(1-\operatorname{Tr}\{\hat{\rho}^{\prime\prime n}\})\leq\frac{1-\epsilon}{1+\eta}(2\epsilon+2\sqrt{\epsilon})\triangleq\epsilon_{2}, \tag{29}\] where the equality follows from (15) and the inequality appeals to the properties of the typical set and the Gentle Measurement Lemma [6, 29, 30].
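The role of the classical soft-covering lemma in the bound on \(S_{11}\) can also be seen numerically. The sketch below (with assumed toy alphabets, channel, and codebook rates, none of which come from the paper) averages \(P^{n}_{X|W}\) over a random codebook and estimates the total variation distance to the product target \(Q^{n}_{X}\); once the codebook rate exceeds \(I(X;W)\), the distance shrinks with the block length.

```python
import numpy as np
from itertools import product

# Toy soft-covering check (assumed parameters): average P_{X|W}^n over a random
# codebook and measure the total variation distance to the i.i.d. target Q_X^n.
rng = np.random.default_rng(1)
P_W = np.array([0.5, 0.5])
P_X_given_W = np.array([[0.9, 0.1],     # row w=0: P_{X|W}(x|0)
                        [0.2, 0.8]])    # row w=1: P_{X|W}(x|1)
Q_X = P_W @ P_X_given_W                 # target single-letter output pmf

def tv_to_target(n, codebook_size):
    xs = np.array(list(product(range(2), repeat=n)))          # all sequences x^n
    codebook = rng.choice(2, size=(codebook_size, n), p=P_W)   # random w^n codewords
    induced = np.zeros(len(xs))
    for w in codebook:                                         # (1/|C|) sum_w P^n(x^n|w^n)
        induced += np.prod(P_X_given_W[w, xs], axis=1)
    induced /= codebook_size
    desired = np.prod(Q_X[xs], axis=1)                         # Q_X^n(x^n)
    return 0.5 * np.abs(induced - desired).sum()

for n, size in [(4, 2**6), (6, 2**9), (8, 2**12)]:
    print(f"n={n:2d}  |codebook|={size:5d}  TV distance ~ {tv_to_target(n, size):.4f}")
```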
Next, we bound and simplify the expectation of \(S_{2}\) by substituting (17) into (25): \[\mathbb{E}\left[S_{2}\right] \leq\mathbb{E}\left[\frac{1}{|\mathcal{M}|}\sum_{m}\sum_{x^{n}}P_{X|W}^{n}(x^{n}|w_{0}^{n})\Bigg{|}\text{Tr}\Bigg{\{}\omega-\sum_{w^{n}\neq w_{0}^{n}}\gamma_{w^{n}}^{(m)}\hat{\rho}_{w^{n}}^{\prime\prime}\Bigg{\}}\Bigg{|}\right]\] \[=\frac{1}{|\mathcal{M}|}\sum_{m}\mathbb{E}\left[\left|1-\text{Tr}\Bigg{\{}\sum_{w^{n}\neq w_{0}^{n}}\gamma_{w^{n}}^{(m)}\hat{\rho}_{w^{n}}^{\prime\prime}\Bigg{\}}\right|\right]\] \[\overset{a}{=}1-\frac{1}{|\mathcal{M}|}\sum_{m}\text{Tr}\Bigg{\{}\sum_{w^{n}\neq w_{0}^{n}}\mathbb{E}\left[\gamma_{w^{n}}^{(m)}\right]\hat{\rho}_{w^{n}}^{\prime\prime}\Bigg{\}}\] \[\overset{b}{\leq}1-\frac{1-\epsilon}{1+\eta}(1-2\epsilon-2\sqrt{\epsilon})=\frac{\eta+\epsilon}{1+\eta}+\epsilon_{2}\triangleq\epsilon_{3}, \tag{30}\] where in (a) we remove the absolute value sign because the trace is always less than or equal to one, and (b) uses the result from [6]. Hence, combining (30), (29), and (28), we have shown that the expected distance between the output state induced by the random codebook and the product single-letter state is arbitrarily small for sufficiently large n: \[\mathbb{E}\left[\left\|(\sigma_{\text{des}}^{X})^{\otimes n}-\sigma_{\text{ind}}^{X^{n}}\right\|_{1}\right]\leq\epsilon_{2}+\epsilon_{3}+c\exp\{-tn\}\triangleq\epsilon_{os}. \tag{31}\]

#### 2.3.3 Proof of Distortion Constraint

The average distortion for a codebook \(k\in\mathcal{K}\) is given by \[d_{n}{}^{\{k\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})=\text{Tr}\Big{\{}\Delta^{(n)}\tau_{\text{ind},k}^{R^{n}X^{n}}\Big{\}}\] \[=\text{Tr}\Bigg{\{}\Delta^{(n)}\bigg{(}\sum_{x^{n}}\text{Tr}_{A^{n}}\left\{(\text{id}_{R}^{\otimes n}\otimes\tilde{\tilde{\Lambda}}_{x^{n}}^{(k)})(\psi_{RA})^{\otimes n}\right\}\otimes|x^{n}\rangle\!\langle x^{n}|\,\bigg{)}\Bigg{\}}\] \[=\text{Tr}\left\{\Delta^{(n)}\bigg{(}\sum_{m,l}\frac{1}{|\mathcal{M}|}\text{Tr}_{A^{n}}\left\{(\text{id}_{R}^{\otimes n}\otimes\Upsilon_{l}^{(k,m)})(\psi_{RA})^{\otimes n}\right\}\otimes\sigma_{w^{n}(m,k,l)}\bigg{)}\right\}, \tag{32}\] where \(\sigma_{w^{n}(m,k,l)}:=\sum_{x^{n}}P_{X|W}^{n}(x^{n}|w^{n}(m,k,l))\,|x^{n}\rangle\!\langle x^{n}|\) is the classical output state induced by the codeword \(w^{n}(m,k,l)\). Recall from Section 2.3.2 that in order to have a faithful near i.i.d. output state, we need to satisfy the condition of the soft-covering lemma, \(|\mathcal{M}||\mathcal{L}|>2^{nI(X;W)}\), which is needed for (28). On the other hand, according to the non-feedback measurement compression theorem [6], we need a sum rate of at least \(I(XR;W)\) to have a faithful measurement simulation. Thus, by setting \(|\mathcal{K}|>2^{n(I(XR;W)-I(X;W))}\), we define the inter-codebook average state as \[\tau_{\text{avg}}^{n}\equiv\sum_{k,m,l}\frac{1}{|\mathcal{K}||\mathcal{M}|}\operatorname{Tr}_{A^{n}}\Big{\{}\left(\text{id}_{R}\otimes\Upsilon_{l}^{(k,m)}\right)(\psi_{\rho}^{RA})^{\otimes n}\Big{\}}\otimes\sigma_{w^{n}(m,k,l)}. \tag{33}\] Consequently, according to the non-feedback measurement compression theorem [6], this inter-codebook average state is a faithful simulation of the ideal product measurement system, i.e., for any \(\epsilon_{mc}>0\) and for all sufficiently large \(n\), \[\mathbb{E}_{c}\left[\left\|\tau^{\otimes n}-\tau_{\text{avg}}^{n}\right\|_{1}\right]\leq\epsilon_{mc}.
\tag{34}\] Then, we bound the expected average distortion as follows: \[\mathbb{E}_{K}\left[d_{n}{}^{\{K\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right]=\frac{1}{|\mathcal{K}|}\sum_{k}d_{n}{}^{\{k\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\] \[=\mathrm{Tr}\Bigg{\{}\Delta^{(n)}\left(\sum_{k,m,l}\frac{1}{|\mathcal{K}||\mathcal{M}|}\,\mathrm{Tr}_{A^{n}}\left\{\left(\text{id}_{R}^{\otimes n}\otimes\Upsilon_{l}^{(k,m)}\right)(\psi_{\rho}^{RA})^{\otimes n}\right\}\otimes\sigma_{w^{n}(m,k,l)}\right)\Bigg{\}}\] \[=d_{\max}\,\mathrm{Tr}\bigg{\{}\frac{\Delta^{(n)}}{d_{\max}}(\tau_{\mathrm{avg}}^{(n)}-\tau^{\otimes n})\bigg{\}}+\mathrm{Tr}\Big{\{}\Delta^{(n)}\tau^{\otimes n}\Big{\}}\] \[\leq d_{\max}\Big{\|}\tau_{\mathrm{avg}}^{(n)}-\tau^{\otimes n}\Big{\|}_{1}+\mathrm{Tr}\Big{\{}\Delta^{(n)}\tau^{\otimes n}\Big{\}}\] \[\leq d_{\max}\Big{\|}\tau_{\mathrm{avg}}^{(n)}-\tau^{\otimes n}\Big{\|}_{1}+D, \tag{35}\] where \(d_{\max}\) is the largest eigenvalue of the distortion observable. The first inequality holds by the definition of the trace distance and the fact that \(0\leq\frac{\Delta^{(n)}}{d_{\max}}\leq I\). The second inequality holds because the average distortion of \(n\) identical copies of the single-letter system equals the single-letter distortion, which is at most \(D\) by the constraint in (13). Next, we take the expectation of both sides with respect to all possible codebook realizations. Thus, for all sufficiently large \(n\), \[\mathbb{E}_{c}\left[\mathbb{E}_{K}\left[d_{n}{}^{\{K\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right]\right]\leq d_{\max}\mathbb{E}_{c}\left[\left\|\tau_{\mathrm{avg}}^{(n)}-\tau^{\otimes n}\right\|_{1}\right]+D\leq d_{\max}\epsilon_{mc}+D,\] where for the first inequality we take the expectation of (35) and the second inequality follows from (34). Further, the LHS of the above inequality can be rewritten as follows by changing the order of expectations, \[\mathbb{E}_{c}\left[\mathbb{E}_{K}\left[d_{n}{}^{\{K\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right]\right]=\mathbb{E}_{K}\left[\mathbb{E}_{c}\left[d_{n}{}^{\{K\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right]\right]=\mathbb{E}_{c}\left[d_{n}{}^{\{k\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right], \tag{36}\] where the second equality holds for any codebook \(k\in\mathcal{K}\) and follows because the expectation of the distortion over all codebook realizations is independent of \(K\). It follows that the expected average distortion for any codebook \(k\in\mathcal{K}\) is asymptotically bounded by \(D\): \[\mathbb{E}_{c}\left[d_{n}{}^{\{k\}}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right]\leq D+d_{\max}\epsilon_{mc}. \tag{37}\]

#### 2.3.4 Intersection of the Constraints

In this section, we show that the previous bounds, which hold in expectation over codebook realizations, can be satisfied simultaneously with nonzero probability, i.e., there exists a codebook realization that realizes all of the events together. The following four events are required in the achievability proof and were shown above to hold in expectation over codebook realizations. Here we show that the union of the complements of these events occurs with probability strictly less than one. This ensures there exists at least one codebook realization that satisfies all the constraints. Note that \(\delta\) and \(\epsilon\) are the parameters of the typical set and the probability of the non-typical set, respectively.
* Firstly, it is shown that the \(\Gamma_{w^{n}}^{(m,k)}\) form a valid sub-POVM for all \(m\in\mathcal{M}\) and \(k\in\mathcal{K}\). This is considered as event \(E_{1}\). Using the Chernoff Bound technique, [6] shows that if \(R>I(R;W)_{\sigma}\) then \[\mathrm{Pr}\{\neg E_{1}\}\leq c\exp\bigl{\{}-2^{n\delta}\epsilon^{3}\bigr{\}}, \tag{38}\] for some \(c>0\).
* Secondly, define event \(E_{2}\) as \(S_{11}\leq\exp\{-\nu n\}\) for some \(\nu>0\). Then, by applying the Markov inequality to expression (28), we find the bound \[\Pr\{\neg E_{2}:S_{11}\geq\exp\{-\nu n\}\}\leq\frac{3}{2}\frac{\exp\{-tn\}}{\exp\{-\nu n\}}. \tag{39}\]
* Third, the bounds on the expectations of \(S_{12}\) and \(S_{2}\). Let \(E_{31}\) and \(E_{32}\) be the corresponding events for these random variables. Then, applying the Markov inequality to the inequalities (29) and (30), we have the following bounds for a fixed value \(\delta_{3}>0\): \[\Pr\{\neg E_{31}:S_{2}\geq 2\delta_{3}\}\leq\frac{\mathbb{E}\left[S_{2}\right]}{2\delta_{3}}\leq\frac{\epsilon_{3}}{2\delta_{3}}, \tag{40}\] \[\Pr\{\neg E_{32}:S_{12}\geq 2\delta_{3}\}\leq\frac{\mathbb{E}\left[S_{12}\right]}{2\delta_{3}}\leq\frac{\epsilon_{3}}{2\delta_{3}}. \tag{41}\] Note that we used the fact that \(\epsilon_{2}\leq\epsilon_{3}\).
* Fourth, define \(E_{4}\) as the event that the average n-letter distortion constraint is satisfied. By applying the Markov inequality to (37) we obtain for any fixed value \(\delta_{d}>0\) that \[\Pr\{\neg E_{4}:d(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\geq D+\delta_{d}\}\leq\frac{\mathbb{E}_{c}\left[d(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\right]}{D+\delta_{d}}\leq\frac{D+d_{\text{max}}\epsilon_{mc}}{D+\delta_{d}}. \tag{42}\]

Then the probability of not being in the intersection is bounded by using the union bound \[\Pr\{\neg\bigcap_{i=1}^{4}E_{i}\}\leq\sum_{i=1}^{4}\Pr\{\neg E_{i}\}\leq c\exp\{-2^{n\delta}\epsilon^{3}\}+\frac{3}{2}\frac{\exp\{-tn\}}{\exp\{-\nu n\}}+\frac{\epsilon_{3}}{\delta_{3}}+\frac{D+d_{\text{max}}\epsilon_{mc}}{D+\delta_{d}}. \tag{43}\] Taking the limit of the above expression as \(n\rightarrow\infty\), we have \(\epsilon\to 0\). Then, with a proper choice of \(\delta\) and \(\nu\in(0,t)\), the first two terms decay exponentially, while \(\epsilon_{mc},\delta_{3},\delta_{d}\) are fixed; thus, \[\lim_{n\rightarrow\infty}\Big{(}c\exp\{-2^{n\delta}\epsilon^{3}\}+\frac{3}{2}\frac{\exp\{-tn\}}{\exp\{-\nu n\}}+\frac{\epsilon_{3}}{\delta_{3}}+\frac{D+d_{\text{max}}\epsilon_{mc}}{D+\delta_{d}}\Big{)}=\frac{\epsilon_{3}}{\delta_{3}}+\frac{D+d_{\text{max}}\epsilon_{mc}}{D+\delta_{d}}<1. \tag{44}\] By choosing proper values for the \(\epsilon_{mc},\delta_{3}\) and \(\delta_{d}\) parameters, we make sure the above inequality holds, which means that, with nonzero probability, there exists a valid quantum measurement coding scheme satisfying all four conditions simultaneously.

#### 2.3.5 Exactly Satisfying I.I.D. Output Distribution

It remains to prove that the perfect i.i.d. output distribution can be achieved from the near-perfect one with an arbitrarily small increase in the distortion level.
The perfectly i.i.d. distribution and the near-perfect induced output distribution for this source coding scheme are expressed by \[P_{X^{n}}^{\text{des}}(x^{n}) :=\big{\langle}x^{n}\big{|}(\sigma_{\text{des}}^{X})^{\otimes n}|x^{n}\big{\rangle}=Q_{X}^{n}(x^{n})=\sum_{w^{n}}P_{X|W}^{n}(x^{n}|w^{n})P_{W}^{n}(w^{n}), \tag{45}\] \[P_{X^{n}}^{\text{ind}}(x^{n}) :=\Big{\langle}x^{n}\Big{|}\sigma_{\text{ind}}^{X^{n}}|x^{n}\Big{\rangle}=\text{Tr}\Big{\{}(\text{id}_{R}\otimes\tilde{\tilde{\Lambda}}_{x^{n}})(\psi_{RA}^{\rho})^{\otimes n}\Big{\}}\] \[=\sum_{w^{n}\in\mathcal{W}^{n}}P_{X|W}^{n}(x^{n}|w^{n})\,\text{Tr}\Big{\{}(\text{id}_{R}\otimes\tilde{\Lambda}_{w^{n}}^{A})(\psi_{RA}^{\rho})^{\otimes n}\Big{\}}, \tag{46}\] where \((\sigma^{X}_{\text{des}})^{\otimes n}\) and \(\sigma^{X^{n}}_{\text{ind}}\) are defined in (23) and (22), respectively. Then [Theorem 1 [28]] shows that by fixing the measurement while changing only the i.i.d. post-processing unit \(P_{X|W}\) to a batch decoder \(\tilde{P}_{X^{n}|W^{n}}\), we can obtain the perfect i.i.d. condition from the near-perfect one. Define the alternative decoder through the conditional probability of any event \(A\subseteq\mathcal{X}^{n}\) given \(w^{n}\): \[\tilde{P}_{\hat{X}^{n}|W^{n}}(A|w^{n})=\sum_{x^{n}\in A\cap\mathcal{X}^{n}_{+}}\theta_{x^{n}}P^{n}_{X|W}(x^{n}|w^{n})+P^{n}_{X|W}(A\setminus\mathcal{X}^{n}_{+}|w^{n})+\phi_{w^{n}}Z(A), \tag{47}\] where the given expressions are defined as \[\mathcal{X}^{n}_{+} :=\left\{x^{n}\in\mathcal{X}^{n}\Big{|}P^{\text{ind}}_{X^{n}}(x^{n})>P^{\text{des}}_{X^{n}}(x^{n})\right\}, \tag{48}\] \[\theta_{x^{n}} :=\frac{P^{\text{des}}_{X^{n}}(x^{n})}{P^{\text{ind}}_{X^{n}}(x^{n})}\quad x^{n}\in\mathcal{X}^{n}_{+},\] (49) \[\phi_{w^{n}} :=\sum_{x^{n}\in\mathcal{X}^{n}_{+}}(1-\theta_{x^{n}})P^{n}_{X|W}(x^{n}|w^{n}),\] (50) \[Z(A) :=\frac{\sum_{x^{n}\in A}\left[P^{\text{des}}_{X^{n}}(x^{n})-P^{\text{ind}}_{X^{n}}(x^{n})\right]^{+}}{\sum_{x^{n}\in\mathcal{X}^{n}\setminus\mathcal{X}^{n}_{+}}\left[P^{\text{des}}_{X^{n}}(x^{n})-P^{\text{ind}}_{X^{n}}(x^{n})\right]^{+}}=\frac{P^{\text{des}}_{X^{n}}(A\setminus\mathcal{X}^{n}_{+})-P^{\text{ind}}_{X^{n}}(A\setminus\mathcal{X}^{n}_{+})}{d_{TV}(P^{\text{ind}}_{X^{n}},P^{\text{des}}_{X^{n}})}. \tag{51}\] The validity and admissibility of the new post-processing decoder can be verified by direct calculation: it satisfies \(\tilde{P}_{\hat{X}^{n}|W^{n}}(\mathcal{X}^{n}|w^{n})=1\) for all \(w^{n}\in\mathcal{W}^{n}\), and the new induced output distribution satisfies the desired i.i.d. condition \[\tilde{P}^{\text{ind}}_{\hat{X}^{n}}(A):=\sum_{w^{n}}\tilde{P}_{\hat{X}^{n}|W^{n}}(A|w^{n})\operatorname{Tr}\Bigl{\{}(\text{id}_{R}\otimes\tilde{\Lambda}^{A}_{w^{n}})(\psi^{\rho}_{RA})^{\otimes n}\Bigr{\}}=P^{\text{des}}_{X^{n}}(A).
\tag{52}\] Also, using the definition of Batch decoder (47), the following set of equalities hold for any \(w^{n}\in W^{n}\): \[d_{TV}\Bigl{(}\tilde{P}_{\hat{X}^{n}|W^{n}}(.|w^{n}),P^{n}_{X|W }(.|w^{n})\Bigr{)}\] \[=\frac{1}{2}\sum_{x^{n}\in\mathcal{X}^{n}_{+}}\Big{|}\tilde{P}_{ \hat{X}^{n}|W^{n}}(.|w^{n})-P^{n}_{X|W}(.|w^{n})\Big{|}+\frac{1}{2}\sum_{x^{n} \in\mathcal{X}^{n}\setminus\mathcal{X}^{n}_{+}}\Big{|}\tilde{P}_{\hat{X}^{n}| W^{n}}(.|w^{n})-P^{n}_{X|W}(.|w^{n})\Big{|}\] \[=\frac{1}{2}\sum_{x^{n}\in\mathcal{X}^{n}_{+}}\Big{|}(\theta_{x^{ n}}-1)P^{n}_{X|W}(x^{n}|w^{n})+\phi_{w^{n}}Z(x^{n})\Big{|}+\frac{1}{2}\sum_{x^{n} \in\mathcal{X}^{n}\setminus\mathcal{X}^{n}_{+}}\Big{|}\phi_{w^{n}}Z(x^{n})|\] \[=\frac{1}{2}\sum_{x^{n}\in\mathcal{X}^{n}_{+}}\Big{|}(\theta_{x^{ n}}-1)P^{n}_{X|W}(x^{n}|w^{n})\Big{|}+\frac{1}{2}\sum_{x^{n}\in\mathcal{X}^{n} \setminus\mathcal{X}^{n}_{+}}\Big{|}\phi_{w^{n}}\frac{P^{\text{des}}_{X^{n}}( x^{n})-P^{\text{ind}}_{X^{n}}(x^{n})}{d_{TV}(P^{\text{ind}}_{X^{n}},P^{\text{des}}_{X^{n }})}\Big{|}\] \[=\frac{1}{2}\phi_{w^{n}}+\frac{1}{2}\phi_{w^{n}}=\phi_{w^{n}}, \tag{53}\] where \((a)\) is because using the definition of \(Z(A)\), we know that \(Z(x^{n})=0\) for all \(x^{n}\in\mathcal{X}^{n}_{+}\) and the last line is from the definition of total variation distance. Thus, by definition, there exists a quantum coupling such that, \[\Pr\Bigl{(}X^{n}\neq\hat{X}^{n}\Big{|}w^{n}\Bigr{)}\leq\phi_{w^{n}} \quad\forall w^{n}\in W^{n}. \tag{54}\] Then from the above inequality, using the argument in [28], the probability of outputs not being equal is bounded by \[\Pr\Bigl{(}X^{n}\neq\hat{X}^{n}\Bigr{)}\leq d_{TV}(P_{X^{n}}^{\text{ ind}},P_{X^{n}}^{\text{des}})\leq\epsilon_{os}, \tag{55}\] and the second inequality appeals to (31) and the union bound in section 2.3.4. Next, we bound the n-letter distortion for the new decoder using the above bound. First, note that \(\tau_{R_{i}\hat{X}_{i}}\) is the local \(i\)-th reference-output state of the system, given by \[\tau_{R_{i}\hat{X}_{i}} =\mathrm{Tr}_{R^{n\setminus\{i\}}X^{n\setminus\{i\}}\{\tau_{R^{ n}\hat{X}^{n}}\}}\] \[=\sum_{x_{i}}^{n}P_{X}(x_{i})\zeta_{x_{i}}^{R_{i}}\otimes\left|x_ {i}\right\rangle\!\!\left\langle x_{i}\right|. \tag{56}\] The \(\zeta_{x_{i}}^{R_{i}}\) is the post-measurement reference state of the \(i\)-th local state given the outcome \(x_{i}\), \[\zeta_{x_{i}}^{R_{i}} =\frac{1}{P_{X}(x_{i})}\left\langle id\otimes x_{i}|\tau_{R_{i}X_ {i}}|id\otimes x_{i}\right\rangle\] \[=\frac{1}{P_{X}(x_{i})}\sum_{j=1:n,j\neq i}\sum_{x_{j}\in\mathcal{ X}}\mathrm{Tr}_{R^{n\setminus\{j\}}A^{n}}\left\{(\mathrm{id}^{\otimes n} \otimes\hat{\Lambda}_{x^{n}})(\psi^{RA})^{\otimes n}\right\},\] where \(\hat{\Lambda}\equiv\{\hat{\Lambda}_{x^{n}}\}_{x^{n}\in\mathcal{X}^{\otimes n}}\) is the combined encoder/decoder collective POVM taking into account the batch decoder. We expand the n-letter average distortion as \[\mathrm{Tr}\Bigl{\{}\Delta^{(n)}\tau_{R^{n}\hat{X}^{n}}\Bigr{\}} =\frac{1}{n}\sum_{i=1}^{n}\mathrm{Tr}\left\{\Delta_{R_{i}X_{i}} \tau^{R_{i}X_{i}}\right\}\] \[=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\hat{X}_{i}}\left[ \mathrm{Tr}_{R}\left\{\zeta_{\hat{X}_{i}}^{R_{i}}\Delta_{R_{i}}(\hat{X}_{i}) \right\}\mathds{1}_{\hat{X}_{i}=X_{i}}\right]\] \[\leq D+\epsilon+\sup_{\rho,X,A}\mathbb{E}_{X}\left[\mathrm{Tr}_{R }\left\{\zeta_{\hat{X}}^{R}\Delta_{R}(X)\right\}\mathds{1}_{A}\right]. 
\tag{57}\] Under the assumption that the system is uniformly integrable (which always holds for finite-dimensional systems), it follows from (57) that \[\mathrm{Tr}\Bigl{\{}\Delta^{(n)}\tau_{R^{n}\hat{X}^{n}}\Bigr{\}}\leq D+\epsilon_{4}. \tag{58}\]

### Proof of the Converse

Assume that there exists an achievable \((n,R,R_{c})\) coding scheme and a set of n-letter collective measurements \(\Upsilon^{(m)}\) selected by a shared random number \(m\) on Alice's side. The measurement results in an outcome \(L\), which is sent to Bob. Finally, Bob uses a batch decoder \(P_{X^{n}|L,M}(x^{n}|l,m)\) to generate the final state. Thus, the n-letter encoding quantum measurement composite state is \[\omega^{R^{n}LM}=\sum_{l,m}\operatorname{Tr}_{A^{n}}\left\{(\operatorname{id}\otimes\Upsilon_{l}^{(m)})(\psi_{\rho}^{RA})^{\otimes n}\right\}\otimes\frac{1}{|\mathcal{M}|}\,|m\rangle\!\!\langle m|\otimes|l\rangle\!\langle l|\,. \tag{59}\] Assuming that the above system is achievable in the sense of Definition 2.3, and following steps similar to [6], for the communication rate we have \[nR\geq H(L)_{\omega} \geq I(L;MR^{n})_{\omega}=I(LM;R^{n})_{\omega}+I(L;M)_{\omega}-I(M;R^{n})_{\omega}\] \[\overset{a}{\geq}I(LM;R^{n})_{\omega}=H(R^{n})_{\omega}-H(R^{n}|LM)_{\omega}\] \[\overset{b}{\geq}\sum_{k}[H(R_{k})_{\omega}-H(R_{k}|LM)_{\omega}]\] \[=\sum_{k}I(LM;R_{k})_{\omega}\] \[=nI(LM;R_{K}|K)_{\sigma} \tag{60}\] \[=nI(LMK;R_{K})_{\sigma}.\] The first two lines are well-known properties of mutual information. Also, (a) follows because the common randomness \(M\) is independent of the source, and (b) holds by the sub-additivity of conditional quantum entropy and the fact that the source is a product state. For Eq. (60), define \(K\) as a uniform random variable over the set \(\{1,2,...,n\}\) which represents the index of the selected system. Then the overall state of the system can be redefined with \(K\) being a random index as \[\sigma^{RLMK}=\sum_{k,l,m}\operatorname{Tr}_{R_{1}^{k-1}R_{k+1}^{n}A^{n}}\left\{(\operatorname{id}\otimes\Upsilon_{l}^{(m)})(\psi_{\rho}^{RA})^{\otimes n}\right\}\otimes\frac{1}{|\mathcal{M}|}\,|m\rangle\!\!\langle m|\otimes|l\rangle\!\langle l|\otimes\frac{1}{n}\,|k\rangle\!\langle k|\,. \tag{61}\] Also, the last equality holds because the reference and the index state are independent, i.e., \(I(R;K)_{\sigma}=0\). This can be easily verified by tracing out the other terms in (61): \[\sigma^{RK} =\operatorname{Tr}_{LM}\left\{\sigma^{RLMK}\right\}\] \[=\sum_{k,l,m}\frac{1}{|\mathcal{M}|}\operatorname{Tr}_{R^{[n]\setminus k}A^{n}}\left\{(\operatorname{id}\otimes\Upsilon_{l}^{(m)})(\psi_{\rho}^{RA})^{\otimes n}\right\}\otimes\frac{1}{n}\,|k\rangle\!\langle k|\] \[=\sum_{k,l,m}\frac{1}{|\mathcal{M}|}\operatorname{Tr}_{R^{[n]\setminus k}}\left\{\sqrt{\rho^{\otimes n}}\Upsilon_{l}^{(m)}\sqrt{\rho^{\otimes n}}\right\}\otimes\frac{1}{n}\,|k\rangle\!\langle k|\] \[=\sum_{k}\operatorname{Tr}_{R^{[n]\setminus k}}\left\{\rho^{\otimes n}\right\}\otimes\frac{1}{n}\,|k\rangle\!\langle k|=\rho\otimes\left(\frac{1}{n}\sum_{k}|k\rangle\!\langle k|\right),\] which proves that this is a product state. Thus, \(R,K\) are independent. Therefore, we can define \(W:=(L,M,K)\) and observe that a quantum Markov chain of the form \(R-(L,M,K)-X\) exists.
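As a side illustration of the single-letter quantities appearing in these bounds (and in Theorem 1), the following sketch evaluates \(I(W;R)_{\tau}\), \(I(W;X)_{\tau}\), and the single-letter distortion \(\mathbb{E}_{X}[\operatorname{Tr}\{\rho_{X}\Delta_{R}(X)\}]\) for a hypothetical qubit system; the source state, POVM, post-processing channel, and distortion observable used below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def S(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Hypothetical qubit example (all numbers are illustrative assumptions):
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)      # source state
M0  = np.array([[0.8, 0.1], [0.1, 0.3]], dtype=complex)      # POVM element M_0
M   = [M0, np.eye(2) - M0]                                   # two-outcome POVM {M_w}
P_X_given_W = np.array([[0.95, 0.05], [0.10, 0.90]])         # post-processing P_{X|W}

sq = sqrtm(rho)
P_W  = np.array([np.real(np.trace(sq @ Mw @ sq)) for Mw in M])
post = [sq @ Mw @ sq / p for Mw, p in zip(M, P_W)]           # post-measurement states

# R >= I(W;R)_tau = S(rho) - sum_w P_W(w) S(rho_hat_w)
I_WR = S(rho) - sum(p * S(r) for p, r in zip(P_W, post))

# R + R_c >= I(W;X)_tau for the classical pair (W, X)
P_WX = P_W[:, None] * P_X_given_W
P_X  = P_WX.sum(axis=0)
I_WX = sum(P_WX[w, x] * np.log2(P_WX[w, x] / (P_W[w] * P_X[x]))
           for w in range(2) for x in range(2) if P_WX[w, x] > 0)

# single-letter distortion E_X[ Tr{rho_X Delta_R(X)} ] with Delta_R(x) = I - |x><x|
Delta = [np.diag([0.0, 1.0]), np.diag([1.0, 0.0])]
rho_x = [sum(P_X_given_W[w, x] * P_W[w] * post[w] for w in range(2)) / P_X[x]
         for x in range(2)]
D = sum(P_X[x] * np.real(np.trace(rho_x[x] @ Delta[x])) for x in range(2))

print(f"I(W;R) = {I_WR:.4f} bits,  I(W;X) = {I_WX:.4f} bits,  distortion = {D:.4f}")
```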
Also, the single-letter decoder is given by \(P_{X|W}(x|w):=P_{X_{k}|LM}(x|l,m)\), and the encoder is the below set of measurements \(\mathcal{M}_{w}\) on a state \(\xi^{A}\): \[\xi^{A}\xrightarrow{\mathcal{M}_{y}}\frac{1}{n|\mathcal{M}|}\operatorname{Tr} _{A_{1}^{k-1}AA_{k+1}^{n}}\left\{(\operatorname{id}\otimes\Upsilon_{l}^{(m)})^ {A^{n}}(\psi_{\rho}^{RA})^{\otimes k-1}\otimes\xi^{A}\otimes(\psi_{\rho}^{RA} )^{\otimes n-k}\right\}.\] For the second bound, the decoder is a classical channel, therefore, the following inequalities hold for the classical entropy and Shannon's mutual information: \[n(R+R_{c})\geq H(LM) \geq I(LM;X^{n})=H(X^{n})-H(X^{n}|LM)\] \[\geq\sum_{k}H(X_{k})-H(X_{k}|LM)\] \[=\sum_{k}I(X_{k};LM)=nI(X_{K};LM|K)\] \[\geq nI(X_{K};LM|K)+nI(X_{K};K) \tag{62}\] \[=nI(LMK;X_{K}).\] The arguments for the above inequalities are the same as before. However, here we simply have \(I(X_{K};K)=0\) as \(X^{n}\) are exactly i.i.d. and given by the constraint of the problem. Next, from the definition of the achievable rate pair, there exists an \((n,R,R_{c})\) coding scheme that satisfies the distortion inequality in (9). According to the Lemma 2 the minimum achievable rate described by the output-constrained rate-distortion function, is a convex function of distortion \(D\). Therefore, it is continuous in its domain. **Lemma 2**.: \(R(D;R_{c},\rho||Q_{X})\) _is a convex function of \(D\) on \(0<D<\infty\)._ Proof.: See Appendix C.1. This \(\epsilon\)-continuity implies that for any \((R,R_{c})\) inside the rate region; i.e. \(R>R(D;R_{c},\rho||Q_{X})\), there exists an \(\epsilon>0\) such that we still have \(R>R(D-\epsilon;R_{c},\rho||Q_{X})\). Therefore, having (9), we claim that there exists another \((n,R,R_{c})\) coding scheme which satisfies \[X^{n}\sim Q_{X}^{n},\;\;d_{n}(\rho^{\otimes n},\mathcal{D}_{n} \circ\mathcal{E}_{n})\leq D. \tag{63}\] Using the above tight distortion bound, we provide the following bound on the single-letter distortion which completes the proof for the converse of the theorem 1: \[d(\rho,\Delta_{RX}) =\mathbb{E}_{X}\left[\operatorname{Tr}\{\rho_{X}\Delta_{R}(X)\}\right]\] \[=\mathbb{E}_{K}\left[\mathbb{E}_{X}\left[\operatorname{Tr}\{\rho_ {X}\Delta_{R}(X)\}|K\right]\right]\] \[=\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}_{X}\left[\operatorname{Tr}\{ \rho_{X}\Delta_{R}(X)\}|K=k\right]\] \[=\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}_{X_{k}}\left[\operatorname{ Tr}\{\rho_{X_{k}}\Delta_{R_{k}}(X_{k})\}\right]\] \[=\mathbb{E}_{X^{n}}\left[\operatorname{Tr}\Bigl{\{}\rho_{X^{n}} \Delta^{(n)}(X^{n})\Bigr{\}}\right]\leq D. \tag{64}\] ### Cardinality Bound Assume having a probability distribution \(P_{W}(w)\) for the intermediate variable, we define a function \(f:\mathbb{R}\rightarrow\mathbb{R}^{5}\): \[f:P_{W}(w)\longrightarrow\Bigl{(}\rho^{A},\;Q_{X}(x),\;I(X;W),\;I(R;W),\; \operatorname{Tr}\Bigl{\{}\Delta^{(n)}\tau_{ind}^{n}\Bigr{\}}\Bigr{)}.\] We then find the number of affine functions required to implement the above function with a convex combination of conditionals on \(W\), \[f_{\text{affine}}:P_{W}(w)\longrightarrow\Big{(}\rho^{A},Q_{X}(x),H(X|W),H(R|W), \operatorname{Tr}\{\Delta_{RX}\tau_{RX}\}\Big{)}.\] The first condition \(\rho^{A}\) requires at most \((\dim\mathcal{H}_{A})^{2}-1\) affine functions because the state \(\rho^{A}\) can be represented by the \(\dim\mathcal{H}_{A}\times\dim\mathcal{H}_{A}\) Hermitian trace one matrix [31]. The two classical states have simple classical conditional representations as follows. 
Note that at most \(|\mathcal{X}|-1\) separate functions are needed to represent the distribution \(Q_{X}\), and one for the conditional entropy: \[Q_{X}(x_{i}) =\sum_{w}Q_{X|W}(x_{i}|w)P_{W}(w),\quad\forall i\in[|\mathcal{X}|-1],x_{i}\in\mathcal{X} \tag{65}\] \[H(X|W) =\sum_{w}H(X|W=w)P_{W}(w). \tag{66}\] Also, the other functions have the following representations, \[H(R|W) =\sum_{w}H(R|W=w)P_{W}(w)\] \[=\sum_{w}H(\rho_{w})P_{W}(w), \tag{67}\] \[\operatorname{Tr}\{\Delta_{RX}\tau_{RX}\} =\operatorname{Tr}\Biggl{\{}\Delta_{RX}\left(\sum_{w,x}\operatorname{Tr}_{A}\{(\operatorname{id}_{R}\otimes M_{w})\psi_{\rho}^{RA}\}\otimes P_{X|W}(x|w)\,|x\rangle\!\langle x|^{X}\right)\Biggr{\}}\] \[=\operatorname{Tr}\Biggl{\{}\Delta_{RX}\left(\sum_{w,x}\rho_{w}^{A}\,P_{W}(w)\otimes P_{X|W}(x|w)\,|x\rangle\!\langle x|^{X}\right)\Biggr{\}}\] \[=\sum_{w}P_{W}(w)\operatorname{Tr}\Biggl{\{}\Delta_{RX}\left(\sum_{x}\rho_{w}^{A}\otimes P_{X|W}(x|w)\,|x\rangle\!\langle x|^{X}\right)\Biggr{\}}\] \[=\sum_{w}\operatorname{Tr}\bigl{\{}\Delta_{RX}\tau_{RX|W=w}\bigr{\}}P_{W}(w). \tag{68}\] Therefore, using the support lemma [[32], Appendix C] and the fact that the \(\mathcal{W}\)-simplex is a compact and connected set, we claim that there exists a random variable \(W^{\prime}\) with cardinality at most \[|\mathcal{W}^{\prime}|\leq(\dim\mathcal{H}_{A})^{2}+|\mathcal{X}|+1, \tag{69}\] with a composite state \(\nu^{RW^{\prime}X}\) which forms the quantum Markov chain \(R-W^{\prime}-X\), has the same marginal distributions, and satisfies the entropic equalities, \[\nu^{X}=\tau^{X}\equiv Q_{X},\ \ \nu^{R}=\tau^{R}\equiv\rho,\ \ \operatorname{Tr}\{\Delta_{RX}\tau_{RX}\}_{\nu}=\operatorname{Tr}\{\Delta_{RX}\tau_{RX}\}_{\tau}, \tag{70}\] \[I(X;W^{\prime})_{\nu}=I(X;W)_{\tau},\ I(R;W^{\prime})_{\nu}=I(R;W)_{\tau}. \tag{71}\]

## 3 Continuous-Variable Quantum System

### Generalized Definitions of Continuous Quantum Systems

In this section, we investigate the measurement coding for Bosonic continuous-variable quantum systems [Chapters 11, 12 of [33]]. The random-coding achievability proof of the previous sections does not directly apply to continuous quantum systems. The first reason is that the Chernoff bound [6] (which is the main tool for establishing the validity of the measurement POVMs in the random coding argument) is available only for a finite-dimensional Hilbert space. Secondly, in infinite-dimensional systems, it is not possible to represent the outcome space using quantum registers defined on separable Hilbert spaces. This is in contrast to the finite-dimensional system, for which the set of all outcome states forms a complete orthonormal set. As a result, the quantum mutual information is not defined for such continuous measurement systems. Instead, we keep the output system classical and use the generalized ensemble representation. In order to properly define the continuous system model, we first provide the following generalized definitions.

**Definition 3.1** ([33] Definition 11.22).: _The generalized ensemble is defined as a Borel probability measure \(\pi\) on the subspace of density operators \(\mathcal{G}(\mathcal{H}_{A})\). Then the average state of the ensemble is defined as_ \[\bar{\psi}_{\pi}=\int\psi\pi(d\psi). \tag{72}\] In contrast to the finite-dimensional Hilbert space, for which the POVM is defined for all possible outcomes, in continuous quantum measurement systems the generalized POVM is defined over a \(\sigma\)-algebra of Borel subsets.
**Definition 3.2** ([33] Definition 11.29.).: _A POVM is generally defined on a measurable space \(X\) with a \(\sigma\)-algebra of measurable subsets \(\mathcal{B}\), as a set of Hermitian operators \(M=\{M(B),B\in\mathcal{B}\}\) satisfying the following conditions:_

1. \(M(B)\geq 0,B\in\mathcal{B}\)_,_
2. \(M(\mathcal{X})=I\)_,_
3. _For any countable (not necessarily finite) decomposition of mutually exclusive subsets_ \(B=\cup B_{j}\) (\(B_{i}\cap B_{j}=\emptyset,\ i\neq j\))_, the sum of the measures converges in the weak operator sense to the measure of the combined set, i.e._ \(M(B)=\sum_{j}M(B_{j})\)_._

Then an observable POVM \(M\) acting on a state \(\rho\), with outcomes in the measurable space \(\mathcal{X}\), results in the following probability measure \[\mu_{\rho}^{M}(B)=\mathrm{Tr}\{\rho M(B)\},\quad B\in\mathcal{B}. \tag{73}\] It is also necessary to have a proper definition of post-measurement states. The a posteriori average density operator for a subset \(B\in\mathcal{B}\) is defined in [34] for a general POVM \(M\) as \[\rho_{B}=\frac{\sqrt{M(B)}\rho\sqrt{M(B)}}{\mathrm{Tr}\{\rho M(B)\}}. \tag{74}\] Based on that, Ozawa defines the post-measurement state for a continuous quantum system, as given in the following theorem.

**Theorem 3** (Theorem 3.1. [34]).: _For any observable \(M\) and input density operator \(\rho\), there exists a family of a posteriori density operators \(\{\rho_{x};x\in\mathbb{R}\}\), defined with the following properties:_

1. _for any_ \(x\in\mathbb{R}\)_,_ \(\rho_{x}\) _is a density operator in_ \(\mathcal{H}_{A}\)_;_
2. _the function_ \(x\to\rho_{x}\) _is strongly Borel measurable;_
3. _for any arbitrary observable_ \(N\)_, and any Borel sets_ \(A\) _and_ \(B\)_, the joint probability_ \[P(X\in B,Y\in A) =\operatorname{Tr}\Bigl{\{}\sqrt{M(B)}\rho\sqrt{M(B)}N(A)\Bigr{\}}\] \[=\int_{B}\operatorname{Tr}\{\rho_{x}N(A)\}\mu(dx),\] _where_ \(\mu(B),\ B\in\mathcal{B}\) _is the probability measure of the outcome space._

Comparing this result to [[35], Section IV], where the outcome ensemble is defined as \[\mathcal{E}^{\prime}:\qquad\pi^{\prime}(B)=\operatorname{Tr}\{\rho M(B)\},\quad\rho^{\prime}_{y}=\frac{\rho^{1/2}m(y)\rho^{1/2}}{\operatorname{Tr}\{\rho m(y)\}},\quad M(B)=\int_{B}m(y)\mu(dy),\] we note that the density function \(m(y)\) is not necessarily defined for all measurements, so we cannot always write the post-measurement state in this simple format. This latter expression only exists when the POVM \(M\) also has a well-defined pdf \(m(y)\), which is not generally true. In general, \(\rho_{x}\) is as defined by Ozawa and will be kept in this general form. The above theorem provides the necessary requirements to define the proper information quantity. Thus, using the post-measured state ensemble, one can define the information gain introduced by Groenewold [36] for an input state \(\rho\) and output ensemble \(\{\rho(B),\mu_{\rho}^{M}(B)\}_{B\in\mathcal{B}}\) as [33]: \[I_{g}(\rho,X)=H(\rho)-\int_{\mathcal{X}}H(\rho_{x})\mu_{\rho}^{M}(dx). \tag{75}\] It is worth mentioning that the information gain equals the quantum mutual information in finite-dimensional measurement systems.

### Main Results: Achievable Rate Region for Continuous Quantum Systems

We first adapt Definitions 2.2 and 2.3 to the continuous quantum system.
Thus, the definition of achievability for a continuous quantum source coding scheme is as follows:

**Definition 3.3**.: _An \((n,R,R_{c})\) source-coding scheme for the continuous quantum-classical system is comprised of an encoder \(\mathcal{E}_{n}\) on Alice's side and a decoder \(\mathcal{D}_{n}\) on Bob's side, with the detailed description provided in Definition 2.2. The final output sequence \(X^{n}\) is generated by the decoder in the output space \(\mathcal{X}^{n}\) with the probability measure \(\{P_{X^{n}}(B),\ B\in\mathcal{B}(\mathcal{X}^{n})\}\). Thus, the average post-measured reference state and its corresponding Borel subset of the output sample space form an ensemble of the form \(\{\hat{\rho}_{B}^{R^{n}},B\in\mathcal{B}(\mathcal{X}^{n})\}\), where_ \[\hat{\rho}_{B}^{R^{n}} =\frac{1}{P_{X^{n}}(B)}\sum_{m,l}\frac{1}{|\mathcal{M}|}\operatorname{Tr}_{A^{n}}\left\{(\text{id}\otimes\Upsilon_{l}^{(m)})[\psi_{RA}^{\rho}]^{\otimes n}\right\}\mathcal{D}_{n}(B|l,m), \tag{76}\] \[P_{X^{n}}(B) =\sum_{m,l}\frac{1}{|\mathcal{M}|}\operatorname{Tr}\Bigl{\{}\Upsilon_{l}^{(m)}\rho^{\otimes n}\Bigr{\}}\mathcal{D}_{n}(B|l,m). \tag{77}\] We define the average n-letter distortion for the source coding system with encoder/decoder pair \(\mathcal{E}_{n}\) and \(\mathcal{D}_{n}\), set of distortion observable operators \(\Delta(x),x\in\mathcal{X}\), and continuous memoryless source state \(\rho^{\otimes n}\) as \[d_{n}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{X_{i}}\left[\mathrm{Tr}\Big{\{}\hat{\rho}_{X_{i}}^{R_{i}}\Delta(X_{i})\Big{\}}\right], \tag{78}\] where \(\hat{\rho}_{x_{i}}^{R_{i}}:=\mathbb{E}_{X^{[n]\setminus\{i\}}}\left[\mathrm{Tr}_{[n]\setminus\{i\}}\left\{\hat{\rho}_{X^{n}}^{R^{n}}\right\}\right]\) is the \(i\)-th local state of the post-measurement reference density operator in (76), defined for \(x^{n}\in\mathcal{X}^{n}\) as described by Theorem 3. Consequently, the following definition of achievability is used throughout this paper.

**Definition 3.4**.: _We are given a probability measure \(\mu_{X}\) on \((\mathcal{X},\mathcal{B}(\mathcal{X}))\) and a distortion level \(D\). Then, assuming a product input state \(\rho^{\otimes n}\) on a continuous Hilbert space, a rate pair \((R,R_{c})\) is defined as achievable if, for any sufficiently large \(n\) and any positive value \(\epsilon>0\), there exists an \((n,R,R_{c})\) coding scheme comprising a measurement encoder \(\mathcal{E}_{n}\) and a decoder \(\mathcal{D}_{n}\) as described in Definition 3.3 that satisfy the following conditions as defined by the corresponding distortion measure of (78)_ \[X^{n}\sim\mu_{X}^{n},\quad d_{n}(\rho^{\otimes n},\mathcal{D}_{n}\circ\mathcal{E}_{n})\leq D+\epsilon.
\tag{79}\] Then the achievable rate region for the continuous quantum system is characterized by the following theorem:

**Theorem 4**.: _Given a pair \((\mu_{X},D)\) and having a product input state \(\rho^{\otimes n}\) on a continuous infinite-dimensional Hilbert space with finite von Neumann entropy, a rate pair \((R,R_{c})\) is achievable in the sense of Definition 3.4 if and only if there exists an intermediate state \(W\) with a corresponding measurement POVM \(M=\{M^{A}(B),B\in\mathcal{B}_{\mathcal{W}}\}\), where \(\mathcal{B}_{\mathcal{W}}\) is the \(\sigma\)-algebra of the Borel sets of \(\mathcal{W}\), and a randomized post-processing transformation \(P_{X|W}\) which satisfies the rate inequalities_ \[R>I_{g}(W;R), \tag{80}\] \[R+R_{c}>I(W;X), \tag{81}\] _where \(W\) constructs a quantum Markov chain \(R-W-X\), which generates the ensemble \(E_{w}:=\{\rho_{w},w\in\mathcal{W}\}\) of post-measurement reference states over the intermediate space, and the ensemble \(E_{x}:=\{\rho_{x},x\in\mathcal{X}\}\) of post-measurement reference states over the output space, from the set_ \[\mathcal{M}_{c}(\mathcal{D})=\left\{(E_{w},E_{x})\left|\begin{array}{l}\int_{w}P_{X|W}(A|w)\,\mathrm{Tr}\{M(dw)\rho\}=\mu_{X}(A)\quad\text{for }A\in\mathcal{B}(\mathcal{X})\\ \int_{x\in\mathbb{R}}\mathrm{Tr}_{R}\left[\rho_{x}\,\Delta_{R}(x)\right]\mu_{X}(dx)\leq D\end{array}\right.\right\}. \tag{82}\] Note that the cardinality bound does not exist in the continuous case, as we have \(\mathcal{W}\equiv\mathcal{X}\equiv\mathbb{R}\).

### Proof of Achievability in the Continuous System

Given the pair \((\mu_{X},D)\), assume there exists a continuous intermediate state \(W\) forming a quantum Markov chain \(R-W-X\), with a corresponding continuous POVM \(M=\{M(B),B\in\mathcal{B}(\mathcal{W})\}\) with outcomes in the space \(\mathcal{W}\), defined as a set of Hermitian operators in \(\mathcal{H}_{A}\) satisfying the conditions of Theorem 4. Moreover, there is a corresponding classical post-processing channel \(P_{X|W}:\mathcal{W}\times\sigma(\mathcal{X})\rightarrow\mathbb{R}\), a mapping such that for every \(w\in\mathcal{W}\), \(P_{X|W}(.|w)\) is a probability measure on \(\mathcal{B}(\mathcal{X})\), and for every \(B\in\mathcal{B}(\mathcal{X})\), \(P_{X|W}(B|.)\) is a Borel-measurable function. Then, according to Ozawa's theorem [34], for the measurement POVM \(M\) there exists a family of post-measurement density operators \(\rho_{w}\) along with a probability measure \(\mu_{\rho}^{M}\) (or equivalently, \(\rho_{x}\) along with \(\mu_{X}\) if we consider the overall POVM \(\int_{\mathcal{W}}M(dw)P(B|w),\ B\in\mathcal{B}(\mathcal{X})\)), such that \[\int_{w}P_{X|W}(A|w)\mu_{\rho}^{M}(dw)=\mu_{X}(A),\quad\text{for all }A\in\mathcal{B}(\mathcal{X}), \tag{83}\] \[d(R,X):=\int_{x}\mathrm{Tr}\{\rho_{x}\Delta_{R}(x)\}\mu_{X}(dx)\leq D. \tag{84}\] The continuous quantum system can be represented by \(\{|n\rangle\}_{n=0}^{\infty}\), the number states of the Fock basis, which span a countably infinite-dimensional Hilbert space. We plan to use the source coding theorem of the discrete system from the previous section. To use Theorem 1, we first perform a clipping measurement, which cuts off the number states above a cut-off threshold \(k_{1}\), using the following gentle POVM: \[C_{k_{1}}\equiv\left\{\Pi_{k_{1}}:=\sum_{n=0}^{k_{1}}|n\rangle\!\langle n|\,,\quad I-\Pi_{k_{1}}\right\}.
\tag{85}\] Therefore, for any small \(\epsilon_{c}>0\), there exists a large enough \(k_{1}\) such that the probability of the state projecting to the first subspace is \(\epsilon_{c}\)-close to unity, \(\mathrm{Tr}\{\rho\Pi_{k_{1}}\}\geq 1-\epsilon_{c}\). For a detailed description of the spectral decomposition of continuous-variable quantum systems, refer to [37, 38]. #### 3.3.1 Information Processing Task In the classical systems, to generate a discrete coding scheme from the continuous distributions, one simply extends the Markov chain \(R-W-X\) to discrete variables by quantizing the input and output variables as \(R_{k_{1}}-R-W-X-X_{k_{2}}\) where \(k_{1},k_{2}\in\mathbb{N}\) are the quantization parameters forming \(2^{k_{i}}\) levels within the \([-k_{i},k_{i}]\) region [13]. In our quantum-classical system, as the output is classical we can extend the original Markov chain by quantizing output \(X\) with \(Q_{K_{2}}\) where \(K_{2}=(k_{2},k_{2}^{\prime})\) is the pair of clipping region parameters creating the cut-off range \([-k_{2},k_{2}]\) and \(k_{2}^{\prime}\) is the precision parameter making \(2^{k_{2}^{\prime}}\) levels forming the quantized output space \(\mathcal{X}_{K_{2}}\), such that \[R\xrightarrow{M_{\text{s}}}W\xrightarrow{P_{X|W}}X \xrightarrow{Q_{k_{2}}}X_{K_{2}}. \tag{86}\] However, for the source state, performing the inverse of measurement is not straightforward, as the quantum state collapses after measurement. Therefore, one cannot simply apply the inverse of clipping projection to the projected state \(R_{k_{1}}\). Therefore, we cannot extend the original Markov chain for the source state in this case. Instead, in an alternative approach, we form another separate Markov chain by clipping the quantum source state with (85) and then directly feeding that clipped state \(R_{k_{1}}\) into the same continuous measurement POVM \(M_{w}\), as shown in figure 1. This is specifically possible as the clipped input state lies in a subspace of the same quantum Hilbert space. \[R\xrightarrow{C_{k_{1}}}R_{k_{1}}\xrightarrow{M_{\text{s}}} W_{k_{1}}^{\prime}\xrightarrow{P_{X|W}}X_{k_{1}}^{\prime}\xrightarrow{Q_{k_{2}}}X_{k_{1},K_{2}}^{\prime}. \tag{87}\] #### 3.3.2 Proof of Rate Inequalities In this approach, the data processing inequality does not directly apply to the system. Therefore we must prove the following lemma: **Lemma 5**.: _Suppose that we have a Quantum Markov chain of the form \(R-W-X\) satisfying the conditions in _Theorem 4_.: _Then by using the alternative clipping method as described by (87), the clipped states still satisfy the following rate inequalities in the asymptotic regime_ \[\lim_{k_{1}\rightarrow\infty}I_{g}(R_{k_{1}};W^{\prime}_{k_{1}}) \leq I_{g}(R;W), \tag{88}\] \[\lim_{k_{1}\rightarrow\infty}I(W^{\prime}_{k_{1}};X^{\prime}_{k_ {1},K_{2}}) \leq I(W;X). \tag{89}\] For the first rate inequality we directly use the following Proposition [ Proposition 6 of [39] by Shirokov]: **Proposition 6**.: _Let \(\{\rho^{n}_{A}\}\) be a sequence of states converging to a state \(\rho^{0}_{A}\) (See Section 11.1 of [33]). Also, let \(\{M_{n}\}\) be any arbitrary sequence of POVMs weakly converging to a POVM \(M_{0}\), with the outcome space \(I\). 
If either card\(\{I\}<+\infty\) or \(\lim_{n\rightarrow\infty}H(\rho^{n}_{A})=H(\rho^{0}_{A})<+\infty\) then_ \[\lim_{n\rightarrow\infty}I_{g}(M_{n},\rho^{n}_{A})=I_{g}(M_{0},\rho^{0}_{A}), \tag{90}\] _where \(I_{g}(M,\rho)\) is the information gain as defined by Groenewold._ To use this proposition in our system, we will use a refined POVM combining the clipping projection and the measurement POVM \(M\), in the following way. We first apply the clipping projection onto the input state \(\rho_{A}\). If it is not in the projected subspace (with probability \(\operatorname{Tr}\{(I-\Pi_{k_{1}})\rho_{A}\}\)), we discard the state and only notify the receiver by asserting an error bit \(A_{k_{1}}\). Otherwise, if the input state is inside the clipping subspace (with probability \(\operatorname{Tr}\{\Pi_{k_{1}}\rho_{A}\}\)), the input is sent to the measurement POVM \(M\), which produces the classical outcome. In this case, the post-collapsed state after the clipping projection is \[\hat{\rho}^{A^{\prime}}=\frac{\Pi_{k_{1}}\rho_{A}\Pi_{k_{1}}}{\operatorname{Tr}\{\Pi_{k_{1}}\rho_{A}\}},\] and the final a-posteriori average density operator for an event \(B\in\mathcal{B}\) is obtained as \[\hat{\rho}^{A^{\prime\prime}}_{B}=\frac{\sqrt{M(B)}\Pi_{k_{1}}\rho_{A}\Pi_{k_{1}}\sqrt{M(B)}}{\operatorname{Tr}\{M(B)\Pi_{k_{1}}\rho_{A}\Pi_{k_{1}}\}}.\] Then the combination of the two above POVMs can be expressed in the following refined POVM \[\hat{M}_{k_{1}}=\left\{\left\{M(B)\Pi_{k_{1}},B\in\mathcal{B}\right\},I-\Pi_{k_{1}}\right\}.\]

Figure 1: Markov Chain of the Alternative Approach: The upper diagram shows the original Markov chain provided by the single-letter intermediate state \(W\). The lower diagram shows the single-letter Markov chain of the alternative approach, in which the clipped source state is directly fed into the same continuous measurement \(M_{w}\) and the discrete output is obtained by quantizing \(X^{\prime}_{k_{1}}\). Finally, the optimal transport block transforms the discrete output back to the continuous output \(\hat{X}\).

It can be shown that the above refined POVM is a valid POVM. Next, we show that the sequence of POVMs \(\hat{M}_{k_{1}}\) converges weakly to the continuous POVM \(M\). As mentioned in Lemma 11.1 of [33], it suffices to show that for each subset \(B\in\mathcal{B}\) and any two arbitrary states \(\phi,\psi\in\mathcal{H}\), the inner product converges as \[\lim_{k_{1}\rightarrow\infty}\left\langle\phi\middle|\hat{M}_{k_{1}}(B)\middle|\psi\right\rangle=\left\langle\phi|M(B)|\psi\right\rangle.\] Starting with \(I-\Pi_{k_{1}}\), we show that its matrix elements vanish as \(k_{1}\rightarrow\infty\), for any \(\psi,\phi\in\mathcal{H}\): \[\lim_{k_{1}\rightarrow\infty}\left|\left\langle\phi|(I-\Pi_{k_{1}})|\psi\right\rangle\right| \leq\lim_{k_{1}\rightarrow\infty}\lVert(I-\Pi_{k_{1}})\left|\psi\right\rangle\rVert_{2}\cdot\lVert(I-\Pi_{k_{1}})\left|\phi\right\rangle\rVert_{2}\] \[=\lim_{k_{1}\rightarrow\infty}\sqrt{\left\langle\phi|(I-\Pi_{k_{1}})|\phi\right\rangle}\cdot\sqrt{\left\langle\psi|(I-\Pi_{k_{1}})|\psi\right\rangle},\] where the inequality appeals to the Cauchy-Schwarz inequality for Hilbert spaces.
By using definition of the projector operator and then using the Bessel's inequality and the fact that \(|n\rangle\) are orthonormal set, we have \[\left\langle\phi|\Pi_{k_{1}}|\phi\right\rangle=\sum_{n=1}^{k_{1}}|\left\langle n |\phi\right\rangle|^{2}\leq\lVert\phi\rVert^{2}.\] The above inequality changes to equality when the orthonormal set is a complete orthonormal basis (Parseval's identity), which is when \(k_{1}\rightarrow\infty\). Substituting this into the inner-product expression proves that \[\lim_{k_{1}\rightarrow\infty}\left|\left\langle\phi|(I-\Pi_{k_{1}})|\psi \right\rangle\right|=0.\] Moreover for the other operators, we have \[\lim_{k_{1}\rightarrow\infty}\left\langle\phi|M(B)\Pi_{k_{1}}|\psi\right\rangle =\lim_{k_{1}\rightarrow\infty}\left\langle\phi^{\prime}|\Pi_{k_{ 1}}|\psi\right\rangle\] \[=\left\langle\phi^{\prime}|\psi\right\rangle\] \[=\left\langle\phi|M(B)|\psi\right\rangle,\] where \(|\phi^{\prime}\rangle=M(B)\left|\phi\right\rangle\), and the argument is similar to previous operator. These together prove that the sequence of POVMs \(\hat{M}_{k_{1}}\) weakly converge to the \(M\) POVM. Therefore, by applying the Proposition 6, it follows that \[\lim_{k_{1}\rightarrow\infty}I(R_{k_{1}},W_{k_{1}}^{\prime})=I(R;W). \tag{91}\] Next, we prove the second rate-inequality (89). Define the clipping error indicator variable \(A_{k_{1}}\) as the event that the state is not inside the clipping subspace. By applying the chain rule for mutual information we have \[I(W,A_{k_{1}};X_{K_{2}})=I(W;X_{K_{2}})+I(A_{k_{1}};X_{K_{2}}|W) \leq H(A_{k_{1}})+I(W;X_{K_{2}}),\] which results in, \[I(W,A_{k_{1}};X_{K_{2}})-H(A_{k_{1}})\leq I(W;X_{K_{2}})\leq I(W,A_{k_{1}};X_{ K_{2}}). \tag{92}\] Therefore, as \(H(A_{k_{1}})\) decays to zero when \(k_{1}\rightarrow\infty\), from squeeze theorem we have the following limit \[\lim_{k_{1}\rightarrow\infty}I(W,A_{k_{1}};X_{K_{2}})=I(W;X_{K_{2}}). \tag{93}\] Again, using the chain rule we further have \[I(W,A_{k_{1}};X_{K_{2}}) =I(A_{k_{1}};X_{K_{2}})+I(W;X_{K_{2}}|A_{k_{1}})\] \[=I(A_{k_{1}};X_{K_{2}})+P(A_{k_{1}}=0).I(W;X_{K_{2}}|A_{k_{1}}=0)\] \[\qquad\qquad\qquad+P(A_{k_{1}}=1).I(W;X_{K_{2}}|A_{k_{1}}=1). \tag{94}\] As the clipping region grows to infinity with \(k_{1}\rightarrow\infty\), the probability of clipping decays to zero \(\lim_{k_{1}\rightarrow\infty}P(A_{k_{1}}=0)=1\). Therefore, because \(A_{k_{1}}\) is a simple Bernoulli random variable with the probability asymptotically small, then \(I(A_{k_{1}};X_{K_{2}})\leq H(A_{k_{1}})\to 0\). Also for the third term, we can find the following asymptotic bound \[\lim_{k_{1}\rightarrow\infty}P(A_{k_{1}}=1).I(W;X_{K_{2}}|A_{k_{1}}=1)\leq \lim_{k_{1}\rightarrow\infty}P(A_{k_{1}}=1).H(X_{K_{2}})=0, \tag{95}\] where in the above, we appealed to the fact that quantized output with limited alphabet has limited entropy. Finally, consider that \(I(W;X_{K_{2}}|A_{k_{1}}=0)=I(W^{\prime}_{k_{1}},X^{\prime}_{k_{1},K_{2}})\) holds by definition because when the input state is inside the clipping subspace, the system behaves as if no clipping was performed. Thus, combining all together, for any fixed \(K_{2}\) we have \[\lim_{k_{1}\rightarrow\infty}I(W,A_{k_{1}};X_{K_{2}})=\lim_{k_{1}\to \infty}I(W^{\prime}_{k_{1}},X^{\prime}_{k_{1},K_{2}}). 
\tag{96}\] Then (93) and (96) together show that for any fixed \(K_{2}\) we have \[\lim_{k_{1}\rightarrow\infty}I(W^{\prime}_{k_{1}},X^{\prime}_{k_{1},K_{2}})=I(W,X_{K_{2}})\leq I(W;X), \tag{97}\] where the inequality follows directly from the Data Processing Inequality and completes the proof. In addition, note that as \(k_{2},k^{\prime}_{2}\rightarrow\infty\), \(X_{K_{2}}\) converges weakly to \(X\). Therefore, using lower semi-continuity of mutual information [40, 41], combined with the above data-processing inequality we further have \[\lim_{k_{2},k^{\prime}_{2}\rightarrow\infty}\lim_{k_{1}\rightarrow\infty}I(W^{\prime}_{k_{1}},X^{\prime}_{k_{1},K_{2}})=\lim_{k_{2},k^{\prime}_{2}\rightarrow\infty}I(W,X_{K_{2}})=I(W;X). \tag{98}\] #### 3.3.3 Source-Coding Protocol for Continuous States Having a quantum source generating a sequence of \(n\) independent continuous states as \(\rho^{\otimes n}\), we apply a coding protocol on the source states, described in this section. We first separate the input states into proper and improper states by applying the clipping POVM \(\Pi_{k_{1}}\), which generates a sequence of error bits \(A^{n}_{k_{1}}\equiv\{A_{i,k_{1}}\}_{i=1}^{n}\) defined as \[A_{i,k_{1}}:=\begin{cases}0&\text{if }\rho_{i}\text{ lies in the clipping subspace}\\ 1&\text{otherwise}\end{cases}. \tag{99}\] Thus, according to the WLLN, for any fixed \(\epsilon_{cl}>0\) and \(k_{1}\in\mathbb{N}\), there exists a value \(N_{0}(\epsilon_{cl},k_{1})\) large enough such that for any \(n\geq N_{0}(\epsilon_{cl},k_{1})\), the number of proper states (states in the clipping subspace) \[T:=\sum_{i=1}^{n}(1-A_{i,k_{1}}), \tag{100}\] is within the range \(T\in[n(1-P_{k_{1}}\pm\epsilon_{cl})]\) with probability no less than \(1-\epsilon_{cl}\). Then for any sequence with \(T<t_{\text{min}}\) where \(t_{\text{min}}:=n(1-P_{k_{1}}-\epsilon_{cl})\), we do not perform source coding, and instead assert a source coding error event \(E_{ce}\). Upon receiving the coding error event, Bob will locally generate a sequence of random outcomes \(\hat{X}_{\text{local}}^{n}\) with the desired \(\mu_{X}^{n}\) output distribution. This ensures that, in every sequence for which the coding is performed, there are \(T\geq t_{\text{min}}\) independent source states inside the clipping region, to which we apply the coding scheme. For the remaining \(n-T\) states, we do not perform the coding; instead, we discard the source state and send the error index to the receiver. Next, the sequence of error bits is coded into one of \(\binom{n}{t_{\text{min}}}+1\) indices, where the extra index signals the source coding error event \(E_{ce}\). Note here that the required classical rate, in this case, will be \(R+\frac{1}{n}\log_{2}\left(\binom{n}{t_{\text{min}}}+1\right)\). The following limit \[\lim_{\epsilon_{cl}\to 0}\lim_{k_{1}\to\infty}\lim_{n\to\infty}R+\frac{1}{n}\log_{2}\left(\binom{n}{n(1-P_{k_{1}}-\epsilon_{cl})}+1\right)=R, \tag{101}\] ensures that the extra error-handling rate can be made arbitrarily small. Then at Bob's side, the classical sequence \(X_{k_{1},K_{2}}^{t}\) is constructed using the discrete coding scheme, and is fed to a memoryless optimal transport block \(\mathcal{T}_{\mu|\mu_{K_{2}}}\) to generate the final continuous sequence \(\hat{X}_{k_{1},K_{2}}^{t}\). Finally, the sequence is padded with the \(n-T\) locally generated independent values at the error positions to create the final \(\hat{X}_{k_{1},K_{2}}^{n}\). Bob then uses this sequence to prepare his final quantum states. Figure 2 shows the block diagram of this coding protocol.
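To get a feel for the error-handling overhead in (101), the following minimal sketch (ours, not part of the paper's protocol; the variable names are assumptions made for illustration) evaluates \(\frac{1}{n}\log_{2}\binom{n}{n(1-P_{k_{1}}-\epsilon_{cl})}\) numerically and compares it with the binary entropy \(H_{b}(P_{k_{1}}+\epsilon_{cl})\), which is its large-\(n\) limit; both vanish as the clipping probability and the slack shrink, which is what (101) asserts.

```python
# Minimal numerical sketch (assumed, not from the paper): the per-letter cost of
# indexing the clipping-error pattern, (1/n) * log2(C(n, n(1-P-eps)) + 1),
# approaches the binary entropy H_b(P+eps) for large n and vanishes as P, eps -> 0.
import numpy as np
from scipy.special import gammaln

def log2_binom(n, k):
    """log2 of the binomial coefficient C(n, k), computed via log-gamma."""
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)) / np.log(2)

def h_b(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

n = 10**6
for P in (0.1, 0.01, 0.001):
    eps = P / 10                          # slack epsilon_cl, shrinking together with P
    t_min = int(n * (1 - P - eps))        # minimum number of proper (non-clipped) states
    overhead = log2_binom(n, t_min) / n   # the +1 inside the log is negligible here
    print(f"P={P:<6} overhead={overhead:.5f} bits/letter  H_b(P+eps)={h_b(P + eps):.5f}")
```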
#### 3.3.4 Proof of Distortion Constraint The end-to-end average distortion for the above system is written as \[d_{n}(R^{n},\hat{X}_{k_{1},K_{2}}^{n}) =d_{n}\left(R^{n},\hat{X}_{k_{1},K_{2}}^{n}\Big{|}E_{ce}\right)P (E_{ce})+d_{n}\left(R^{n},\hat{X}_{k_{1},K_{2}}^{n}\Big{|}\neg E_{ce}\right)( 1-P(E_{ce}))\] \[=\frac{1}{n}\sum_{i=1}^{n}d\left(R_{i},\hat{X}_{i,\text{local}}|E _{ce}\right)\epsilon_{cl}+d_{n}\left(R^{n},\hat{X}_{k_{1},K_{2}}^{n}\Big{|} \neg E_{ce}\right), \tag{102}\] Figure 2: Coding Protocol where \(\tilde{X}_{\text{local}}^{n}\) is generated locally at Bob's side according to the fixed i.i.d. output distribution \(\mu_{X}\) in the event of coding error \(E_{ce}\). Therefore, in the first term above, for each \(i\)-th sample of the system, the uniform integrability of the distortion observable implies that it can be made arbitrarily small by selecting the proper value of \(\epsilon_{cl}\). As for the second term, we use the following lemma to provide a single-letter upper bound: **Lemma 7**.: _The end-to-end average \(n\)-letter distortion of the continuous system conditioned on the event of no coding error is asymptotically upper-bounded by the following single-letter distortion for any fixed value of \(k_{1},k_{2},k_{2}^{\prime}>0\) as \(n,t_{\text{min}}\to\infty\):_ \[\lim_{\begin{subarray}{c}n\to\infty,\\ t_{\text{min}}\to\infty\end{subarray}}d_{n}(R^{n},\hat{X}_{k_{1},K_{2}}^{n}| \neg E_{ce}) \leq(P_{k_{1}}+\epsilon_{cl})d\left(R,\hat{X}_{\text{local}}|A_{k _{1}}=1\right)+d(R_{k_{1}}+X_{k_{1},K_{2}}^{\prime})\] \[+\int_{\mathcal{X}}\text{Tr}\Big{\{}\bar{\bar{\rho}}_{x}^{R} \Big{(}\Delta(x)-\Delta(Q_{K_{2}}(x))\Big{)}\Big{\}}\mu_{X}(dx), \tag{103}\] _where \(\bar{\bar{\rho}}_{x}^{R}\) is the asymptotic post-measured average reference state, given by_ \[\bar{\bar{\rho}}_{x}^{R}:=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\rho_{x}^{ R_{i}}.\] Proof.: See Appendix C.2 We then take the limit of this single-letter distortion as \(k_{1},k_{2}\to\infty\). Note that as \(\lim_{k_{1}\to\infty}P(A_{k_{1}}=1)=0\), the first term decays to zero by assuming that \(\epsilon_{cl}\leq P_{k_{1}}\), as a result of uniform integrability of the distortion observable. The third term also decays to zero as follows: \[\lim_{k_{2}\to\infty}\lim_{k_{2}\to\infty}\int_{\mathcal{X}}\text {Tr}\Big{\{}\bar{\bar{\rho}}_{x}^{R}\Big{(}\Delta(x)-\Delta(Q_{K_{2}}(x)) \Big{)}\Big{\}}\mu_{X}(dx)\] \[\leq\lim_{k_{2}\to\infty}\lim_{k_{2}^{\prime}\to\infty}\int_{-2^ {k_{2}}}^{2^{k_{2}}}\|\Delta(x)-\Delta(Q_{K_{2}}(x))\|_{1}\mu_{X}(dx)\] \[\quad+\lim_{k_{2}\to\infty}\int_{\mathcal{X}\setminus[-2^{k_{2}},2^{k_{2}}]}\text{Tr}\Big{\{}\bar{\bar{\rho}}_{x}^{R}\Big{(}\Delta(x)-\Delta( Q_{K_{2}}(x))\Big{)}\Big{\}}\mu_{X}(dx)=0, \tag{104}\] where we split the integral into the cut-off range and out-of-range intervals and used the Holder's inequality. Then the first term above converges to zero due to the continuity of \(\Delta(x)\) operator and the second term converges to zero by the definition of uniform integrability. Next, consider that the following inequality holds by definition for the single-letter discrete distortion: \[d(R,X_{K_{2}}) =d(R,X_{K_{2}}|A_{k_{1}}=0)\Pr(A_{k_{1}}=0)+d(R,X_{K_{2}}|A_{k_{1} }=1)\Pr(A_{k_{1}}=1)\] \[=d(R_{k_{1}},X_{k_{1},K_{2}}^{\prime})\Pr(A_{k_{1}}=0)+d(R,X_{K_{ 2}}|A_{k_{1}}=1)\Pr(A_{k_{1}}=1). 
\tag{105}\] Then, by having the probability of clipping approach zero, \(\lim_{k_{1}\to\infty}\Pr(A_{k_{1}}=1)=0\), the second term above goes to zero as a direct result of uniform integrability, and we have the following asymptotic limit: \[\lim_{k_{1}\to\infty}d(R_{k_{1}},X_{k_{1},K_{2}}^{\prime})=d(R,X_{K_{2}}):=\int_{\mathcal{X}}\text{Tr}\big{\{}\sqrt{\rho}\,\Lambda_{X}(dz)\sqrt{\rho}\,\Delta\big{(}Q_{K_{2}}(z)\big{)}\big{\}}. \tag{106}\] Then as \(k_{2},k_{2}^{\prime}\rightarrow\infty\), we can upper-bound this RHS distortion value by \[\lim_{k_{2},k_{2}^{\prime}\rightarrow\infty}d(R,X_{K_{2}}) \leq D+\lim_{k_{2},k_{2}^{\prime}\rightarrow\infty}\left(d(R,X_{K_{2}})-d(R,X)\right)\] \[=D+\lim_{k_{2},k_{2}^{\prime}\rightarrow\infty}\int_{\mathcal{X}}\operatorname{Tr}\Bigl{\{}\sqrt{\rho}\,\Lambda_{X}(dz)\sqrt{\rho}\left(\Delta\bigl{(}Q_{K_{2}}(z)\bigr{)}-\Delta(z)\right)\Bigr{\}}=D, \tag{107}\] where the last equality follows similarly from continuity and uniform integrability of the distortion observable operator \(\Delta(x)\) as a function of \(x\in\mathcal{X}\). Combining the above bounds with the single-letter expression in (103) shows \[\lim_{\begin{subarray}{c}n\rightarrow\infty,\\ t_{\text{min}}\rightarrow\infty\end{subarray}}d_{n}(R^{n},\hat{X}_{k_{1},K_{2}}^{n})\leq D, \tag{108}\] which completes the proof of achievability. ## 4 Evaluations and Examples In this section, we present the evaluations and examples of the main coding theorem. Having the source state \(\rho\) and output distribution \(Q_{X}\), the purpose is to find the output-constrained rate-distortion function \(R(D;R_{c},\rho||Q_{X})\) for any fixed amount of common randomness. By inverting this function we then obtain the rate-limited optimal transport cost function. ### 4.1 Qubit System with Unlimited Common Randomness For the case of the qubit quantum-classical system, we employ entanglement fidelity as the distortion measure. Therefore, the distortion can be written as \[\operatorname{Tr}\{\Delta_{RX}\tau_{RX}\} =\operatorname{Tr}\Biggl{\{}\left(I-\left|\psi^{RA}\right\rangle\!\!\left\langle\psi^{RA}\right|\right)\left(\sum_{x}\operatorname{Tr}_{A}\left\{(\text{id}\otimes M_{x}^{A})\psi^{RA}\right\}\otimes|x\rangle\!\!\left\langle x\right|\right)\Biggr{\}}\] \[=1-\left\langle\psi^{RA}\right|\left(\sum_{x}\sqrt{\rho}M_{x}\sqrt{\rho}\otimes|x\rangle\!\!\left\langle x\right|\right)\left|\psi^{RA}\right\rangle. \tag{109}\] Using the following spectral decomposition of \(\rho\) on the eigenbasis \(\{\left|\varphi_{t}\right\rangle\}_{t=1}^{d}\), \[\rho_{A}=\sum_{t=1}^{d}P_{T}(t)\left|\varphi_{t}\right\rangle\!\!\left\langle\varphi_{t}\right|_{A},\] and by substituting the canonical purification of the above decomposition into (109), it simplifies to \[\operatorname{Tr}\{\Delta_{RX}\tau_{RX}\}=1-\sum_{x}\left\langle x\right|\rho M_{x}^{T_{v}}\rho\left|x\right\rangle, \tag{110}\] where \(M_{x}^{T_{v}}\) is the transpose of \(M_{x}\) with respect to the \(\{\varphi_{t}\}\) basis, defined as \[M_{x}^{T_{v}}=\sum_{t,s}\left\langle\varphi_{t}|M_{x}|\varphi_{s}\right\rangle\left|\varphi_{s}\right\rangle\!\!\left\langle\varphi_{t}\right|.\] In the presence of an unlimited amount of common randomness, the only effective rate becomes \(I(W;R)\), which is lower-bounded by \(I(X;R)\) because of the data processing inequality and the Markov chain \(R-W-X\) [13]. Thus \(W=X\) minimizes the mutual information, which means no local randomness is required at the decoder.
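As a quick numerical sanity check on the two distortion expressions above, the sketch below (ours; the particular state and POVM are arbitrary illustrative choices, not taken from the paper) evaluates the entanglement-fidelity distortion once through the purification form (109) and once through the transposed form (110); the two values coincide up to floating-point error.

```python
# Assumed sketch: the purification form (109) and the transposed form (110)
# of the entanglement-fidelity distortion agree for a qubit measurement.
import numpy as np

rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])     # an arbitrary qubit state (PSD, unit trace)
M0 = np.array([[0.70, 0.15],
               [0.15, 0.35]])           # an arbitrary effect with 0 <= M0 <= I
POVM = [M0, np.eye(2) - M0]

p, V = np.linalg.eigh(rho)              # rho = sum_t p_t |phi_t><phi_t|
sq = V @ np.diag(np.sqrt(p)) @ V.conj().T
# canonical purification |psi_RA> = sum_t sqrt(p_t) |phi_t>_R |phi_t>_A
psi = sum(np.sqrt(p[t]) * np.kron(V[:, t], V[:, t]) for t in range(2))

# (109): 1 - <psi_RA| sum_x (sqrt(rho) M_x sqrt(rho)) (x) |x><x| |psi_RA>
D_109 = 1.0
for x, M in enumerate(POVM):
    ket = np.eye(2)[:, x]
    D_109 -= np.real(psi.conj() @ np.kron(sq @ M @ sq, np.outer(ket, ket)) @ psi)

# (110): 1 - sum_x <x| rho M_x^T rho |x>, transpose taken in the eigenbasis of rho
D_110 = 1.0
for x, M in enumerate(POVM):
    Mt = V @ (V.conj().T @ M @ V).T @ V.conj().T
    D_110 -= np.real((rho @ Mt @ rho)[x, x])

print(D_109, D_110)                     # identical up to numerical precision
```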
Therefore, using the main theorem, for a qubit system with input state \(\rho\) and Bernoulli(\(q_{1}\)) output distribution, the output-constrained rate-distortion function is obtained by \[R(D;\infty,\rho||\text{Bern}(q_{1}))= \min_{M_{x}^{A}}I(X;R)_{\tau},\] \[\text{Tr}\left\{M_{x}^{A}\rho\right\}=q_{x},\qquad\forall x\in \mathcal{X},\] \[\text{such that }\sum_{x}\text{Tr}_{A}\left\{(\text{id}_{R}\otimes M_{x}^ {A})\psi^{RA}\right\}=\rho, \tag{111}\] \[\text{Tr}\{\Delta_{RX}\tau_{RX}\}\leq D.\] where the quantum mutual information is with respect to the composite state \(\tau_{RX}=\sum_{x}\sqrt{\rho}M_{x}\sqrt{\rho}\otimes\left|x\right\rangle\!\! \left\langle x\right|^{X}.\) #### 4.1.1 Rate-Limited Optimal Transport for Qubit Measurement System By addressing the above optimization problem, we obtain Theorem 8, which yields a transcendental system of equations that determines the output-constrained rate-distortion function for this system. **Theorem 8**.: _For the case of qubit input state \(\rho\) with matrix representation_ \[\rho=\begin{bmatrix}\rho_{1}&\rho_{2}\\ \rho_{2}^{*}&1-\rho_{1}\end{bmatrix},\] _and fixed output with Bernoulli\((1-q_{0})\) distribution, with the presence of an unlimited amount of common randomness, and using entanglement fidelity distortion measure, the output-constrained rate-distortion function and the corresponding optimal POVMs \(M_{0},M_{1}\) are provided as follows._ _For any distortion level \(D\) above the threshold \(D\geq D_{R_{0}}\) where_ \[D_{R_{0}}:=1-q_{0}\left\langle 0\middle|\rho^{2}\middle|0\right\rangle- \left(1-q_{0}\right)\left\langle 1\middle|\rho^{2}\middle|1\right\rangle. \tag{112}\] _the output state can be generated independently, thus \(R(D;\infty,\rho||\text{Bern}(1-q_{0}))=0\) with \(M_{0,R_{0}}=q_{0}I\)._ _Otherwise if \(D<D_{R_{0}}\) then_ \[R(D;\infty,\rho||\text{Bern}(1-q_{0})) =H(\rho)-q_{0}H\left(\frac{N_{opt}}{q_{0}}\right)-\left(1-q_{0} \right)H\left(\frac{\rho-N_{opt}}{1-q_{0}}\right), \tag{113}\] \[M_{0} =\sqrt{\rho}^{-1}N_{opt}\sqrt{\rho}^{-1},\quad M_{1}=I-M_{0}. \tag{114}\] _The optimal parameter \(N_{opt}/q_{0}\) is the optimal post-measurement reference state conditioned on outcome 0 and is expressed by_ \[N_{opt}:=\begin{bmatrix}n&s\rho_{2}/|\rho_{2}|\\ s\rho_{2}^{*}/|\rho_{2}|&q_{0}-n\end{bmatrix},\] _whose variables \(n,s\) are obtained by solving the transcendental system of equations_ \[\begin{cases}\frac{-as+b(n-q_{0}/2)}{E_{1}}\ln\frac{q_{0}/2+E_{1}}{q_{0}/2-E_{ 1}}+\frac{-a(s-|\rho_{2}|)+b(n-\rho_{1}+\frac{1-q_{0}}{2})}{E_{2}}\ln\frac{ \frac{1-q_{0}}{2}+E_{2}}{\frac{1-q_{0}}{2}-E_{2}}&=0\\ an+bs+c&=0\end{cases}, \tag{115}\] _where_ \[E_{1}(n,s) :=\sqrt{\left(n-\frac{q_{0}}{2}\right)^{2}+s^{2}}, \tag{116}\] \[E_{2}(n,s) :=\sqrt{\left(n-\rho_{1}+\frac{1-q_{0}}{2}\right)^{2}+(s-|\rho_{2} |)^{2}}. \tag{117}\] _The parameters \(a,b,c\) are fixed parameters of system based on the input and output states \(\rho,q_{0}\) defined as_ \[a :=1-\frac{4|\rho_{2}|^{2}}{1+2k}, b :=\frac{2|\rho_{2}|(2\rho_{1}-1)}{1+2k},\] \[c :=q_{0}\left(\rho_{1}-1+\frac{2|\rho_{2}|^{2}}{1+2k}\right)+ \left\langle 1|\rho^{2}|1\right\rangle-1+D, k :=\sqrt{\det\{\rho\}}.\] Proof.: See Appendix A. The following two special input states result in interesting \(N_{opt}\) optimal matrices: * Pure input state: For the case of pure input state, the rate-distortion curve reduces to a single point where the rate is \(R=0\), the optimal \(N_{opt}=q_{0}\rho\) and \(D=D_{R_{0}}\). 
This is because the pure input state has no correlation with the reference, so the receiver can simply use local randomness in its decoder. * Diagonal (among canonical eigen-basis) quantum input state: In this case, the optimal operator \(N_{opt}\) will also be diagonal. One can simply find that in this case when \(D\leq D_{R_{0}}\), the optimal operator is given by \[N_{opt}^{cl}=\begin{bmatrix}1-D+(1-\rho_{1})(q_{0}+\rho_{1}-1)&0\\ 0&D+\rho_{1}(q_{0}+\rho_{1}-2)\end{bmatrix}.\] #### 4.1.2 Optimal Transport for Qubit Measurement System The optimal transport scheme provides the minimum achievable distortion when the rate of information is not limited. The following theorem provides this value for the problem of the qubit measurement system. **Theorem 9**.: _For the case of qubit input state \(\rho\) and fixed output distribution \(\text{Bern}(1-q_{0})\), in the presence of an unlimited amount of common randomness, the optimal transport with respect to the entanglement fidelity distortion measure is obtained under the following parameter conditions. Defining the parameter \(Q:=\frac{\rho_{1}-1/2}{\sqrt{1-4|\rho_{2}|^{2}}}\), we have the minimum transportation cost \(D_{OT}:=D(R=\infty,R_{c}=\infty,\rho||\text{Bern}(1-q_{0}))\) and the optimal operator \(N_{OT}:=\begin{bmatrix}n_{OT}&s_{OT}\rho_{2}/|\rho_{2}|\\ s_{OT}\rho_{2}^{*}/|\rho_{2}|&q_{0}-n_{OT}\end{bmatrix}\) given by:_ 1. _if_ \(Q\leq\frac{\det(\rho)}{1-q_{0}}-\frac{1}{2}\) _then_ \[D_{OT} =q_{0}(1-\rho_{1})+\det(\rho)+\frac{1-q_{0}}{2}\left(1-\sqrt{1-4|\rho_{2}|^{2}}\right),\] (118) \[s_{OT} =\frac{b}{\sqrt{1-4|\rho_{2}|^{2}}}\frac{1-q_{0}}{2}+|\rho_{2}|,\] \[n_{OT} =\left(\frac{a}{\sqrt{1-4|\rho_{2}|^{2}}}-1\right)\frac{1-q_{0}}{2}+\rho_{1}.\] 2. _Else if_ \(Q\geq\frac{1}{2}-\frac{\det(\rho)}{q_{0}}\) _then_ \[D_{OT} =(1-q_{0})\rho_{1}+\det(\rho)+\frac{q_{0}}{2}\left(1-\sqrt{1-4|\rho_{2}|^{2}}\right),\] (119) \[s_{OT} =\frac{b}{\sqrt{1-4|\rho_{2}|^{2}}}\frac{q_{0}}{2},\] \[n_{OT} =\left(\frac{a}{\sqrt{1-4|\rho_{2}|^{2}}}+1\right)\frac{q_{0}}{2}.\] 3. _Else if_ \(\frac{\det(\rho)}{1-q_{0}}-\frac{1}{2}\leq Q\leq\frac{1}{2}-\frac{\det(\rho)}{q_{0}}\) _then_ \[D_{OT} =1-q_{0}\left(\rho_{1}-1+\frac{2|\rho_{2}|^{2}}{1+2k}\right)-\langle 1|\rho^{2}|1\rangle-an_{OT}-bs_{OT},\] (120) \[s_{OT} =\frac{(q_{0}-2\det(\rho))\,|\rho_{2}|+\text{sgn}\{a-b\}(1-2\rho_{1})\sqrt{\Delta^{\prime}}}{4|\rho_{2}|^{2}+(1-2\rho_{1})^{2}},\] \[n_{OT} =\frac{2q_{0}|\rho_{2}|^{2}-(1-2\rho_{1})(\rho_{1}q_{0}-\det(\rho))+\text{sgn}\{a-b\}2|\rho_{2}|\sqrt{\Delta^{\prime}}}{4|\rho_{2}|^{2}+(1-2\rho_{1})^{2}}.\] _where_ \(\Delta^{\prime}:=(q_{0}(1-q_{0})-\det(\rho))\det(\rho)\)_._ Proof.: See Appendix B. Interestingly, when the input state is prepared with the diagonal density operator along the eigenbasis of the output state (i.e. \(\rho_{2}=0\)), the optimal matrices \(N_{OT}\) and \(\rho-N_{OT}\) produce the classical binary optimal transport scheme. One can see that the first and second conditions reduce to \(q_{0}\geq\rho_{1}\) and \(q_{0}<\rho_{1}\), respectively, and the third condition is empty. #### 4.1.3 Minimum Required Rate for Optimal Transport Finally, it is worth recalling that an unlimited communication rate is not required to obtain optimal transport.
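Before turning to the required rate, the following sketch (ours, not part of the paper; the test state, the grid resolution, and all helper names are assumptions) implements the three cases of Theorem 9 and cross-checks the closed-form \(D_{OT}\) against a brute-force search over the two-parameter family \((n,s)\) used in the theorem, with the distortion written in terms of \(N=\sqrt{\rho}\,M_{0}\sqrt{\rho}\) as derived in Appendix A. The returned \(n_{OT},s_{OT}\) are exactly the quantities that are substituted into (113) below to obtain the minimum sufficient rate.

```python
# Assumed sketch: evaluate the closed-form optimal transport cost of Theorem 9
# and verify it against a brute-force minimisation of the entanglement-fidelity
# distortion over admissible N of the symmetric form parametrised by (n, s).
import numpy as np

rho = np.array([[0.55, 0.12 - 0.20j], [0.12 + 0.20j, 0.45]])   # test state (assumed)
q0 = 0.35                                                       # output Bern(1 - q0)

def theorem9(rho, q0):
    r1, r2 = rho[0, 0].real, abs(rho[0, 1])
    det = r1 * (1 - r1) - r2 ** 2
    root = np.sqrt(1 - 4 * r2 ** 2)
    Q = (r1 - 0.5) / root
    k = np.sqrt(det)
    a = 1 - 4 * r2 ** 2 / (1 + 2 * k)
    b = 2 * r2 * (2 * r1 - 1) / (1 + 2 * k)
    if Q <= det / (1 - q0) - 0.5:                               # case 1
        s = b / root * (1 - q0) / 2 + r2
        n = (a / root - 1) * (1 - q0) / 2 + r1
        D = q0 * (1 - r1) + det + (1 - q0) / 2 * (1 - root)
    elif Q >= 0.5 - det / q0:                                   # case 2
        s = b / root * q0 / 2
        n = (a / root + 1) * q0 / 2
        D = (1 - q0) * r1 + det + q0 / 2 * (1 - root)
    else:                                                       # case 3
        Dp = (q0 * (1 - q0) - det) * det
        den = 4 * r2 ** 2 + (1 - 2 * r1) ** 2
        sg = np.sign(a - b)
        s = ((q0 - 2 * det) * r2 + sg * (1 - 2 * r1) * np.sqrt(Dp)) / den
        n = (2 * q0 * r2 ** 2 - (1 - 2 * r1) * (r1 * q0 - det)
             + sg * 2 * r2 * np.sqrt(Dp)) / den
        D = (1 - q0 * (r1 - 1 + 2 * r2 ** 2 / (1 + 2 * k))
             - np.real((rho @ rho)[1, 1]) - a * n - b * s)
    return D, n, s

p, V = np.linalg.eigh(rho)
sq = V @ np.diag(np.sqrt(p)) @ V.conj().T
G = sq @ np.diag([1.0, -1.0]) @ sq              # G = sqrt(rho) diag(1,-1) sqrt(rho)
ph = rho[0, 1] / abs(rho[0, 1])

def distortion(n, s):
    """Entanglement-fidelity distortion of the measurement with N = sqrt(rho) M_0 sqrt(rho)."""
    N = np.array([[n, s * ph], [s * np.conj(ph), q0 - n]])
    return np.real(1 - (rho @ rho)[1, 1] - np.trace(N @ G))

D_closed, n_ot, s_ot = theorem9(rho, q0)

r1, r2 = rho[0, 0].real, abs(rho[0, 1])
best = np.inf        # brute force over the disks encoding 0 <= N <= rho for symmetric N
for n in np.linspace(0, q0, 400):
    for s in np.linspace(-q0 / 2, r2 + (1 - q0) / 2, 400):
        ok1 = (n - q0 / 2) ** 2 + s ** 2 <= (q0 / 2) ** 2 + 1e-12
        ok2 = (n - (r1 - (1 - q0) / 2)) ** 2 + (s - r2) ** 2 <= ((1 - q0) / 2) ** 2 + 1e-12
        if ok1 and ok2:
            best = min(best, distortion(n, s))

print(D_closed, best)                           # agree to within the grid resolution
```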
Thus, the minimum sufficient required rate \(R_{\text{min, OT}}\) for the optimal transport scheme can be obtained by substituting optimal values \(s_{OT}\) and \(n_{OT}\) in (113), which results \[R_{\text{min, OT}}=H(\rho)-q_{0}H_{b}\left(\frac{1}{2}-\frac{E_{1}(n_{OT},s_{OT })}{q_{0}}\right)-(1-q_{0})H_{b}\left(\frac{1}{2}-\frac{E_{2}(n_{OT},s_{OT})}{ 1-q_{0}}\right). \tag{121}\] where \(H_{b}(.)\) is the binary entropy function, and \(E_{1}(n_{OT},s_{OT})\) and \(E_{2}(n_{OT},s_{OT})\) are functions defined in (116), (117). ### Numerical Results We used the CVX package [42, 43] to find numerical solutions for the examples of this convex optimization problem. Also [44] provides the CVX functions for the von-Neumann entropy functions. The output-constrained rate-distortion function is numerically evaluated for the following set of examples with fixed \(q_{0}=1/2\) and \(\rho_{1}=1/2\) parameters and different off-diagonal values. \[\text{Ex. 1:}\quad q_{0}=\frac{1}{2}\quad\rho_{a}=\begin{bmatrix}1/2&0 \\ 0&1/2\end{bmatrix}, \text{Tr}\big{\{}\rho_{a}^{2}\big{\}}=0.5.\] \[\text{Ex. 2:}\quad q_{0}=\frac{1}{2}\quad\rho_{b}=\begin{bmatrix}1/2& 0.1319-0.0361i\\ 0.1319+0.0361i&1/2\end{bmatrix}, \text{Tr}\big{\{}\rho_{b}^{2}\big{\}}=0.5374.\] \[\text{Ex. 3:}\quad q_{0}=\frac{1}{2}\quad\rho_{c}=\begin{bmatrix}1/2& 0.0754-0.2307i\\ 0.0754+0.2307i&1/2\end{bmatrix}, \text{Tr}\big{\{}\rho_{c}^{2}\big{\}}=0.6178.\] \[\text{Ex. 4:}\quad q_{0}=\frac{1}{2}\quad\rho_{d}=\begin{bmatrix}1/2& -0.1399-0.3872i\\ -0.1399+0.3872i&1/2\end{bmatrix}, \text{Tr}\big{\{}\rho_{d}^{2}\big{\}}=0.8390.\] These rate-distortion functions are plotted in Figure 3, which shows that starting from a maximally mixed state (Ex.1), as the source state becomes purer, it requires less communication rate to maintain the same level of entanglement fidelity. In the case of a pure source state, the rate-distortion function reduces to a single point at the no transmission rate. This is intuitively acceptable as the pure state is independent of the reference state. So the receiver can generate random outcomes independent of the source. However, the entanglement fidelity distortion will not be zero because the measurement collapses the state into deterministic outcomes and hence it will not fully recover the source state. On the contrary, the maximally mixed state has the maximum dependence on the reference state which requires the maximum rate of transmission to recover the state with the same level of distortion. Figure 3: Output-Constrained rate-distortion function with unlimited common randomness for the examples with \(\rho_{1}=0.5\) Conclusion We introduced the output-constrained lossy source coding with limited classical common randomness for the quantum to classical systems. We further used this source coding scheme to establish the concept of rate-limited quantum-classical optimal transport. The theorem provides a computable single-letter characterization of the achievable rate region \((R,R_{c})\) to fulfill the distortion level in accordance with a generally defined form of distortion observable. Moreover, we extended this theorem to the continuous-variable quantum systems with the help of an alternative continuous coding protocol. We next performed an evaluation for the example of binary quantum systems with unlimited common randomness. The analytical expression for the rate-limited q-c optimal transport was provided in the form of a transcendental equation. 
Also, the analytical expressions for the q-c optimal transport scheme were provided for the minimum transportation cost when sufficient communication rate and common randomness are available. The expressions show that in the case of the source qubit state having a diagonal density operator along the canonical eigenbasis and using the entanglement fidelity distortion measure, the optimal transport scheme retrieves the classical optimal transport scheme for the minimum transportation cost. Future works may include the coding theorems for the optimal transport through quantum channels as well as the analysis of one-shot regime which will provide more practical applications of the subject. ## Appendix A Proof of Theorem 8 The optimization problem in (111) with the entanglement fidelity distortion measure is equivalent to the following entropy maximization problem \[\max_{M_{0},M_{1}}q_{0}H(\rho_{0})+q_{1}H(\rho_{1})\] \[\text{s.t.}\quad q_{0}+q_{1} =1\] \[\operatorname{Tr}\{M_{x}\rho\} =q_{x},\,\,\,x\in\{0,1\}\] \[\rho_{x} =\sqrt{\rho}M_{x}\sqrt{\rho}/q_{x},\,\,\,x\in\{0,1\}\] \[\rho_{0}q_{0}+\rho_{1}q_{1} =\rho\] \[\left\langle 0\right|\rho M_{0}\rho\left|0\right\rangle+\left\langle 1 \right|\rho M_{1}\rho\left|1\right\rangle \geq 1-D\] \[M_{0}+M_{1} =I\] \[M_{0},M_{1} \geq 0\] where \(H(\rho)=-\operatorname{Tr}\{\rho\ln\rho\}\) is the von-Neumann entropy function. Note that although the distortion formula in (110) has transposed POVM operator \(M_{0}^{T_{x}}\), we do not employ the transpose in the distortion constraint of the above optimization problem. The reason is that the eigenvalues of \(\rho_{x}\) are preserved under the transposition with respect to any basis. This implies that the entropy function also does not change under transposition, \[H(\rho_{x})=H(\sqrt{\rho}M_{0}^{T}\sqrt{\rho}).\] Therefore, we may remove the transpose from all the terms in the optimization problem. Then, by defining \(N=\sqrt{\rho}M_{0}\sqrt{\rho}\) the conditional post-measurement reference state given outcome zero, as the variable of optimization, the optimization problem reduces to the following standard form \[\min_{N} \operatorname{Tr}\{N\ln(N/q_{0})\}+\operatorname{Tr}\biggl{\{}(\rho- N)\ln\biggl{(}\frac{\rho-N}{1-q_{0}}\biggr{)}\biggr{\}},\] (122) s.t., \[\left\langle 0\right|\sqrt{\rho}N\sqrt{\rho}\left|0\right\rangle+ \left\langle 1\right|\sqrt{\rho}(\rho-N)\sqrt{\rho}\left|1\right\rangle\geq 1-D, \tag{123}\] \[\operatorname{Tr}\{N\}=q_{0},\] (124) \[0\preceq N\preceq\rho, \tag{125}\] where \(\rho\) is the input state and \(0\leq q_{0}\leq 1\) is the zero-output probability, which are the given parameters of the problem. This is a convex optimization problem, as the objective function comprises negative entropy functions that are concave, and the constraint is linear. The distortion constraint (123) is further simplified to \[\operatorname{Tr}\{NG\}\geq 1-D-\left\langle 1|\rho^{2}|1\right\rangle, \tag{126}\] where \[G:=\sqrt{\rho}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\sqrt{\rho}. \tag{127}\] ### Zero-Crossing Point To find the zero crossing point, we first ignore the distortion constraint and find the optimal operator when \(D\to\infty\). 
Thus, simply taking the derivative of the objective function and equating it to zero gives \[\frac{df(N)}{dN} =\frac{d}{dN}\left[\operatorname{Tr}\biggl{\{}N\ln\frac{N}{q_{0}} \biggr{\}}+\operatorname{Tr}\biggl{\{}(\rho-N)\ln\frac{\rho-N}{1-q_{0}}\biggr{\}}\right]\] \[=\left(I+\ln\frac{N}{q_{0}}\right)^{T}-\left(I+\ln\frac{\rho-N}{1 -q_{0}}\right)^{T}:=0\] \[\implies\ln\frac{N}{q_{0}} =\ln\frac{\rho-N}{1-q_{0}}\] \[\implies N=q_{0}\rho\] This implies the measurement POVMs \(M_{0}=q_{0}I,\ M_{1}=(1-q_{0})I\) in the infinite-distortion case. Substituting these values into the mutual information expression gives \(I(R;X)=0\) which means the communication rate will be zero and the input and output will be independent, which is intuitively acceptable. This is also observed from the \(M_{0},M_{1}\) which both apply the same identity operator on the source state regardless of the outcome. Substituting this \(N=q_{0}\rho\) in the distortion constraint gives the zero-crossing point \[D_{R_{0}}:=1-q_{0}\left\langle 0\middle|\rho^{2}|0\right\rangle- \left(1-q_{0}\right)\left\langle 1\middle|\rho^{2}|1\right\rangle.\] Thus, for all values of entanglement fidelity distortion in the range \(D\geq D_{Ro}\), the output and reference state are independent so the rate is zero. ### Non-Zero Rate Region Next, in the non-zero region \(D\leq 1-q_{0}\left\langle 0\middle|\rho^{2}\middle|0\right\rangle-\left(1-q_{0} \right)\left\langle 1\middle|\rho^{2}\middle|1\right\rangle\), the distortion constraint is active. Recall that the convex problem (122) is on the domain of positive semi-definite Hermitian matrices \(N\in\mathcal{S}_{+}^{n}\). Also the trace of \(N_{opt}\) is constrained to be \(\operatorname{Tr}\{N\}=q_{0}\). So the states are shown in the expanded matrix form \[\rho=\begin{bmatrix}\rho_{1}&\rho_{2}\\ \rho_{2}^{*}&1-\rho_{1}\end{bmatrix},\quad N=\begin{bmatrix}n&n_{\text{off}} \\ n_{\text{off}}^{*}&q_{0}-n\end{bmatrix},\] where \(n,\rho_{1}\in\mathbb{R}^{+}\) and \(n_{\text{off}},\rho_{2}\in\mathbb{C}\). Further, the PSD constraints of (125) reduce to \(\det(N)\geq 0\) and \(\det(\rho-N)\geq 0\). Thus we have \[\left|n_{\text{off}}\right|^{2} \leq n(q_{0}-n), \tag{128}\] \[\left|\rho_{2}-n_{\text{off}}\right|^{2} \leq(\rho_{1}-n)(1-\rho_{1}-q_{0}+n). \tag{129}\] Next, examining the objective function (122) and constraints shows a symmetry among the value of \(n_{\text{off}}\). Note that the first term in (122) is a function of eigenvalues of matrix \(N\) which only depends on \(\left|n_{\text{off}}\right|\). Similarly, the second term is only a function of \(\left|\rho_{2}-n_{\text{off}}\right|\). These two expressions can be illustrated as two circles in the complex plane of \(n_{\text{off}}\). These circles create a symmetry w.r.t. the direction of vector \(\rho_{2}\) in this complex plane. 
Moreover, we can rewrite the LHS of distortion constraint (126) as \[\operatorname{Tr}\{NG\} =\operatorname{Tr}\left\{\begin{bmatrix}n&n_{\text{off}}\\ n_{\text{off}}^{*}&q_{0}-n\end{bmatrix}\begin{bmatrix}g_{1}&g_{2}\rho_{2}/| \rho_{2}|\\ g_{2}\rho_{2}^{*}/|\rho_{2}|&g_{3}\end{bmatrix}\right\}\] \[=(g_{1}-g_{3})n+\frac{2g_{2}}{|\rho_{2}|}\operatorname{Re}\{n_{ \text{off}}\,\rho_{2}^{*}\}+q_{0}g_{3}.\] By expanding the term \(\operatorname{Re}\{n_{\text{off}}\,\rho_{2}^{*}\}\) in above expression, the distortion function is \[\operatorname{Re}\{n_{\text{off}}\}\operatorname{Re}\{\rho_{2}\}+ \operatorname{Im}\{n_{\text{off}}\}\operatorname{Im}\{\rho_{2}\}+f(n,q_{0}, \rho)\geq 0,\] where \(f(n,q_{0},\rho)\) is a function comprised of the remaining terms. This is a half-plane in the complex plane of \(n_{\text{off}}\), with its slope orthogonal to the slope of symmetry line \(\rho_{2}\). Therefore, the objective function and all the constraints are symmetric w.r.t. the line \(\rho_{2}\) in the complex plane. In view of the fact that the problem is a convex program, the solution must occur on the line of symmetry. This means the optimal solution has the shape \[N_{opt}=\begin{bmatrix}n&s\rho_{2}/|\rho_{2}|\\ s\rho_{2}^{*}/|\rho_{2}|&q_{0}-n\end{bmatrix}, \tag{130}\] where \(n,s\in\mathbb{R}\) are the variables of optimization. The active distortion constraint is also expanded in the form \[\operatorname{Tr}\{NG\}=1-D-\left\langle 1|\rho^{2}\middle|1\right\rangle\] \[\Longrightarrow\Big{(}g_{1}-g_{3}\Big{)}n+\Big{(}2g_{2}\Big{)}s+ \Big{(}q_{0}g_{3}+\left\langle 1|\rho^{2}\middle|1\right\rangle-1+D\Big{)}=0, \tag{131}\] where \(G\) is defined in (127) and its elements are obtained by \[G:=\begin{bmatrix}g_{1}&\frac{g_{2}}{|\rho_{2}|}\rho_{2}\\ \frac{g_{2}}{|\rho_{2}|}\rho_{2}^{*}&g_{3}\end{bmatrix}=\frac{1}{1+2k}\begin{bmatrix}( \rho_{1}+k)^{2}-|\rho_{2}|^{2}&(2\rho_{1}-1)\rho_{2}\\ b(2\rho_{1}-1)\rho_{2}^{*}&|\rho_{2}|^{2}-(1-\rho_{1}+k)^{2}\end{bmatrix},\] with \(k:=\sqrt{\det\{\rho\}}\). Therefore, expanding the objective function yields the following scalar convex optimization problem \[\min_{n,s}\sum_{i=1,2}\lambda_{N_{i}}(n,s)\ln(\lambda_{N_{i}}(n,s )/q_{0})+\sum_{i=1,2}\lambda_{d_{i}}(n,s)\ln(\lambda_{d_{i}}(n,s)/(1-q_{0}))\] \[\text{s.t. }\Big{(}g_{1}-g_{3}\Big{)}n+\Big{(}2g_{2}\Big{)}s+ \Big{(}q_{0}g_{3}+\big{\langle}1|\rho^{2}\big{|}1\big{\rangle}-1+D\Big{)}=0\] where \(n,s\in\mathbb{R}\). The terms \(\lambda_{N_{i}}\) and \(\lambda_{d_{i}}\) are the eigenvalues of \(N\) and \(\rho-N\) respectively, obtained by \[\lambda_{N_{1},N_{2}} =\frac{q_{0}}{2}\pm E_{1}(n,s),\] \[\lambda_{d_{1},d_{2}} =\frac{1-q_{0}}{2}\pm E_{2}(n,s),\] where \(E_{1}(n,s)\) and \(E_{2}(n,s)\) are as defined in (116), (117). #### a.2.1 The Transcendental Equation of Optimal Solution We solve by substituting the linear constraint into the objective function, then taking the derivative w.r.t. \(n\) and equating it to zero. 
The final solution becomes in the form of the following transcendental system of equations of \(n,s\) which provides the optimal values \(n_{opt}\) and \(s_{opt}\): \[\frac{-as+b(n-q_{0}/2)}{E_{1}}\ln\frac{q_{0}/2+E_{1}}{q_{0}/2-E_{1 }}+\frac{-a(s-|\rho_{2}|)+b(n-\rho_{1}+\frac{1-q_{0}}{2})}{E_{2}}\ln\frac{ \frac{1-q_{0}}{2}+E_{2}}{\frac{1-q_{0}}{2}-E_{2}}=0\] \[an+bs+c=0\] where \(a:=g_{1}-g_{3}\), \(b:=2g_{2}\) and \(c:=q_{0}g_{3}+\big{\langle}1|\rho^{2}\big{|}1\big{\rangle}-1+D\) are the parameters of the linear distortion constraint and are given by \[a =1-\frac{4|\rho_{2}|^{2}}{1+2k},\] \[b =\frac{2|\rho_{2}|(2\rho_{1}-1)}{1+2k},\] \[g_{3} =\rho_{1}-1+\frac{2|\rho_{2}|^{2}}{1+2k}.\] Based on the optimal values \(n_{opt},s_{opt}\), the optimal POVM operators are \[M_{0,\text{opt}}=\sqrt{\rho}^{-1}N_{\text{opt}}\sqrt{\rho}^{-1}, \quad M_{1,\text{opt}}=I-M_{0},\] \[N_{\text{opt}}=\begin{bmatrix}n_{opt}&s_{opt}\cdot\rho_{2}/| \rho_{2}|\\ s_{opt}\cdot\rho_{2}^{*}/|\rho_{2}|&q_{0}-n_{opt}\end{bmatrix}.\] ## Appendix B Proof of Theorem 9 To find the optimal transport scheme, we assume having an unlimited available rate and find the minimum possible distortion. In this case, the problem reduces to \[\min_{M_{0},M_{1}}D_{ef} :=1-\sum_{x}\left\langle x|\rho M_{x}\rho|x\right\rangle,\] s.t., \[\quad M_{0},M_{1}\geq 0,\] \[\quad M_{0}+M_{1}=I,\] \[\quad\mathrm{Tr}\{M_{0}\rho\}=q_{0}.\] Note that we used the same argument as in Appendix A to remove the transpose operator from the formulations. Then again using the change of variable \(N:=\sqrt{\rho}M_{0}\sqrt{\rho}\) reduces the problem to the following semi-definite programming, whose result is the operator for optimal transport \(N_{OT}\) \[\min_{N}1-\left\langle 1|\rho^{2}\big{|}1\right\rangle- \mathrm{Tr}\{NG\},\] s.t., \[\quad 0\preceq N\preceq\rho,\] \[\mathrm{Tr}\{N\}=q_{0}.\] Similar to the rate-distortion problem in the previous section, the symmetry implies \(N_{OT}\) to be in the form of (130). With this assumption, the problem reduces to the scalar optimization below: \[\min_{n,s}f_{0}(n,s) :=-(g_{1}-g_{3})n-(2g_{2})s+1-q_{0}g_{3}-\left\langle 1|\rho^{2}|1 \right\rangle,\] s.t. \[\quad f_{1}(n,s) :=\left(n-\frac{q_{0}}{2}\right)^{2}+s^{2}<\left(\frac{q_{0}}{2} \right)^{2},\] \[\quad f_{2}(n,s) :=\left(n-\left(\rho_{1}-\frac{1-q_{0}}{2}\right)\right)^{2}+(s- |\rho_{2}|)^{2}<\left(\frac{1-q_{0}}{2}\right)^{2}.\] This is quadratic-constrained linear programming which is a convex problem. By forming the Lagrangian function \[\mathcal{L}(n,s,\lambda_{1},\lambda_{2})=f_{0}(n,s)+\lambda_{1}f_{1}(n,s)+ \lambda_{2}f_{2}(n,s),\qquad\lambda_{1,2}\geq 0\] taking the partial derivatives with respect to \(n,s\) and equating to zero we obtain the optimal variables as a function of \(\lambda_{i}\) Lagrange multipliers, \[n_{opt}(\lambda_{1},\lambda_{2}) =\frac{a+q_{0}\lambda_{1}+2(\rho_{1}-\frac{1-q_{0}}{2})\lambda_{2 }}{2(\lambda_{1}+\lambda_{2})},\] \[s_{opt}(\lambda_{1},\lambda_{2}) =\frac{b+2\lambda_{2}|\rho_{2}|}{2(\lambda_{1}+\lambda_{2})}.\] Then we substitute above expressions in the following complementary slackness conditions \[\lambda_{1}f_{1}\Big{(}n_{opt}(\lambda_{1},\lambda_{2}),\,s_{opt}( \lambda_{1},\lambda_{2})\Big{)} =0,\] \[\lambda_{2}f_{2}\Big{(}n_{opt}(\lambda_{1},\lambda_{2}),\,s_{opt}( \lambda_{1},\lambda_{2})\Big{)} =0,\] \[\lambda_{1,2} \geq 0,\] which result in the following scenarios. 1. 
Minimum happens at \(f_{2}\) circle (\(\lambda_{1}=0,\ \lambda_{2}\neq 0\)): Because \(\lambda_{2}\neq 0\), then \(f_{2}(n_{opt},s_{opt})=0\). This results in \[\lambda_{2}=\frac{\sqrt{a^{2}+b^{2}}}{1-q_{0}}.\] The corresponding optimal values for variables are obtained as \[s_{opt} =\frac{b(1-q_{0})}{2\sqrt{a^{2}+b^{2}}}+|\rho_{2}|=\frac{b}{\sqrt {1-4|\rho_{2}|^{2}}}\frac{1-q_{0}}{2}+|\rho_{2}|,\] \[n_{opt} =\frac{a(1-q_{0})}{2\sqrt{a^{2}+b^{2}}}+\rho_{1}-\frac{1-q_{0}}{ 2}=\left(\frac{a}{\sqrt{1-4|\rho_{2}|^{2}}}-1\right)\frac{1-q_{0}}{2}+\rho_{1},\] \[D_{OT} =1-q_{0}g_{3}-\left\langle 1|\rho^{2}|1\right\rangle-\frac{1-q_{0} }{2}\sqrt{a^{2}+b^{2}}-a(\rho_{1}-\frac{1-q_{0}}{2})-b|\rho_{2}|\] \[=q_{0}(1-\rho_{1})+\det(\rho)+\frac{1-q_{0}}{2}\left(1-\sqrt{1-4| \rho_{2}|^{2}}\right).\] Note that \(a^{2}+b^{2}=1-4|\rho_{2}|^{2}\). Also, substituting the above values into the feasibility condition for \(f_{2}(n_{opt},s_{opt})\leq 0\) provides the conditions required for this scenario: \[\frac{1-q_{0}}{2}+\frac{1-q_{0}}{\sqrt{a^{2}+b^{2}}}\left(a(\rho_{1}-\frac{1} {2})+b|\rho_{2}|\right)\leq\det(\rho).\] 2. Minimum happens at \(f_{1}\) circle (\(\lambda_{1}\neq 0,\lambda_{2}=0\)): Because \(\lambda_{1}\neq 0\), then \(f_{1}(n_{opt},s_{opt})=0\). This results in \[\lambda_{1}=\frac{\sqrt{a^{2}+b^{2}}}{q_{0}}.\] The corresponding optimal values for variables are obtained as \[s_{opt} =\frac{bq_{0}}{2\sqrt{a^{2}+b^{2}}}=\frac{b}{\sqrt{1-4|\rho_{2}|^ {2}}}\frac{q_{0}}{2},\] \[n_{opt} =\frac{aq_{0}}{2\sqrt{a^{2}+b^{2}}}+\frac{q_{0}}{2}=\left(\frac{ a}{\sqrt{1-4|\rho_{2}|^{2}}}+1\right)\frac{q_{0}}{2},\] \[D_{OT} =1-q_{0}g_{3}-\left\langle 1|\rho^{2}|1\right\rangle-\frac{q_{0} }{2}\sqrt{a^{2}+b^{2}}-\frac{aq_{0}}{2}\] \[=(1-q_{0})\rho_{1}+\det(\rho)+\frac{q_{0}}{2}\left(1-\sqrt{1-4| \rho_{2}|^{2}}\right).\] Also, substituting the above values into the feasibility condition for \(f_{2}(n_{opt},s_{opt})\leq 0\) provides the conditions required for this scenario: \[\frac{q_{0}}{2}-\frac{q_{0}}{\sqrt{a^{2}+b^{2}}}\left(a(\rho_{1}-\frac{1}{2})+b| \rho_{2}|\right)\leq\det(\rho).\] 3. Minimum happens at intersection of circles (\(\lambda_{1}\neq 0,\lambda_{2}\neq 0\)): If neither of the conditions of the above scenarios is satisfied, then the result happens at the intersection of two circles \[\left(n-\frac{q_{0}}{2}\right)^{2}+s^{2}=\left(\frac{q_{0}}{2} \right)^{2},\] \[\left(n-\left(\rho_{1}-\frac{1-q_{0}}{2}\right)\right)^{2}+(s-|\rho_{2}|)^ {2}=\left(\frac{1-q_{0}}{2}\right)^{2},\] which is obtained by the following expressions under separate conditions: * If \(a\geq b\), \[s_{opt} =\frac{2B+q_{0}A+A\sqrt{\Delta}}{2(1+A^{2})},\] \[n_{opt} =\frac{q_{0}-2AB+\sqrt{\Delta}}{2(1+A^{2})}.\] * If \(a<b\), \[s_{opt} =\frac{2B+q_{0}A-A\sqrt{\Delta}}{2(1+A^{2})},\] \[n_{opt} =\frac{q_{0}-2AB-\sqrt{\Delta}}{2(1+A^{2})}.\] where \[A :=\frac{1-2\rho_{1}}{2|\rho_{2}|},\] \[B :=\frac{\rho_{1}q_{0}-\det(\rho)}{2|\rho_{2}|},\] \[\Delta :=q_{0}^{2}-4B^{2}-4q_{0}AB.\] Then the optimal transport value can be obtained by substituting the above results in the expression below: \[D_{OT}=1-q_{0}g_{3}-\left\langle 1|\rho^{2}|1\right\rangle-an_{opt}-bs_{opt}.\] ## Appendix C Proof of Useful Lemmas ### Proof of Lemma 2 The proof appeals to the time-sharing method similar to the proof of Lemma 1 of [13]. Assume having two distortion threshold values \(D_{1}\) and \(D_{2}\) and define a value \(\alpha\in(0,1)\). 
By definition, for any \(\delta>0\) there exists an achievable rate \(R_{i}\) in the range \[R_{i}\leq R(D_{i};R_{c},\rho||Q_{X})+\delta,\quad i=1,2.\] In other words, there exists coding schemes \((n,R_{1},R_{c})\) and \((n,R_{2},R_{c})\) with fixed \(Q_{X}^{n}\) output distribution satisfying \[\mathrm{Tr}\Big{\{}\Delta^{(n)}(\mathcal{D}_{(i)}^{n}\circ\mathcal{M}_{(i)}^{ n})(\phi_{\rho}^{RA})^{\otimes n}\Big{\}}\leq D_{i}+\delta,\quad i=1,2\] where \(\mathcal{M}_{(i)}^{n}\) and \(\mathcal{D}_{(i)}^{n}\) are the measurement and decoder of each source code respectively. Next, we define a mega-block of source code with size \(nN\) where \(n\) is the size of each inner block and \(N\) is the number of inner blocks. Then we set up the system such that the first \(k_{N}\in\mathbb{N}\) blocks use the first coding scheme, and the rest use the second coding scheme as \[\mathcal{M}^{nN}:=\left(\underbrace{\mathcal{M}_{(1)}^{n},..., \mathcal{M}_{(1)}^{n}}_{k_{N}\text{-times}}\underbrace{\mathcal{M}_{(2)}^{n},...,\mathcal{M}_{(2)}^{n}}_{N-k_{N}\text{-times}}\right),\quad\mathcal{D}^{ nN}:=\left(\underbrace{\mathcal{D}_{(1)}^{n},...,\mathcal{D}_{(1)}^{n}}_{k_{N} \text{-times}}\underline{\mathcal{D}_{(2)}^{n}}_{N-k_{N}\text{-times}}\right).\] Also, set \(k_{N}\) such that \(\lim_{M\rightarrow\infty}\frac{k_{M}}{M}=\alpha\). The new source coding has a rate of \[R_{\text{new}} =\frac{k_{N}}{N}R_{1}+\frac{N-k_{N}}{N}R_{2}\] \[\leq\frac{k_{N}}{N}R(D_{1};R_{c},\rho||Q_{X})+\frac{N-k_{N}}{N}R( D_{2};R_{c},\rho||Q_{X})+\delta,\] with distortion level \[D_{\text{new}}=\mathrm{Tr}\bigg{\{}\left(\Delta^{(n)}\right)^{ \otimes N}(\mathcal{D}^{nN}\circ\mathcal{M}^{nN})(\phi_{\rho}^{RA})^{\otimes nN }\bigg{\}}\leq\frac{K_{N}}{N}D_{1}+\frac{N-k_{N}}{N}D_{2}+\delta.\] Therefore, by assigning proper values to \(N\) and \(\delta\), we ensure that \[R_{\text{new}} \leq\alpha R(D_{1};R_{c},\rho||Q_{X})+(1-\alpha)R(D_{2};R_{c}, \rho||Q_{X})+\epsilon,\] \[D_{\text{new}} \leq\alpha D_{1}+(1-\alpha)D_{2}+\epsilon.\] This new coding scheme is achievable as it is the time-sharing between two achievable source codes. Then, according to the definition of the achievable rate region \(\mathcal{R}(D,\rho||Q_{X})\), the minimum rate of new coding is bounded by \[R(D_{\text{new}};R_{c},\rho||Q_{X})\leq R_{\text{new}}.\] Substituting the upperbounds into the above inequality gives \[R\bigg{(}\alpha D_{1}+(1-\alpha)D_{2};R_{c},\rho||Q_{X},\bigg{)} \leq\alpha R(D_{1};R_{c},\rho||Q_{X})+(1-\alpha)R(D_{2};R_{c},\rho||Q_{X})+\epsilon.\] Since \(\epsilon\) can be arbitrarily small, then this inequality converges to the exact convex inequality. Thus, the proof holds. 
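A tiny numerical illustration of the time-sharing bookkeeping used above (ours; the two rate-distortion pairs are hypothetical placeholders, not results from the paper): interleaving the two codes over a growing number of inner blocks drives the pair \((R_{\text{new}},D_{\text{new}})\) to the convex combination, which is exactly the convexity inequality being proved.

```python
# Assumed illustration: time-sharing two codes over N inner blocks, k_N of which
# use code 1, yields rate/distortion converging to the alpha-convex combination.
R1, D1 = 0.80, 0.05          # hypothetical achievable pair of code 1
R2, D2 = 0.30, 0.20          # hypothetical achievable pair of code 2
alpha = 0.4

for N in (10, 100, 10_000):
    k_N = int(alpha * N)     # chosen so that k_N / N -> alpha
    R_new = (k_N / N) * R1 + (1 - k_N / N) * R2
    D_new = (k_N / N) * D1 + (1 - k_N / N) * D2
    print(N, round(R_new, 4), round(D_new, 4))
# Both columns approach alpha*R1 + (1-alpha)*R2 and alpha*D1 + (1-alpha)*D2.
```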
### Proof of Lemma 7 We condition the average n-letter distortion on the sequence of clipping errors \(A^{n}_{k_{1}}\) as follows: \[d_{n}(R^{n},\hat{X}^{n}_{k_{1},K_{2}}|\neg E_{ce})=\mathbb{E}_{A^{ n}_{k_{1}}|\neg E_{ce}}\left[d_{n}(R^{n},\hat{X}^{n}_{k_{1},K_{2}}|A^{n}_{k_{1}})\right]\] \[=\mathbb{E}_{A^{n}_{k_{1}}|\neg E_{ce}}\left[\frac{1}{n}\sum_{i=1 }^{n}d(R_{i},\hat{X}_{i,k_{1},K_{2}}|A^{n}_{k_{1}})\right]\] \[=\mathbb{E}_{A^{n}_{k_{1}}|\neg E_{ce}}\left[\frac{1}{n}\sum_{i:A _{i,k_{1}}=1}d(R_{i},\hat{X}_{i,k_{1},K_{2}}|A^{n}_{k_{1}})+\frac{1}{n}\sum_{i :A_{i,k_{1}}=0}d(R_{i},\hat{X}_{i,k_{1},K_{2}}|A^{n}_{k_{1}})\right]\] \[=\mathbb{E}_{T|\neg E_{ce}}\left[\frac{n-T}{n}d(R,\hat{X}_{\text {local}}|A_{k_{1}}=1)+\frac{T}{n}d_{T}(R^{T}_{k_{1}},\hat{X}^{T}_{k_{1},K_{2}})\right]\] \[\leq(P_{k_{1}}+\epsilon_{el})d\left(R,\hat{X}_{\text{local}}|A_{k_ {1}}=1\right)+\mathbb{E}_{T|\neg E_{ce}}\left[d_{T}(R^{T}_{k_{1}},\hat{X}^{T}_ {k_{1},K_{2}})\right] \tag{132}\] where in the last equality, we generate a local random value \(\hat{X}_{\text{local}}\) at Bob's side for any sample with an asserted error bit \(A_{i,k_{1}}=1\). Then for any fixed \(T=t\), we expand the second term inside the expectation, by adding and removing an intermediate term \[\lim_{\begin{subarray}{c}n\to\infty,\\ t_{\text{min}}\to\infty\end{subarray}}d_{t}(R^{t}_{k_{1}},\hat{X}^{t}_{k_{1},K_{2}})\] \[=\lim_{\begin{subarray}{c}n\to\infty,\\ t_{\text{min}}\to\infty\end{subarray}}d_{t}(R^{t}_{k_{1}},{X^{\prime}}^{t}_{k_ {1},K_{2}})+\Big{(}d_{t}(R^{t}_{k_{1}},\hat{X}^{t}_{k_{1},K_{2}})-d_{t}(R^{t} _{k_{1}},{X^{\prime}}^{t}_{k_{1},K_{2}})\Big{)}\] \[\leq d(R_{k_{1}},{X^{\prime}}_{k_{1},K_{2}})+\lim_{ \begin{subarray}{c}n\to\infty,\\ t_{\text{min}}\to\infty\end{subarray}}\Big{(}d_{t}(R^{t}_{k_{1}},\hat{X}^{t}_ {k_{1},K_{2}})-d_{t}(R^{t}_{k_{1}},{X^{\prime}}^{t}_{k_{1},K_{2}})\Big{)}. \tag{133}\] The first term in inequality follows for sufficiently large \(t\) from the discrete rate-distortion theorem. Note that we have coupled the parameters \(n,k_{1},\epsilon_{el}\) in a way that the value of \(t_{\text{min}}\) is sufficiently large. The second part of (133) is the distortion caused by the optimal transport block in the receiver. However, unlike the classical case, we cannot use triangle inequality in this system. Therefore, we expand the distortions for each one and find the difference. Thus, assume that \(\{{\lambda}^{{\mathcal{X}}^{t}_{k_{2}}}_{x^{t}\in{\mathcal{X}}^{t}_{k_{2}}}\}\) is the \(t\)-collective measurement of the \(t\)-letter discrete rate-distortion coding which according to discrete coding theorem generates \(\hat{X}^{t}\) with perfect i.i.d. pmf \(\mu_{X_{K_{2}}}\). We also define the continuous measurement POVM \(\hat{\Lambda}\equiv\{\hat{\Lambda}(B),B\in{\mathcal{B}}({\mathcal{X}}^{t})\}\) which combines the discrete measurement coding with the output stage optimal transport block. For any event \(A\subseteq{\mathcal{B}}({\mathcal{X}}^{t})\) we define \[\hat{\Lambda}(A):=\sum_{x^{t}\in{\mathcal{X}}^{t}_{k_{2}}}\Lambda^{{\mathcal{X }}^{t}_{k_{2}}}_{x^{t}}\pi^{t}_{OT}\Big{(}A\Big{|}x^{t}\Big{)},\] where \(\pi_{OT}(\cdot|x)\) for \(x\in{\mathcal{X}}_{K_{2}}\) is the optimal transport channel from the discrete space to the continuous space. Specifically, as the discrete coding produces i.i.d. discrete pmf \(\mu_{X_{K_{2}}}\) and the final output is required to have i.i.d. 
continuous distribution \(\mu_{X}\), then the optimal transport is a simple dequantization channel given for any event \(A\subseteq{\mathcal{B}}({\mathcal{X}}^{t})\) by \[\pi^{t}_{OT}\Big{(}A\Big{|}x^{t}\Big{)}=\frac{\mu^{t}_{X}(A\cap RQ_{ K_{2}}(x^{t}))}{\mu^{t}_{X}\left({\mathcal{R}}{\mathcal{Q}}_{K_{2}}(x^{t}) \right)}, \tag{134}\] where \({\mathcal{R}}{\mathcal{Q}}_{K_{2}}(x^{t})\) is the quantization region of \(x^{t}\in{\mathcal{X}}^{t}_{K_{2}}\). Next, the \(i\)-th letter distortion is expanded as follows: \[d(R_{i,k_{1}},X^{\prime}_{i,k_{1},K_{2}}) =\sum_{x^{\prime}\in X^{\prime}_{K_{2}}}\mathrm{Tr}\bigg{\{} \mathrm{Tr}_{[t]\setminus i}\left\{\omega_{k_{1}}\Lambda^{\mathcal{X}^{k_{2}}_{ x^{\ell}}}_{x^{\ell}}\omega_{k_{1}}\right\}\Delta(x_{i})\right\}\] \[=\sum_{x_{i}\in\mathcal{X}_{K_{2}}}\mathrm{Tr}\Bigg{\{}\mathrm{Tr }_{[t]\setminus i}\left\{\omega_{k_{1}}\left(\sum_{x^{[t]\setminus i}\in \mathcal{X}^{\prime}_{K_{2}}}\Lambda^{\mathcal{X}^{\prime}_{k_{2}}}_{x^{\ell} }\right)\omega_{k_{1}}\right\}\Delta(x_{i})\Bigg{\}}\] \[=\sum_{x\in\mathcal{X}_{K_{2}}}\mathrm{Tr}\big{\{}\hat{\rho}^{R_ {i}}_{x}\Delta(x)\big{\}}\mu_{X_{K_{2}}}(x), \tag{135}\] where we defined \[\hat{\rho}^{R_{i}}_{x}:=\frac{1}{\mu_{X_{K_{2}}}(x)}\,\mathrm{Tr}_{[t] \setminus i}\left\{\omega_{k_{1}}\left(\sum_{x^{[t]\setminus i}\in\mathcal{X }^{\prime}_{K_{2}}}\Lambda^{\mathcal{X}^{\prime}_{K_{2}}}_{x^{\ell}}\right) \omega_{k_{1}}\right\},\] as the operator representing the unnormalized post-measured reference state of the \(i\)-th system after discrete measurement POVM. Also, \[d(R_{i,k_{1}},\hat{X}_{i,k_{1},K_{2}})\] \[=\int_{\mathcal{X}^{\prime}}\mathrm{Tr}\bigg{\{}\mathrm{Tr}_{[t] \setminus i}\left\{\omega_{k_{1}}\hat{\Lambda}(dz^{t})\omega_{k_{1}}\right\} \Delta(z_{i})\Big{\}}\] \[=\sum_{x^{\prime}\in\mathcal{X}^{\prime}_{K_{2}}}\int_{\mathcal{ R}\mathcal{Q}_{K_{2}}(x^{\prime})}\mathrm{Tr}\bigg{\{}\mathrm{Tr}_{[t] \setminus i}\left\{\omega_{k_{1}}\Delta^{\mathcal{X}^{\prime}_{K_{2}}}_{x^{ \ell}}\pi^{t}_{OT}\Big{(}dz^{t}\Big{|}x^{t}\Big{)}\omega_{k_{1}}\right\}\Delta (z_{i})\bigg{\}}\] \[=\sum_{x_{i}\in\mathcal{X}_{K_{2}}}\int_{\mathcal{R}\mathcal{Q}_{ K_{2}}(x^{\prime})}\mathrm{Tr}\bigg{\{}\mathrm{Tr}_{[t]\setminus i} \left\{\omega_{k_{1}}\left(\sum_{x^{[t]\setminus i}}\Lambda^{\mathcal{X}^{ \prime}_{K_{2}}}_{x^{\ell}}\left(\int_{\mathcal{R}\mathcal{Q}_{K_{2}}(x^{[t] \setminus i})}\pi^{t-1}_{OT}\Big{(}dz^{[t]\setminus i}|x^{[t]\setminus i} \Big{)}\right)\right)\right)\omega_{k_{1}}\right\}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\cdot\Delta(z_{i})\pi_{OT}( dz_{i}|x_{i})\Bigg{\}}\] \[=\sum_{x_{i}\in\mathcal{X}_{K_{2}}}\int_{\mathcal{R}\mathcal{Q}_{ K_{2}}(x_{i})}\mathrm{Tr}\Bigg{\{}\mathrm{Tr}_{[t]\setminus i}\left\{ \omega_{k_{1}}\left(\sum_{x^{[t]\setminus i}}\Lambda^{\mathcal{X}^{\prime}_{K _{2}}}_{x^{\ell}}\right)\omega_{k_{1}}\right\}\Delta(z_{i})\pi_{OT}(dz_{i}|x_{ i})\Bigg{\}}\] \[=\sum_{x\in\mathcal{X}_{K_{2}}}\mathrm{Tr}\bigg{\{}\hat{\rho}^{R_ {i}}_{x}\left(\int_{\mathcal{R}\mathcal{Q}_{K_{2}}(x)}\Delta(z)\pi_{OT}(dz|x) \right)\Bigg{\}}\mu_{X_{K_{2}}}(x), \tag{136}\] where \(\omega_{k_{1}}:=\sqrt{\rho^{\otimes\ell}_{k_{1}}}\). 
Thus, by comparing the expressions for the two distortions in (135) and (136), we find the distortion of the optimal transport block as \[d_{t}(R_{k_{1}}^{t},\hat{X}_{k_{1},K_{2}}^{t})-d_{t}(R_{k_{1}}^{t},{X^ {\prime}}_{k_{1},K_{2}}^{t})\] \[=\frac{1}{t}\sum_{i=1}^{t}\sum_{x\in\mathcal{X}_{K_{2}}}\mathrm{Tr} \Bigg{\{}\hat{\rho}_{x}^{R_{k}}\left(\int_{\mathcal{R}\mathcal{Q}_{K_{2}}(x)} \Delta(z)\pi_{OT}(dz|x)-\Delta(x)\right)\Bigg{\}}\mu_{X_{K_{2}}}(x)\] \[=\sum_{x\in\mathcal{X}_{K_{2}}}\mathrm{Tr}\Bigg{\{}\bar{\rho}_{t, x}^{R}\left(\int_{\mathcal{R}\mathcal{Q}_{K_{2}}(x)}\Delta(z)\pi_{OT}(dz|x)- \Delta(x)\right)\Bigg{\}}\mu_{X_{K_{2}}}(x)\] \[=\sum_{x\in\mathcal{X}_{K_{2}}}\mathrm{Tr}\Bigg{\{}\bar{\rho}_{t, x}^{R}\left(\int_{\mathcal{R}\mathcal{Q}_{K_{2}}(x)}\Big{(}\Delta(z)-\Delta(x) \Big{)}\pi_{OT}(dz|x)\right)\Bigg{\}}\mu_{X_{K_{2}}}(x)\] \[=\sum_{x\in\mathcal{X}_{K_{2}}}\int_{\mathcal{R}\mathcal{Q}_{K_{2 }}(x)}\mathrm{Tr}\Big{\{}\bar{\rho}_{t,x}^{R}\Big{(}\Delta(z)-\Delta(x)\Big{)} \Big{\}}\pi_{OT}(dz|x)\mu_{X_{K_{2}}}(x)\] \[=\int_{\mathcal{X}}\mathrm{Tr}\Big{\{}\bar{\rho}_{t,Q_{K_{2}}(x) }^{R}\Big{(}\Delta(x)-\Delta(Q_{K_{2}}(x))\Big{)}\Big{\}}\mu_{X}(dx), \tag{137}\] where the t-letter post-measured average reference state in the second equality is defined as \[\bar{\rho}_{t,x}^{R}:=\frac{1}{t}\sum_{i=1}^{t}\hat{\rho}_{x}^{R_{i}}.\] Substituting (137) and (133) into (132) provides the following bound \[d_{n}(R^{n},\hat{X}_{k_{1},K_{2}}^{n}|\neg E_{\mathrm{cc}}) =(P_{k_{1}}+\epsilon_{d})d\left(R,\hat{X}_{\mathrm{local}}|A_{k_{ 1}}=1\right)+d(R_{k_{1}}+X_{k_{1},K_{2}}^{\prime})\] \[=(P_{k_{1}}+\epsilon_{d})d\left(R,\hat{X}_{\mathrm{local}}|A_{k_ {1}}=1\right)+d(R_{k_{1}}+X_{k_{1},K_{2}}^{\prime})\] \[+\int_{\mathcal{X}}\mathrm{Tr}\Big{\{}\bar{\rho}_{x}^{R}\Big{(} \Delta(x)-\Delta(Q_{K_{2}}(x))\Big{)}\Big{\}}\mu_{X}(dx),\] where the asymptotic post-measured average reference state is given by \[\bar{\rho}_{x}^{R}:=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\rho_{x}^{R_{i}}.\]
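To make the dequantization channel \(\pi_{OT}\) of (134) concrete, here is a small sketch (ours; the Gaussian choice of \(\mu_{X}\), the step size, and all names are assumptions, and the cut-off region of \(Q_{K_{2}}\) is ignored): pushing i.i.d. \(\mu_{X}\) samples through the quantizer and then through \(\pi_{OT}\) reproduces the \(\mu_{X}\) marginal, which is the defining property of the output-stage optimal transport block.

```python
# Assumed sketch of the dequantization channel pi_OT in (134) for a scalar
# uniform quantizer: given a symbol x, return a draw from mu_X conditioned on
# the quantization cell of x, so that the output marginal is again mu_X.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_X = stats.norm()                 # example output distribution mu_X (assumed)
delta = 0.25                        # quantizer step, playing the role of 2**(-k_2')

def quantize(z):
    return np.round(z / delta) * delta           # Q_{K_2}(z): cell representative

def dequantize(x):
    lo = mu_X.cdf(x - delta / 2)                 # mu_X restricted to the cell RQ_{K_2}(x)
    hi = mu_X.cdf(x + delta / 2)
    return mu_X.ppf(lo + (hi - lo) * rng.uniform(size=np.shape(x)))

z = mu_X.rvs(size=200_000, random_state=rng)     # i.i.d. source with law mu_X
z_hat = dequantize(quantize(z))                  # quantize, then dequantize
print(stats.kstest(z_hat, mu_X.cdf).statistic)   # small KS statistic: marginal is mu_X
```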
2307.08032
Nested Sequents for Quantified Modal Logics
This paper studies nested sequents for quantified modal logics. In particular, it considers extensions of the propositional modal logics definable by the axioms D, T, B, 4, and 5 with varying, increasing, decreasing, and constant domains. Each calculus is proved to have good structural properties: weakening and contraction are height-preserving admissible and cut is (syntactically) admissible. Each calculus is shown to be equivalent to the corresponding axiomatic system and, thus, to be sound and complete. Finally, it is argued that the calculi are internal -- i.e., each sequent has a formula interpretation -- whenever the existence predicate is expressible in the language.
Tim S. Lyon, Eugenio Orlandelli
2023-07-16T13:04:13Z
http://arxiv.org/abs/2307.08032v3
# Nested Sequents for Quantified Modal Logics+ ###### Abstract This paper studies nested sequents for quantified modal logics. In particular, it considers extensions of the propositional modal logics definable by the axioms \(\mathbf{D}\), \(\mathbf{T}\), \(\mathbf{B}\), \(\mathbf{4}\), and \(\mathbf{5}\) with varying, increasing, decreasing, and constant domains. Each calculus is proved to have good structural properties: weakening and contraction are height-preserving admissible and cut is (syntactically) admissible. Each calculus is shown to be equivalent to the corresponding axiomatic system and, thus, to be sound and complete. Finally, it is argued that the calculi are internal--i.e., each sequent has a formula interpretation--whenever the existence predicate is expressible in the language. Keywords:Cut elimination Nested sequent Quantified modal logic. ## 1 Introduction Generalisations of Gentzen-style sequent calculi have proven useful for developing cut-free and analytic proof systems for many propositional non-classical logics, including modal and intermediate ones. Among these generalisations are _display calculi_[2], _hypersequents_[1], _labelled calculi_[22, 24], and _nested sequents_[5, 12]. They often allow one to give constructive proofs of important meta-theoretical properties such as decidability [3], interpolation [9], and automatic countermodel extraction [16]. These systems generalise the structural level of Gentzen-style calculi in different ways in order to express wider classes of logics. In the case of propositional modal logics they can express the structure of various relational models. In particular, nested sequents encode tree-like relational models and labelled calculi encode graph-like models. In contrast to other formalisms (e.g. labelled sequents) nested sequents have the advantage of being internal calculi: each nested sequent has a formula interpretation, and thus, such expressions are not a major departure from the modal language. Things become more difficult when we add the quantifiers. As it is well known [7, 10], in quantified modal logics (QMLs) we have _interaction formulas_ such as \[\mathbf{CBF}:=\ \Box\forall xA\supset\forall x\Box A\qquad\text{and}\qquad\mathbf{ BF}:=\ \forall x\Box A\supset\Box\forall xA\] whose validity depends on the interrelations between the domains of quantification (\(\mathcal{D}_{w}\)) of the different worlds (\(w\)) of the model: **CBF** is valid only if domains are _increasing_--\(w\mathcal{R}v\) implies \(\mathcal{D}_{w}\subseteq\mathcal{D}_{v}\)--and **BF** is valid only if domains are _decreasing_--\(w\mathcal{R}v\) implies \(\mathcal{D}_{w}\supseteq\mathcal{D}_{v}\). Axiomatically, **CBF** is derivable from the interaction of the axioms/rules for modalities and those for the classical quantifiers, and **BF** is independent from them. However, the situation is radically different for sequent calculi than for axiomatic calculi. The problem is that **BF** becomes derivable when we add standard sequent rules for the quantifiers to a calculus having separated left and right rules for the modalities--i.e., it is derivable in all generalisations of Gentzen-style calculi mentioned above. To overcome this issue for nested sequents, we employ a formulation technique motivated by labelled sequent calculi. 
One way of making **CBF** and **BF** independent of the rules for quantifiers within labelled sequent calculi is to extend the language with _domain atoms_ of shape \(y\in D(w)\), whose intended meaning is that '\(y\) belongs to the quantificational domain of the label \(w\)' [19, 24]. In this way, one can restrict the rules for the quantifiers to the terms belonging to the domain of the label under consideration: \[\frac{w:A(y/x),y\in D(w),w:\forall xA,\Gamma\Rightarrow\Delta}{y\in D(w),w:\forall xA,\Gamma\Rightarrow\Delta}\quad\frac{z\in D(w),\Gamma\Rightarrow\Delta,w:A(z/x)}{\Gamma\Rightarrow\Delta,w:\forall xA}\ z\ \mbox{ fresh}\] As a consequence, **CBF** and **BF** are derivable only if we extend the basic calculus with rules relating the domains of the distinct labels. In this paper, we study nested sequent calculi for QMLs with varying, increasing, decreasing, and constant domains. Similar to the use of domain atoms in labelled sequents, we will formulate our nested calculi by extending the syntax of sequents with _signatures_--i.e., multisets of terms that restrict the applicability of the rules for the quantifiers at that node of the nested sequent--as was done in [23] to define hypersequents for Gödel-Dummett logic with non-constant domains. In particular, we will use the following rules for the universal quantifier: \[\frac{\mathcal{S}\{X,y;A(y/x),\forall xA,\Gamma\Rightarrow\Delta\}}{\mathcal{S}\{X,y;\forall xA,\Gamma\Rightarrow\Delta\}}\quad\frac{\mathcal{S}\{X,z;\Gamma\Rightarrow\Delta,A(z/x)\}}{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\forall xA\}}\ z\ \mbox{ fresh}\] and will add signature structural rules for increasing, decreasing, and constant domains (Table 3). As a consequence, we will be able to define nested calculi that are equivalent to the labelled calculi considered in [24, Ch. 6] and [19, Ch. 12.1]. We will show that our nested calculi have good structural properties--all rules are height-preserving invertible, weakening and contraction are height-preserving admissible, and cut is syntactically admissible--and that they characterise the quantified extensions of the propositional modal logics in the cube of normal modalities. One advantage of the present approach is that nested sequents with signatures have a formula interpretation, provided that the language can express the _existence predicate_ \(\mathcal{E}\). In this paper, we will consider a language with identity so that \(\mathcal{E}x\) can be expressed as \(\exists y(y=x)\) and it need not be taken as an additional primitive symbol; cf. [7]. Thus, our calculi utilise (nested) sequents that are exactly as expressive as the modal language, showing that our calculi are syntactically economical. The rest of the paper is organised as follows: §2 sketches the QMLs considered in the paper, and §3 introduces the nested calculi for these logics. Then, §4 shows that these calculi have good structural properties distinctive of G3-style calculi, including syntactic cut-elimination, and §5 shows that each calculus is sound and complete with respect to its intended semantics. Finally, §6 presents some future lines of research. ## 2 Quantified Modal Logics -_Syntax._ Let _Rel_ be a set containing, for each \(n\in\mathbb{N}\), an at most countable set of \(n\)-ary predicates \(R_{1}^{n}\), \(R_{2}^{n},\dots\), and let _Var_ be a denumerable set of individual variables.
The language \(\mathcal{L}\) is defined by the following grammar: \[A\,::=\,R_{i}^{n}(x_{1},\dots,x_{n})\,|\,x_{1}=x_{2}\,|\perp|\,A\supset A\,|\, \forall xA\,|\,\square A\] ( \[\mathcal{L}\] ) where \(x,x_{1},\dots,x_{n}\,\in\)_Var_ and \(R_{i}^{n}\in\)_Rel_. An _atomic formula_ is a formula of the shape \(R_{i}^{n}(x_{1},\dots,x_{n})\) or \(x_{1}=x_{2}\). We use the following metavariables: \(x,y,z\) for variables; \(P,Q,R\) for atomic formulas; and \(A,B,C\) for formulas. An occurrence of a variable \(x\) in a formula is _free_ if it is not in the scope of \(\forall x\); otherwise, it is _bound_. A _sentence_ is a formula without free occurrences of variables. The formulas \(\neg A\), \(A\wedge B\), \(A\lor B\), \(\exists xA\), and \(\Diamond A\) are defined as expected. We follow the usual conventions for parentheses. The _weight_ of a formula \(|A|\) is defined accordingly: \(|R_{i}^{n}(x_{1},\dots,x_{n})|=|x=y|=|\bot|=0\), \(|A\supset B|=|A|+|B|+1\), and \(|\forall xA|=|\square A|=|A|+1\). We use \(A(y/x)\) to denote the formula obtained from \(A\) by replacing each free occurrence of \(x\) with an occurrence of \(y\), possibly renaming bound variables to avoid capture: if \(y\not\equiv x\), then \((\forall yA)(y/x)\equiv\forall z((A(z/y))(y/x))\), where \(z\) is fresh. -_Semantics._ A _frame_ is a triple \(\mathcal{F}=\langle\mathcal{W},\,\mathcal{R},\mathcal{D}\rangle\), where: * \(\mathcal{W}\) is a non-empty set of _worlds_; * \(\mathcal{R}\) is a binary _accessibility relation_ defined over \(\mathcal{W}\); * \(\mathcal{D}\) is a function mapping each \(w\in\mathcal{W}\) to a possibly empty set of objects \(\mathcal{D}_{w}\) (the _domain_ of \(w\)); we impose that \(\mathcal{D}\) is such that \(\mathcal{D}_{v}\neq\varnothing\) for some \(v\in\mathcal{W}\). We say that \(\mathcal{F}\) has: 1. _increasing domains_ if for all \(w,v\in\mathcal{W}\), \(w\mathcal{R}v\) implies \(D_{w}\subseteq D_{v}\); 2. _decreasing domains_ if for all \(w,v\in\mathcal{W}\), \(w\mathcal{R}v\) implies \(D_{w}\supseteq D_{v}\); 3. _constant domains_ if for all \(w,v\in\mathcal{W}\), \(D_{w}=D_{v}\); 4. _varying domains_ if none of the above conditions hold. A _model_\(\mathcal{M}\) is a frame together with a valuation function \(\mathcal{V}\) such that for each \(w\in W\) and each \(R^{n}\) in _Rel_, \(\mathcal{V}(w,R_{n})\subseteq(D_{\mathcal{W}})^{n}\), where \(D_{\mathcal{W}}=\bigcup_{v\in\mathcal{W}}D_{v}\). An assignment \(\sigma\) is a function mapping each variable to an object in \(\mathcal{D}_{\mathcal{W}}\). We let \(\sigma^{x\triangleright o}\) be the assignment mapping \(x\) to \(o\in\mathcal{D}_{\mathcal{W}}\), which behaves like \(\sigma\) for all other variables. Observe that variables are _rigid designators_ in that their value does not change from one world to another. 
The notion of _satisfaction_ of a formula \(A\) at a world \(w\) of a model \(\mathcal{M}\) under an assignment \(\sigma\)--to be denoted by \(\sigma\Vdash_{w}^{\mathcal{M}}A\), possibly omitting \(\mathcal{M}\)--is defined as follows: \[\begin{array}{lll}\sigma\Vdash_{w}^{\mathcal{M}}R^{n}(x_{1},\ldots,x_{n})&\text{ iff }&\langle\sigma(x_{1}),\ldots,\sigma(x_{n})\rangle\in\mathcal{V}(w,R^{n})\\ \sigma\Vdash_{w}^{\mathcal{M}}x=y&\text{ iff }&\sigma(x)=\sigma(y)\\ \sigma\nVdash_{w}^{\mathcal{M}}\bot&&\\ \sigma\Vdash_{w}^{\mathcal{M}}A\supset B&\text{ iff }&\sigma\nVdash_{w}^{\mathcal{M}}A\text{ or }\sigma\Vdash_{w}^{\mathcal{M}}B\\ \sigma\Vdash_{w}^{\mathcal{M}}\forall xA&\text{ iff }&\text{ for each }o\in\mathcal{D}_{w},\,\sigma^{x\triangleright o}\Vdash_{w}^{\mathcal{M}}A\\ \sigma\Vdash_{w}^{\mathcal{M}}\Box A&\text{ iff }&\text{ for each }v\in\mathcal{W},\,w\mathcal{R}v\text{ implies }\sigma\Vdash_{v}^{\mathcal{M}}A\end{array}\] The notions of _truth at a world \(w\)_ (\(\Vdash_{w}^{\mathcal{M}}A\)), _truth in a model \(\mathcal{M}\)_ (\(\Vdash^{\mathcal{M}}A\)), _validity in a frame \(\mathcal{F}\)_ (\(\mathcal{F}\Vdash A\)), and _validity in a class \(\mathcal{C}\) of frames_ (\(\mathcal{C}\Vdash A\)) are defined as usual. It is well-known that the formula **CBF** \(:=\Box\forall xA\supset\forall x\Box A\) is valid over frames with increasing domains; the formula **BF** \(:=\forall x\Box A\supset\Box\forall xA\) is valid over frames with decreasing domains; and the formula **UI** \(:=\forall xA\supset A(y/x)\) is valid over frames with constant domains. Over frames with non-constant domains the valid theory of quantification is that of positive free logic instead of that of classical logic. This means that the axiom **UI** is replaced by the weaker axiom \(\textbf{UI}^{\circ}:=\forall y(\forall xA\supset A(y/x))\). If we extend the language with an _existence predicate_ \(\mathcal{E}\)--whose satisfaction clause is \(\sigma\Vdash_{w}^{\mathcal{M}}\mathcal{E}x\) iff \(\sigma(x)\in\mathcal{D}_{w}\)--then we have the following weaker form of **UI** that is valid: \(\textbf{UI}^{\mathcal{E}}:=\forall xA\wedge\mathcal{E}y\supset A(y/x)\). Over the language \(\mathcal{L}\) the formula \(\mathcal{E}x\) can be defined as \(\exists y(y=x)\), but over an identity-free language the existence predicate has to be taken as an additional primitive symbol. This distinction has an impact on the calculi introduced in the next section: nested sequents have a formula interpretation when \(\mathcal{E}\) is expressible in the language.

-_Logics._ A _QML_ is defined to be the set of all formulas that are valid in some given class of frames. In this paper, we consider logics that are defined by imposing combinations of the properties in Table 1. We use Q.L for a generic logic and we say that a formula is Q.L-_valid_ if it belongs to the logic Q.L. The set of formulas that are valid over the class of all frames is called Q.K, and it is axiomatised by the axioms and rules given in Table 2.
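To make the role of the domain conditions concrete, here is the standard semantic argument showing that **CBF** is valid over frames with increasing domains, spelled out for convenience: suppose \(\mathcal{F}\) has increasing domains, \(\sigma\Vdash_{w}\Box\forall xA\), and \(o\in\mathcal{D}_{w}\). For every \(v\) with \(w\mathcal{R}v\) we have \(\sigma\Vdash_{v}\forall xA\) and, since \(\mathcal{D}_{w}\subseteq\mathcal{D}_{v}\), also \(o\in\mathcal{D}_{v}\), hence \(\sigma^{x\triangleright o}\Vdash_{v}A\). Thus \(\sigma^{x\triangleright o}\Vdash_{w}\Box A\) for every \(o\in\mathcal{D}_{w}\), i.e., \(\sigma\Vdash_{w}\forall x\Box A\), so **CBF** holds at \(w\). The argument for **BF** over decreasing domains is symmetric.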
\begin{table}
\begin{tabular}{|l|l|l||l|l|l|} \hline Name & Axiom & Property \((w,v,u\in\mathcal{W})\) & Name & Axiom & Property \((w,v,u\in\mathcal{W})\) \\ \hline \hline **D** & \(\Box A\supset\Diamond A\) & \(\forall w\exists u\in\mathcal{W}(w\mathcal{R}u)\) & **5** & \(\Diamond A\supset\Box\Diamond A\) & \(\forall w,v,u(w\mathcal{R}v\wedge w\mathcal{R}u\supset v\mathcal{R}u)\) \\ **T** & \(\Box A\supset A\) & \(\forall w(w\mathcal{R}w)\) & **CBF** & \(\Box\forall xA\supset\forall x\Box A\) & \(\forall w,v(w\mathcal{R}v\supset\mathcal{D}_{w}\subseteq\mathcal{D}_{v})\) \\ **B** & \(A\supset\Box\Diamond A\) & \(\forall w,v(w\mathcal{R}v\supset v\mathcal{R}w)\) & **BF** & \(\forall x\Box A\supset\Box\forall xA\) & \(\forall w,v(w\mathcal{R}v\supset\mathcal{D}_{w}\supseteq\mathcal{D}_{v})\) \\ **4** & \(\Box A\supset\Box\Box A\) & \(\forall w,v,u(w\mathcal{R}v\wedge v\mathcal{R}u\supset w\mathcal{R}u)\) & **UI** & \(\forall xA\supset A(y/x)\) & \(\forall w,v(\mathcal{D}_{w}=\mathcal{D}_{v})\) \\ \hline \end{tabular}
\end{table} Table 1: Axioms and corresponding properties

We notice that \(\textbf{UI}^{\mathcal{E}}\) is a theorem of Q.K, see [7, Lem. 2.1(iii)]. The additional axioms for the logics extending Q.K are given in Table 1. We follow the usual conventions for naming logics--e.g., Q.S4 \(\oplus\) CBF is the set of formulas that are valid over all reflexive and transitive frames with increasing domains and it is axiomatised by adding axioms **T**, **4**, and **CBF** to Q.K. We will not distinguish between a logic and its axiomatisation. This is justified by the following theorem.

Theorem 2.1 ([7]): _A formula is a theorem of Q.L if and only if it is Q.L-valid._

\begin{table}
\begin{tabular}{|l l|l l|} \hline **TAUT.** & Propositional tautologies & **REF.** & \(x=x\) \\ **K.** & \(\Box(A\supset B)\supset(\Box A\supset\Box B)\) & **REPL.** & \(x=y\wedge A(x/z)\supset A(y/z)\) \\ **UI\({}^{\circ}\).** & \(\forall y(\forall xA\supset A(y/x))\) & **ND.** & \(x\neq y\supset\Box(x\neq y)\) \\ \(\forall\)**-COMM.** & \(\forall x\forall yA\supset\forall y\forall xA\) & **MP.** & If \(A\) and \(A\supset B\) are theorems, so is \(B\) \\ \(\forall\)**-DIST.** & \(\forall x(A\supset B)\supset(\forall xA\supset\forall xB)\) & **N.** & If \(A\) is a theorem, so is \(\Box A\) \\ \(\forall\)**-VAQ.** & \(A\supset\forall xA\), if \(x\) is not free in \(A\) & **UG.** & If \(A\) is a theorem, so is \(\forall xA\) \\ \hline \end{tabular}
\end{table} Table 2: Axiomatisation of Q.K.

## 3 Nested Calculi for QML

A _sequent_ is an expression \(X;\Gamma\Rightarrow\Delta\) where \(X\) is a multiset of variables, called a _signature_, and \(\Gamma,\,\Delta\) are multisets of formulas of the language \(\mathcal{L}\). The signature of a sequent is a syntactic counterpart of the existence atoms used in calculi where **UI** is replaced by \(\textbf{UI}^{\circ}\) or \(\textbf{UI}^{\mathcal{E}}\), see [18]. _Nested sequents_ are defined as follows: \[\mathcal{S}\::=\:X;\Gamma\Rightarrow\Delta\mid\mathcal{S},\,[\mathcal{S}],\ldots,[\mathcal{S}]\] A nested sequent \(\mathcal{S}\) codifies the tree of sequents \(\texttt{tr}(\mathcal{S})\), as shown in Figure 1. Substitutions of free variables are extended to (nested) sequents and to multisets of formulas by applying them component-wise. The formula interpretation of a sequent is defined as follows: \[\texttt{fm}(X;\Gamma\Rightarrow\Delta)\equiv\bigwedge_{x\in X}\mathcal{E}x\wedge\bigwedge\Gamma\supset\bigvee\Delta\]
where \(\mathcal{E}x\) is short for the formula \(\exists y(y=x)\) and an empty conjunction (disjunction) is \(\top\) (\(\bot\), resp.). To provide a formula reading of nested sequents over the identity-free language we could add \(\mathcal{E}\) to the language or interpret formulas via their universal closure. In the latter case, for example, the formula interpretation of a sequent would be \(\mathtt{fm}(X;\Gamma\Rightarrow\Delta)\equiv\forall x\in X(\bigwedge\Gamma\supset\bigvee\Delta)\), and it seems our nested calculi would capture the QMLs in [13].\({}^{3}\) Nonetheless, we believe there are independent reasons for studying QMLs over a language containing identity; cf. [7, 10].

Footnote 3: We thank the anonymous reviewer who suggested this latter possibility.

The formula interpretation of a nested sequent is defined recursively as: \[\mathtt{fm}(X;\Gamma\Rightarrow\Delta,[\mathcal{S}_{1}],\ldots,[\mathcal{S}_{n}])\equiv(\bigwedge_{x\in X}\mathcal{E}x\land\bigwedge\Gamma\supset\bigvee\Delta)\vee\bigvee_{k=1}^{n}\Box\,\mathtt{fm}(\mathcal{S}_{k})\] Rules are based on the notion of a _hole_ \(\{\cdot\}\), which is a placeholder for a subtree of (the tree of) a nested sequent and, thus, allows one to apply a rule at an arbitrary node in the tree of a nested sequent. A _context_ is defined as follows: \[\mathcal{C}::=X;\Gamma\Rightarrow\Delta,\{\cdot\},\ldots,\{\cdot\}\ \mid\mathcal{C},\,[\mathcal{C}],\ldots,[\mathcal{C}]\] In other words, a context \(\mathcal{C}\) is a nested sequent with \(n\geq 0\) hole occurrences, which do not occur inside formulas and must occur in consequent position. We henceforth write contexts as \(\mathcal{S}\{\cdot\}\cdots\{\cdot\}\), indicating each of the holes occurring within the context. The _depth_ of a hole in a context is defined as the height of the branch from that hole to the root (cf. [3]), and we write \(Depth(\mathcal{S}\{\cdot\})\geq n\) for \(n\in\mathbb{N}\) to mean that the depth of the hole in \(\mathtt{tr}(\mathcal{S}\{\cdot\})\) is \(n\) or greater. We define _substitutions_ of nested sequents into contexts recursively on the number and depth of holes in a given context: suppose first that our context is of the form \(\mathcal{S}\{\cdot\}\equiv X;\Gamma\Rightarrow\Delta,\{\cdot\},[\mathcal{S}_{1}],\ldots,[\mathcal{S}_{n}]\) with a single hole at a depth of \(0\) and let \(\mathcal{S}^{\prime}\equiv Y;\Pi\Rightarrow\Sigma,[\mathcal{S}^{\prime}_{1}],\ldots,[\mathcal{S}^{\prime}_{k}]\) be a nested sequent. Then, \[\mathcal{S}\{\mathcal{S}^{\prime}\}\equiv X,Y;\Pi,\Gamma\Rightarrow\Delta,\Sigma,[\mathcal{S}_{1}],\ldots,[\mathcal{S}_{n}],[\mathcal{S}^{\prime}_{1}],\ldots,[\mathcal{S}^{\prime}_{k}]\] If our context is of the form \(\mathcal{S}\{\cdot\}\equiv X;\Gamma\Rightarrow\Delta,[\mathcal{S}_{1}\{\cdot\}],\ldots,[\mathcal{S}_{n}]\) with a single hole at a depth greater than \(0\), then we recursively define \(\mathcal{S}\{\mathcal{S}^{\prime}\}\) to be the nested sequent \(X;\Gamma\Rightarrow\Delta,[\mathcal{S}_{1}\{\mathcal{S}^{\prime}\}],\ldots,[\mathcal{S}_{n}]\). This definition extends to a context \(\mathcal{S}\{\cdot\}\cdots\{\cdot\}\) with \(n\) holes in the expected way, and for nested sequents \(\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\), we let \(\mathcal{S}\{\mathcal{S}_{1}\}\cdots\{\mathcal{S}_{n}\}\) denote the nested sequent obtained by replacing, for each \(i\in\{1,\ldots,n\}\), the \(i\)-th hole \(\{\cdot\}\) in \(\mathcal{S}\{\cdot\}\cdots\{\cdot\}\) with \(\mathcal{S}_{i}\).
We may also write \(\mathcal{S}\{\mathcal{S}_{1}\}\{\mathcal{S}_{i}\}_{i=2}^{n}\) to indicate \(\mathcal{S}\{\mathcal{S}_{1}\}\cdots\{\mathcal{S}_{n}\}\) more succinctly. Plugging \(\emptyset\) into a hole indicates the removal of that hole; for instance, if \(\mathcal{S}\{\cdot\}\{\cdot\}\equiv x;A\Rightarrow B,\{\cdot\},[x,y;B,C\Rightarrow D,\{\cdot\}]\), then \(\mathcal{S}\{\cdot\}\{\emptyset\}\equiv x;A\Rightarrow B,\{\cdot\},[x,y;B,C\Rightarrow D]\). The rules of the nested calculi for QMLs are given in Table 3. The minimal calculus \(\mathsf{NQ.K}\) contains initial sequents, the logical rules, and the rules for identity (rule _Rig_ is needed--and is sound--because variables are rigid designators). If \(\mathsf{Q.L}\) is an extension of \(\mathsf{Q.K}\) as discussed in Section 2, then \(\mathsf{NQ.L}\) denotes the nested calculus extending \(\mathsf{NQ.K}\) with the rules for the corresponding axioms. Observe that to capture axioms \(\mathbf{D}\), \(\mathbf{CBF}\), \(\mathbf{BF}\), and \(\mathbf{UI}\) we have added structural rules instead of logical ones since the former have a better behaviour. In [3], Brunnler only considers nested calculi (for propositional modal logics) defined relative to _45-complete sets_ of axioms. This restriction is required to ensure that the nested calculi contain all rules required for their completeness. Similarly, in the first-order setting, we only consider nested calculi defined relative to _properly closed_ sets of axioms, which is a generalisation of 45-completeness and takes care of the interaction of \(\mathbf{B}\) with \(\mathbf{CBF}\) and \(\mathbf{BF}\) (for example), ensuring the completeness of our nested calculi.

Definition 1 (Properly Closed): Let \(\mathsf{L}\subseteq\{\mathbf{D},\mathbf{T},\mathbf{B},\mathbf{4},\mathbf{5},\mathbf{CBF},\mathbf{BF},\mathbf{UI}\}\). We define \(\mathsf{L}\) to be _properly closed_ iff, for every \(X\in\{\mathbf{4},\mathbf{5},\mathbf{CBF},\mathbf{BF}\}\), if all \(\mathsf{Q}\).\(\mathsf{L}\)-frames validate \(X\), then \(X\in\mathsf{L}\). We define a nested calculus \(\mathsf{NQ}\).\(\mathsf{L}\) to be _properly closed_ iff (1) \(\mathsf{L}\) is properly closed, and (2) \(R_{5dom}\in\mathsf{NQ}\).\(\mathsf{L}\) iff \(\mathbf{5}\in\mathsf{L}\) and \(\{\mathbf{CBF},\mathbf{BF}\}\cap\mathsf{L}\neq\emptyset\).

Remark 1: All nested calculi considered henceforth will be assumed to be properly closed.

Given a calculus \(\mathsf{NQ}\).\(\mathsf{L}\), an \(\mathsf{NQ}\).\(\mathsf{L}\)_-derivation_ of a nested sequent \(\mathcal{S}\) is a tree of nested sequents, whose leaves are initial sequents, whose root is \(\mathcal{S}\), and which grows according to the rules of \(\mathsf{NQ}\).\(\mathsf{L}\). We consider only derivations of _pure sequents_, meaning no variable has both free and bound occurrences and each _eigenvariable_ (i.e., a fresh variable participating in an \(R\forall\) inference) is distinct. The _height_ of an \(\mathsf{NQ}\).\(\mathsf{L}\)-derivation is the number of nodes of one of its longest branches. We say that \(\mathcal{S}\) is \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable if there is an \(\mathsf{NQ}\).\(\mathsf{L}\)-derivation of \(\mathcal{S}\) or of an alphabetical variant of \(\mathcal{S}\). We let \(\mathsf{NQ}\).\(\mathsf{L}\vdash\mathcal{S}\) denote that \(\mathcal{S}\) is \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable.
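Before turning to the structural properties of these calculi, it may help to unfold the formula interpretation on a small, purely illustrative nested sequent: for \(\mathcal{S}\equiv x;P(x)\Rightarrow Q(x),[y;\Rightarrow R(y)]\), the clauses above give \[\mathtt{fm}(\mathcal{S})\equiv(\mathcal{E}x\wedge P(x)\supset Q(x))\vee\Box(\mathcal{E}y\wedge\top\supset R(y)),\] which is equivalent to \((\mathcal{E}x\wedge P(x)\supset Q(x))\vee\Box(\mathcal{E}y\supset R(y))\): the signatures \(x\) and \(y\) surface exactly as existence assumptions at their respective nodes.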
A rule is said to be _(height-preserving) admissible_ in \(\mathsf{NQ}\).\(\mathsf{L}\) if, whenever its premisses are \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable (with height at most \(n\)), also its conclusion is \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable (with height at most \(n\)). A rule is said to be _(height-preserving) invertible_ in \(\mathsf{NQ}\).\(\mathsf{L}\) if, whenever its conclusion is \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable (with height at most \(n\)), each premiss is \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable (with height at most \(n\)). For each rule displayed in Table 3, the formulas explicitly displayed in the conclusion are called _principal_, those explicitly displayed in the premisses are called _auxiliary_, and everything else constitutes the _context_.

## 4 Properties and Cut-Elimination

We now show that our nested calculi satisfy fundamental admissibility and invertibility properties. Ultimately, we will apply these properties in our proof of syntactic cut-elimination.

Lemma 1 (Generalised Initial Sequents): \(\mathsf{NQ}\).\(\mathsf{L}\vdash\mathcal{S}\{X;A,\Gamma\Rightarrow\Delta,A\}\)_, for any \(\mathcal{L}\)-formula \(A\)._

Proof: By a standard induction on the weight of \(A\).

Lemma 2: _The sequents \(\mathcal{S}\{\Rightarrow x=x\}\) and \(\mathcal{S}\{x=y,A(x/z)\Rightarrow A(y/z)\}\) are \(\mathsf{NQ}\).\(\mathsf{L}\)-derivable._

Proof: \(\mathcal{S}\{\;\Rightarrow\;x=x\}\) is derivable by applying an instance of rule _Ref_ to the initial sequent \(\mathcal{S}\{\;x=x\;\Rightarrow\;x=x\}\). The case of \(\mathcal{S}\{x=y,A(x/z)\Rightarrow\;A(y/z)\}\) is handled by induction on \(|A(x/z)|\). We consider only the case where \(A(x/z)=\Box B(x/z)\): \[\frac{\dfrac{\dfrac{\overline{\mathcal{S}\{x=y,\Box B(x/z)\Rightarrow,[x=y,B(x/z)\Rightarrow B(y/z)]\}}}{\mathcal{S}\{x=y,\Box B(x/z)\Rightarrow,[B(x/z)\Rightarrow B(y/z)]\}}\ _{Rig}}{\mathcal{S}\{x=y,\Box B(x/z)\Rightarrow,[\Rightarrow B(y/z)]\}}\ _{L\Box}}{\mathcal{S}\{x=y,\Box B(x/z)\Rightarrow\Box B(y/z)\}}\ _{R\Box}\] where the topmost sequent is derivable by the inductive hypothesis, since \(|B(x/z)|<|\Box B(x/z)|\).
Moreover, applying \(R\bot\) to an initial sequent or to the conclusion of an instance of \(L\bot\) gives another initial sequent or an instance of \(L\bot\), respectively, and \(R\bot\) permutes above every other rule of \(\mathsf{NQ}\).\(\mathsf{L}\).

Lemma 4 (Substitution): _The following rule of substitution of free variables is height-preserving admissible in \(\mathsf{NQ}\).\(\mathsf{L}\):_ \[\frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta\}}{\mathcal{S}(y/x)\{X(y/x);\Gamma(y/x)\Rightarrow\Delta(y/x)\}}\ _{(y/x)}\] Proof: By induction on the height of the derivation \(\mathcal{D}\) of the premiss.
The only interesting case is when the last step of \(\mathcal{D}\) is an instance of \(R\forall\): \[\frac{\mathcal{S}\{X,z_{2};\Gamma\Rightarrow\Delta,A(z_{2}/z_{1})\}}{\mathcal{ S}\{X;\Gamma\Rightarrow\Delta,\forall z_{1}A\}}\ _{R\forall,\ z_{2}\ \mathrm{fresh}}\] We transform the derivation of the premiss by applying the inductive hypothesis twice to ensure the freshness condition is preserved: the first time to replace \(z_{2}\) with a fresh variable \(z_{3}\) and then to replace \(x\) with \(y\). We conclude by applying \(R\forall\) with \(z_{3}\) as the _eigenvariable_. Typically, admissible structural rules operate on either formulas (e.g., see the internal weakening rule IW below) or nesting structure (e.g., see the Merge rule below) in nested calculi. An interesting observation in the first-order setting is that admissible structural rules also act on the signatures occurring in nested sequents. This gives rise to forms of weakening and contraction for terms, which are reminiscent of analogous rules formulated in the context of hypersequents with signatures [23]. Lemma 5 (Signature Structural Rules): _The following rules of signature weakening and signature contraction are height-preserving admissible in \(\mathsf{NQ}\).L:_ \[\frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta\}}{\mathcal{S}\{X,x;\Gamma \Rightarrow\Delta\}}\ _{SW}\qquad\frac{\mathcal{S}\{X,x,x;\Gamma\Rightarrow\Delta\}}{\mathcal{S}\{X,x ;\Gamma\Rightarrow\Delta\}}\ _{SC}\] Proof: By a standard induction on the height of the derivation \(\mathcal{D}\) of the premiss. Proving height-preserving admissibility of SC is trivial as the rule permutes above all rules of \(\mathsf{NQ}\).L. Proving the height-preserving admissibility of SW is also straightforward with the only interesting case arising when \(\mathcal{D}\) ends with an instance of \(R\forall\) with \(x\) as the _eigenvariable_. However, this case is easily managed by applying the height-preserving admissible substitution \((y/x)\) to ensure the freshness condition for \(R\forall\) is satisfied, followed by the inductive hypothesis, and an application of \(R\forall\). As in the setting of first-order intuitionistic logics with increasing and constant domains (see [14]), we find that our structural rules for domains give rise to admissible logical rules generalising the \(L\forall\) rule. Such rules (presented in the proposition below) combine the functionality of the associated domain structural rules with the \(L\forall\) rule. The \(L\forall_{bf}\) and \(L\forall_{cbf}\) rules are instances of _reachability rules_[16, 17], which bottom-up operate by searching for terms along edges in a nested sequent used to instantiate universal formulas. 
Proposition 1: _The following logical rules for 'domain-axioms' and for axiom \(\mathbf{D}\) are admissible in the nested calculi including the appropriate structural rules for domains or \(R_{D}\):_ \[\frac{\mathcal{S}\{X;A(y/x),\forall xA,\Gamma\Rightarrow\Delta,[Y;y;\Pi \Rightarrow\Sigma]\}}{\mathcal{S}\{X;\forall xA,\Gamma\Rightarrow\Delta,[Y,y;\Pi \Rightarrow\Sigma]\}}\ \ \mbox{${}_{L^{\forall_{\mathit{ubf}}}}$}\ \ \frac{\mathcal{S}\{X;A(y/x),\forall xA,\Gamma \Rightarrow\Delta\}}{\mathcal{S}\{X;\forall xA,\Gamma\Rightarrow\Delta\}}\ \ \mbox{${}_{L^{\forall_{\mathit{ubf}}}}$}\] \[\frac{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,[Y;A(y/x),\forall xA,\Pi \Rightarrow\Sigma]\}}{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,[Y;\forall xA, \Pi\Rightarrow\Sigma]\}}\ \ \mbox{${}_{L^{\forall_{\mathit{ubf}}}}$}\ \ \mbox{${}_{L^{\forall_{\mathit{ubf}}}}$}\ \ \mbox{${}_{L ^{\infty}}$}\ \ \mbox{${}_{R_{\mathit{cbf}}}$}\] Proof: The admissibility of \(L\forall_{\mathit{cbf}}\) from \(R_{\mathit{cbf}}\) and \(SW\) is proven as follows: \[\frac{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,[Y;A(y/x),\forall xA,\Pi \Rightarrow\Sigma]\}}{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,[Y,y;A(y/x), \forall xA,\Pi\Rightarrow\Sigma]\}}\ \mbox{${}_{SW}$}\] \[\frac{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,[Y,y;\forall xA,\Pi \Rightarrow\Sigma]\}}{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,[Y;\forall xA, \Pi\Rightarrow\Sigma]\}}\ \mbox{${}_{L\mathbb{V}}$}\] The cases of \(L\forall_{\mathit{bf}}\) and \(L\forall_{\mathit{ui}}\) are similar, and the case of \(L_{D}\) follows immediately from \(R_{D}\). Lemma 6 (Weakenings): _The following rules of internal and external weakening are height-preserving admissible in \(\mathsf{NQ.L}\):_ \[\frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta\}}{\mathcal{S}\{X;\Pi,\Gamma \Rightarrow\Delta,\Sigma\}}\ \mbox{${}^{\mathit{IW}}$}\ \ \ \ \ \ \frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta\}}{\mathcal{S}\{X;\Gamma \Rightarrow\Delta,[Y;\Pi\Rightarrow\Sigma]\}}\ \mbox{${}^{EW}$}\] Proof: By induction on the height of the derivation \(\mathcal{D}\) of the premiss. If \(\mathcal{D}\) ends with an instance of rule \(R\forall\) with \(y\) the _eigenvariable_, we apply the (height-preserving admissible) substitution rule to replace \(y\) with a fresh variable \(z\) occurring neither in \(\mathcal{S}\{X;\Gamma\Rightarrow\Delta\}\), nor in \(\Pi,\Sigma\) (in the IW case) or in \(Y,\Pi,\Sigma\) (in the EW case). Then, we apply the inductive hypothesis and an instance of \(R\forall\) to conclude \(\mathcal{S}\{X;\Pi,\Gamma\Rightarrow\Delta,\Sigma\}\) in the IW case and \(\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[Y;\Pi\Rightarrow\Sigma]\}\) in the EW case. Lemma 7 (Necessitation and Merge): _The following rules are height-preserving admissible in \(\mathsf{N.Q.L}\):_ \[\frac{\mathcal{S}}{\Rightarrow,[\mathcal{S}]}\ \mbox{${}^{\mathit{Nec}}$}\ \ \ \ \ \frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[Y;\Pi_{1}\Rightarrow\Delta_{1}],[Z; \Pi_{2}\Rightarrow\Delta_{2}]\}}{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[Y,Z; \Pi_{1},\Pi_{2}\Rightarrow\Delta_{1},\Delta_{2}]\}}\ \mbox{${}_{Merge}$}\] Proof: By a simple induction on the height of the derivation of the premiss. Lemma 8 (Invertibility): _Each rule of \(\mathsf{NQ.L}\) is height-preserving invertible._ Proof: The proof is by induction on the height of the derivation. The height-preserving invertibility of all rules but \(L\supset\), \(R\supset\), \(R\forall\) and \(R\Box\) follows from Lemmas 5 and 6, and the proof of the remaining cases is standard. 
Lemma 9 (Contraction): _The following rules of left and right contraction are height-preserving admissible in \(\mathsf{NQ.L}\):_ \[\frac{\mathcal{S}\{X;\Gamma,A,A\Rightarrow\Delta\}}{\mathcal{S}\{X;\Gamma,A \Rightarrow\Delta\}}\ _{CL}\ \ \ \ \ \ \ \frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,A,A\}}{\mathcal{S}\{X;\Gamma \Rightarrow\Delta,A\}}\ _{CR}\] Proof: By simultaneous induction on the height of the derivation of the premisses of CL and CR. We consider only the non-trivial \(R\forall\) case for CR as the remaining cases are similar or simpler. Assume that the last step of \(\mathcal{D}\) is: \[\frac{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,A(y/x),\forall xA\}}{\mathcal{ S}\{X;\Gamma\Rightarrow\Delta,\forall xA,\forall xA\}}\ _{R\forall}\] To resolve the case, we apply the height-preserving invertibility of \(R\forall\), the height-preserving admissibility of \((y/z)\) and SC, followed by the inductive hypothesis. Finally, an application of \(R\forall\) gives the desired conclusion. \[\frac{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,A(y/x),\forall xA\}}{\mathcal{ S}\{X,y,z;\Gamma\Rightarrow\Delta,A(y/x),A(z/x)\}}\ _{Lemma}\ \ref{lem:C1}\] \[\frac{\mathcal{S}\{X,y,y;\Gamma\Rightarrow\Delta,A(y/x),A(y/x)\}} {\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,A(y/x),A(y/x)\}}\ _{SC}\] \[\frac{\mathcal{S}\{X,y;\Gamma\Rightarrow\Delta,A(y/x),A(y/x)\}} {\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\forall xA\}}\ _{R\forall}\] Due to the presence of \(R_{4}\) and \(R_{5}\) in specific nested calculi, our cut elimination theorem (Theorem 2.1 below) requires us to simultaneously eliminate a second form of cut that acts on modal formulas. We refer to this rule as L-Cut and note that it is essentially Brunnler's Y-cut rule [3]. Since the principal and auxiliary formulas of \(R_{4}\) and \(R_{5}\) are of the same weight (i.e. both are \(\Box A\)), L-Cut is needed to permute the cut upward in these special cases as cuts cannot be reduced to formulas of a smaller weight. Definition 2 (L-Cut and L-Str): Let \(\mathsf{NQ.L}\) be properly closed. We define L-Cut to be the following rule: \[\frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\Box A\}\{Y_{i};\Pi_{i} \Rightarrow\Sigma_{i}\}_{i=1}^{n}}{\mathcal{S}\{X;\Box A,\Gamma\Rightarrow \Delta\}\{Y_{i};\Box A,\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n}}{\mathcal{S} \{X;\Gamma\Rightarrow\Delta\}\{Y_{i};\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n} }\ _{\mathsf{L\text{-Cut}}}\] which is subject to the following side conditions: * if \(\mathbf{4},\mathbf{5}\not\in\mathsf{L}\), then \(n=0\); * if \(\mathbf{4}\in\mathsf{L}\) and \(\mathbf{5}\not\in\mathsf{L}\), then \(\mathcal{S}\{\cdot\}\{\cdot\}\) is of the form \(\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\{\cdot\},\{\mathcal{S}_{1}\{\cdot\}^{n}\}\}\); * if \(\mathbf{5}\in\mathsf{L}\) and \(\mathbf{4}\not\in\mathsf{L}\), then \(Depth(\mathcal{S}\{\cdot\}\{\emptyset\}^{n})\geq 1\); * otherwise, if \(\mathbf{4},\mathbf{5}\in\mathsf{L}\), then no restriction on the shape of the rule is enforced. 
We define L-Str to be the following rule: \[\frac{\mathcal{S}\{Y_{1};\Pi_{1}\Rightarrow\Sigma_{1},[X;\Gamma\Rightarrow \Delta]\}\{Y_{2};\Pi_{2}\Rightarrow\Sigma_{2}\}}{\mathcal{S}\{Y_{1};\Pi_{1} \Rightarrow\Sigma_{1}\}\{Y_{2};\Pi_{2}\Rightarrow\Sigma_{2},[X;\Gamma\Rightarrow \Delta]\}}\ \mathsf{L\text{-Str}}\] which is subject to the following side conditions:_ * _if_ \(\mathbf{4},\mathbf{5}\not\in\mathsf{L}\)_, then_ \(\mathcal{S}\{\cdot\}\{\cdot\}\) _is of the form_ \(\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\{\cdot\},\{\cdot\}\}\)_;_ * _if_ \(\mathbf{4}\in\mathsf{L}\) _and_ \(\mathbf{5}\not\in\mathsf{L}\)_, then_ \(\mathcal{S}\{\cdot\}\{\cdot\}\) _is of the form_ \(\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\{\cdot\},\{\mathcal{S}_{1}\{\cdot\}\}\}\)_;_ * _if_ \(\mathbf{5}\in\mathsf{L}\) _and_ \(\mathbf{4}\not\in\mathsf{L}\)_, then_ \(Depth(\mathcal{S}\{\cdot\}\{\emptyset\})\geq 1\)_;_ * _otherwise, if_ \(\mathbf{4},\mathbf{5}\in\mathsf{L}\)_, then no restriction on the shape of the rule is enforced._ Lemma 10 (Special Structural Rules).: _If \(\mathsf{NQ.L}\) contains the rule \(R_{X}\) for the propositional axiom \(X\), then the corresponding structural rule from Table 4 is admissible in \(\mathsf{NQ.L}\). Moreover, \(\mathsf{L}\)-Str is admissible in \(\mathsf{NQ.L}\)._ Proof.: We argue the \(S_{B}\) case by induction on the height of the given derivation; the remaining cases follow from the lemmas in the appendix. We only consider the \(R_{bf}\) and \(R_{5dom}\) cases of the inductive step as the remaining cases are simple or similar. \[\begin{array}{c}\frac{\mathcal{S}\{Z;\Pi_{1}\Rightarrow\Sigma_{1},[X,x; \Gamma\Rightarrow\Delta,[Y,x;\Pi_{2}\Rightarrow\Sigma_{2}]]\}}{\mathcal{S}\{Z ;\Pi_{1}\Rightarrow\Sigma_{1},[X;\Gamma\Rightarrow\Delta,[Y,x;\Pi_{2} \Rightarrow\Sigma_{2}]]\}}\,_{S_{B}}\\ \frac{\mathcal{S}\{Z,Y,x;\Pi_{1},\Pi_{2}\Rightarrow\Sigma_{1},\Sigma_{2},[X; \Gamma\Rightarrow\Delta]\}}{\mathcal{S}\{Z,Y,x;\Pi_{1},\Pi_{2}\Rightarrow \Sigma_{1},\Sigma_{2},[X;\Gamma\Rightarrow\Delta]\}}\,_{R_{cbf}}\end{array}\] As our nested calculi are assumed to be properly closed, we know that if \(\mathsf{NQ.L}\) contains \(R_{B}\) and \(R_{bf}\), then it must contain \(R_{cbf}\), showing that we can apply \(IH\) first and then \(R_{cbf}\) as shown below. 
\[\begin{array}{c}\frac{\mathcal{S}\{Z;\Pi_{1}\Rightarrow\Sigma_{1},[X,x; \Gamma\Rightarrow\Delta,[Y,x;\Pi_{2}\Rightarrow\Sigma_{2}]]\}}{\mathcal{S}\{ Z,Y,x;\Pi_{1},\Pi_{2}\Rightarrow\Sigma_{1},\Sigma_{2},[X,x;\Gamma\Rightarrow\Delta]\}}\,_{ IH}\\ \frac{\mathcal{S}\{Z,Y,x;\Pi_{1},\Pi_{2}\Rightarrow\Sigma_{1},\Sigma_{2},[X; \Gamma\Rightarrow\Delta]\}}{\mathcal{S}\{Z,Y,x;\Pi_{1},\Pi_{2}\Rightarrow \Sigma_{1},\Sigma_{2},[X;\Gamma\Rightarrow\Delta]\}}\,_{R_{cbf}}\end{array}\] Last, we consider an interesting \(R_{5dom}\) case: \[\begin{array}{c}Z;\Pi_{1}\Rightarrow\Sigma_{1},[X_{1};\Gamma_{1}\Rightarrow \Delta_{1},[X_{2},x;\Gamma_{2}\Rightarrow\Delta_{2}]],[\mathcal{S}\{Y,x;\Pi_ {2}\Rightarrow\Sigma_{2}\}]\\ \frac{\mathcal{S}\{Z,Y,x;\Pi_{1},\Gamma_{1}\Rightarrow\Delta_{1},[X_{2},x; \Gamma_{2}\Rightarrow\Delta_{2}]],[\mathcal{S}\{Y,\Pi_{2}\Rightarrow\Sigma_{2} \}]\}}{\mathcal{S}\{Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta _{2},[X_{1},\Gamma_{1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y;\Pi_{2}\Rightarrow \Sigma_{2}\}]\}}\,_{S_{B}}\end{array}\] \begin{table} \begin{tabular}{c} \hline \hline \(\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[Y;\Pi\Rightarrow\Sigma]\}\) \\ \hline \(\mathcal{S}\{X,Y;\Pi,\Gamma\Rightarrow\Delta,\Sigma\}\) \\ \hline \(\mathcal{S}\{Y_{1};\Pi_{1}\Rightarrow\Sigma_{1},[X;\Gamma\Rightarrow\Delta]\}\{ Y_{2};\Pi_{2}\Rightarrow\Sigma_{2},[X;\Gamma\Rightarrow\Delta]\}\) \\ \(\frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[Y;\Pi_{2}\Rightarrow\Sigma_{2},[Z ;\Pi_{1}\Rightarrow\Sigma_{1}]]\}}{\mathcal{S}\{X,Z;\Pi_{1},\Gamma\Rightarrow \Delta,\Sigma_{1},[Y;\Pi_{2}\Rightarrow\Sigma_{2}]\}}\,\,_{S_{B}}\end{array}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Structural rules for propositional axioms To resolve the case, we apply the inductive hypothesis, followed by the height-preserving admissible rule SW. We apply the SW rule \(n-1\) times adding the variable \(x\) along the path from the root to \(Y,x;\Pi_{2}\Rightarrow\Sigma_{2}\), and then the \(R_{cbf}\) rule \(n\) times to delete the \(n-1\) copies of \(x\) up to the root. We may apply \(R_{cbf}\) as our nested calculi are properly closed, that is, \(\mathbf{B},\mathbf{BF}\in\mathsf{L}\) only if \(\mathbf{CBF}\in\mathsf{L}\). 
\[\begin{array}{c}Z;\Pi_{1}\Rightarrow\Sigma_{1},[X;\Gamma_{1}\Rightarrow \Delta_{1},[X_{2},x;\Gamma_{2}\Rightarrow\Delta_{2}]],[\mathcal{S}\{Y,x;\Pi_{ 2}\Rightarrow\Sigma_{2}\}]\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta_{2},[X,\Gamma_{ 1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y,x;\Pi_{2}\Rightarrow\Sigma_{2}\}]\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta_{2},[X,\Gamma_{ 1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y,x;\Pi_{2}\Rightarrow\Sigma_{2}\}]\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta_{2},[X,\Gamma_{ 1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y;\Pi_{2}\Rightarrow\Sigma_{2}\}]\\ \hline\end{array}_{\begin{array}{c}SW\left(n-1\text{ times}\right)\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta_{2},[X,\Gamma_{ 1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y;\Pi_{2}\Rightarrow\Sigma_{2}\}]\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta_{2},[X,\Gamma_{ 1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y;\Pi_{2}\Rightarrow\Sigma_{2}\}]\\ \hline\end{array}_{\begin{array}{c}SW\left(n-1\text{ times}\right)\\ \hline Z,\Gamma_{2}\Rightarrow\Sigma_{2}\}]\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma_{1},\Delta_{2},[X,\Gamma_{ 1}\Rightarrow\Delta_{1}],[\mathcal{S}\{Y;\Pi_{2}\Rightarrow\Sigma_{2}\}]\\ \hline\end{array}_{\begin{array}{c}SW\left(n-1\text{ times}\right)\\ \hline Z,\Gamma_{2}\Rightarrow\Sigma_{2}\}]\\ \hline\end{array}_{\begin{array}{c}SW\left(n-1\text{ times}\right)\\ \hline Z,X_{2},x;\Pi_{1},\Gamma_{2}\Rightarrow\Sigma \[\begin{array}{c}\frac{\mathcal{S}_{2}\{X_{2};x=y,\Gamma_{2}\Rightarrow\Delta_{2} \}}{\mathcal{S}\{X;x=y,\Gamma\Rightarrow\Delta\}}\stackrel{{ R2}}{{ \longrightarrow}}\\ \frac{\mathcal{S}_{1}\{X_{1};\Gamma_{1}\Rightarrow\Delta_{1},x=y\}}{ \mathcal{S}_{1}\{X_{1};x=y,\Gamma_{1}\Rightarrow\Delta_{1}\}}\stackrel{{ }}{{Cut}}\end{array}\] If we suppose now that \(x=y\) is principal in the left premiss of _Cut_, then the left premiss must be an initial sequent of the form \(\mathcal{S}\{X,x=y,\Gamma\Rightarrow\Delta,x=y\}\). We have cases according to whether \(x=y\) is principal or not in the right premiss. If it is principal then the right premiss is either (i) an initial sequent or (ii) the conclusion of an instance of a rule in \(\{Repl,\,Repl_{X},\,\mathit{Rig}\}\). In case (i) the conclusion of _Cut_ is an intial sequent and in case (ii) the conclusion of _Cut_ is identical to the conclusion of its right premiss, which is cut-free derivable. Else, the Cut is of the form shown below, where two copies of \(x=y\) must occur in the right premiss since the contexts must match in Cut. \[\begin{array}{c}\frac{\mathcal{S}\{X;x=y,\Gamma\Rightarrow\Delta,x=y\}}{ \mathcal{S}\{X;x=y,\Gamma\Rightarrow\Delta\}}\stackrel{{ R2}}{{ \longrightarrow}}\\ \frac{\mathcal{S}\{X;x=y,\Gamma\Rightarrow\Delta\}}{\mathcal{S}\{X;x=y,\Gamma \Rightarrow\Delta\}}\stackrel{{}}{{Cut}}\end{array}\] Applying the height-preserving admissible rule CL to the right premiss of _Cut_ gives the desired conclusion. Let us suppose now that the weight of the cut formula is greater than zero. We also assume that the cut formula is principal in both premisses of _Cut_ and consider the interesting cases when \(A\equiv\forall xB\) and \(A\equiv\square B\) as all other cases are standard, see [3, Thm. 5]. 
If the cut formula \(A\equiv\forall xB\) is principal in both premisses of _Cut_, then our Cut is of the following form: \[\begin{array}{c}\frac{\mathcal{S}\{X,y,z;\Gamma\Rightarrow\Delta,B(y/x)\}}{ \mathcal{S}\{X,z;\Gamma\Rightarrow\Delta,\forall xB\}}\stackrel{{ }}{{R\forall}}\\ \frac{\mathcal{S}\{X,z;\Gamma\Rightarrow\Delta,\forall xB\}}{\mathcal{S}\{X,z; \Gamma\Rightarrow\Delta\}}\stackrel{{}}{{Cut}}\end{array}\] We first shift the Cut upward by applying the height-preserving admissibility of IW to the left premiss of _Cut_, and then apply _Cut_ with the premiss of \(L\forall\) as shown below, thus reducing \(h_{1}+h_{2}\). \[\begin{array}{c}\frac{\mathcal{S}\{X,y,z;\Gamma\Rightarrow\Delta,B(y/x)\}} {\mathcal{S}\{X,z;\Gamma\Rightarrow\Delta,\forall xB\}}\stackrel{{ }}{{R\forall}}\\ \frac{\mathcal{S}\{X,z;B(z/x),\Gamma\Rightarrow\Delta,\forall xB\}}{\mathcal{S} \{X,z;B(z/x),\Gamma\Rightarrow\Delta\}}\stackrel{{}}{{Cut}} \end{array}\] Let us refer to the above proof as \(\mathcal{D}\). We now reduce the weight of the cut formula by applying Cut as shown below, giving the desired conclusion. \[\begin{array}{c}\frac{\mathcal{S}\{X,y,z;\Gamma\Rightarrow\Delta,B(y/x)\}} {\mathcal{S}\{X,z,z;\Gamma\Rightarrow\Delta,B(z/x)\}}\stackrel{{ }}{{SC}}\\ \frac{\mathcal{S}\{X,z;\Gamma\Rightarrow\Delta,B(z/x)\}}{\mathcal{S}\{X,z; \Gamma\Rightarrow\Delta\}}\stackrel{{}}{{Cut}}\end{array}\] We now assume that the cut formula \(A\equiv\Box B\) is principal in both premisses and we may assume w.l.o.g. that the cut is an instance of \(\mathsf{L}\)-Cut. We consider the case where the right premiss of \(\mathsf{L}\)-Cut is an instance of \(R_{T}\) and the left premiss of \(\mathsf{L}\)-Cut is an instance of \(R[\Box]\). The remaining cases are proven in a similar fashion. The trick is to use the height-preserving admissibility of the special structural rules (see Lemma 10), namely, the \(S_{T}\) rule. Our \(\mathsf{L}\)-Cut is of the following form: \[\begin{array}{c}\frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[\emptyset; \Rightarrow B]\}\{Y_{i};\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n}}{\mathcal{S} \{X;\Gamma\Rightarrow\Delta,\Box B\}\{Y_{i};\Pi_{i}\Rightarrow\Sigma_{i}\}_{i= 1}^{n}}\;_{R\Box}\;\;\;\;\;\frac{\mathcal{S}\{X;\Box B,\Gamma\Rightarrow\Delta \}\{Y_{i};\Box B,\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n}}{\mathcal{S}\{X; \Gamma\Rightarrow\Delta\}\{Y_{i};\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n}}\;_{ \mathsf{L}\text{-Cut}}\end{array}\] Let \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) denote the derivation of the left and right premiss of \(\mathsf{L}\)-Cut, respectively. To resolve the case, we first apply the height-preserving admissible rule IW to the conclusion of \(\mathcal{D}_{1}\), yielding the derivation \(\mathcal{D}_{3}\) shown below top. We then apply \(\mathsf{L}\)-Cut to the conclusion of \(\mathcal{D}_{3}\) and the premiss of \(\mathcal{D}_{2}\) (where \(h_{1}+h_{2}\) is strictly smaller), giving the second derivation shown below, which we refer to as \(\mathcal{D}_{4}\). Finally, as shown in the third derivation below, we can apply Cut to \(B\) (which has a strictly smaller weight than \(\Box B\)), and derive the desired conclusion after applying a single application of the admissible rule \(S_{T}\) to the left premiss. 
\[\mathcal{D}_{3}\left\{\begin{array}{c}\frac{\mathcal{S}\{X;\Gamma \Rightarrow\Delta,[\emptyset;\Rightarrow B]\}\{Y_{i};\Pi_{i}\Rightarrow\Sigma_{ i}\}_{i=1}^{n}}{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,\Box B\}\{Y_{i};\Pi_{i} \Rightarrow\Sigma_{i}\}_{i=1}^{n}}\;_{R\Box}\\ \frac{\mathcal{S}\{X;B,\Gamma\Rightarrow\Delta,\Box B\}\{Y_{i};\Pi_{i} \Rightarrow\Sigma_{i}\}_{i=1}^{n}}\;_{IW}\end{array}\right.\] \[\mathcal{D}_{4}\left\{\begin{array}{c}\frac{\mathcal{D}_{3}}{\mathcal{S} \{X;\Box B,B,\Gamma\Rightarrow\Delta\}\{Y_{i};\Box B,\Pi_{i}\Rightarrow\Sigma_ {i}\}_{i=1}^{n}}\;_{\mathsf{L}\text{-Cut}}\\ \frac{\mathcal{S}\{X;\Gamma\Rightarrow\Delta,[\emptyset;\Rightarrow B]\}\{Y_ {i};\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n}}{\mathcal{S}\{X;\Gamma\Rightarrow \Delta\}\{Y_{i};\Pi_{i}\Rightarrow\Sigma_{i}\}_{i=1}^{n}}\;_{\mathsf{L}\text{ -Cut}}\end{array}\right.\] ## 5 Soundness and Completeness Theorem 5.1 (Soundness): _If \(\mathsf{NQ}\mathsf{L}\vdash\mathcal{S}\) then \(\mathsf{fm}(\mathcal{S})\) is \(\mathsf{Q}\mathsf{L}\)-valid._ Proof: We first note that nested application of rules is sound: for each context \(\mathcal{S}\{\cdot\}\), if \(A\supset B\) is \(\mathsf{Q}\mathsf{L}\)-valid then \(\mathsf{fm}(\mathcal{S}\{A\})\supset\mathsf{fm}(\mathcal{S}\{B\})\) is \(\mathsf{Q}\mathsf{L}\)-valid. This can be shown by induction on the depth of the context \(\mathcal{S}\{\cdot\}\); see [3, Lem. 3] for details. The \(\mathsf{Q}\mathsf{L}\)-soundness of the rules of \(\mathsf{NQ}\mathsf{L}\) is proved by induction on the height of the derivation. The cases of initial sequents and of propositional rules of \(\mathsf{NQ}\mathsf{L}\) are given in [3, Thm. 1]. We present the cases of \(L\forall\), \(R_{cbf}\), \(Rig\), and \(R_{5dom}\), all other cases being similar. If \(\mathsf{fm}(X,z;A(z/x),\forall xA,\Gamma\Rightarrow\Delta)\) is \(\mathsf{Q}\mathsf{L}\)-valid, then the \(\mathsf{Q}\mathsf{L}\)-validity of \(\mathsf{fm}(X,z;\forall xA,\Gamma\Rightarrow\Delta)\) follows by the soundness of the axiom \(\mathbf{UI}^{\mathcal{E}}\). If \(\mathsf{fm}(X,x;\Gamma\Rightarrow\Delta,[Y,x;\Pi\Rightarrow\Sigma])\) is \(\mathsf{Q}\mathsf{L}\).\(\mathsf{CBF}\)-valid, then the formula \(\mathsf{fm}(X,x;\Gamma\Rightarrow\Delta,[Y;\Pi\Rightarrow\Sigma])\) is as well because frames for \(\mathsf{Q}\mathsf{L}\).\(\mathsf{CBF}\) have increasing domains. The \(\mathsf{Q}\mathsf{L}\)-validity of \(\mathsf{fm}(\mathcal{S}\{X;x=y,\Gamma\Rightarrow\Delta\}\{Y;\Pi\Rightarrow\Sigma\})\) follows from that of \(\mathtt{fm}(\mathcal{S}\{X;x=y,\Gamma\Rightarrow\Delta\}\{Y;x=y,\Pi\Rightarrow \Sigma\})\) since variables are rigid designators--i.e., the validity of \(\mathbf{NI}:=x=y\supset\Box(x=y)\) and that of \(\mathbf{ND}\) allow identities to be duplicated up and down the accessibility relation, respectively. Finally, we argue that \(R_{5dom}\) preserves \(\mathsf{Q}.\mathsf{L}\)-validity when either \(\mathbf{5},\mathbf{CBF}\in\mathsf{L}\) or \(\mathbf{5},\mathbf{BF}\in\mathsf{L}\). 
We show this holds for the following one-context rules from which \(R_{5dom}\) is \(\mathsf{NQ}.\mathsf{L}\)-derivable (if \(x\) is in the signature of a non-root node, these rules bottom-up copy \(x\) into the signature of another non-root node): \[\frac{\mathcal{S}\{[X,x;\Gamma\Rightarrow\Delta],[Y,x;\Pi\Rightarrow\Sigma]\} }{\mathcal{S}\{[X,x;\Gamma\Rightarrow\Delta],[Y;\Pi\Rightarrow\Sigma]\}}\ _{R_{5dom_{1}}}\ \ \ \frac{ \mathcal{S}\{[X,x;\Gamma\Rightarrow\Delta,[Y,x;\Pi\Rightarrow\Sigma]]\}}{ \mathcal{S}\{[X,x;\Gamma\Rightarrow\Delta,[Y;\Pi\Rightarrow\Sigma]]\}}\ _{R_{5dom_{2}}}\] \[\frac{\mathcal{S}\{[Y,x;\Pi\Rightarrow\Sigma,[X,x;\Gamma\Rightarrow\Delta]]\} }{\mathcal{S}\{[Y;\Pi\Rightarrow\Sigma,[X,x;\Gamma\Rightarrow\Delta]]\}}\ _{R_{5dom_{3}}}\] If the premiss of one of these rule is \(\mathsf{Q}.\mathsf{L}\)-valid, then so is the respective conclusion since for \(\mathbf{5}\)-frames with increasing or decreasing domains the points satisfying \(X,x;\Gamma\Rightarrow\Delta\) and \(Y;\Pi\Rightarrow\Sigma\) are mutually accessible and have the same domain. Theorem 5.4 (Completeness): _If \(\mathtt{fm}(\mathcal{S})\) is \(\mathsf{Q}.\mathsf{L}\)-valid, then \(\mathsf{NQ}.\mathsf{L}\vdash\mathcal{S}\)._ Proof: We show that \(\mathsf{Q}.\mathsf{L}\vdash\mathtt{fm}(\mathcal{S})\) implies \(\mathsf{NQ}.\mathsf{L}\vdash\mathcal{S}\); the theorem follows by the completeness of \(\mathsf{Q}.\mathsf{L}\) (Theorem 3.1). We proceed by induction on the height of the derivation of \(\mathtt{fm}(\mathcal{S})\) in \(\mathsf{Q}.\mathsf{L}\). The \(\mathsf{NQ}.\mathsf{L}\)-admissibility of rule \(\mathbf{MP}/\mathbf{UG}/\mathbf{N}\) is a corollary of Theorem 3.2/Lemma 6/Lemma 7. We consider only axioms \(\mathbf{UI}^{\circ}\) (assuming \(y\not\in A\) for simplicity), \(\mathbf{ND}\), and \(\mathbf{CBF}\). The cases of axioms \(\mathbf{REF}\) and \(\mathbf{REPL}\) follows from Lemma 2 and the other cases are similar. ## 6 Conclusion and Future Work We provided a uniform nested sequent presentation of quantified modal logics characterised by combinations of fundamental properties. Due to the inclusion of equality in the language of the QMLs considered, our nested calculi permit a formula translation by means of the (definable) existence predicate. As a consequence, our systems possess both a good degree of modularity _and_ utilise a language as expressive as that of each logic, yielding more economical systems in contrast to the labelled calculi given for the same QMLs, which employ a more expressive language [19; 24]. Beyond formula interpretability, our nested calculi satisfy fundamental properties such as the admissibility of important structural rules, invertibility of all rules, and syntactic cut-elimination. In future work, we aim to investigate constructive proofs of interpolation properties with our nested calculi (cf. [9, 15]), to use (variations of) our nested calculi to identify decidable QML fragments, as well as extend the present approach to QMLs with non-rigid designators and, possibly, definite descriptions based on \(\lambda\)-abstraction (see [10]) as was done in [20] for labelled sequent calculi. Another open problem is to give nested sequents with a formula interpretation for QMLs where the existence predicate is not expressible; we conjecture that this might be achieved by using the 'universally closed nesting' defined by Brunner for free logics [4]. 
We also aim to generalise our approach by employing a wider selection of propagation rules [6, 8] and reachability rules [16, 17] in our systems. As shown in various works [11, 16], diverse classes of logics characterised by Horn properties can be supplied cut-free nested calculi by utilising logical rules that propagate or consume data along paths within nested sequents specified by formal grammars. Applying this technique, we plan to see if we can capture a much wider class of QMLs in a uniform and modular fashion, and plan to investigate admissibility and invertibility properties as well as cut-elimination in this more general setting. It would also be worthwhile to examine the relationship between our nested calculi and other calculi for QMLs; e.g., we could study the computational relationship between our nested calculi and the labelled calculi for QMLs, showing how proofs can be translated and determining complexity bounds for the relative sizes of proofs.
2306.07028
Reduction by symmetries of contact mechanical systems on Lie groups
We study the dynamics of contact mechanical systems on Lie groups that are invariant under a Lie group action. Analogously to standard mechanical systems on Lie groups, existing symmetries allow for reducing the number of equations. Thus, we obtain Euler-Poincar\'e-Herglotz equations on the extended reduced phase space $\mathfrak{g}\times \R$ associated with the extended phase space $TG\times \R$, where the configuration manifold $G$ is a Lie group and $\mathfrak{g}$ its Lie algebra. Furthermore, we obtain the Hamiltonian counterpart of these equations by studying the underlying Jacobi structure. Finally, we extend the reduction process to the case of symmetry-breaking systems which are invariant under a Lie subgroup of symmetries.
Alexandre Anahory Simoes, Leonardo Colombo, Manuel de León, Juan Carlos Marrero, David Martín de Diego, Edith Padrón
2023-06-12T11:05:21Z
http://arxiv.org/abs/2306.07028v2
# Reduction by symmetries of contact mechanical systems on Lie groups ###### Abstract We study the dynamics of contact mechanical systems on Lie groups that are invariant under a Lie group action. Analogously to standard mechanical systems on Lie groups, existing symmetries allow for reducing the number of equations. Thus, we obtain Euler-Poincare-Herglotz equations on the extended reduced phase space \(\mathfrak{g}\times\mathbb{R}\) associated with the extended phase space \(TG\times\mathbb{R}\), where the configuration manifold \(G\) is a Lie group and \(\mathfrak{g}\) its Lie algebra. Furthermore, we obtain the Hamiltonian counterpart of these equations by studying the underlying Jacobi structure. Finally, we extend the reduction process to the case of symmetry-breaking systems which are invariant under a Lie subgroup of symmetries. Introduction Contact Hamiltonian and Lagrangian systems have deserved a lot of attention in recent years (see Bravetti 2017; Bravetti 2018, Leon and Lainz Valcazar 2019a; Leon and Lainz Valcazar 2019b, Anahory Simoes et al. 2021, and references therein). One of the most relevant features of contact dynamics is the absence of conservative properties contrarily to the conservative character of the energy in symplectic dynamics; indeed, they have a dissipative behavior. This fact suggests that contact geometry may be the appropriate framework to model many physical and mathematical problems with dissipation we find in thermodynamics, statistical physics, systems with impulsive effect and discontinuities, quantum mechanics, gravity or control theory, among many others (see Leon and Lainz 2019, Colombo, Leon, and Lopez-Gordon 2022, Lopez-Gordon, Colombo, and Leon 2022, Leon, Lainz, and Munoz-Lecanda 2023, and references therein). As an illustrative example, consider the motion of a rigid body under a linear dissipation term. The equations of motion for this kind of systems are of the general form \[\mathbb{I}\dot{\xi}=\mathbb{I}\xi\times\xi+C\xi, \tag{1}\] where \(\xi\in\mathbb{R}^{3}\) denotes the body angular velocity, \(\mathbb{I}\in\mathbb{R}^{3\times 3}\) is the inertia tensor, and \(C\in\mathbb{R}^{3\times 3}\) denotes a constant positive semi-definite matrix called the damping matrix (see Shen, Sanyal, and McClamroch 2003). In the special case where \(C=\gamma\mathbb{I}\), with \(\gamma\) a non-zero real number, we will see that equations (1) are given by a contact version of Euler-Poincare equations (see Holm, Marsden, and Ratiu 1998) for a reduced contact Lagrangian. As it is noted in Holm, Schmah, and Stoica 2009, in the case of the rigid body on \(SO(3)\) we can rewrite the Lagrangian function \(L:TSO(3)\to\mathbb{R}\) for this system as \[L(R,\dot{R})=\frac{1}{2}\operatorname{tr}(\dot{R}\mathbb{I}\dot{R}^{T}),\] with \(\mathbb{I}\) symmetric and constant. Moreover, due to the \(SO(3)\)-invariance of \(L\), we may define the reduced Lagrangian \(l:\mathfrak{so}(3)\to\mathbb{R}\) given by \[l(\xi)=\frac{1}{2}\langle\mathbb{I}\xi,\xi\rangle.\] The Euler-Poincare equations applied to the reduced Lagrangian function \(l\) give the Euler equations for the rigid body: \[\mathbb{I}\dot{\xi}=\mathbb{I}\xi\times\xi.\] Now, we can consider a slight modification of these constructions and start with a contact Lagrangian function \(L:TSO(3)\times\mathbb{R}\to\mathbb{R}\) given by \[L(R,\dot{R},z)=\frac{1}{2}\operatorname{tr}(\dot{R}\mathbb{I}\dot{R}^{T})+ \gamma z,\] where \(\gamma\in\mathbb{R}\). 
This motivates the definition of a contact reduced Lagrangian function \(l:\mathfrak{so}(3)\times\mathbb{R}\to\mathbb{R}\) given by \[l(\xi,z)=\frac{1}{2}\langle\mathbb{I}\xi,\xi\rangle+\gamma z. \tag{2}\] The main purpose of this work is to introduce the _Euler-Poincare-Herglotz equations_ as a generalization of Euler-Poincare equations for contact Lagrangian functions, which are given by the expression \[\frac{d}{dt}\frac{\delta l}{\delta\xi}=\operatorname{ad}_{\xi}^{*}\frac{ \delta l}{\delta\xi}+\frac{\delta l}{\delta\xi}\frac{\partial l}{\partial z}.\] Applying them to the Lagrangian function (2), we obtain equations (1) with \(C=\gamma\mathbb{I}\). In this situation, the reduced Legendre transform \(\mathbb{F}l:\mathfrak{g}\times\mathbb{R}\to\mathfrak{g}^{*}\times\mathbb{R}\) is given by the map \(\mathbb{F}l(\xi,z)=(\mathbb{I}\xi,z)\). Assuming that \(\mathbb{I}\) is a positive definite symmetric matrix, so that \(\mathbb{F}l\) is a diffeomorphism and we might define the Hamiltonian function \[h(\mu,z)=\frac{1}{2}\mu^{T}\mathbb{J}\mu+\gamma z,\] where \(\mathbb{J}=\mathbb{I}^{-1}\). Given a regular reduced Lagrangian \(l\), if \((\xi(t),z(t))\) is a solution of Euler-Poincare-Herglotz equations and \(\mu(t)=\frac{\delta l}{\delta\xi}(\xi(t),z(t))\) we will see in this paper that the curve \(t\mapsto(\mu(t),z(t))\) satisfies \[\dot{\mu}=\operatorname{ad}_{\frac{\delta h}{\delta\mu}}^{*}\mu-\mu\frac{ \partial h}{\partial z},\ \dot{z}=\langle\mu,\frac{\delta h}{\delta\mu}\rangle-h(\mu,z). \tag{3}\] In addition, we will define a Lie-Poisson-Jacobi structure and bracket on \(\mathfrak{g}^{*}\times\mathbb{R}\), and derive the corresponding Lie-Poisson-Jacobi equations (3). The Jacobi structure on the vector space \(\mathfrak{g}^{*}\times\mathbb{R}\) is linear. This kind of structures was discussed in Iglesias and Marrero 2000; Iglesias and Marrero 2001. Symmetry breaking is common in several physical contexts, from classical mechanics to particle physics and complex fluids (see Marsden, Ratiu, and Weinstein 1984, Holm, Marsden, and Ratiu 1998). The simplest example is the heavy top dynamics (the motion of a rigid body with a fixed point in a gravitational field), where due to the presence of gravity, we get a Lagrangian which is SO(2)-invariant but not SO(3)-invariant, contrary to what happens for the free rigid body. Reduction theory for symmetry breaking with applications to nematic systems was studied in Gay-Balmaz and Tronci 2010. Optimal Control for systems with symmetry breaking has been studied in Gay-Balmaz and Ratiu 2011. In the context of motion planning, the symmetry breaking appears naturally in the form of a navigation function Bloch et al. 2017, Colombo and Stratoglou 2023. More recently it has been employed to reduce necessary conditions for optimality in a collision and obstacle avoidance problem (see Stratoglou, Colombo, and Ohsawa 2022, Stratoglou, Simoes, and Colombo 2022). Motivated by the rigid body attitude dynamics studied in Shen, Sanyal, and McClamroch 2003, Cho, Shen, and McClamroch 2003 for the triaxial attitude control testbed given in Bernstein, McClamroch, and Bloch 2001, we will study Euler-Poincare-Herglotz reduction for systems with symmetry breaking and we will also derive the corresponding Lie-Poisson-Jacobi equations. This paper is structured as follows: in Section 2, we briefly recall Euler-Poincare reduction on Lie groups and in Section 3, we recall the basic concepts involving contact structures and contact Hamiltonian vector fields. 
In section 4, we present the Euler-Poincare-Herglotz reduction of invariant contact systems on Lie groups. In section 5, we define the Jacobi bracket on the extended dual of the Lie algebra \(\mathfrak{g}^{*}\times\mathbb{R}\) and obtain the Hamiltonian counterpart of Euler-Poincare-Herglotz equations. Finally, in Section 6, we apply the previous development to systems with a broken symmetry, through a Euler-Poincare-Herglotz reduction on a semi-direct product. ### Notation In order to be self-contained and use consistent notation, let us establish the functional derivative notation that we will use without reference all along the text. Let \(f\) be a smooth function on a finite-dimensional vector space \(V\). The derivative of \(f\) at a point \(v\in V\) along the direction \(u\in V\), may written in terms of geometric objects as \[Df(v)\cdot u=\left.\frac{d}{dt}\right|_{t=0}f(v+tu)=\langle df(v),(u)_{v}^{V}\rangle,\] where \(df(v)\in T_{v}^{*}V\) is the differential of the function at \(V\) and \[(u)_{v}^{V}\,=\left.\frac{d}{dt}\right|_{t=0}(v+tu)\in T_{v}V\] is the vertical lift of the vector \(u\) at \(v\). The vertical lift at \(v\) induces a canonical identification between the vector space \(V\) and its tangent space at \(v\), i.e., the map \((\cdot)_{v}^{V}:V\to T_{v}V\) is an isomorphism. This implies that the dual map \(\varphi:T_{v}^{*}V\to V^{*}\) is also an isomorphism, so that \[Df(v)\cdot u=\langle\varphi(df(v)),u\rangle.\] The unique vector \(\varphi(df(v))\in V^{*}\) is called the functional derivative of \(f\) at \(v\). The _functional derivative_ of \(f\) at \(v\) is usually denoted by \[\varphi(df(v))=\frac{\delta f}{\delta v}(v).\] In finite dimensions, through a coordinate analysis, we see that the coordinate expression of \(\frac{\delta f}{\delta v}(v)\) matches the component functions of the differential \(df(v)\). ## 2 Euler-Poincare equations Let \(G\) be a Lie group and let \(\mathcal{L}_{g}:G\to G\) be the left multiplication action. Since \(\mathcal{L}_{g}\) is a diffeomorphism, it naturally induces a "trivialized chart" of \(TG\), which more precisely is an identification between each tangent space \(T_{g}G\) with the Lie algebra \(\mathfrak{g}\) through the following map \[TG\to G\times\mathfrak{g},\quad(g,\dot{g})\mapsto(g,T_{g}\mathcal{L}_{g^{-1}}( \dot{g})). \tag{4}\] Consider the Lagrangian function \(L\) as a real function \(L:G\times\mathfrak{g}\to\mathbb{R}\) on \(G\times\mathfrak{g}\). Then, we can express the Euler-Lagrange equations in terms of this trivialization as \[\frac{d}{dt}\frac{\delta L}{\delta\xi} =\operatorname{ad}_{\xi}^{*}\frac{\delta L}{\delta\xi}+T_{e}^{*} \mathcal{L}_{g}\left(\frac{\delta L}{\delta g}\right)\] \[\dot{g} =T_{e}\mathcal{L}_{g}(\xi).\] Here \(\operatorname{ad}_{\xi}^{*}:\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) is the dual of the adjoint operator \(\operatorname{ad}_{\xi}:\mathfrak{g}\to\mathfrak{g}\) defined by \(\operatorname{ad}_{\xi}(\eta)=[\xi,\eta]\), for \(\eta\in\mathfrak{g}\). Now, if \(L:TG\to\mathbb{R}\) is a _left-invariant_ Lagrangian function, that is, its composition with the tangent lift of left-translations \(T\mathcal{L}_{g}\) leaves \(L\) invariant \[L(T_{h}\mathcal{L}_{g}(v_{h}))=L(v_{h}),\ \forall v_{h}\in T_{h}G,\] then the restriction of \(L\) to the tangent space at the identity \(e\in G\), which we identify with the Lie algebra \(\mathfrak{g}\), is called the _reduced Lagrangian function_ and we will denote it by \(l:\mathfrak{g}\rightarrow\mathbb{R}\). 
Then all the information is contained in the reduced Lagrangian \(l\) since \[l(\eta)=L(T_{e}\mathcal{L}_{g}(\eta)),\quad\forall g\in G,\eta\in\mathfrak{g}.\] It is well-known that a curve \(g:I\to G\) is a solution of the Euler-Lagrange equations if and only if the curve \(\xi:I\rightarrow\mathfrak{g}\) determined by \[\dot{g}=T_{e}\mathcal{L}_{g}(\xi)\] satisfies the _Euler-Poincare_ equations given by \[\frac{d}{dt}\frac{\delta l}{\delta\xi}=\mathrm{ad}_{\xi}^{*}\frac{\delta l}{ \delta\xi}.\] If \((y^{i})\) are local coordinates associated to a given basis \(\{e_{i}\}\) of \(\mathfrak{g}\), then its local expression reads \[\frac{d}{dt}\frac{\partial l}{\partial y^{i}}=C^{k}_{ji}y^{j}\frac{\partial l }{\partial y^{k}},\] where \(C^{k}_{ij}\) are the structure constants of the Lie algebra given by \[[e_{i},e_{j}]=C^{k}_{ij}e_{k}\] (see Bloch 2015; Marsden and Ratiu 2013 for instance).
## 3 Contact manifolds and contact dynamics
In this section, we will recall the main definition of a **contact manifold** and the corresponding Hamiltonian vector fields (see Arnold 1978; Libermann and Marle 1987; Leon and Lainz Valcazar 2019a for a more detailed overview). A contact manifold \((M,\eta)\) is a \((2n+1)\)-dimensional manifold equipped with a contact form \(\eta\), i.e., \(\eta\) is a 1-form on \(M\) such that \(\eta\wedge(d\eta)^{n}\) is a volume form. The Reeb vector field \(\mathcal{R}\in\mathfrak{X}(M)\) is the unique vector field that satisfies: \[\iota_{\mathcal{R}}d\eta=0,\quad\eta(\mathcal{R})=1. \tag{5}\] On a contact manifold \((M,\eta)\), we define the following isomorphism of vector bundles: \[\begin{array}{rcl}\flat:&TM&\longrightarrow&T^{*}M,\\ &&v&\longmapsto&\iota_{v}d\eta+\eta(v)\eta.\end{array} \tag{6}\] Notice that \(\flat(\mathcal{R})=\eta\). There is a Darboux theorem for contact manifolds. In a neighborhood of each point in \(M\) one can find local coordinates \((q^{i},p_{i},z)\) such that \[\eta=dz-p_{i}dq^{i}. \tag{7}\] In these coordinates, we have \[\mathcal{R}=\frac{\partial}{\partial z}. \tag{8}\] The canonical example of a contact manifold is \(T^{*}Q\times\mathbb{R}\). Here, the contact form is given by \[\eta_{Q}=dz-\theta_{Q}=dz-p_{i}dq^{i}, \tag{9}\] where \(\theta_{Q}\) is the pullback of the tautological 1-form of \(T^{*}Q\), \((q^{i},p_{i})\) are bundle coordinates on \(T^{*}Q\) and \(z\) is the \(\mathbb{R}\)-coordinate. Given a smooth function \(f:M\to\mathbb{R}\), its Hamiltonian vector field \(X_{f}\) is given by \[\flat(X_{f})=df-(f+\mathcal{R}(f))\eta, \tag{10}\] or, equivalently, \[\iota_{X_{f}}d\eta=df-\mathcal{R}(f)\eta,\quad\iota_{X_{f}}\eta=-f.\] We call the triple \((M,\eta,H)\) a contact Hamiltonian system, where \((M,\eta)\) is a contact manifold and \(H:M\to\mathbb{R}\) is the Hamiltonian function. In contrast to their symplectic counterparts, contact Hamiltonian vector fields do not preserve the Hamiltonian function. In fact, \[X_{H}(H)=-\mathcal{R}(H)H. \tag{11}\]
### Contact Lagrangian systems
Now we recall the Lagrangian picture of contact systems (see Leon and Lainz Valcazar 2019a; Leon and Lainz Valcazar 2019b for a more comprehensive description). Indeed, under a suitable choice of contact structure on the extended phase space \(TQ\times\mathbb{R}\), the contact Hamiltonian equations with respect to a Lagrangian function on this space are precisely the Herglotz equations.
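Before doing so, it is convenient to make (10) explicit in the Darboux coordinates (7); the following expressions are the result of a routine computation with \(\eta=dz-p_{i}dq^{i}\) and \(\mathcal{R}=\partial/\partial z\), and are recorded here only for orientation: \[\dot{q}^{i}=\frac{\partial H}{\partial p_{i}},\qquad\dot{p}_{i}=-\frac{\partial H}{\partial q^{i}}-p_{i}\frac{\partial H}{\partial z},\qquad\dot{z}=p_{i}\frac{\partial H}{\partial p_{i}}-H.\] In particular, for a Hamiltonian function depending on \(z\) only through a term \(\gamma z\), equation (11) gives \(\dot{H}=-\gamma H\) along the flow, which is the dissipative behaviour that the Lagrangian picture below reproduces.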
Let \(Q\) be an \(n\)-dimensional configuration manifold, consider the extended phase space \(TQ\times\mathbb{R}\), and call a contact Lagrangian function any smooth function of the type \(L:TQ\times\mathbb{R}\to\mathbb{R}\). In this paper, we will assume that the Lagrangian is regular, that is, the Hessian matrix with respect to the velocities \((W_{ij})\) is regular, where \[W_{ij}=\frac{\partial^{2}L}{\partial\dot{q}^{i}\partial\dot{q}^{j}}, \tag{12}\] and \((q^{i},\dot{q}^{i},z)\) are bundle coordinates for \(TQ\times\mathbb{R}\). Equivalently, \(L\) is regular if and only if the one-form \[\eta_{L}=dz-\theta_{L} \tag{13}\] is a contact form. Here, \[\theta_{L}=S^{*}(dL)=\frac{\partial L}{\partial\dot{q}^{i}}dq^{i},\] where \(S\) is the canonical vertical endomorphism \(S:TTQ\to TTQ\) extended to \(TQ\times\mathbb{R}\), that is, in local \(TQ\times\mathbb{R}\) bundle coordinates, \[S=dq^{i}\otimes\frac{\partial}{\partial\dot{q}^{i}}. \tag{14}\] The energy of the system is defined by \[E_{L}=\Delta(L)-L=\dot{q}^{i}\frac{\partial L}{\partial\dot{q}^{i}}-L, \tag{15}\] where \(\Delta\) is the Liouville vector field on \(TQ\) extended to \(TQ\times\mathbb{R}\) in the natural way. The Reeb vector field of \(\eta_{L}\), which we will denote by \(\mathcal{R}_{L}\), is locally given by \[\mathcal{R}_{L}=\frac{\partial}{\partial z}-(W^{ij})\frac{\partial^{2}L}{ \partial\dot{q}^{i}\partial z}\frac{\partial}{\partial\dot{q}^{j}}, \tag{16}\] where \((W^{ij})\) is the inverse of the Hessian matrix with respect to the velocities \((W_{ij})\). The Hamiltonian vector field of the energy \(E_{L}\) will be denoted \(\xi_{L}=X_{E_{L}}\); it is a second order differential equation (SODE), that is, \(S(\xi_{L})=\Delta\), and its solutions are just the ones of the Herglotz equations for \(L\) (see Leon and Lainz Valcazar 2019a; Leon and Lainz Valcazar 2019b): \[\frac{d}{dt}\frac{\partial L}{\partial\dot{q}^{i}} =\frac{\partial L}{\partial q^{i}}+\frac{\partial L}{\partial\dot {q}^{i}}\frac{\partial L}{\partial z} \tag{17}\] \[\dot{z} =L.\] Now, we will recall the definition of the _Legendre transformation_ for contact Lagrangian systems. Given the vector bundle \(TQ\times\mathbb{R}\to Q\times\mathbb{R}\), one can consider the fiber derivative \(\mathbb{F}L\) of \(L:TQ\times\mathbb{R}\to\mathbb{R}\), which has the following coordinate expression in natural coordinates: \[\begin{split}\mathbb{F}L:TQ\times\mathbb{R}& \to T^{*}Q\times\mathbb{R}\\ (q^{i},\dot{q}^{i},z)&\mapsto(q^{i},\frac{\partial L }{\partial\dot{q}^{i}},z).\end{split} \tag{18}\] If the Lagrangian is regular, then \(\mathbb{F}L\) is a local diffeomorphism and one can show that \(\eta_{L}\) is simply \((\mathbb{F}L)^{*}\eta_{Q}\). If \(\mathbb{F}L\) is a global contactomorphism, then we say that \(L\) is _hyperregular_. In this situation, we can define a Hamiltonian function \(H:T^{*}Q\times\mathbb{R}\to\mathbb{R}\) such that \(E_{L}=H\circ\mathbb{F}L\). So, the Lagrangian and Hamiltonian dynamics are \(\mathbb{F}L\)-related, that is, \((\mathbb{F}L)_{*}(\xi_{L})=X_{H}\circ\mathbb{F}L\) (see Leon and Lainz Valcazar 2019a; Leon and Lainz Valcazar 2019b).
## 4 Euler-Poincare-Herglotz equations for contact reduced Lagrangian systems
In this section, we will prove that a similar result to that in Section 2 holds in the contact situation, where we consider the extended space \(TG\times\mathbb{R}\) as our initial ambient phase space and we assume that the Lagrangian function is \(G\)-invariant.
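Before passing to the group setting, it is worth recording the simplest instance of the Herglotz equations (17); the following one-degree-of-freedom example is standard, and the potential \(V\) and the constant \(\gamma>0\) are illustrative choices. For \(Q=\mathbb{R}\) and \(L(q,\dot{q},z)=\frac{1}{2}\dot{q}^{2}-V(q)-\gamma z\), equations (17) read \[\ddot{q}=-V^{\prime}(q)-\gamma\dot{q},\qquad\dot{z}=L,\] a mechanical system with linear damping. Moreover, \(E_{L}=\frac{1}{2}\dot{q}^{2}+V(q)+\gamma z\), the Legendre transformation (18) is simply \(\mathbb{F}L(q,\dot{q},z)=(q,\dot{q},z)\), and the associated Hamiltonian function is \(H(q,p,z)=\frac{1}{2}p^{2}+V(q)+\gamma z\), in agreement with the contact Hamiltonian equations written above in Darboux coordinates.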
Let \(L:TG\times\mathbb{R}\to\mathbb{R}\) be a contact Lagrangian function. Again, the fact that the left translation \(\mathcal{L}_{g}:G\to G\) is a diffeomorphism allows us to consider a "left trivialized chart" of \(TG\times\mathbb{R}\), that is an identification of the product \(T_{g}G\times\mathbb{R}\) with the vector space \(\mathfrak{g}\times\mathbb{R}\) given precisely by the map \[TG\times\mathbb{R}\to G\times\mathfrak{g}\times\mathbb{R},\quad(g,\dot{g},z) \mapsto(g,T_{g}\mathcal{L}_{g^{-1}}(\dot{g}),z). \tag{19}\] Thus, if we introduce coordinates \((g,\xi,z)\) on \(TG\times\mathbb{R}\) given by (19), we have the following: **Theorem 4.1**.: _Let \(G\) be a Lie group and \(L:TG\times\mathbb{R}\to\mathbb{R}\) a contact Lagrangian function. If we introduce the coordinates \((g,\xi,z)\) on \(TG\times\mathbb{R}\) given by (19), then the curves \(\xi\) and \(z\) satisfy the Herglotz equations on Lie groups_ \[\begin{split}\frac{d}{dt}\frac{\delta L}{\delta\xi}& =\mathrm{ad}_{\xi}^{*}\frac{\delta L}{\delta\xi}+T_{e}^{*}\mathcal{ L}_{g}\left(\frac{\delta L}{\delta g}\right)+\frac{\delta L}{\delta\xi}\frac{ \partial L}{\partial z}\\ \dot{g}&=T_{e}\mathcal{L}_{g}\left(\xi\right)\\ \dot{z}&=L.\end{split} \tag{20}\] Proof.: Before starting the proof, let \(\tilde{L}:G\times\mathfrak{g}\times\mathbb{R}\to\mathbb{R}\) be the left trivialized Lagrangian function using the coordinates (19). So that \[\tilde{L}(g,\xi,z)=L(g,g\xi,z),\] where we are using the notation \(g\xi:=T_{e}\mathcal{L}_{g}(\xi)\). In what follows, we will find critical values of the action along a trajectory on \(TG\times\mathbb{R}\) of the form \((g(t),\dot{g}(t),z(t))\). But, under the left trivialized coordinates, this is equivalent to find critical values along trajectories on \(G\times\mathfrak{g}\times\mathbb{R}\) of the form \((g(t),\xi(t),z(t))\), where \[\xi(t):=T_{g(t)}\mathcal{L}_{g(t)^{-1}}(\dot{g}(t)):=g(t)^{-1}\cdot\dot{g}(t).\] Also, note that for any variation of a solution \(g:I\to G\) of Euler-Lagrange equations denoted by \(c:U\subset\mathbb{R}^{2}\to G\), the tangent vectors \(\dfrac{\partial c}{\partial s}\) and \(\dfrac{\partial c}{\partial t}\) may be transported to the Lie algebra \(\mathfrak{g}\) in the following way \[\begin{split}\eta:U&\to\mathfrak{g}&\xi:U \to\mathfrak{g}\\ (t,s)&\mapsto T_{c(t,s)}\mathcal{L}_{c(t,s)^{-1}} \left(\dfrac{\partial c}{\partial s}\right)&(t,s)\mapsto T_{c(t, s)}\mathcal{L}_{c(t,s)^{-1}}\left(\dfrac{\partial c}{\partial t}\right).\end{split} \tag{21}\] Moreover, it is a standard fact (see Bloch et al. 1996; Holm, Marsden, and Ratiu 1998) that these maps satisfy \[\dfrac{\partial\xi}{\partial s}-\dfrac{\partial\eta}{\partial t}=[\xi,\eta]. \tag{22}\] Conversely, if there are 2-parameter family of curves \(\xi\) and \(\eta\) on the Lie algebra satisfying (22), there exists a smooth function \(c:U\subset\mathbb{R}^{2}\to G\) such that \(\xi\) and \(\eta\) are given by equations (21). We will often commit a slight abuse of notation with the letters \(\xi\) and \(\eta\), in the sense that they represent both a curve and a variation of curves in \(\mathfrak{g}\). The first will be denoted by \(\xi(t)\) and the second by \(\xi(t,s)\), respectively \(\eta(t)\) and \(\eta(t,s)\). 
The relation between them is that \[\xi(t)=\xi(t,0)=g^{-1}\cdot\dot{g}\quad\text{and}\quad\eta(t)=\eta(t,0):=g^{-1}\cdot\delta g. \tag{23}\] Now, to see that equations (20) are equivalent to the Herglotz equations on the contact extended space \(TG\times\mathbb{R}\), let us show that their solutions satisfy the Herglotz principle. Let the action over a curve be given by \[\mathcal{A}(g(\cdot),z_{0})=\int_{0}^{h}L(g(t),\dot{g}(t),z(t))\ dt,\] where the curve \(z(t)\) satisfies the equation \[\dot{z}=L(g(t),\dot{g}(t),z(t)),\quad z(0)=z_{0}.\] The action over a variation \(c\) of the curve \(g\) may be written as \[\mathcal{A}(c(\cdot,s),z_{0})=\int_{0}^{h}L\left(c(t,s),\frac{\partial c}{ \partial t}(t,s),z(t,s)\right)\ dt,\] which in the coordinates (19) is just \[\mathcal{A}(c(\cdot,s),z_{0})=\int_{0}^{h}\tilde{L}\left(c(t,s),\xi(t,s),z(t,s )\right)\ dt,\] where \(\xi\) is defined as in (21) and the variation \(z(t,s)\) satisfies the equation \[\dot{z}(t,s)=\tilde{L}(c(t,s),\xi(t,s),z(t,s)),\quad z(0,s)=z_{0}.\] Taking variations \(\xi(t,s)\) and \(\eta(t,s)\) as in (21) and (22), the first variation of the action functional gives \[\frac{d}{ds}\bigg{|}_{s=0}\int_{0}^{h}\tilde{L}\left(c(t,s),\xi(t,s),z(t,s)\right)\ dt=\int_{0}^{h}\left[\frac{\partial\tilde{L}}{\partial g}\delta g+\frac{\partial\tilde{L}}{\partial\xi}\delta\xi+\frac{\partial\tilde{L}}{\partial z}\delta z\right]\ dt=\delta z(h).\] Observe now that the infinitesimal variation of the curve \(z\), that is, \(\delta z\), must satisfy the inhomogeneous linear differential equation \[\frac{\partial\delta z}{\partial t}(t)=\frac{\partial\tilde{L}}{\partial g} \delta g+\frac{\partial\tilde{L}}{\partial\xi}\delta\xi+\frac{\partial\tilde{ L}}{\partial z}\delta z.\] This is a first order ordinary differential equation whose solution is \[\delta z(\tau)=e^{\int_{0}^{\tau}\frac{\partial\tilde{L}}{\partial z}ds}\left( \int_{0}^{\tau}e^{-\int_{0}^{t}\frac{\partial\tilde{L}}{\partial z}ds}\left[ \frac{\partial\tilde{L}}{\partial g}\delta g+\frac{\partial\tilde{L}}{ \partial\xi}\delta\xi\right]\ dt\right),\] since \(\delta z(0)\) vanishes. Now, observe that we have seen above that the first variation of the action is equal to \(\delta z(h)\). Hence, \[\frac{d}{ds}\bigg{|}_{s=0}\mathcal{A}(c(\cdot,s),z_{0})=e^{\int_{0}^{h}\frac {\partial\tilde{L}}{\partial z}ds}\left(\int_{0}^{h}e^{-\int_{0}^{t}\frac{ \partial\tilde{L}}{\partial z}ds}\left[\frac{\partial\tilde{L}}{\partial g} \delta g+\frac{\partial\tilde{L}}{\partial\xi}\delta\xi\right]\ dt\right),\] where from (22) we have that \(\delta\xi=\dot{\eta}+[\xi,\eta]\) and from (23) we have that \(\delta g=g\cdot\eta\).
Defining the auxiliary function \[\sigma(t)=e^{-\int_{0}^{t}\frac{\partial\tilde{L}}{\partial z}ds},\] we obtain \[\left.\frac{d}{ds}\right|_{s=0}\mathcal{A}(c(\cdot,s),z_{0})=\frac{1}{\sigma( h)}\left(\int_{0}^{h}\sigma(t)\left[\frac{\partial\tilde{L}}{\partial\xi}( \dot{\eta}+\mathrm{ad}_{\xi}\eta)+\frac{\partial\tilde{L}}{\partial g}g\cdot \eta\right]\ dt\right).\] Integrating by parts, in order to get rid of time derivatives of \(\eta\), and noting that the boundary terms vanish since \(\eta\) is zero at the endpoints we conclude that the action reduces to the expression below \[\frac{1}{\sigma(h)}\left(\int_{0}^{h}\left[\sigma(t)\mathrm{ad}_{\xi}^{*} \frac{\partial\tilde{L}}{\partial\xi}-\frac{d}{dt}\left(\sigma(t)\frac{ \partial\tilde{L}}{\partial\xi}\right)+\sigma(t)T_{e}^{*}\mathcal{L}_{g} \left(\frac{\partial\tilde{L}}{\partial g}\right)\right]\eta\ dt\right).\] Since \(\eta\) is arbitrary and due to the fundamental theorem of calculus of variations, the curve \((g(t),\xi(t),z(t))\) is a critical point of the action if and only if it satisfies the equation \[\sigma(t)\mathrm{ad}_{\xi}^{*}\frac{\partial\tilde{L}}{\partial\xi}-\sigma(t )\frac{d}{dt}\left(\frac{\partial\tilde{L}}{\partial\xi}\right)-\dot{\sigma} \frac{\partial\tilde{L}}{\partial\xi}+\sigma(t)T_{e}^{*}\mathcal{L}_{g}\left( \frac{\partial\tilde{L}}{\partial g}\right)=0,\] which finishes the proof since this is equivalent to \[\sigma(t)\left[\mathrm{ad}_{\xi}^{*}\frac{\partial\tilde{L}}{\partial\xi}- \frac{d}{dt}\left(\frac{\partial\tilde{L}}{\partial\xi}\right)+\frac{\partial \tilde{L}}{\partial z}\frac{\partial\tilde{L}}{\partial\xi}+T_{e}^{*} \mathcal{L}_{g}\left(\frac{\partial\tilde{L}}{\partial g}\right)\right]=0\] and the auxiliary function \(\sigma\) is nowhere zero. The last two equations in (20) follow by construction. Next, let us define first the _lifted action of \(G\) on \(TG\times\mathbb{R}\)_ whose translation by \(g\in G\) is the map \(\hat{\mathcal{L}}_{g}:TG\times\mathbb{R}\to TG\times\mathbb{R}\) given by \[\hat{\mathcal{L}}_{g}(v_{h},z)=(T_{h}\mathcal{L}_{g}(v_{h}),z). \tag{24}\] Suppose that \(L:TG\times\mathbb{R}\rightarrow\mathbb{R}\) is a _left invariant_ contact Lagrangian function, thus satisfying \(L\circ\hat{\mathcal{L}}_{g}=L\). We define the _reduced contact Lagrangian_ to be the function \(l:\mathfrak{g}\times\mathbb{R}\rightarrow\mathbb{R}\) which is the restriction of the contact Lagrangian function \(L\) to the space \(T_{e}G\times\mathbb{R}\). Observe that in this case \[l(\xi,z)=L(e,\xi,z)=L(g,g\cdot\xi,z)\] by the left-invariant property. The results of the previous theorem are further simplified using this additional assumption and we obtain in consequence the _Euler-Poincare-Herglotz_ equations on the product space \(\mathfrak{g}\times\mathbb{R}\). **Theorem 4.2**.: _Let \(G\) be a Lie group and \(L:TG\times\mathbb{R}\to\mathbb{R}\) a left-invariant contact Lagrangian function. If \(l:\mathfrak{g}\times\mathbb{R}\to\mathbb{R}\) is the corresponding reduced Lagrangian, \(g:I\to G\) is a curve on the Lie group and \(\xi:I\to\mathfrak{g}\) a curve on the Lie algebra, then the following are equivalent:_ 1. _The curves_ \(g\) _and_ \(z:I\to\mathbb{R}\) _satisfy Herglotz equations for_ \(L\)_;_ 2. _The Herglotz principle_ \[\delta\int_{0}^{h}L(g(t),\dot{g}(t),z(t))\ dt=0,\quad\dot{z}=L(g(t),\dot{g}(t),z(t))\] _holds for variations of_ \(g\) _with fixed endpoints;_ 3. 
_The curves_ \(\xi(t)=T_{g(t)}\mathcal{L}_{g^{-1}(t)}(\dot{g}(t))\) _and_ \(z\) _satisfy the Euler-Poincare-Herglotz equations_ \[\frac{d}{dt}\frac{\delta l}{\delta\xi}=\mathrm{ad}_{\xi}^{*}\frac{\delta l}{ \delta\xi}+\frac{\delta l}{\delta\xi}\frac{\partial l}{\partial z}\quad\text {and}\quad\dot{z}=l. \tag{25}\] 4. _The reduced Herglotz variational principle_ \[\delta\int_{0}^{h}l(\xi(t),z(t))\ dt=0,\quad\dot{z}=l(\xi(t),z(t)) \tag{26}\] _holds using variations of the form_ \(\delta\xi=\dot{\eta}+[\xi,\eta]\)_, where_ \(\eta\) _is a curve on_ \(\mathfrak{g}\) _vanishing at the endpoints._ Proof.: The equivalence of items 1. and 2. is general for all contact manifolds, so in particular for \(TG\times\mathbb{R}\) (see Anahory Simoes et al. 2021). Next, we prove the equivalence of both variational principles, that is, items 2. and 4. Observe that \(l:\mathfrak{g}\times\mathbb{R}\to\mathbb{R}\) determines uniquely the function \(L:TG\times\mathbb{R}\to\mathbb{R}\) by left-translations and vice versa. Thus one only needs to show that variations \((\delta g,\delta z)\in TG\times\mathbb{R}\) with \(\delta g\) having fixed endpoints induce and are induced by variations \((\delta\xi,\delta z)\) with \(\delta\xi\) of the form \(\delta\xi=\dot{\eta}+[\xi,\eta]\), where \(\eta(t)\) vanishes at the endpoints. But this is the content of the proof of Theorem 4.1. To conclude, we show the equivalence between 3. and 4. Indeed, by using the definition of the auxiliary function \(\sigma(t)\), the calculations in the proof of Theorem 4.1 and integration by parts, we deduce that the reduced Herglotz variational principle holds if and only if \[\sigma(t)\left[\mathrm{ad}_{\xi}^{*}\frac{\delta l}{\delta\xi}-\frac{d}{dt} \left(\frac{\delta l}{\delta\xi}\right)+\frac{\partial l}{\partial z}\frac{ \delta l}{\delta\xi}\right]=0. \tag{27}\] So, since the auxiliary function \(\sigma\) is nowhere zero, the result follows. **Remark 4.3**.: If we have determined a trajectory \((\xi(t),z(t))\) in \(\mathfrak{g}\times\mathbb{R}\) solving the Euler-Poincare-Herglotz equations, we may obtain the corresponding trajectory on the original phase space \(TG\times\mathbb{R}\) through the _reconstruction_ procedure, which amounts to solving the reconstruction equation \[\dot{g}=g\cdot\xi.\] **Example 4.4**.: Consider now the matrix Lie group \(SO(3)\), and a Lagrangian function \(L:TSO(3)\times\mathbb{R}\to\mathbb{R}\) of the form \[L(R,\dot{R},z)=\frac{1}{2}\langle\langle\dot{R},\dot{R}\rangle\rangle-\gamma z,\] with \[\langle\langle\dot{R},\dot{R}\rangle\rangle=\int_{B}\rho(X)\|\dot{R}X\|^{2}\ d^{3}X,\] where \(B\) is the subset of \(\mathbb{R}^{3}\) occupied by some rigid body. In this case, the left multiplication map is defined by \(\mathcal{L}_{R_{1}}:SO(3)\to SO(3)\), \[\mathcal{L}_{R_{1}}(R)=R_{1}R,\] and so its tangent map is just \[T_{R}\mathcal{L}_{R_{1}}(Y)=R_{1}Y,\quad Y\in T_{R}SO(3).\] Moreover, the Lie algebra of \(SO(3)\), denoted by \(\mathfrak{so}(3)\), is composed of \(3\times 3\) skew-symmetric matrices and the adjoint map is given by the matrix commutator, i.e., \[\mathrm{ad}_{\xi}\eta=\xi\eta-\eta\xi,\quad\xi,\eta\in\mathfrak{so}(3).\] Now, the expression of \(L\) on left trivialized coordinates is well-known to be \[L(R,\xi,z)=\frac{1}{2}\langle\xi,\xi\rangle-\gamma z,\] where the inner product on \(\mathfrak{so}(3)\) is given by \[\langle\xi_{1},\xi_{2}\rangle=\frac{1}{2}\mathrm{tr}\ (\xi_{1}^{T}\mathbb{I} \xi_{2}),\] with \(\mathbb{I}\) the inertia tensor.
Many of the computations are easily carried out using the hat map \(\hat{(\cdot)}:\mathbb{R}^{3}\to\mathfrak{so}(3)\) defined by \[(\omega_{1},\omega_{2},\omega_{3})\mapsto\begin{pmatrix}0&-\omega_{3}&\omega_{2}\\ \omega_{3}&0&-\omega_{1}\\ -\omega_{2}&\omega_{1}&0\end{pmatrix},\] which is a Lie algebra isomorphism. Under these definitions the Lagrangian function is written as \[L(R,\xi,z)=\frac{1}{2}\xi^{T}\mathbb{I}\xi-\gamma z\] and the equations (20) give \[\mathbb{I}\dot{\xi} =-\xi\times\mathbb{I}\xi-\gamma\mathbb{I}\xi\] \[\dot{R} =R\hat{\xi}\] \[\dot{z} =L.\] \(\diamond\)
## 5 Lie-Poisson-Jacobi reduction
In this section, we will introduce a Jacobi structure on the space \(\mathfrak{g}^{*}\times\mathbb{R}\) and we will see that for a regular \(G\)-invariant Lagrangian function the reduced dynamics is Hamiltonian with respect to this Jacobi structure.
### Contact dynamics on a Lie group
Now, assume that the configuration space \(Q\) is a Lie group \(G\) and that the regular Lagrangian function \(L:TG\times\mathbb{R}\to\mathbb{R}\) is \(G\)-invariant. Then, similarly to what happens between the contact manifold \(TG\times\mathbb{R}\) and the canonical contact manifold \(T^{*}G\times\mathbb{R}\), we may also identify \(\mathfrak{g}\times\mathbb{R}\) and \(\mathfrak{g}^{*}\times\mathbb{R}\) through the _reduced Legendre transformation_ \(\mathbb{F}l:\mathfrak{g}\times\mathbb{R}\to\mathfrak{g}^{*}\times\mathbb{R}\) given by \[\mathbb{F}l(\xi,z)=\left(\frac{\delta l}{\delta\xi},z\right).\] Note that \[\mathbb{F}L(e,\xi,z)=(e,\mathbb{F}l(\xi,z)),\] that is, the reduced Legendre transform is essentially the restriction of the Legendre transform to \(\mathfrak{g}\times\mathbb{R}\). Thus, requiring \(L\) to be regular is sufficient to make \(\mathbb{F}l\) a local diffeomorphism. **Proposition 5.1**.: _If \(L\) is \(G\)-invariant, the Legendre transform is \(G\)-equivariant, that is,_ \[\mathbb{F}L(g\cdot v,z)=g\cdot\mathbb{F}L(v,z),\quad(v,z)\in TG\times\mathbb{R}.\] Suppose that the Legendre transformation is a diffeomorphism and consider the function \(h:\mathfrak{g}^{*}\times\mathbb{R}\to\mathbb{R}\) given by \[h(\mu,z)=\langle\mu,\xi\rangle-l(\xi,z),\] where \(\mu=\dfrac{\delta l}{\delta\xi}\). Then it is easy to see that \[\dfrac{\delta h}{\delta\mu}=\xi.\] By inserting these equalities into the Euler-Poincare-Herglotz equations we get the following result: **Proposition 5.2**.: _Given a regular reduced Lagrangian function \(l\), if \((\xi(t),z(t))\) is a solution of the Euler-Poincare-Herglotz equations and \(\mu(t)=\frac{\delta l}{\delta\xi}(\xi(t),z(t))\) then the curve \(t\mapsto(\mu(t),z(t))\) satisfies_ \[\dot{\mu} =\text{ad}^{*}_{\frac{\delta h}{\delta\mu}}\mu-\mu\dfrac{\partial h }{\partial z}\] \[\dot{z} =\langle\mu,\dfrac{\delta h}{\delta\mu}\rangle-h(\mu,z).\] Proof.: By differentiating the curve \(t\mapsto\mu(t)\) we get \[\dot{\mu}=\dfrac{d}{dt}\dfrac{\delta l}{\delta\xi}=\mathrm{ad}^{*}_{\xi}\dfrac{\delta l }{\delta\xi}+\dfrac{\delta l}{\delta\xi}\dfrac{\partial l}{\partial z},\] where we used the Euler-Poincare-Herglotz equations in the last equality. Using the equalities preceding the proposition statement, we deduce \[\dot{\mu}=\text{ad}^{*}_{\frac{\delta h}{\delta\mu}}\mu-\mu\dfrac{\partial h}{ \partial z}.\] The equation for \(\dot{z}\) follows from the definition of \(h\). These equations will be called _Lie-Poisson-Jacobi equations_. In the next subsections, we will show that this is a Hamiltonian system with respect to a Jacobi structure on \(\mathfrak{g}^{*}\times\mathbb{R}\).
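Proposition 5.2 determines the reduced flow once a concrete Hamiltonian is chosen. As a quick numerical sanity check of the rigid-body case, the following minimal Python sketch integrates the equations \(\dot{\mu}=\mu\times\mathbb{J}\mu-\gamma\mu\), \(\dot{z}=\frac{1}{2}\mu^{T}\mathbb{J}\mu-\gamma z\) that appear in Examples 5.3 and 5.11 below, using an explicit Runge-Kutta step. The inertia values, the dissipation constant `gamma`, the initial data and the step size are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative data: diagonal inertia tensor and dissipation constant (assumed values).
inertia = np.diag([1.0, 2.0, 3.0])
J = np.linalg.inv(inertia)   # J = I^{-1}, as in Example 5.3
gamma = 0.1

def vector_field(mu, z):
    """Lie-Poisson-Jacobi equations for the contact rigid body (Examples 5.3 and 5.11)."""
    mu_dot = np.cross(mu, J @ mu) - gamma * mu
    z_dot = 0.5 * mu @ (J @ mu) - gamma * z
    return mu_dot, z_dot

def rk4_step(mu, z, dt):
    """One classical fourth-order Runge-Kutta step for the pair (mu, z)."""
    k1m, k1z = vector_field(mu, z)
    k2m, k2z = vector_field(mu + 0.5 * dt * k1m, z + 0.5 * dt * k1z)
    k3m, k3z = vector_field(mu + 0.5 * dt * k2m, z + 0.5 * dt * k2z)
    k4m, k4z = vector_field(mu + dt * k3m, z + dt * k3z)
    mu_new = mu + dt / 6.0 * (k1m + 2 * k2m + 2 * k3m + k4m)
    z_new = z + dt / 6.0 * (k1z + 2 * k2z + 2 * k3z + k4z)
    return mu_new, z_new

mu, z = np.array([1.0, 0.5, -0.3]), 0.0   # illustrative initial condition
dt, steps = 0.01, 2000
for _ in range(steps):
    mu, z = rk4_step(mu, z, dt)

# With gamma > 0 the norm of mu decays exponentially; with gamma = 0 it is conserved.
print("final |mu| =", np.linalg.norm(mu), "final z =", z)
```

Indeed, \(\frac{d}{dt}\|\mu\|^{2}=2\mu\cdot\dot{\mu}=-2\gamma\|\mu\|^{2}\), so with \(\gamma=0\) the sketch recovers the conservative Euler equations and \(\|\mu\|\) is preserved up to the integration error, while with \(\gamma>0\) the body angular momentum decays, reflecting the dissipative term.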
**Example 5.3**.: Let us consider again the matrix Lie group \(SO(3)\) and consider the hyperregular \(G\)-invariant Lagrangian function \[L(R,\dot{R},z)=\frac{1}{2}\langle\langle\dot{R},\dot{R}\rangle\rangle-\gamma z,\] so that we might consider the reduced Lagrangian which is the restriction of \(L\) to \(\mathfrak{g}\times\mathbb{R}\) \[l(\xi,z)=\frac{1}{2}\xi^{T}\mathbb{I}\xi-\gamma z.\] Under all the assumptions taken in Example 4.4, the Legendre transform is given by the map \[\mathbb{F}l(\xi,z)=(\mathbb{I}\xi,z).\] Suppose that \(\mathbb{I}\) is a positive definite symmetric matrix, so that \(\mathbb{F}l\) is a diffeomorphism and we might define the Hamiltonian function \[h(\mu,z)=\mu^{T}\mathbb{I}^{-1}\mu-l(\mathbb{I}^{-1}\mu,z)=\mu^{T}\mathbb{I}^ {-1}\mu-\frac{1}{2}(\mathbb{I}^{-1}\mu)^{T}\mu+\gamma z,\] and letting \(\mathbb{J}=(2\mathbb{I}^{-1}-(\mathbb{I}^{-1})^{T})=\mathbb{I}^{-1}\) we get \[h(\mu,z)=\frac{1}{2}\mu^{T}\mathbb{J}\mu+\gamma z.\] The Lie-Poisson-Jacobi equations are then given by \[\dot{\mu}=\mathrm{ad}_{\frac{\delta h}{\delta\mu}}^{*}\mu-\gamma\mu\] \[\dot{z}=\frac{1}{2}\mu^{T}\mathbb{J}\mu-\gamma z.\] \(\diamond\) ### Lie-Poisson-Jacobi bracket Let us recall first the definition of a Jacobi structure (see Kirillov 1976 and Lichnerowicz 1978). **Definition 5.4**.: (Jacobi structure) A _Jacobi structure_ on a manifold \(M\) is a tuple \((\Lambda,E)\), where \(\Lambda\) is a bi-vector field and \(E\) is a vector field, satisfying the following equations \[[\Lambda,\Lambda]=2E\wedge\Lambda,\quad[E,\Lambda]=0,\] with \([\cdot,\cdot]\) the Schouten-Nijenhuis bracket. **Definition 5.5**.: (Jacobi bracket) A Jacobi bracket on a manifold \(M\) is a bilinear, skew-symmetric map \(\{\cdot,\cdot\}:C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)\) satisfying the Jacobi identity and the following weak Leibniz rule \[\operatorname{supp}(\{f,g\})\subseteq\operatorname{supp}(f)\cap\operatorname{ supp}(g).\] A _Jacobi manifold_ is a manifold possessing either a Jacobi structure or a Jacobi bracket since these two definitions are equivalent (see Kirillov 1976, Lichnerowicz 1978, Libermann and Marle 1987, Ibanez et al. 1997). However, it is much more convenient to introduce a Jacobi structure for practical purposes. Now, from the Jacobi structure we can define an associated Jacobi bracket as follows: \[\{f,g\}=\Lambda(df,dg)+fE(g)-gE(f),\quad f,g\in C^{\infty}(M,\mathbb{R})\] In this case, the weak Leibniz rule is equivalent to the generalized Leibniz rule \[\{f,gh\}=g\{f,h\}+h\{f,g\}+ghE(f), \tag{28}\] In this sense, this bracket generalizes the well-known Poisson brackets. Indeed, a Poisson manifold is a particular case of Jacobi manifold in which \(E=0\). Given a Jacobi manifold \((M,\Lambda,E)\), we consider the map \(\sharp_{\Lambda}:\Omega^{1}(M)\to\mathfrak{X}(M)\) defined by \[\sharp_{\Lambda}(\alpha)=\Lambda(\alpha,\cdot).\] We have that \(\sharp_{\Lambda}\) is a morphism of \(C^{\infty}\)-modules, though it may fail to be an isomorphism. Given a function \(f:M\to\mathbb{R}\), we define the Hamiltonian vector field \(X_{f}\) by \[X_{f}=\sharp_{\Lambda}(df)+fE.\] Contact structures are examples of Jacobi structures. Given a contact manifold \((M,\eta)\), we may associate it a natural Jacobi structure. Indeed, we define the bivector \(\Lambda\) as \[\Lambda(\alpha,\beta)=-d\eta(b^{-1}(\alpha),b^{-1}(\beta)),\qquad\alpha,\beta \in\Omega^{1}(M). \tag{29}\] So that the pair \((\Lambda,E=-\mathcal{R})\) is a Jacobi structure (see Lichnerowicz 1978, De Leon and Sardon 2017). 
In Darboux coordinates, the bivector \(\Lambda\) reads as \[\Lambda=\frac{\partial}{\partial p_{i}}\wedge\left(\frac{\partial}{\partial q ^{i}}+p_{i}\frac{\partial}{\partial z}\right). \tag{30}\] In addition, given a function \(h:M\to\mathbb{R}\), the _contact Hamiltonian vector field_ is given by \[X_{h}=\sharp_{\Lambda}(dh)-h\mathcal{R}\] and along its integral curves the following equation is satisfied with respect to the associated Jacobi bracket \[\dot{f}=\{h,f\}-f\frac{\partial h}{\partial z},\quad\forall f\in C^{\infty}(M). \tag{31}\] The preceding equation implies in particular that the Hamiltonian function is not conserved along its integral curves since \[\dot{h}=-h\frac{\partial h}{\partial z}.\] In Darboux coordinates, the bracket is given by \[\{f,g\}=\frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}- \frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}-\frac{ \partial f}{\partial z}\left(p_{i}\frac{\partial g}{\partial p_{i}}-g\right)+ \frac{\partial g}{\partial z}\left(p_{i}\frac{\partial f}{\partial p_{i}}-f\right)\] and the Hamiltonian vector field is given in canonical coordinates by: \[X_{h}=\frac{\partial h}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left( \frac{\partial h}{\partial q^{i}}+p_{i}\frac{\partial h}{\partial z}\right) \frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial h}{\partial p_{i}} -h\right)\frac{\partial}{\partial z}\] So, given a manifold \(Q\), the contact manifold \((T^{*}Q\times\mathbb{R},\eta_{Q})\), where \(\eta_{Q}\) is the contact form defined in (9), has a canonical Jacobi structure and an associated Jacobi bracket defined as above, which we will denote by \(\{\cdot,\cdot\}_{can}\) from now on. Let us now introduce a Jacobi structure on \(\mathfrak{g}^{*}\times\mathbb{R}\). **Proposition 5.6**.: _The structure given by_ \[\Lambda(\mu,z)(df,dg)=\left\langle\mu,\left[\frac{\delta f}{\delta\mu},\frac{ \delta g}{\delta\mu}\right]\right\rangle+\left\langle\mu,\frac{\delta f}{ \delta\mu}\right\rangle\frac{\partial g}{\partial z}-\left\langle\mu,\frac{ \delta g}{\delta\mu}\right\rangle\frac{\partial f}{\partial z} \tag{32}\] _and \(E=-\mathcal{R}=-\frac{\partial}{\partial z}\) is a Jacobi structure on \(\mathfrak{g}^{*}\times\mathbb{R}\) and induces a Jacobi bracket given by_ \[\{f,g\}(\mu,z)=\Lambda(\mu)(df,dg)-f\frac{\partial g}{\partial z}+g\frac{ \partial f}{\partial z}. \tag{33}\] Proof.: We just have to prove that \(\Lambda\) is a Jacobi structure, since then the map \(\{\cdot,\cdot\}\) would be the associated Jacobi bracket. 
Using the fact that \[\Lambda=\operatorname{pr}_{1}^{*}\Lambda_{0}+\Delta\wedge\mathcal{R},\] where \(\Lambda_{0}\) is the Lie-Poisson structure on \(\mathfrak{g}^{*}\) defined by \[\Lambda_{0}(\mu)(df,dg)=\left\langle\mu,\left[\frac{\delta f}{\delta\mu},\frac {\delta g}{\delta\mu}\right]\right\rangle,\quad f,g\in C^{\infty}(\mathfrak{g} ^{*}),\] \(\operatorname{pr}_{1}:\mathfrak{g}^{*}\times\mathbb{R}\to\mathfrak{g}^{*}\) is the projection onto the first factor, \(\Delta\) is the vector field on \(\mathfrak{g}^{*}\times\mathbb{R}\) defined by \[\Delta(\mu,z)=\left.\frac{d}{dt}\right|_{t=0}(\mu+t\mu,z)\] and \(\mathcal{R}=\frac{\partial}{\partial z}\), we may deduce after some computations involving the Schouten-Nijenhuis bracket and interior products that (see Marle 1997) \[[\Lambda,\Lambda]=2[\Lambda_{0},\Delta\wedge\mathcal{R}]=2\Lambda_{0}\wedge \mathcal{R},\] where we used that \([\Delta\wedge\mathcal{R},\Lambda_{0}]=[\Lambda_{0},\Delta\wedge\mathcal{R}]\) (since \(\Lambda_{0}\) and \(\Delta\wedge\mathcal{R}\) are both \((2,0)\)-tensors) and the fact that \([\Lambda_{0},\Delta]=\Lambda_{0}\). The previous equality is equivalent to \[[\Lambda,\Lambda]=2\Lambda\wedge\mathcal{R},\] using linearity and skew-symmetry of the wedge product. In addition, we also have that \([\mathcal{R},\Lambda]\) vanishes: \[[\mathcal{R},\Lambda]=[\mathcal{R},\Lambda_{0}]+[\mathcal{R},\Delta\wedge \mathcal{R}].\] the first term vanishes since \(\Lambda_{0}\) is pulled-back from \(\mathfrak{g}^{*}\) and so \([\mathcal{R},\Lambda_{0}]=\mathcal{L}_{\mathcal{R}}\Lambda_{0}=0\). The second term also vanishes since \([\mathcal{R},\Delta\wedge\mathcal{R}]=[\mathcal{R},\Delta]\wedge\mathcal{R}+ \Delta\wedge[\mathcal{R},\mathcal{R}]=[\mathcal{R},\Delta]\wedge\mathcal{R}\) and the Lie bracket of \([\mathcal{R},\Delta]\) is zero. Hence, \((\Lambda,E=-\mathcal{R})\) is indeed a Jacobi structure. **Remark 5.7**.: The Jacobi structure (\(\Lambda=\operatorname{pr}_{1}^{*}\Lambda_{0}+\Delta\wedge R\), \(R=-\partial/\partial z\)) on \(\mathfrak{g}^{*}\times\mathbb{R}\) is linear on the vector space \(\mathfrak{g}^{*}\times\mathbb{R}\to\mathbb{R}\). In fact, it is a particular case of a more general class of linear Jacobi structures on vector bundles which were previously considered in Iglesias and Marrero 2000 (see also Iglesias and Marrero 2001; Iglesias 2003). **Definition 5.8**.: The Jacobi structure and bracket defined in the previous Proposition are called the _Lie-Poisson-Jacobi structure_ and bracket on \(\mathfrak{g}^{*}\times\mathbb{R}\), respectively. In the next proposition, we will define the Hamiltonian vector fields associated with this Jacobi structure and we will see what is the expression of the corresponding Hamilton equations. **Proposition 5.9**.: _Let \((\Lambda,R)\) be the Jacobi structure on \(\mathfrak{g}^{*}\times\mathbb{R}\) defined above. 
If \(h:\mathfrak{g}^{*}\times\mathbb{R}\to\mathbb{R}\) is the reduced Hamiltonian function, the integral curves of the Hamiltonian vector field \(X_{h}\) satisfy the Lie-Poisson-Jacobi equations_ \[\dot{\mu} =\mathrm{ad}^{*}_{\frac{\delta h}{\delta\mu}}\mu-\mu\frac{\partial h }{\partial z}\] \[\dot{z} =\langle\mu,\frac{\delta h}{\delta\mu}\rangle-h(\mu,z).\] Proof.: Note that if \((\mu(t),z(t))\) is an integral curve of the contact Hamiltonian vector field \(X_{h}\), then we have that for every function \(f\) \[\dot{f}=\langle df(\mu,z),(\dot{\mu},\dot{z})\rangle=\left\langle\dot{\mu}, \frac{\delta f}{\delta\mu}\right\rangle+\frac{\partial f}{\partial z}\dot{z}.\] Also, using the definition of the Jacobi bracket we have that \[\{h,f\}(\mu,z)=\left\langle\mu,\left[\frac{\delta h}{\delta\mu},\frac{\delta f }{\delta\mu}\right]\right\rangle+\left(\left\langle\mu,\frac{\delta h}{ \delta\mu}\right\rangle-h\right)\frac{\partial f}{\partial z}-\left(\left \langle\mu,\frac{\delta f}{\delta\mu}\right\rangle-f\right)\frac{\partial h}{ \partial z}.\] Thus using (31) we deduce that \[\dot{f}=\{h,f\}(\mu,z)-f\frac{\partial h}{\partial z}=\left\langle\mathrm{ad} ^{*}_{\frac{\delta h}{\delta\mu}}\mu-\mu\frac{\partial h}{\partial z},\frac{ \delta f}{\delta\mu}\right\rangle+\left(\left\langle\mu,\frac{\delta h}{ \delta\mu}\right\rangle-h\right)\frac{\partial f}{\partial z}.\] Since the function \(f\) is arbitrary the result follows. The following result relates the new bracket on the reduced product space \(\mathfrak{g}^{*}\times\mathbb{R}\) with the canonical Jacobi bracket on \(T^{*}G\times\mathbb{R}\). Recall how the left action of \(G\) on itself lifts to the cotangent space \(T^{*}G\). The cotangent left action is given by the map \[\phi_{g}:T^{*}G\to T^{*}G,\quad\alpha_{h}\in T^{*}_{h}G\mapsto T^{*}_{gh} \mathcal{L}_{g^{-1}}(\alpha_{h})\in T^{*}_{gh}G.\] **Proposition 5.10**.: _Let \(F\) and \(H\) be \(G\)-invariant functions on \(T^{*}G\times\mathbb{R}\) and let \(f\) and \(h\) be their restrictions to \(\mathfrak{g}^{*}\times\mathbb{R}\), respectively. Then_ \[\{F,H\}_{can}(\alpha,z)=\{f,h\}(T^{*}_{e}\mathcal{L}_{g}(\alpha),z),\quad \alpha\in T^{*}_{g}G,\] _where \(\{\cdot,\cdot\}_{can}\) is the canonical Jacobi bracket on \(T^{*}G\times\mathbb{R}\) and \(\{\cdot,\cdot\}\) is the Lie-Poisson-Jacobi bracket on \(\mathfrak{g}^{*}\times\mathbb{R}\)._ Proof.: The canonical Jacobi bracket on \(T^{*}G\times\mathbb{R}\) has the following form \[\{F,H\}_{can}=\{F\circ i_{z},H\circ i_{z}\}_{T^{*}G}-F\frac{\partial H}{\partial z }+H\frac{\partial F}{\partial z}, \tag{34}\] where \(\{\cdot,\cdot\}_{T^{*}G}\) is the canonical Poisson bracket on \(T^{*}G\) and \(i_{z}:T^{*}G\hookrightarrow T^{*}G\times\mathbb{R}\) is the inclusion mapping \(i_{z}(\alpha)=(\alpha,z)\). Observe that it is a standard fact that \[\{F,H\}_{T^{*}G}(\alpha)=\{f,h\}_{0}(T^{*}_{e}\mathcal{L}_{g}(\alpha)),\] where \(\{\cdot,\cdot\}_{0}\) is the Lie-Poisson bracket on \(\mathfrak{g}^{*}\) (see Marsden and Ratiu 2013, for instance). Moreover, since \(F\) and \(H\) are invariant functions, so are their derivatives with respect to the contact variable \(z\). Therefore, (34) equals \[\{F,H\}_{can}(\alpha,z)= \{f\circ i_{z},h\circ i_{z}\}_{0}(T^{*}_{e}\mathcal{L}_{g}(\alpha))\] \[-f(\alpha,z)\frac{\partial h}{\partial z}(\alpha,z)+h(\alpha,z) \frac{\partial f}{\partial z}(\alpha,z),\] which is exactly the bracket \(\{f,h\}(T^{*}_{e}\mathcal{L}_{g}(\alpha),z)\) (see Proposition 5.6).
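Before specializing to \(SO(3)\), the following elementary computation (a direct check of the definitions, included only for orientation) shows in what sense the bracket of Proposition 5.6 extends the Lie-Poisson bracket. For linear functions \(f_{\xi}(\mu,z)=\langle\mu,\xi\rangle\), with \(\xi,\eta\in\mathfrak{g}\), equation (33) gives \[\{f_{\xi},f_{\eta}\}(\mu,z)=\langle\mu,[\xi,\eta]\rangle=f_{[\xi,\eta]}(\mu,z),\qquad\{f_{\xi},z\}(\mu,z)=\langle\mu,\xi\rangle-f_{\xi}(\mu,z)=0,\] so that on functions which do not depend on \(z\) one recovers the usual Lie-Poisson bracket, consistently with the linearity observed in Remark 5.7.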
**Example 5.11**.: The reduced space of \(T^{*}SO(3)\times\mathbb{R}\) is \(\mathfrak{so}(3)^{*}\times\mathbb{R}\), which we can identify with \(\mathbb{R}^{4}\) under the suitable Lie algebra isomorphism. The corresponding Lie-Poisson-Jacobi bracket is \[\{h,f\}(\mu,z) =\mu\cdot\left(\frac{\delta h}{\delta\mu}\times\frac{\delta f}{ \delta\mu}\right)+\left(\mu\cdot\frac{\delta h}{\delta\mu}-h\right)\frac{ \partial f}{\partial z}-\left(\mu\cdot\frac{\delta f}{\delta\mu}-f\right) \frac{\partial h}{\partial z}\] \[=\frac{\delta f}{\delta\mu}\cdot\left(\mu\times\frac{\delta h}{ \delta\mu}-\mu\frac{\partial h}{\partial z}\right)+\left(\mu\cdot\frac{ \delta h}{\delta\mu}-h\right)\frac{\partial f}{\partial z}+f\frac{\partial h} {\partial z}\] Given the Hamiltonian function \(h(\mu,z)=\frac{1}{2}\mu^{T}\mathbb{J}\mu+\gamma z\), the corresponding Lie-Poisson-Jacobi equations are then \[\dot{\mu} =\mu\times\mathbb{J}\mu-\gamma\mu\] \[\dot{z} =\frac{1}{2}\mu^{T}\mathbb{J}\mu-\gamma z.\] ### Lie-Poisson-Jacobi reduction theorem The objective of this section is two-fold: we are going to prove that the standard momentum map of the Lie group \(T^{*}G\) induces a Jacobi map from \(T^{*}G\times\mathbb{R}\) to \(\mathfrak{g}^{*}\times\mathbb{R}\). Then we prove that Jacobi maps allow to perform a reduction and to find a Jacobi structure on \(\mathfrak{g}^{*}\times\mathbb{R}\). If we start with the canonical Jacobi structure then the reduced structure will be precisely the Lie-Poisson-Jacobi structure we defined in the last section. Moreover, we will prove that Jacobi maps project Hamiltonian vector fields onto Hamiltonian vector fields. This is the Jacobi version of the Poisson reduction Theorem given in Marsden and Ratiu 2013. It is well-known (see Abraham and Marsden 1978, Marsden and Ratiu 2013) that the map \(J:T^{*}G\to\mathfrak{g}^{*}\) defined by \[\langle J(\alpha_{g}),\xi\rangle=\langle T^{*}_{e}\mathcal{L}_{g}(\alpha_{g}),\xi\rangle,\quad\alpha_{g}\in T^{*}_{g}G,\ \xi\in\mathfrak{g}, \tag{35}\] is a momentum map on the cotangent bundle of the Lie group. Moreover it is a Poisson map with respect to the standard Poisson structure and Lie-Poisson structure on \(T^{*}G\) and \(\mathfrak{g}^{*}\), respectively. **Definition 5.12**.: Given two Jacobi manifolds \((M_{1},\Lambda_{1},E_{1})\) and \((M_{2},\Lambda_{2},E_{2})\), with Jacobi brackets \(\{\cdot,\cdot\}_{1}\) and \(\{\cdot,\cdot\}_{2}\), respectively, and a smooth map \(\varphi:M_{1}\to M_{2}\), then the map \(\varphi\) is said to be a _Jacobi map_ if for every \(f\) and \(h\) in \(C^{\infty}(M_{2})\) we have that \[\{f\circ\varphi,h\circ\varphi\}_{1}=\{f,h\}_{2}\circ\varphi.\] The following theorem establishes that in the presence of a Jacobi action, the Jacobi bracket reduces to a Jacobi bracket on the space of orbits of the action. **Theorem 5.13** (Jacobi reduction theorem by a Lie Group action).: _Let \(G\) be a Lie group acting on a Jacobi manifold \((P,\{\cdot,\cdot\})\) by Jacobi maps. Suppose that \(P/G\) is a smooth manifold and the projection \(\pi:P\to P/G\) is a submersion. Then, there is a unique Jacobi bracket on \(P/G\) denoted by \(\{\cdot,\cdot\}_{red}\) called the reduced Jacobi bracket such that \(\pi\) is a Jacobi map._ _Moreover, suppose that \(H:P\to\mathbb{R}\) is \(G\)-invariant and define \(h:P/G\to\mathbb{R}\) by \(H:=h\circ\pi\). 
If \(\phi_{t}^{X_{H}}\) and \(\phi_{t}^{X_{h}}\) are the Hamiltonian flows of \(H\) and \(h\), respectively, then they satisfy the equation_ \[\pi\circ\phi_{t}^{X_{H}}=\phi_{t}^{X_{h}}\circ\pi.\] Proof.: Let \(f,h\in C^{\infty}(P/G)\). Notice that the Jacobi bracket of the functions \(f\circ\pi\) and \(h\circ\pi\) is also \(G\)-invariant. Indeed if \(\phi_{g}:P\to P\) denotes the action then \[\{f\circ\pi,h\circ\pi\}\circ\phi_{g}=\{f\circ\pi\circ\phi_{g},h\circ\pi\circ \phi_{g}\}=\{f\circ\pi,h\circ\pi\},\] where the first equality holds since \(\phi_{g}\) acts by Jacobi maps and the second holds since \(\pi\) is invariant under composition with the action by definition. Now, observe that a function \(\Phi\) on \(P\) is \(G\)-invariant if and only if there exists a function \(\varphi:P/G\rightarrow\mathbb{R}\) such that \(\Phi=\varphi\circ\pi\). Thus, denote by \(\{f,h\}_{red}\) the function on \(P/G\) defined by \[\{f\circ\pi,h\circ\pi\}=\{f,h\}_{red}\circ\pi.\] We must check that \(\{f,h\}_{red}\) is indeed a Jacobi bracket. It is clearly bilinear and skew-symmetric, since \(\{f\circ\pi,h\circ\pi\}\) also is. It satisfies Jacobi identity since \(\{\cdot,\cdot\}\) also does. And finally, we must check if it satisfies the weak Leibniz condition. But note that if \(k\in C^{\infty}(P/G)\) and \((\Lambda,E)\) is the Jacobi structure associated to \(\{\cdot,\cdot\}\) \[\begin{split}\{f,hk\}_{red}\circ\pi=&\{f\circ\pi, (h\circ\pi)(k\circ\pi)\}\\ =&(h\circ\pi)\{f\circ\pi,k\circ\pi\}+(k\circ\pi) \{f\circ\pi,h\circ\pi\}\\ &+(h\circ\pi)(k\circ\pi)E(f\circ\pi)\end{split} \tag{36}\] Now, since \(E(\Phi)=\{1,\Phi\}\), for \(\Phi\in\mathcal{C}^{\infty}(P)\), we deduce that the vector field \(E\) is \(\pi\)-projectable and, thus, there exists a vector field \(E_{red}\) over \(P/G\) such that \(E_{red}(f)\circ\pi=E(f\circ\pi)\), \(\forall\ f\in\mathcal{C}^{\infty}(P/G)\). Therefore, from (36), it follows that \[\{f,hk\}_{red}\circ\pi=(h\{f,k\}_{red}+k\{f,h\}_{red}+hkE_{red}(f))\circ\pi.\] This implies that \(\{\cdot,\cdot\}_{red}\) satisfies the weak Leibniz condition. In fact, following the same argument we could prove that the Jacobi structure associated to \(\{\cdot,\cdot\}_{red}\), denoted by \((\Lambda_{red},E_{red})\), by uniqueness is given by \[\Lambda_{red}\circ\pi=T\pi(\Lambda)\quad\text{and}\quad E_{red}\circ\pi=T\pi( E).\] Now, note that given \(p\in P\) \[\frac{d}{dt}(f\circ\pi\circ\phi_{t}^{X_{H}}(p))=\{H,f\circ\pi\}\circ\phi_{t}^ {X_{H}}(p)+(f\circ\pi)E(H)\circ\phi_{t}^{X_{H}}(p).\] Since \(H=h\circ\pi\) the later is equivalent to \[\frac{d}{dt}(f\circ\pi\circ\phi_{t}^{X_{H}}(p))=\{h,f\}_{red}\circ\pi\circ \phi_{t}^{X_{H}}(p)+fE_{red}(h)\circ\pi\circ\phi_{t}^{X_{H}}(p). \tag{37}\] Similarly, on the quotient space \(P/G\), letting \([p]:=\pi(p)\), we have that \[\frac{d}{dt}(f\circ\phi_{t}^{X_{h}}([p]))=\{h,f\}_{red}\circ\phi_{t}^{X_{h}}( [p])+fE_{red}(h)\circ\phi_{t}^{X_{h}}([p]).\] This last equation, uniquely defines the Hamiltonian flow \(\phi_{t}^{X_{h}}\). Therefore, by comparison with equation (37), we must have \[\pi\circ\phi_{t}^{X_{H}}=\phi_{t}^{X_{h}}\circ\pi.\] **Remark 5.14**.: The first part of this theorem may be deduced of a more general construction for Jacobi manifolds in the literature (see Costa 1989 and also Ibort, Leon, and Marmo 1997). 
Anyway, we have included a proof of the result to make the paper more self-contained **Proposition 5.15**.: _The smooth submersion \(\hat{J}:T^{*}G\times\mathbb{R}\to\mathfrak{g}^{*}\times\mathbb{R}\) given by_ \[\hat{J}(\mu_{g},z)=(J(\mu_{g}),z)\] _is a Jacobi map between the canonical Jacobi manifolds \(T^{*}G\times\mathbb{R}\) and \(\mathfrak{g}^{*}\times\mathbb{R}\), where \(J\) is given by (35)._ Proof.: We must prove that for any \(f\) and \(h\) in \(C^{\infty}(\mathfrak{g}^{*}\times\mathbb{R})\) the following identity holds \[\{f\circ\hat{J},h\circ\hat{J}\}_{can}=\{f,h\}\circ\hat{J},\] where \(\{\cdot,\cdot\}_{can}\) is the Jacobi bracket on \(T^{*}G\times\mathbb{R}\) and \(\{\cdot,\cdot\}\) is the Lie-Poisson-Jacobi bracket on \(\mathfrak{g}^{*}\times\mathbb{R}\). Note that the function \(f\circ\hat{J}\) is \(G\)-invariant for any function \(f\). Indeed, given \(\alpha_{h}\in T^{*}_{h}G\), we have that \[f\circ\hat{J}\circ\phi_{g}(\alpha_{h},z)=f\circ\hat{J}(T^{*}_{gh}\mathcal{L}_ {g^{-1}}(\alpha_{h}),z)\] where \(\phi_{g}\) denotes the cotangent lifted action of \(G\) on \(T^{*}G\times\mathbb{R}\). Then applying the definition of \(\hat{J}\), we deduce \[f\circ\hat{J}\circ\phi_{g}(\alpha_{h},z)=f(T^{*}_{e}\mathcal{L}_{gh}(T^{*}_{ gh}\mathcal{L}_{g^{-1}}(\alpha_{h})),z).\] Finally, applying the composition rule of cotangent maps we conclude \[f\circ\hat{J}\circ\phi_{g}(\alpha_{h},z)=f(T^{*}_{e}\mathcal{L}_{h}(\alpha_{h })),z)=f\circ\hat{J}(\alpha_{h},z).\] Moreover, note that for any \(f\) the restriction of \(f\circ\hat{J}\) to the identity is just \(f\) itself since \[f\circ\hat{J}(\alpha_{e},z)=f(T^{*}_{e}\mathcal{L}_{e}(\alpha_{e}),z)=f( \alpha_{e},z).\] Hence, we may apply Proposition 5.10 to deduce that \[\{f\circ\hat{J},h\circ\hat{J}\}_{can}(\alpha_{h},z)=\{f,h\}(T^{*}_{e} \mathcal{L}_{h}(\alpha_{h}),z)\] which is equivalent to \[\{f\circ\hat{J},h\circ\hat{J}\}_{can}(\alpha_{h},z)=\{f,h\}\circ\hat{J}(\alpha_{h},z).\] **Theorem 5.16**.: _Let \(H:T^{*}G\times\mathbb{R}\to\mathbb{R}\) be a \(G\)-invariant function. Then the function \(h:\mathfrak{g}^{*}\times\mathbb{R}\to\mathbb{R}\) given as \(h:=H|_{\mathfrak{g}^{*}\times\mathbb{R}}\) satisfies_ \[H=h\circ\hat{J}.\] _Moreover, the contact Hamiltonian vector field \(X_{H}\) on \(T^{*}G\times\mathbb{R}\) with flow \(\phi_{t}^{X_{H}}\) and the Jacobi Hamiltonian vector field \(X_{h}\) on \(\mathfrak{g}^{*}\times\mathbb{R}\) with flow \(\phi_{t}^{X_{h}}\) are related by \(\hat{J}\) or, equivalently, their flows satisfy the equation_ \[\hat{J}\circ\phi_{t}^{X_{H}}=\phi_{t}^{X_{h}}\circ\hat{J}.\] Proof.: First note that for any \((\alpha_{g},z)\in T^{*}G\times\mathbb{R}\) we have that \[H(\alpha_{g},z)=H\circ\hat{J}(\alpha_{g},z),\] since \(H\) is \(G\)-invariant. Then, since \(\hat{J}(\alpha_{g},z)\in\mathfrak{g}^{*}\times\mathbb{R}\), we have that \[H(\alpha_{g},z)=h\circ\hat{J}(\alpha_{g},z),\] which proves the first statement in the theorem. The second statement is a consequence of Theorem 5.13. Indeed, note that the map \[\psi:(T^{*}G\times\mathbb{R})/G \longrightarrow\mathfrak{g}^{*}\times\mathbb{R}\] \[[\alpha_{g},z] \mapsto\hat{J}(\alpha_{g},z)\] is a diffeomorphism. Moreover, by construction we have that \(\psi\circ\pi=\hat{J}\), where \(\pi:T^{*}G\times\mathbb{R}\to(T^{*}G\times\mathbb{R})/G\) is the quotient map. 
Now, the map \(\psi\) and the reduced Jacobi brackets on \((T^{*}G\times\mathbb{R})/G\) given by Theorem 5.13 induce a Jacobi bracket \(\{\cdot,\cdot\}_{*}\) on \(\mathfrak{g}^{*}\times\mathbb{R}\) that makes \(\psi\) a Jacobi map so that \[\{f,h\}_{*}\circ\psi=\{f\circ\psi,h\circ\psi\}_{red},\quad f,h\in C^{\infty} (\mathfrak{g}^{*}\times\mathbb{R}).\] By definition of the reduced bracket, \(\pi\) is also a Jacobi map so that \[\{f,h\}_{*}\circ\psi\circ\pi=\{f\circ\psi\circ\pi,h\circ\psi\circ\pi\}_{can},\quad f,h\in C^{\infty}(\mathfrak{g}^{*}\times\mathbb{R}),\] which by construction gives \[\{f,h\}_{*}\circ\hat{J}=\{f\circ\hat{J},h\circ\hat{J}\}_{can},\quad f,h\in C^{ \infty}(\mathfrak{g}^{*}\times\mathbb{R}).\] But by Proposition 5.10 and since \(f\circ\hat{J},h\circ\hat{J}\) are \(G\)-invariant functions on \(T^{*}G\times\mathbb{R}\), we must have that \[\{f,h\}_{*}=\{f,h\},\] where \(\{\cdot,\cdot\}\) is the Lie-Poisson-Jacobi bracket on \(\mathfrak{g}^{*}\times\mathbb{R}\) defined before. So, we may conclude that \[\hat{J}\circ\phi_{t}^{X_{H}}=\psi\circ\pi\circ\phi_{t}^{X_{H}}=\psi\circ\phi_{ t}^{X_{h_{red}}}\circ\pi,\] where we used the last statement of Theorem 5.13, relating the Hamiltonian flow of \(H\) with that of \(h_{red}\), the function defined by \(H:=h_{red}\circ\pi\). Now observe that, since \(\psi\) is a Jacobi map and \(h_{red}=h\circ\psi\), we have that \[h_{red}\circ\pi=h\circ\hat{J}=h\circ\psi\circ\pi,\] so that the following relation holds between the Hamiltonian flows \[\psi\circ\phi_{t}^{X_{h_{red}}}=\phi_{t}^{X_{h}}\circ\psi.\] Therefore, we deduce \[\hat{J}\circ\phi_{t}^{X_{H}}=\phi_{t}^{X_{h}}\circ\psi\circ\pi=\phi_{t}^{X_{h} }\circ\hat{J}.\] **Example 5.17**.: Consider the Lie group \(SO(3)\) where the left action is given by \(\mathcal{L}_{R}(R_{0})=RR_{0}\) for any \(R,R_{0}\in SO(3)\). Therefore, \(T_{e}\mathcal{L}_{R}(\xi)=R\xi\) for \(\xi\in\mathfrak{so}(3)\) and \(T_{e}^{*}\mathcal{L}_{R}(\mu_{R})=R^{T}\mu_{R}\), where \(\mu_{R}\in T_{R}^{*}SO(3)\). Thus, we introduce the Jacobi map \(\hat{J}:T^{*}SO(3)\times\mathbb{R}\to\mathfrak{so}(3)^{*}\times\mathbb{R}\) given by \[\hat{J}(\mu_{R},z)=(R^{T}\mu_{R},z).\] Let the Hamiltonian \(H:T^{*}SO(3)\times\mathbb{R}\to\mathbb{R}\) be \[H(\mu_{R},z)=\frac{1}{2}\operatorname{tr}(\mu_{R}^{T}\mathbb{J}\mu_{R})+\gamma z.\] Let \(h:\mathfrak{so}(3)^{*}\times\mathbb{R}\to\mathbb{R}\) be given by \(h(\mu,z)=\frac{1}{2}\mu^{T}\mathbb{J}\mu+\gamma z\), where we are identifying \(\mathfrak{so}(3)^{*}\) with \(\mathbb{R}^{3}\) through the map \(\overset{\sim}{(\cdot)}:\mathbb{R}^{3}\to\mathfrak{so}(3)^{*}\) satisfying \(\langle\tilde{\Pi},\xi\rangle=\Pi_{i}\xi_{i}\), where \(\Pi=(\Pi_{1},\Pi_{2},\Pi_{3})\in\mathbb{R}^{3}\) and \(\xi\in\mathfrak{so}(3)\) is identified with \((\xi_{1},\xi_{2},\xi_{3})\) through the hat map given in Example 4.4. Then, it is straightforward to check that \(h\circ\hat{J}(\mu_{R},z)=H(\mu_{R},z)\).
## 6 Euler-Poincare-Herglotz and Lie-Poisson-Jacobi equations with symmetry breaking
The purpose of this section is to obtain the reduced Euler-Poincare-Herglotz equations as a generalization of the procedure followed in Holm, Marsden, and Ratiu 1998. Let \(X\) be a finite-dimensional vector space. Consider a Lagrangian system evolving on a Lie group \(G\) and assume the Lagrangian is not symmetry invariant. At the same time, we consider a representation of \(G\) on a dual vector space \(X^{*}\).
Coupling to the Lagrangian a parameter depending on vectors in \(X^{*}\), we obtain a symmetry invariant Lagrangian function under the action of \(G\). Loosely speaking, the Lagrangian system is coupled with vectors in \(X^{*}\) that are acted by a Lie group symmetry. The associated action considered at this stage restores the full Lie group symmetry and allows us to apply the semi-direct product reduction theory Marsden, Ratiu, and Weinstein 1984, Holm, Marsden, and Ratiu 1998 (see also Gay-Balmaz and Tronci 2010, Gay-Balmaz and Ratiu 2011), to obtain the corresponding Euler-Poincare-Herglotz system on the semi-direct product Lie algebra \((\mathfrak{g}\ltimes X^{*})\times\mathbb{R}\). This gives rise to a new reduced system that finds no analogs in classical reduced-order models in contact mechanical systems. Assume (i) the Lagrangian function \(L_{\alpha_{0}}:TG\times\mathbb{R}\to\mathbb{R}\) is not \(G\)-invariant but depends on a parameter \(\alpha_{0}\) that may be considered to be an element of the dual space \(X^{*}\), where \(X\) is a vector space. Hence, we define the extended Lagrangian function as \(L_{\rm ext}:TG\times\mathbb{R}\times X^{*}\to\mathbb{R}\), with \(L_{\rm ext}(\cdot,\alpha_{0})=L_{\alpha_{0}}\). Assume (ii) there is a left representation \(\rho\) of \(G\) on \(X\), i.e., \(\rho:G\to{\rm GL}(X)\) is a Lie group morphism. Hence, there is a left representation \(\rho^{*}\) of \(G\) on \(X^{*}\), i.e., \(\rho^{*}:G\to{\rm GL}(X^{*})\), defined using adjoint linear maps. Indeed, for any \(g\in G\), we set \(\rho^{*}(g)=\rho(g^{-1})^{*}\in{\rm GL}(X^{*})\), where the right-hand side is the adjoint map of \(\rho(g^{-1})\in{\rm GL}(X)\), i.e., for any \(x\in X\) and \(\alpha\in X^{*}\) \[\langle\rho(g^{-1})^{*}(\alpha),x\rangle=\langle\alpha,\rho(g^{-1})(x)\rangle.\] From now, we will denote the image of \(g\) by \(\rho\) and \(\rho^{*}\) by \(\rho_{g}\) and \(\rho_{g}^{*}\), respectively, and \(\rho^{*}\) will be called the adjoint representation. The adjoint representation induces a left action of \(G\) on the extended phase space \(TG\times\mathbb{R}\times X^{*}\) defined as \[\Phi:G\times(TG\times\mathbb{R}\times X^{*}) \longrightarrow TG\times\mathbb{R}\times X^{*},\] \[(g,(v_{h},z,\alpha)) \longmapsto(\hat{\mathcal{L}}_{g}(v_{h},z),\rho_{g}^{*}(\alpha)), \tag{38}\] where \(\hat{\mathcal{L}}_{g}\) has been defined in (24). Assume also that (iii) the extended Lagrangian function is \(G\)-invariant under (38), i.e., \(L_{\rm ext}\circ\Phi_{g}=L_{\rm ext}\), for any \(g\in G\), or more explicitly \[L_{\rm ext}(\hat{\mathcal{L}}_{g}(v_{h},z),\rho_{g}^{*}(\alpha))=L_{\rm ext}(v_{h},z,\alpha),\] for any \(v_{h}\in TG\) and \(\alpha\in X^{*}\). 
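A guiding example to keep in mind for assumptions (i)-(iii) is the heavy top, i.e., the rigid body with a fixed point in a gravitational field already mentioned in the Introduction; the conservative case is treated in Holm, Marsden, and Ratiu 1998, and the linear term in \(z\) below is added here only for illustration. Take \(G=SO(3)\), \(X=X^{*}=\mathbb{R}^{3}\) with \(\rho\) the standard representation, \(\alpha_{0}=e_{3}\), and \[L_{\alpha_{0}}(R,\dot{R},z)=\frac{1}{2}\langle\langle\dot{R},\dot{R}\rangle\rangle-mg\ell\,\langle\alpha_{0},R\chi\rangle-\gamma z,\] where \(\chi\in\mathbb{R}^{3}\) is the unit vector from the fixed point to the center of mass in the body frame and \(mg\ell>0\) is a constant. This Lagrangian is invariant only under rotations about \(e_{3}\), but the extended Lagrangian \(L_{\mathrm{ext}}(R,\dot{R},z,\alpha)=\frac{1}{2}\langle\langle\dot{R},\dot{R}\rangle\rangle-mg\ell\,\langle\alpha,R\chi\rangle-\gamma z\) satisfies assumption (iii), since \[\langle\rho_{g}^{*}(\alpha),gR\chi\rangle=\langle\alpha,\rho_{g^{-1}}(gR\chi)\rangle=\langle\alpha,R\chi\rangle,\] and along a motion the parameter evolves as \(\alpha(t)=\rho_{g^{-1}(t)}^{*}(\alpha_{0})=R(t)^{T}e_{3}\), the direction of gravity seen from the body frame.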
We can now obtain the reduced augmented Lagrangian \(\ell_{ext}:\mathfrak{g}\times\mathbb{R}\times X^{*}\to\mathbb{R}\), which is given by \[\ell_{ext}(\xi,z,\alpha):=L_{ext}(e,\xi,z,\alpha). \tag{39}\] Critical points of the reduced variational principle for \(\ell_{ext}\) satisfy the following Euler-Poincare-Herglotz equations (see Theorem 6.1 below for a proof) \[\frac{d}{dt}\left(\frac{\delta\ell_{ext}}{\delta\xi}\right) =\mathrm{ad}_{\xi}^{*}\left(\frac{\delta\ell_{ext}}{\delta\xi} \right)+\mathbf{J}_{X}\left(\frac{\delta\ell_{\rm ext}}{\delta\alpha},\alpha \right)+\frac{\delta\ell_{\rm ext}}{\delta\xi}\frac{\partial\ell_{\rm ext}}{ \partial z}, \tag{40}\] \[\dot{\alpha} =-\rho_{\xi}^{\prime*}(\alpha),\quad\alpha(0)=\alpha_{0}, \tag{41}\] \[\dot{z} =\ell_{\rm ext}, \tag{42}\] where \(\mathbf{J}_{X}:T^{*}X\cong X\times X^{*}\to\mathfrak{g}^{*}\) is the momentum map corresponding to the left action of \(G\) on \(X\) defined using the left representation \(\rho\) of \(G\) on \(X\), i.e., for any \(x\in X\), \(\xi\in\mathfrak{g}\) and \(\alpha\in X^{*}\) \[\langle\mathbf{J}_{X}(x,\alpha),\xi\rangle=\langle\alpha,\xi_{X}(x)\rangle, \tag{43}\] with \(\xi_{X}\) being the infinitesimal generator of the left action of \(G\) on \(X\), and \(\rho^{\prime*}:\mathfrak{g}\to\mathfrak{gl}(X^{*})\) is defined as the adjoint of \(\rho^{\prime}\), which is the infinitesimal representation induced by the left representation \(\rho\) of \(G\) on \(X\). Note that the solution to (41) is given by \(\alpha(t)=\rho_{g^{-1}(t)}^{*}(\alpha_{0})\). Also, note that using the notation of Holm, Marsden, and Ratiu 1998, Holm, Schmah, and Stoica 2009 we have \(\mathbf{J}_{X}(x,\alpha)=x\diamond\alpha\). To summarize, we have the following theorem. **Theorem 6.1**.: _Let \(g:I\to G\) and \(z:I\to\mathbb{R}\) be smooth curves on an interval \(I\). Define the curves \(\xi(t)=T_{g(t)}\mathcal{L}_{g^{-1}(t)}(\dot{g}(t))\) and \(\alpha(t)=\rho_{g^{-1}(t)}^{*}(\alpha_{0})\), for \(\alpha_{0}\in X^{*}\). Let also \(g_{0}=g(0)\in G\). Under assumptions (i)-(iii), the following are equivalent:_ 1. _The variational principle_ \[\delta\int_{0}^{h}L_{\alpha_{0}}(g(t),\dot{g}(t),z(t))\ dt=0,\quad\dot{z}=L_{ \alpha_{0}}(g(t),\dot{g}(t),z(t))\] _holds for variations of_ \(g\) _with fixed endpoints;_ 2. _The curves_ \(g:I\to G\) _and_ \(z:I\to\mathbb{R}\) _satisfy the Euler-Poincare-Herglotz equations for_ \(L_{\alpha_{0}}\)_;_ 3. _The reduced constrained variational principle_ \[\delta\int_{0}^{h}\ell_{\mathrm{ext}}(\xi(t),z(t),\alpha(t))\ dt=0,\quad\dot{z}= \ell_{\mathrm{ext}}(\xi(t),z(t),\alpha(t)) \tag{44}\] _holds using variations of_ \(\xi\) _and_ \(\alpha\) _of the form_ \[\delta\xi =\dot{\eta}+\text{ad}_{\xi}\eta,\] \[\delta\alpha =-\rho_{\eta}^{\prime*}(\alpha);\] 4. _The curves_ \(\xi(t)=T_{g(t)}\mathcal{L}_{g^{-1}(t)}(\dot{g}(t))\) _and_ \(z\) _satisfy the Euler-Poincare-Herglotz equations_ \[\frac{d}{dt}\left(\frac{\delta\ell_{ext}}{\delta\xi}\right) =\text{ad}_{\xi}^{*}\left(\frac{\delta\ell_{ext}}{\delta\xi} \right)+\mathbf{J}_{X}\left(\frac{\delta\ell_{\mathrm{ext}}}{\delta\alpha}, \alpha\right)+\frac{\delta\ell_{\mathrm{ext}}}{\delta\xi}\frac{\partial\ell_ {\mathrm{ext}}}{\partial z}, \tag{45}\] \[\dot{\alpha} =-\rho_{\xi}^{\prime*}(\alpha),\ \ \alpha(0)=\rho_{g_{0}^{-1}}^{*}(\alpha_{0}),\ \dot{z}=\ell_{\mathrm{ext}}. \tag{46}\] Proof.: The proof follows arguments similar to the ones given in Theorem 4.1 and Theorem 4.2. The equivalence of items 1. and 2.
is general for all contact manifolds, and so it holds in particular for \(TG\times\mathbb{R}\) (see Anahory Simoes et al. 2021). We will show that the first variational principle implies the second constrained variational principle, that is, we prove the equivalence between 1. and 3. Notice that, since \(L_{\mathrm{ext}}\) is \(G\)-invariant under \(\Phi_{g}\), and \(\alpha=\rho_{g^{-1}}^{*}(\alpha_{0})\), the integrand in the first variational principle is equal to the integrand in the second constrained variational principle. Indeed, using the definition of \(\ell_{ext}\) and the \(G\)-invariance, we get \[\ell_{\mathrm{ext}}(\xi,z,\alpha)=L_{ext}(e,\xi,z,\alpha)=L_{ext}(e,T_{g}\mathcal{L}_{g^{-1}}(\dot{g}),z,\rho_{g^{-1}}^{*}(\alpha_{0}))=L_{\alpha_{0}}(g,\dot{g},z),\] where we dropped the time dependence to ease the notation. Moreover, all variations of \(g\) vanishing at the endpoints induce and are induced by variations of \(\xi\) of the form \(\delta\xi=\dot{\eta}+\text{ad}_{\xi}\,\eta\), with \(\eta(0)=\eta(T)=0\). Recall that the infinitesimal variations \(\eta\) are given by \(\eta=T_{g}\mathcal{L}_{g^{-1}}(\delta g)\). This fact is used to write the variations of the curve \(\alpha\) as \(\delta\alpha=-\rho_{\eta}^{\prime*}(\alpha)\) (see Holm, Schmah, and Stoica 2009). Hence, the first variational principle implies the second constrained variational principle. To conclude, we show the equivalence between 3. and 4. Indeed, by using the definition of the auxiliary function \(\sigma(t)\), the calculations for Theorem 4.1, integration by parts, and the fact that \[\left\langle\delta\alpha,\frac{\delta\ell_{\text{ext}}}{\delta\alpha}\right\rangle=\left\langle-\rho_{\eta}^{\prime*}(\alpha),\frac{\delta\ell_{\text{ext}}}{\delta\alpha}\right\rangle=\left\langle\alpha,\rho_{\eta}^{\prime}\left(\frac{\delta\ell_{\text{ext}}}{\delta\alpha}\right)\right\rangle=\left\langle\mathbf{J}_{X}\left(\frac{\delta\ell_{\text{ext}}}{\delta\alpha},\alpha\right),\eta\right\rangle,\] where we used the infinitesimal version of the definition of the adjoint representation, i.e., \[\left\langle\rho_{\eta}^{\prime*}(\alpha),x\right\rangle=-\left\langle\alpha,\rho_{\eta}^{\prime}\left(x\right)\right\rangle,\quad\forall x\in X,\alpha\in X^{*},\eta\in\mathfrak{g},\] we deduce that the reduced Herglotz variational principle holds if and only if \[\sigma(t)\left[\operatorname{ad}_{\xi}^{*}\frac{\delta\ell_{\text{ext}}}{\delta\xi}-\frac{d}{dt}\left(\frac{\delta\ell_{\text{ext}}}{\delta\xi}\right)+\mathbf{J}_{X}\left(\frac{\delta\ell_{\text{ext}}}{\delta\alpha},\alpha\right)+\frac{\partial\ell_{\text{ext}}}{\partial z}\frac{\delta\ell_{\text{ext}}}{\delta\xi}\right]=0. \tag{47}\] By taking the time derivative of \(\alpha\), we get \(\dot{\alpha}=-\rho_{\xi}^{\prime*}(\alpha),\ \alpha(0)=\rho_{g_{0}^{-1}}^{*}(\alpha_{0})\) and, since the auxiliary function \(\sigma\) is nowhere zero, the result follows. There is a special case of Theorem 6.1 that is of particular interest, and we state it as a corollary. **Corollary 6.2**.: _Let \(X=\mathfrak{g}\) and \(\rho\) be the adjoint representation of \(G\) on \(X\), i.e., \(\rho_{g}=\text{Ad}_{g}\), for any \(g\in G\)._
Then, the Euler-Poincare-Herglotz equations (45)-(46) give the following equations_ \[\frac{d}{dt}\left(\frac{\delta\ell_{\text{ext}}}{\delta\xi}\right) =\operatorname{ad}_{\xi}^{*}\left(\frac{\delta\ell_{\text{ext}}} {\delta\xi}\right)-\operatorname{ad}_{\frac{\delta\ell_{\text{ext}}}{ \delta\alpha}}^{*}\alpha+\frac{\delta\ell_{\text{ext}}}{\delta\xi}\frac{ \partial\ell_{\text{ext}}}{\partial z},\] \[\dot{\alpha} =-\operatorname{ad}_{\xi}^{*}\alpha,\ \alpha(0)=\operatorname{Ad}_{g_{0}^{-1}}^{*}\alpha_{0},\,\dot{z}=\ell_{ \text{ext}}.\] Proof.: We first begin by noting that \(\alpha\in X^{*}=\mathfrak{g}^{*}\) and \(\rho^{*}\) is the coadjoint representation of \(G\) on \(X^{*}\), i.e., \(\rho_{g}^{*}=\operatorname{Ad}_{g}^{*}\), for any \(g\in G\). We also have \(\rho_{\xi}^{\prime}=\operatorname{ad}_{\xi}\), for any \(\xi\in\mathfrak{g}\) and it follows that \(\xi_{X}(x)=\operatorname{ad}_{\xi}x\), for any \(x\in X\). From (43), we have \[\left\langle\mathbf{J}_{X}(x,\alpha),\xi\right\rangle =\left\langle\alpha,\operatorname{ad}_{\xi}x\right\rangle\] \[=\left\langle\alpha,-\operatorname{ad}_{x}\xi\right\rangle\] \[=\left\langle-\operatorname{ad}_{x}^{*}\alpha,\xi\right\rangle,\] which gives \(\mathbf{J}_{X}(x,\alpha)=-\operatorname{ad}_{x}^{*}\alpha\). ### Reduced Legendre transformation If we assume that the reduced Lagrangian \(\ell_{\rm ext}\) is hyper-regular, then we can obtain the reduced Hamiltonian \(h_{\rm ext}:\mathfrak{g}^{*}\times\mathbb{R}\times X^{*}\to\mathbb{R}\) (by using the reduced Legendre transformation) given by \[h_{\rm ext}(\mu,z,\alpha)=\langle\mu,\xi\rangle-\ell_{\rm ext}(\xi,z,\alpha),\] where \(\mu=\frac{\partial\ell_{\rm ext}}{\partial\xi}\), with \(\mu(\cdot)\in C^{1}([0,T],\mathfrak{g}^{*})\). The Euler-Poincare-Herglotz equations (45)-(46) can now be written as the Lie-Poisson-Jacobi equations, which are given by \[\dot{\mu} =ad^{*}_{\frac{\delta h_{\rm ext}}{\delta\mu}}\mu-\mu\frac{ \partial h_{\rm ext}}{\partial z}-\mathbf{J}_{X}\left(\frac{\delta h_{\rm ext }}{\delta\alpha},\alpha\right), \tag{48}\] \[\dot{z} =\langle\mu,\frac{\delta h_{\rm ext}}{\delta\mu}\rangle-h_{\rm ext }(\mu,z,\alpha).\] (49) \[\dot{\alpha} =-\rho^{\prime*}_{\frac{\delta h_{\rm ext}}{\delta\alpha}}( \alpha),\ \alpha(0)=\rho^{*}_{g_{0}^{-1}}(\alpha_{0}). \tag{50}\] ### Lie-Poisson-Jacobi equations for systems with symmetry breaking We can define a Jacobi bracket on the space of smooth functions on \(\mathfrak{g}^{*}\times\mathbb{R}\times X^{*}\) with respect to which the previous equations are just the Hamiltonian equations. Consider the bracket \[\{f,g\}_{X^{*}}(\mu,z,\alpha)=\left\langle\mu,\left[\frac{\delta f }{\delta\mu},\frac{\delta g}{\delta\mu}\right]\right\rangle +\left\langle\mu,\frac{\delta f}{\delta\mu}\right\rangle\frac{ \partial g}{\partial z}-\left\langle\mu,\frac{\delta g}{\delta\mu}\right\rangle \frac{\partial f}{\partial z}\] \[+\left\langle\alpha,\frac{\delta f}{\delta\mu}\frac{\delta g}{ \delta\alpha}-\frac{\delta f}{\delta\alpha}\frac{\delta g}{\delta\mu}\right\rangle\] This bracket falls into the same definition of Lie-Poisson-Jacobi bracket as the one given in Definition 5.8. In this case, the associated Lie algebra is \(\mathfrak{g}\ltimes X\) (see Holm, Schmah, and Stoica 2009 for more details on the subject of semi-direct products). 
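Before specializing to the heavy top, here is a quick numerical sanity check (added purely for illustration; it is not part of the original text) of the momentum map computation in Corollary 6.2: under the identification \(\mathfrak{g}=\mathfrak{so}(3)\cong\mathbb{R}^{3}\) one has \(\operatorname{ad}_{\xi}x=\xi\times x\) and \(\operatorname{ad}_{x}^{*}\alpha=\alpha\times x\), and the defining relation (43), \(\langle\mathbf{J}_{X}(x,\alpha),\xi\rangle=\langle\alpha,\xi_{X}(x)\rangle\) with \(\mathbf{J}_{X}(x,\alpha)=-\operatorname{ad}_{x}^{*}\alpha\), can be checked on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def ad(xi, x):
    # adjoint action of so(3) on itself under the R^3 identification: ad_xi x = xi × x
    return np.cross(xi, x)

def ad_star(x, alpha):
    # defined by <ad*_x alpha, xi> = <alpha, ad_x xi>; on R^3 this is alpha × x
    return np.cross(alpha, x)

for _ in range(5):
    x, alpha, xi = rng.normal(size=(3, 3))
    J = -ad_star(x, alpha)            # momentum map J_X(x, alpha) = -ad*_x alpha
    lhs = np.dot(J, xi)               # <J_X(x, alpha), xi>
    rhs = np.dot(alpha, ad(xi, x))    # <alpha, xi_X(x)> with xi_X(x) = ad_xi x
    assert np.isclose(lhs, rhs)
print("momentum map identity verified on random samples")
```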
**Proposition 6.3**.: _The Hamiltonian equations with respect to the Lie-Poisson-Jacobi bracket \(\{\cdot,\cdot\}_{X^{*}}\) and the Hamiltonian function \(h\) on \(\mathfrak{g}^{*}\times\mathbb{R}\times X^{*}\) are_ \[\dot{\mu} =\mathrm{ad}^{*}_{\frac{\delta h}{\delta\mu}}\mu-\mu\frac{\partial h }{\partial z}-\mathbf{J}_{X}\left(\frac{\delta h}{\delta\alpha},\alpha\right),\] \[\dot{z} =\langle\mu,\frac{\delta h}{\delta\mu}\rangle-h(\mu,z,\alpha).\] \[\dot{\alpha} =-\rho^{\prime*}_{\frac{\delta h}{\delta\mu}}(\alpha).\] Proof.: Let \(f\in C^{\infty}(\mathfrak{g}^{*}\times\mathbb{R}\times X^{*})\). Then, \[\dot{f}=\left\langle df(\mu,z,\alpha),(\dot{\mu},\dot{z},\dot{ \alpha})\right\rangle=\left\langle\dot{\mu},\frac{\delta f}{\delta\mu}\right\rangle +\frac{\partial f}{\partial z}\dot{z}+\left\langle\dot{\alpha},\frac{\delta f}{ \delta\alpha}\right\rangle.\] Also, using the definition of the Jacobi bracket we have that \[\{h,f\}_{X^{*}}(\mu,z,\alpha)=\left\langle\mu,\left[\frac{\delta h }{\delta\mu},\frac{\delta f}{\delta\mu}\right]\right\rangle +\left(\left\langle\mu,\frac{\delta h}{\delta\mu}\right\rangle-h \right)\frac{\partial f}{\partial z}-\left(\left\langle\mu,\frac{\delta f}{ \delta\mu}\right\rangle-f\right)\frac{\partial h}{\partial z}\] \[+\left\langle\alpha,\frac{\delta h}{\delta\mu}\frac{\delta f}{ \delta\alpha}-\frac{\delta h}{\delta\alpha}\frac{\delta f}{\delta\mu}\right\rangle.\] Thus using (31) we have that \[\dot{f}=\{h,f\}_{X^{*}}-f\frac{\partial h}{\partial z} =\left\langle\mathrm{ad}_{\frac{\delta h}{\delta\mu}}^{*}\mu-\mu \frac{\partial h}{\partial z},\frac{\delta f}{\delta\mu}\right\rangle+\left( \left\langle\mu,\frac{\delta h}{\delta\mu}\right\rangle-h\right)\frac{\partial f }{\partial z}\] \[+\left\langle\alpha,\rho_{\frac{\delta h}{\delta\mu}}^{\prime} \frac{\delta f}{\delta\alpha}\right\rangle-\left\langle\alpha,\rho_{\frac{ \delta f}{\delta\mu}}^{\prime}\frac{\delta h}{\delta\alpha}\right\rangle\] \[=\left\langle\mathrm{ad}_{\frac{\delta h}{\delta\mu}}^{*}\mu-\mu \frac{\partial h}{\partial z},\frac{\delta f}{\delta\mu}\right\rangle+\left( \left\langle\mu,\frac{\delta h}{\delta\mu}\right\rangle-h\right)\frac{\partial f }{\partial z}\] \[+\left\langle\rho_{\frac{\delta h}{\delta\mu}}^{\prime}\alpha, \frac{\delta f}{\delta\alpha}\right\rangle-\left\langle\mathbf{J}_{X}\left( \frac{\delta h}{\delta\alpha},\alpha\right),\frac{\delta f}{\delta\mu}\right\rangle\] Since the function \(f\) is arbitrary, the result follows. 
### The Heavy Top with dissipation Consider \(G=\mathrm{SO}(3)\), with Lagrangian function given by \[L_{\mathbf{e}_{3}}(R,\xi,z)=\frac{1}{2}\langle\xi,\mathbb{I}\xi\rangle-m\mathbf{g}l\langle\mathbf{e}_{3},R\boldsymbol{\chi}\rangle-\gamma z,\] where \(R\in SO(3)\), \(\langle\xi,\xi\rangle=\mathrm{tr}(\xi^{T}\xi)\), for any \(\xi\in\mathfrak{g}=\mathfrak{so}(3)\), \(\xi(\cdot)=\sum_{i=1}^{3}\xi^{i}(\cdot)e_{i}\), with the elements of the basis of \(\mathfrak{g}\) given by \[e_{1}=\begin{bmatrix}0&0&0\\ 0&0&-1\\ 0&1&0\end{bmatrix},\ e_{2}=\begin{bmatrix}0&0&1\\ 0&0&0\\ -1&0&0\end{bmatrix},\ e_{3}=\begin{bmatrix}0&-1&0\\ 1&0&0\\ 0&0&0\end{bmatrix},\] which satisfy \[[e_{1},e_{2}]=e_{3},\ [e_{2},e_{3}]=e_{1},\ [e_{3},e_{1}]=e_{2},\] \(\mathbb{I}:\mathfrak{g}\to\mathfrak{g}^{*}=\mathfrak{so}(3)^{*}\) is the inertia tensor of the top (it is calculated with respect to the pivot, which is not, in general, the center of mass), \(m\) is the mass of the body, \(\mathbf{g}\) is the acceleration due to gravity, \(\mathbf{e}_{3}\) is the vertical unit vector, \(\boldsymbol{\chi}\in\mathbb{R}^{3}\) is the (constant) unit vector from the point of support towards the body's center of mass, expressed in body coordinates, \(l\) is the length of the line segment between these two points and \(\gamma\in\mathbb{R}\). Under the dual pairing, where \(\langle\alpha,\xi\rangle=\operatorname{tr}(\alpha\xi)\), for any \(\xi\in\mathfrak{g}\) and \(\alpha\in\mathfrak{g}^{*}\), the elements of the basis of \(\mathfrak{g}^{*}\) are given by \[e^{1}=\begin{bmatrix}0&0&0\\ 0&0&\dfrac{1}{2}\\ 0&-\dfrac{1}{2}&0\end{bmatrix},\,\,\,e^{2}=\begin{bmatrix}0&0&-\dfrac{1}{2}\\ 0&0&0\\ \dfrac{1}{2}&0&0\end{bmatrix},\,\,\,e^{3}=\begin{bmatrix}0&\dfrac{1}{2}&0\\ -\dfrac{1}{2}&0&0\\ 0&0&0\end{bmatrix}.\] It is easy to verify that the Lagrangian function is invariant under the left action of the isotropy group \[G_{\alpha_{0}}=\{g\in G\mid\operatorname{Ad}_{g}^{*}\alpha_{0}=\alpha_{0}\}\cong\operatorname{SO}(2),\] i.e., rotations about the vertical axis \(\mathbf{e}_{3}\), but not \(G\)-invariant. The potential function breaks the symmetry partially. However, the framework we introduced before will help us to restore the full symmetry. Let \(X=\mathfrak{g}\) and \(\rho\) be the adjoint representation of \(G\) on \(X\), i.e., \(\rho_{g}=\operatorname{Ad}_{g}\), for any \(g\in G\). So, \(\rho^{*}\) is the coadjoint representation of \(G\) on \(X^{*}=\mathfrak{g}^{*}\), i.e., \(\rho_{g}^{*}=\operatorname{Ad}_{g}^{*}\), for any \(g\in G\). Let the extended Lagrangian function \(L_{\mathrm{ext}}:G\times\mathfrak{g}\times\mathbb{R}\times X^{*}\to\mathbb{R}\) be given as follows \[L_{\mathrm{ext}}(R,\xi,z,\breve{\alpha})=\frac{1}{2}\langle\xi,\mathbb{I}\xi\rangle-m\mathbf{g}l\langle R^{-1}\alpha,\boldsymbol{\chi}\rangle-\gamma z,\] where \(\alpha\in\mathbb{R}^{3}\) is identified with \(\breve{\alpha}\in\mathfrak{g}^{*}\). If we set \(\breve{\alpha}_{0}=-2e^{3}\) in the extended Lagrangian function, we recover our original Lagrangian function. It is easy to verify that the extended Lagrangian function is \(G\)-invariant under (38), i.e., \(L_{\mathrm{ext}}\circ\Phi_{g}=L_{\mathrm{ext}}\), for any \(g\in G\).
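As an aside (added here for illustration; not part of the original text), the commutation relations and the duality \(\operatorname{tr}(e^{i}e_{j})=\delta^{i}_{j}\) stated above are easy to verify numerically, e.g. with the following minimal Python sketch.

```python
import numpy as np

# basis of so(3) as given above
e1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
e2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
e3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
basis = [e1, e2, e3]

# dual basis of so(3)* under the pairing <alpha, xi> = tr(alpha xi)
d1 = np.array([[0., 0., 0.], [0., 0., 0.5], [0., -0.5, 0.]])
d2 = np.array([[0., 0., -0.5], [0., 0., 0.], [0.5, 0., 0.]])
d3 = np.array([[0., 0.5, 0.], [-0.5, 0., 0.], [0., 0., 0.]])
dual = [d1, d2, d3]

def bracket(a, b):
    return a @ b - b @ a

# commutation relations [e1, e2] = e3, [e2, e3] = e1, [e3, e1] = e2
assert np.allclose(bracket(e1, e2), e3)
assert np.allclose(bracket(e2, e3), e1)
assert np.allclose(bracket(e3, e1), e2)

# duality: tr(e^i e_j) = delta_ij
pairing = np.array([[np.trace(d @ e) for e in basis] for d in dual])
assert np.allclose(pairing, np.eye(3))
print("so(3) basis relations and dual pairing verified")
```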
Thus, we can define the reduced extended Lagrangian function to be \[\ell_{ext}(\xi,z,\alpha)=\frac{1}{2}\langle\xi,\mathbb{I}\xi\rangle-m\mathbf{g}l\langle\alpha,\boldsymbol{\chi}\rangle-\gamma z,\] and by Corollary 6.2, the Euler-Poincare-Herglotz equations (under the identifications \(\mathfrak{g}\cong\mathbb{R}^{3}\) and \(\mathfrak{g}^{*}\cong\mathbb{R}^{3}\)) give the following equations \[\mathbb{I}\dot{\xi}=\operatorname{ad}_{\xi}^{*}\mathbb{I}\xi-\operatorname{ad}_{\frac{\delta\ell_{\mathrm{ext}}}{\delta\alpha}}^{*}\alpha-\gamma\mathbb{I}\xi, \tag{51}\] \[\dot{\alpha}=-\operatorname{ad}_{\xi}^{*}\alpha,\,\,\alpha(0)=\operatorname{Ad}_{g_{0}^{-1}}^{*}\alpha_{0},\quad\dot{z}=\ell_{\mathrm{ext}}. \tag{52}\] Using the expressions (for more details, see Holm, Schmah, and Stoica 2009, Marsden and Ratiu 2013) \[\mathrm{ad}_{\xi}^{*}\,\mathbb{I}\xi=\mathbb{I}\xi\times\xi,\ \ \mathrm{ad}_{\frac{\delta\ell_{\mathrm{ext}}}{\delta\alpha}}^{*}\,\alpha=-m\mathbf{g}l\,\alpha\times\boldsymbol{\chi},\ \ \mathrm{ad}_{\xi}^{*}\,\alpha=\alpha\times\xi,\ \ \mathrm{Ad}_{g_{0}^{-1}}^{*}\,\alpha_{0}=g_{0}\alpha_{0},\] where \(\mathbb{I}\in\mathbb{R}^{3\times 3}\) is the inertia matrix, equations (51)-(52) become \[\mathbb{I}\dot{\xi}=\mathbb{I}\xi\times\xi-m\mathbf{g}l\,\boldsymbol{\chi}\times\alpha-\gamma\mathbb{I}\xi,\] \[\dot{\alpha}=-\alpha\times\xi,\ \alpha(0)=g_{0}\alpha_{0},\ \dot{z}=\ell_{\mathrm{ext}}.\] In the case \(C=\gamma\mathbb{I}\) we obtain the equations for rigid body attitude dynamics studied in Shen, Sanyal, and McClamroch 2003, Cho, Shen, and McClamroch 2003 for the triaxial attitude control testbed given in Bernstein, McClamroch, and Bloch 2001. We do not give all the details and leave it to the reader to verify that the Lie-Poisson-Jacobi equations (48)-(50) are given by \[\dot{\mu}=\mu\times\mathbb{I}^{-1}\mu-m\mathbf{g}l\,\boldsymbol{\chi}\times\alpha-\gamma\mu,\] \[\dot{\alpha}=-\alpha\times\mathbb{I}^{-1}\mu,\ \ \alpha(0)=g_{0}\alpha_{0},\ \ \ \ \dot{z}=\frac{1}{2}\mu^{T}\mathbb{I}^{-1}\mu-m\mathbf{g}l\langle\alpha,\boldsymbol{\chi}\rangle-\gamma z.\] ## 7 Conclusions and Future Work In this paper, we have established a geometric framework to study reduced contact dynamics on Lie groups. In future work we wish to employ this framework to study * the stability of reduced contact systems, * the analogous notion of conserved quantities in symplectic dynamics, * contact structure-preserving discretization schemes of contact dynamics on Lie groups and semi-direct products. Discretization schemes complying with the reduction process could also be used to find and numerically implement optimal control policies in systems with dissipation. ## Acknowledgments The authors acknowledge financial support from Grant PID2019-106715GB-C21 funded by MCIN/AEI/ 10.13039/501100011033. J.C. Marrero and E. Padron acknowledge financial support from the Spanish Ministry of Science and Innovation under grant PGC2018-098265-B-C32.
2305.06666
Diagnosis of Fast Electron Transport by Coherent Transition Radiation
Transport of fast electron in overdense plasmas is of key importance in high energy density physics. However, it is challenging to diagnose the fast electron transport in experiments. In this article, we study coherent transition radiation (CTR) generated by fast electrons on the back surface of the target by using 2D and 3D first-principle particle-in-cell (PIC) simulations. In our simulations, aluminium target of 2.7 g/cc is simulated in two different situations by using a newly developed high order implicit PIC code. Comparing realistic simulations containing collision and ionization effects, artificial simulations without taking collision and ionization effects into account significantly underestimate the energy loss of electron beam when transporting in the target, which fail to describe the complete characteristics of CTR produced by electron beam on the back surface of the target. Realistic simulations indicate the diameter of CTR increases when the thickness of the target is increased. This is attributed to synergetic energy losses of high flux fast electrons due to Ohm heatings and colliding drags, which appear quite significant even when the thickness of the solid target only differs by micrometers. Especially, when the diagnosing position is fixed, we find that the intensity distribution of the CTR is also a function of time, with the diameter increased with time. As the diameter of CTR is related to the speed of electrons passing through the back surface of the target, our finding may be used as a new tool to diagnose the electron energy spectra near the surface of solid density plasmas.
Yangchun Liu, Xiaochuan Ning, Dong Wu, Tianyi Liang, Peng Liu, Shujun Liu, Xu Liu, Zhengmao Sheng, Wei Hong, Yuqiu Gu, Xiantu He
2023-05-11T09:05:44Z
http://arxiv.org/abs/2305.06666v1
# Diagnosis of Fast Electron Transport by Coherent Transition Radiation ###### Abstract Transport of fast electron in overdense plasmas is of key importance in high energy density physics. However, it is challenging to diagnose the fast electron transport in experiments. In this article, we study coherent transition radiation (CTR) generated by fast electrons on the back surface of the target by using 2D and 3D first-principle particle-in-cell (PIC) simulations. In our simulations, aluminium target of 2.7 g/cc is simulated in two different situations by using a newly developed high order implicit PIC code. Comparing realistic simulations containing collision and ionization effects, artificial simulations without taking collision and ionization effects into account significantly underestimate the energy loss of electron beam when transporting in the target, which fail to describe the complete characteristics of CTR produced by electron beam on the back surface of the target. Realistic simulations indicate the diameter of CTR increases when the thickness of the target is increased. This is attributed to synergetic energy losses of high flux fast electrons due to Ohm heatings and colliding drags, which appear quite significant even when the thickness of the solid target only differs by micrometers. Especially, when the diagnosing position is fixed, we find that the intensity distribution of the CTR is also a function of time, with the diameter increased with time. As the diameter of CTR is related to the speed of electrons passing through the back surface of the target, our finding may be used as a new tool to diagnose the electron energy spectra near the surface of solid density plasmas. + Footnote †: : _New J. Phys._ _Keywords_: coherent transition radiation, fast electron transport, laser solid interaction, particle-in-cell simulation, terahertz radiation ## 1 Introduction With the invention of chirped pulse amplification technology [1], the laser intensity increased by six orders of magnitude, and the research work in the field of high-energy density physics has been effectively promoted. When the laser intensity is higher than \(10^{13}\) W/cm\({}^{2}\), the laser will ionize the target material rapidly by multiphoton ionization, tunneling ionization and so on, forming plasma composed of electrons and ions. When the laser intensity reaches \(10^{18}\) W/cm\({}^{2}\), it will cause relativistic oscillation of electrons in the laser field and heat the temperature of electrons to more than 1 MeV. Such laser can also be used to accelerate high-energy charged particles, produce ultra-short and ultra-strong gamma rays and even positive and negative electron pairs. One of the key subjects in high energy density physics is the transport of fast electrons. It has many potential applications in various fields, such as laser accelerators [2, 3], fast ignition [4, 5], positron-electron plasmas [6, 7] and terahertz source [8, 9]. In experiments, \(K\alpha\), extreme ultraviolet (XUV) emission [10, 11], shadowgraphy [12, 13] and optical emission [14, 15] are usually used to study fast electron transport. In particular, coherent transition radiation (CTR) is becoming an effective tool to diagnose the fast electron population, especially for those electrons micro-bunched in time [14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Transition radiation is the radiation emitted by charged particles passing through the interface between two media with different dielectric constants. 
The electron beam is periodically produced by different laser heating mechanisms such as vacuum heating, resonance absorption and \(\mathbf{J}\times\mathbf{B}\) heating. Therefore, the transition radiation generated by such electron beam through the interface is easily coherent. It makes the CTR spectra peak at \(n\omega_{0}\)[24], where \(n\) is a positive integer and \(\omega_{0}\) is laser frequency. We can infer the main laser heating mechanism of electrons by diagnosing CTR. In addition, CTR is also a potential scheme for intense terahertz radiation sources [8]. In this article, we study the CTR generated by fast electrons on the back surface of the target with 2D and 3D particle-in-cell (PIC) simulations. Realistic simulations containing collision and ionization effects indicate the diameter of CTR increases when the thickness of the target is increased. This is attributed to synergetic energy losses of high flux fast electrons due to Ohm heatings and colliding drags [25, 26] which appear quite significant even when the thickness of the solid target only differs by micrometers. Especially, when the diagnosing position is fixed, we find that the intensity distribution of the CTR is also a function of time, with the diameter increased with time. As the diameter of CTR is related to the speed of electrons passing through the back surface of the target, our finding may be used as a new tool to diagnose the electron energy spectra near the surface of solid density plasmas. Meanwhile, in order to understand the role played by collision and ionization physics, artificial simulations without taking into collision and ionization effects are also performed. When comparing with realistic simulations, artificial simulations significantly underestimate the energy loss of electron beam when transporting in the target, which unable to describe the complete characteristics of CTR produced by electron beam on the back surface of the target. ## 2 Theoretical Model Here, we need to emphasize several properties of CTR. The first characteristic of CTR is polarization. The polarization characteristic of CTR is called radial polarization in optics [27]. The polarization component of a certain CTR ray lies in the plane formed by the ray and the main direction. As shown in figure 1(a), the polarization direction of ray \(OA\) is in plane \(OO^{\prime}A\), where \(OO^{\prime}\) is the main direction of radiation. That is to say, for the electron beam vertically passing through the interface, the projection of radiation electric field on \(XOY\) plane is along the radial direction, as shown in figure 1(b). For the electron beam obliquely passing through the interface, CTR is not completely radially polarized, as shown in figure 1(c). This electric field not only has components parallel to the radial direction, but also has components perpendicular to the radial direction, which was observed by Happek et al. [28], the radiation angle distribution is similar, but the intensity is not completely symmetrical [29]. Figure 1: (a) Schematic diagram of PIC simulation. (b), (c) is the polarization characteristics of CTR for different electron beams vertically and obliquely passing through the interfaces, respectively. The second is characteristic of the angular distribution. 
Both CTR and incoherent transition radiation (ITR) follow the same characteristic angular distribution, which satisfies the formula described in [24], \[\frac{\mathrm{d}\varepsilon_{\mathrm{single}}}{\mathrm{d}\omega\mathrm{d}\Omega}=\frac{e^{2}}{\pi^{2}c}\frac{\beta^{2}\cos\Theta\left[\sin\theta-\beta\sin\Theta\cos\left(\phi-\Phi\right)\right]^{2}}{\left[\left(1-\beta\sin\theta\sin\Theta\cos\left(\phi-\Phi\right)\right)^{2}-\beta^{2}\cos^{2}\theta\cos^{2}\Theta\right]^{2}}, \tag{1}\] where \((\theta,\phi)\) and \((\Theta,\Phi)\) are the polar and azimuthal angles (measured from the Z and X axes) of the radiation direction and of the electron beam direction, respectively. As shown in figure 2, when an electron beam vertically passes through the interface, the angular distribution of the transition radiation presents a conical distribution. The intensity of transition radiation in the direction of electron beam exit is zero, and there is a peak at a certain angle deviating from the direction of electron beam exit. This angle satisfies \(\theta_{\mathrm{max}}=\arcsin(\pm((\beta-1/\beta)^{2}+(1-\beta^{2}))^{1/2})\) and decreases with the increase of the electron beam energy [24]. That is to say, the faster the electrons passing through the rear surface of the target, the smaller the cone angle. In the relativistic limit, this angle is approximately equal to \(1/\gamma\). Figure 2: Normalized intensity distribution of CTR when an electron beam vertically passes through the interface. Different colors correspond to different electron energies, \(\beta=v/c\). The third characteristic of CTR is reflected in the spectrum. Because of the periodic laser heating mechanisms, the transition radiation is always coherent. Resonance absorption and vacuum heating can heat and produce an electron bunch once per laser cycle. This makes the transition radiation coherently superimposed at integer multiples of the laser frequency. \(\mathbf{J}\times\mathbf{B}\) heating can heat electrons twice in one laser cycle. This makes the transition radiation coherently superimposed at even multiples of the laser frequency. Under the combined action of the various heating mechanisms, the spectrum of transition radiation will peak at \(n\omega_{0}\), and the intensity will decrease exponentially with the increase of \(n\) [24], where \(n\) is a positive integer and \(\omega_{0}\) is the laser frequency. Besides, the transition radiation in the low frequency range is always coherent.
This can be seen from a simple model in [24], \[\frac{\mathrm{d}^{2}\varepsilon_{\mathrm{CTR}}}{\mathrm{d}\omega\mathrm{d} \Omega}=\left(N-1\right)\left|\tilde{n}\left(\omega,\mathbf{q}\right)\right|^{2} \frac{\mathrm{d}^{2}\varepsilon_{\mathrm{ITR}}}{\mathrm{d}\omega\mathrm{d} \Omega}, \tag{2}\] with \(\tilde{n}\left(\omega,\mathbf{q}\right)=\tilde{n}_{\ell}\left(\omega\right)\tilde {n}_{\perp}\left(q\right)\), \[\tilde{n}_{\perp}\left(q\right)=\exp\left(-q^{2}a^{2}/2\right)=\exp\left(- \frac{\omega^{2}a^{2}}{2c^{2}}\sin^{2}\theta\right) \tag{3}\] and \[\tilde{n}_{\ell}\left(\omega\right) = \frac{1}{1+\Delta\exp\left(-\omega_{0}^{2}\tau_{0}^{2}/2\right)}\times\] \[\left\{\exp\left(-\frac{\omega^{2}\tau_{0}^{2}}{2}\right)\,+ \frac{\Delta}{2}\exp\left(-\frac{\left(\omega-\omega_{0}\right)^{2}\tau_{0}^{ 2}}{2}\right)\right\},\] where \(\mathrm{d}^{2}\varepsilon_{\mathrm{ITR}}/\mathrm{d}\omega\mathrm{d}\Omega\) is the ITR unrelated to \(\omega\), \(N\) represents the number of electrons passing through the interface, \(\tilde{n}\left(\omega,\mathbf{q}\right)\) is Fourier transform of electron distribution in which \(\mathbf{q}\) is the tangential radiation wave vector, \(a\) is the beam radius, \(\Delta\) is the microbunching amplitude, \(\tau_{0}\) is the duration of electron pulse and \(\omega_{0}\) is the microbunching frequency in the beam. According to the above formula, transition radiation is always coherent for \(\omega\leqslant\tau_{0}^{-1}\). This makes CTR have high intensity in terahertz range [8]. Besides, in the study of Liu et al. and Batani et al., it is found that the intensity relationship between CTR and blackbody radiation is related to the thickness of the target [30, 31]. In a thin target, the optical coherent transition radiation has the same order of magnitude as the blackbody radiation, and may even be smaller than the blackbody radiation, but the CTR in terahertz range is stronger than that in blackbody. However, as the thickness of the target increases, the intensity of blackbody radiation decreases faster, so the optical coherent transition radiation dominates in thick targets. Because of the special polarization characteristics and angular distribution characteristics of CTR, the CTR for complete diagnosis should be in a ring shape. As shown by Pakluea' s plot of CTR intensity distribution according to theoretical formula [29]. ## 3 Simulation Result ### Realistic simulations containing collision and ionization effects The simulations is carried out with the PIC code LAPINS developed by one of the authors. LAPINS is an implicit PIC code [32] which includes collision [33], ionization [34, 35], radiation [36], strong field QED [37] and nuclear reaction [38]. Because of the amount of calculation, we decided to use a thin target for simulation and diagnose the terahertz radiation, so as to obtain the CTR distribution of 2D and 3D simulations. In our 2D simulations, the target was modeled as a uniform aluminium (Al) slab with different thicknesses. The density of the target is 2.7 g/cc which is the density of solid aluminum. The initial temperature of the target is set to room temperature. The laser wavelength is 1 \(\mu\)m and laser radius is 3 \(\mu\)m. The laser is vertically incident on the target and and the laser is polarized along the y direction. The laser intensity is \(3.425\times 10^{19}\) W/cm\({}^{2}\) and the duration of the laser pulse is 10 \(T_{0}\), where \(T_{0}=\lambda/c\) is the normalized time unit. 
The 2D simulations were carried out in a Z-Y Cartesian geometry which is parallel to the propagation direction of the laser. Absorbing boundary conditions for fields and particles are set in the Y and Z directions. The laser parameters of our 3D simulation are exactly the same as those of the 2D simulations. We selected a target with a thickness of 2 \(\mu\)m for the 3D simulation. In particular, in order to be close to real situations, collision and ionization effects are both included in all of the simulations displayed in this subsection. The Monte Carlo collision model in LAPINS includes binary collisions between electron-electron, electron-ion, and ion-ion [33]. Besides, the ionization model in LAPINS includes collision ionization, electron-ion recombination and ionization potential depression. This model can not only be applied for plasmas near thermal equilibrium, but can also be used in relaxation of ionization dynamics [34]. After taking the collision effect and ionization effect into account, our PIC simulation is very close to the real situation. Figure 3: Distribution of CTR (Upper right) and distribution of electron density in different positions of target for a realistic 3D PIC simulation containing collision and ionization effects (Bottom). Same colorbar is used for electron density. Figure 3(bottom) shows the density distribution of electrons at different positions in the 3D simulation, where \(z=1.0\)\(\mu\)m and \(z=3.0\)\(\mu\)m are the front surface and the back surface of the target, respectively, and the same colorbar is used between the front surface and the back surface of the target. We can see that except on the front surface of the target, the electron distribution in other positions is not looped, and there is no ring formation in the electron beam transport process. The reason for the annular distribution of electron density on the front surface is that the target is deformed by the incident laser. Figure 3(upper right) shows the annular intensity distribution of CTR at the plane 0.5 \(\mu\)m behind the target and perpendicular to the laser propagation direction in the 3D simulation. This is obtained by first taking the Fourier transformation of the combined electric field strength in the X and Y directions at the plane 0.5 \(\mu\)m behind the target and then picking up the low frequency limit. This plot shows that the CTR is indeed annular. The spectrum of CTR is shown in figure 4(a). It is the spectrum distribution of CTR for the 2D simulation at 2 \(\mu\)m behind the target. We diagnose the electric field in the Y direction along a straight line parallel to the Y axis and do a Fourier transform on it to get the spectrum. Figure 4: 2D realistic PIC simulation results including collision and ionization effects. (a) Spectrum of CTR for 2 \(\mu\)m target. (b) CTR intensity distribution, different colors represent targets with different thicknesses. (c) Electrons phase space distribution with the thickness of target of 2 \(\mu\)m at 20 \(T_{0}\), where \(z=1\)\(\mu\)m is the front surface of the target, and \(z=3\)\(\mu\)m is the back surface of the target. (d) Energy spectra of electrons. Almost no electrons reach the rear surface of the target at 5 \(T_{0}\). As we can see, there is a peak at low frequency and the optical transition radiation is difficult to distinguish. Therefore, its intensity distribution and spectrum characteristics are consistent with the theory. Figure 4(b) shows the intensity distribution of CTR for targets with different thicknesses.
As we can see, as the thickness of the target increases, the intensity of CTR decreases, and the opening angle of the cone becomes larger and larger. This result can be well understood if the energy loss of fast electrons in the target is taken into account, which results in different speeds of the electron beam passing through the back surface of targets with different thicknesses. As our 2D simulations are 2D in space but 3D in velocity space, and the entire collision and ionization physics is included, this result can be well extended to full 3D situations. To explain more precisely why the radius of the ring increases with the thickness of the target, in figure 4(c) we have drawn the phase space distribution of electrons. We can see that the electrons in the target undergo a deceleration process. Moreover, there will be a certain deceleration after passing through the sheath field generated by the front surface. In order to see the deceleration effect of the target on electrons more clearly, we show the spectra of electrons in figure 4(d). The orange and purple lines represent the kinetic energy distribution of electrons in the 2 \(\mu\)m target and the 5 \(\mu\)m target at 5 \(T_{0}\), respectively. At this time, almost no electrons reach the rear surface of the target. We can see that the distribution of electrons produced by targets with different thicknesses is almost the same. The blue and red lines represent the kinetic energy distribution of electrons passing through the back surfaces of the 2 \(\mu\)m and 5 \(\mu\)m targets at 20 \(T_{0}\), respectively. From this, we can see that the electrons passing through the thick target are obviously fewer than those passing through the thin target, because the thicker the target, the greater the kinetic energy loss of electrons. This is why the thicker the target, the larger the ring of CTR diagnosed. The research of A. J. Kemp et al. and M. Sherlock et al. shows that the energy loss of the electron beam in the high density target may be due to resistive heating, the collision effect, and the interaction between waves and plasma [25, 26]. In addition, because of the relatively low frequency of the CTR in our simulation, we can directly diagnose the electric field to approximately describe the behavior of the CTR. By diagnosing the distribution of the electric fields in the X and Y directions at different times, we found that the circle formed by the electric field increases with time, as shown in figure 5. Panels (a), (b), (c) and (d) of figure 5 represent the total electric field intensity distribution in the X and Y directions at 1 \(\mu\)m behind the target at 10 \(T_{0}\), 11 \(T_{0}\), 12 \(T_{0}\) and 13 \(T_{0}\) in the 3D PIC simulation, respectively. Figure 5: Distribution of electric field in 1 \(\mu\)m behind target at (a) 10 \(T_{0}\), (b) 11 \(T_{0}\), (c) 12 \(T_{0}\), (d) 13 \(T_{0}\) for a 3D realistic PIC simulation containing collision and ionization effects. Same colorbar is used. That is reasonable, because the faster electrons pass through the back surface of the target preferentially, and the cone angle of the radiation field formed by them is smaller, while the slower electrons pass through the back surface of the target later, and the cone angle formed by them is larger; therefore, the ring of the radiation field becomes larger and larger with time.
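The following is a rough kinematic illustration of how the ring size encodes the electron energy (our own sketch with illustrative parameter values; it is not taken from the paper's simulations). At normal incidence the angular distribution of Eq. (1) peaks at \(\theta_{\max}=\arcsin(1/(\gamma\beta))\) (the expression for \(\theta_{\mathrm{max}}\) quoted in Section 2 simplifies to this), and if the ring radius at a diagnostic plane a distance \(d\) behind the rear surface is approximated by \(r\approx d\tan\theta_{\max}\), then electrons that cross the rear surface with lower energy both arrive later (after traversing a target of thickness \(L\) at speed \(\beta c\)) and produce a larger ring.

```python
import numpy as np

C = 3e8          # speed of light [m/s]
D = 1.0e-6       # diagnostic plane 1 micron behind the rear surface [m] (illustrative)
L = 2.0e-6       # assumed target thickness [m] (illustrative)
MEC2 = 0.511     # electron rest energy [MeV]

print("E_k [MeV]   theta_max [deg]   transit time [fs]   ring radius [um]")
for kinetic_mev in (5.0, 1.0, 0.5, 0.3):
    gamma = 1.0 + kinetic_mev / MEC2
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    theta_max = np.arcsin(1.0 / (gamma * beta))   # peak emission angle at normal incidence
    transit_fs = L / (beta * C) * 1e15            # time needed to cross the target
    radius_um = D * np.tan(theta_max) * 1e6       # ring radius at the diagnostic plane
    print(f"{kinetic_mev:8.1f}   {np.degrees(theta_max):14.2f}   "
          f"{transit_fs:16.2f}   {radius_um:15.3f}")
```

This is only a geometric estimate, but it reproduces the monotonic trend described above: faster electrons arrive earlier and give smaller rings, while slower electrons arrive later and give larger rings.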
If we can measure the change of the angle of the CTR with high time resolution, we can infer the energy spectra of the electron beam near the surface of the target. ### Artificial simulations without taking collision and ionization effects into account To understand the role played by collision and ionization physics, artificial simulations without these effects are performed, as in most of the previous works. The ionization state of the target in the artificial simulations is set to trivalent. Figure 6 shows the 2D artificial PIC simulation results without taking collision and ionization into account. Comparing figure 4(a) with figure 6(a), we can see that the spectrum shapes of CTR are the same for the realistic simulations and the artificial simulations. However, the intensity in the artificial simulations is higher than in the realistic simulations. Meanwhile, in figure 6(b), the opening angles of the CTR remain almost unchanged in the artificial simulations. This is because electrons are transported in the target with little energy loss when the collision effect is not taken into account. This weakens the dependence of the energy loss of the electron beam on the thickness of the target. This can also be seen from figures 6(c) and (d). Besides, the intensity of CTR decreases when the thickness of the target is increased. This is because increasing the thickness of the target makes the electron transport distance longer, and the number of electrons passing through the back surface of the target is smaller in the same amount of time. Figure 6(c) shows the electron phase space distribution for the 2 \(\mu m\) target at 20 \(T_{0}\). We can see that the velocity of electrons is evenly distributed along the radial direction in the artificial simulation, and there are also some electrons with large velocity. This also shows that the electron beam is transported in the target with little energy loss. The transport distance required for the thicker target is longer. For thick targets, more electrons are still inside the target, resulting in the lower CTR intensity of the thick target in figure 6(b). In figure 6(d), we show the spectra of electrons. The orange and purple lines represent the kinetic energy distribution of electrons in the 2 \(\mu\)m target and the 5 \(\mu\)m target at 5 \(T_{0}\), respectively. As we can see, for targets with different thicknesses, the energy spectra of electrons are almost identical, which reflects that the electron beam is transported in the target with little energy loss. Therefore, the opening angles of the CTR in figure 6(b) remain unchanged. Figure 6: 2D artificial PIC simulation results without taking collision and ionization effects into account. (a) Spectrum of CTR for 2 \(\mu\)m target. (b) CTR intensity distribution, different colors represent targets with different thicknesses. (c) Electrons phase space distribution with the thickness of target of 2 \(\mu\)m at 20 \(T_{0}\), where \(z=1\)\(\mu\)m is the front surface of the target, and \(z=3\)\(\mu\)m is the back surface of the target. (d) Energy spectra of electrons. The 3D artificial PIC simulation results without taking collision and ionization into account are shown in figure 7. Figure 7 shows the total electric field intensity distribution
From the results, we can also conclude that the spatial scale of CTR will increase with time. That's because the faster electrons pass through the back surface of the target preferentially. However, due to the uniform distribution of electrons in the phase space, high-energy electrons pass through the back surface of the target continuously. This makes the CTR in small opening angle region with high intensity which has been marked by a red ring in figure 7 always exist. Comparing the realistic simulations results, we find that artificial simulations can also reflect the relationship between the diameter of CTR and the speed of electrons. However, ignoring the collision effect significantly underestimate the energy loss of electron beam when transporting in the target, which fail to describe the complete characteristics of CTR produced by electron beam on the back surface of the target. Figure 7: Distribution of electric field in 1 \(\mu\)m behind target at (a) 13 \(T_{0}\), (b) 14 \(T_{0}\), (c) 15 \(T_{0}\), (d) 16 \(T_{0}\) for a 3D artificial PIC simulation without taking collision and ionization into account. The area surrounded by the red ring in the figure is the area with high strength. The red dotted line is to indicate the range of the ring. Same colorbar is used. ## 4 Discussion and conclusion Most experiments failed to diagnose the annular CTR. However, in the experiment of Pakluea [29], they use an electron beam with an energy of MeV to irradiate the Al target, and diagnose the forward CTR. From their experimental results, we can see that when the CCD is placed on the focal plane of the focusing lens, circular CTR is diagnosed, while annular CTR is diagnosed at the position deviating from the focal plane. We can easily understand this result from the imaging theory of convex lens. When the image is real and the CCD is placed at the image distance, the shape and size of CTR at the back surface of the target can be obtained according to the imaging law. This result confirms CTR can be measured experimentally. However in most experiments, CCD is placed at the focus of convex lens [31, 39], circular CTR is therefore diagnosed. Full PIC simulation of solid density plasma is extremely difficult. This is because, in order to reduce numerical heating [32, 36, 40, 41, 42] and suppress numerical instability, the grid size for general explicit PIC codes is of Debye length, which makes PIC simulation of sizeable solid density plasma almost impossible. In LAPINS code, a fourth-order spatial difference scheme is combined with an implicit scheme for temporal stepping in solving electromagnetic fields. This new scheme [32] can completely remove numerical self-heating and significantly reduce the simulation burden by using coarse simulation grids when simulating solid-density plasmas. This code enables us to calculate coupled atomic and plasma processes for intense laser-solid interaction in a more realistic way than previous codes. Within the simulations, the ionization charge state and conductivity (or resistivity) of the target can self-consistently evolve in a precise manner according to the local plasma and electromagnetic field conditions. Different types of materials can now be modeled, with account taken of their intrinsic atomic properties. However one needs to note, although the simulation burden is significantly reduced, our 3D PIC simulation still takes more than 170,000 cpu-hours. This is an unprecedented 3D PIC simulation. 
Our work figures out the relationship between CTR and electron transport properties in solid. From our simulation results, we can see that the annular distribution characteristics of CTR are closely related to the properties of electron beams. This result provides a scheme for diagnosing electron transport properties by CTR. It can be used to study the energy deposition process in inertial confinement fusion and the improvement of ion beam quality in laser driven ion acceleration. In conclusion, the characteristics of CTR caused by fast electrons generated by laser solid target interaction are studied through PIC simulation in two different situations. Comparing the realistic simulations, artificial simulations significantly underestimate the energy loss of electron beam when transporting in the target, which fail to describe the complete characteristics of CTR produced by electron beam on the back surface of the target. From the realistic simulation containing collision and ionization effects, it is found that the diameter of CTR increases when the thickness of the target is increased. This is due to the more energy loss of thick target to electron beam. In addition, we find that the radius of the ring formed by the CTR generated at fixed positions is increasing with time, which can used to infer the energy spectra of fast electrons. Our finding may be used as a new tool to diagnose the electron energy spectra near the surface of solid density plasmas. This work was supported by the National Natural Science Foundation of China (Grant No. 12075204, No. 11875235, and No. 61627901), the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDA250050500) and Shanghai Municipal Science and Technology Key Project (No. 22JC1401500). Dong Wu thanks the sponsorship from Yangyang Development Fund.
2301.05840
Filling with separating curves
A pair $(\alpha, \beta)$ of simple closed curves on a closed and orientable surface $S_g$ of genus $g$ is called a filling pair if the complement is a disjoint union of topological disks. If $\alpha$ is separating, then we call it as separating filling pair. In this article, we find a necessary and sufficient condition for the existence of a separating filling pair on $S_g$ with exactly two complementary disks. We study the combinatorics of the action of the mapping class group $\M$ on the set of such filling pairs. Furthermore, we construct a Morse function $\mathcal{F}_g$ on the moduli space $\mathcal{M}_g$ which, for a given hyperbolic surface $X$, outputs the length of shortest such filling pair with respect to the metric in $X$. We show that the cardinality of the set of global minima of the function $\mathcal{F}_g$ is the same as the number of $\M$-orbits of such filling pairs.
Bhola Nath Saha, Bidyut Sanki
2023-01-14T07:25:17Z
http://arxiv.org/abs/2301.05840v3
# Filling with separating curves ###### Abstract. A pair \((\alpha,\beta)\) of simple closed curves on a closed and orientable surface \(S_{g}\) of genus \(g\) is called a filling pair if the complement is a disjoint union of topological discs. If \(\alpha\) is separating, then we call it as separating filling pair. In this article, we find a necessary and sufficient condition for existence of a separating filling pair on \(S_{g}\) with exactly two complementary discs. We study the combinatorics of the action of the mapping class group \(\operatorname{Mod}(S_{g})\) on the set of such filling pairs. Furthermore, we construct a Morse function \(\mathcal{F}_{g}\) on the moduli space \(\mathcal{M}_{g}\) which, for a given hyperbolic space \(X\), outputs the length of shortest such filling pair with respect to the metric in \(X\). We show that the cardinality of the set of global minima of the function \(\mathcal{F}_{g}\) is same as the number of \(\operatorname{Mod}(S_{g})\)-orbits of such filling pair. ## 1. Introduction Let \(S_{g}\) be a closed orientable surface of genus \(g\). A _simple closed curve_\(\alpha\) on \(S_{g}\) is a continuous injective map \(\alpha:S^{1}\to S_{g}\), where \(S^{1}\) is the unit circle. For two simple closed curves \(\alpha\) and \(\beta\), the _geometric intersection number_ between them, denoted by \(i(\alpha,\beta)\), is defined by \[i(\alpha,\beta)=\min_{\alpha^{\prime}\sim\alpha}|\alpha^{\prime}\cap\beta|,\] where \(\sim\) stands for the free homotopy relation. Two simple closed curves \(\alpha\) and \(\beta\) are said to be in _minimal position_ if their geometric intersection number satisfies \(i(\alpha,\beta)=|\alpha\cap\beta|\) (for more details, we refer the readers to [6], Section 1.2.3). A set \(\Omega=\{\alpha_{1},\ldots,\alpha_{n}\}\) of simple closed curves on \(S_{g}\) is called a _filling set_ if for \(i\neq j\), the curves \(\alpha_{i}\) and \(\alpha_{j}\) are in minimal position and \(S_{g}\setminus(\cup_{i=1}^{n}\alpha_{i})\) is a disjoint union of topological disks. If the number of curves in a filling set is \(n=2\), then it is called as _filling pair_ and if the number of discs in \(S_{g}\setminus(\cup_{i=1}^{n}\alpha_{i})\) is minimum or equivalently the total number of intersection points between the curves is minimum, then the filling system is called minimally intersecting. The study of filling pairs has its importance for its various applications. For example, Thurston has used filling sets of size two to construct the most nontrivial mapping class group elements, so-called pseudo-Anosov homeomorphisms (for details see [6], Section 13.2.3). More recently, in [9], Penner has extended Thurston's construction to pairs of filling multi-curves as follows: given multi-curves \(A=\{a_{1},\ldots,a_{m}\}\) and \(B=\{b_{1},\ldots,b_{n}\}\), such that \(A\cup B\) fills the surface S. Then a homeomorphism \(f\) which is the product of \(T_{a_{i}}\) and \(T_{b_{j}}^{-1}\), where each \(a_{j}\) and \(b_{k}\) occur at least once, is a pseudo-Anosov homeomorphism. Note that \(T_{a_{i}}\) denotes the Dehn twist about the simple closed curve \(a_{i}\). Another motivation for studying filling sets is in the problem of construction of spine for the moduli space of \(S_{g}\). Thurston has proposed \(\chi_{g}\), the set of all closed hyperbolic surfaces of genus \(g\) having a filling set of systolic geodesics, as a candidate spine of moduli space of \(S_{g}\). He has given a sketch of a proof but unfortunately, that appears difficult to understand. 
Fillings of surfaces have been studied extensively by Aougab, Huang [2], Sanki [10], Jeffreys [8], Aougab, Menasco and Nieland [3] and many others [1], [11]. In [2], the authors have shown the existence of minimally intersecting filling pairs on \(S_{g}\) for \(g\geq 3\) and estimated bounds on the number of \(\operatorname{Mod}(S_{g})\)-orbits of such fillings. In [10], the author has generalised the result in [2], where he has shown the existence of filling pairs on \(S_{g}\) for \(g\geq 2\) with any given number of complementary regions. Luke Jeffreys further extended the study of minimally intersecting filling pairs to punctured surfaces; in [8], he has shown the existence of minimally intersecting filling pairs on punctured surfaces. Furthermore, in [3], the authors have considered those minimally intersecting filling pairs for which the geometric intersection number and the algebraic intersection number are the same, and they find an upper bound for the number of \(\operatorname{Mod}(S_{g})\)-orbits of such pairs. The simple closed curves on a surface are classified into two classes: separating and non-separating. A simple closed curve \(\alpha\) on \(S_{g}\) is called _separating_ if \(S_{g}\setminus\alpha\), the complement of \(\alpha\) in \(S_{g}\), is disconnected. Otherwise, it is called _non-separating_. To the best of our knowledge, all the works done so far on filling systems of closed surfaces are with non-separating closed curves. In this paper, we study the topology, geometry and combinatorics of filling pairs containing at least one separating curve. A filling pair \((\alpha,\beta)\), where at least one of \(\alpha\) and \(\beta\) is a separating simple closed curve, is called a _separating filling pair_. We take \(\alpha\) to be a separating curve in a separating filling pair \((\alpha,\beta)\). Suppose \((\alpha,\beta)\) is a filling pair on \(S_{g}\). We have the following natural question. _Question 1.1_.: How to determine whether a filling pair \((\alpha,\beta)\) is a separating filling pair? In our first result, we answer Question 1.1. In particular, we prove a characterisation theorem which gives a necessary and sufficient condition for a curve in a filling pair to be separating. The key ingredient used in the proof is topological graph theory. Given a filling pair \((\alpha,\beta)\), we associate a number \(b\), the number of topological disks in the complement. A simple Euler's characteristic argument implies that \(i(\alpha,\beta)=2g-2+b.\) If \((\alpha,\beta)\) is a separating filling pair, then we have \(b\geq 2\) and hence \(i(\alpha,\beta)\geq 2g\). If \(i(\alpha,\beta)=2g\), we call it a _minimally intersecting separating filling pair_ or simply a _minimal separating filling pair_. In [2], Aougab and Huang have shown that for \(g\geq 3\), there exists a filling pair on \(S_{g}\) with exactly one connected component in the complement. In the context of minimal separating filling pairs, we study an analogous question. We prove the existence of minimal separating filling pairs in the following theorem. **Theorem 1.2**.: _There exists a minimally intersecting separating filling pair on \(S_{g}\) if and only if \(g\) is even and \(g\geq 4\)._ The proof of Theorem 1.2 is constructive. We explicitly construct such a filling pair on \(S_{4}\), and for the existence on \(S_{g},g\geq 6\), we use induction on \(g\). The converse part is proved using the standard Euler's characteristic formula.
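For the reader's convenience, the Euler characteristic count behind the identity \(i(\alpha,\beta)=2g-2+b\) can be spelled out (an elementary verification added here; it is not part of the original argument). The graph \(\alpha\cup\beta\subset S_{g}\) is four-valent, so it has \(V=i(\alpha,\beta)\) vertices and \(E=2i(\alpha,\beta)\) edges, while the complementary discs contribute \(F=b\) faces; hence \[2-2g=\chi(S_{g})=V-E+F=i(\alpha,\beta)-2i(\alpha,\beta)+b=b-i(\alpha,\beta),\] which gives \(i(\alpha,\beta)=2g-2+b\). In particular, \(b\geq 2\) for a separating filling pair forces \(i(\alpha,\beta)\geq 2g\).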
The mapping class group \(\operatorname{Mod}(S_{g})\) of a surface \(S_{g}\) of genus \(g\) is the group of all orientation preserving self-homeomorphisms up to isotopy ([6], Chapter 2). The group \(\operatorname{Mod}(S_{g})\) acts on the set of minimally intersecting separating filling pairs as follows: for a minimally intersecting separating filling pair \((\alpha,\beta)\) and \(f\in\operatorname{Mod}(S_{g})\), \(f\cdot(\alpha,\beta)=(f(\alpha),f(\beta))\). We estimate the number of \(\operatorname{Mod}(S_{g})\)-orbits of this action in the following theorem. **Theorem 1.3**.: _If \(N(g)\) is the number of \(\operatorname{Mod}(S_{g})\)-orbits of minimally intersecting separating filling pairs, then_ \[\frac{\prod_{k=1}^{\frac{g-4}{2}}(3k+5)}{4\times(2g)^{2}\times\left(\frac{g-4}{2}\right)!}\leq N(g)\leq 2(2g-2)(2g-2)!.\] The moduli space \(\mathcal{M}_{g}\) of genus \(g\geq 2\) is the collection of all hyperbolic metrics on \(S_{g}\) up to isometry. As in [2], we construct a topological Morse function on \(\mathcal{M}_{g}\) using the length function of filling pairs. Consider the set \[\mathcal{C}_{g}=\{(\alpha,\beta):(\alpha,\beta)\text{ is a minimally intersecting separating filling pair on }S_{g}\}.\] For \(X\in\mathcal{M}_{g}\), the length \(l_{X}(\alpha,\beta)\) of \((\alpha,\beta)\in\mathcal{C}_{g}\) is defined as \(l_{X}(\alpha,\beta)=l_{X}(\alpha)+l_{X}(\beta)\), where \(l_{X}(\alpha)\) is the length of the geodesic in the free homotopy class of \(\alpha\) with respect to the hyperbolic metric \(X\). Now we define a function \(\mathcal{F}_{g}:\mathcal{M}_{g}\longrightarrow\mathbb{R}\) by \[\mathcal{F}_{g}(X)=\min\{l_{X}(\alpha,\beta)\,|\,(\alpha,\beta)\in\mathcal{C}_{g}\}.\] We show the following. **Theorem 1.4**.: \(\mathcal{F}_{g}\) _is a proper and topological Morse function. For any \(X\in\mathcal{M}_{g}\), we have_ \[\mathcal{F}_{g}(X)\geq m_{g},\] _where_ \[m_{g}=4g\times\cosh^{-1}\left(2\left[\cos\left(\frac{2\pi}{4g}\right)+\frac{1}{2}\right]\right)\] _denotes the perimeter of a regular right angled \(4g\)-gon._ _Suppose \(\mathcal{B}_{g}=\{X\in\mathcal{M}_{g}:\mathcal{F}_{g}(X)=m_{g}\}\). Then \(\mathcal{B}_{g}\) is a finite set and \(|\mathcal{B}_{g}|=N(g)\)._ ## 2. Fat Graphs In this section, we recall some definitions and notations on fat graphs which are essential in the subsequent sections. We begin by recalling the definition of a graph. The definition of graphs used here is not the standard one, but it is straightforward to see that it is equivalent to the standard one. Such a definition is convenient for defining fat graphs. **Definition 2.1**.: A graph \(G\) is a triple \(G=(E,\sim,\sigma_{1})\), where 1. \(E=\{\vec{e}_{1},\overline{e}_{1},\ldots,\vec{e}_{n},\overline{e}_{n}\}\) is a finite, non-empty set, called the set of directed edges. 2. \(\sim\) is an equivalence relation on \(E\). 3. \(\sigma_{1}\) is a fixed point free involution. The fixed-point free involution \(\sigma_{1}\) maps a directed edge to its reverse directed edge, i.e., \(\sigma_{1}(\vec{e})=\overline{e}\), for all \(\vec{e}\in E\). The equivalence relation \(\sim\) is defined by \(\vec{e}_{1}\sim\vec{e}_{2}\) if \(\vec{e}_{1}\) and \(\vec{e}_{2}\) have the same initial vertex. In ordinary language, \(V=E/\sim\) is the set of all vertices and \(E/\sigma_{1}\) is the set of un-directed edges of the graph. The number of edges incident at a vertex is called the _degree_ of the vertex (for more details, we refer to [10], section 2).
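The data in Definition 2.1, together with the rotation system \(\sigma_{0}\) introduced in the next definition, is easy to manipulate on a computer: directed edges can be labelled, \(\sigma_{0}\) and \(\sigma_{1}\) stored as permutations, and Lemma 2.6 below then reduces counting boundary components to counting cycles of \(\sigma_{0}^{-1}\sigma_{1}\). The following minimal Python sketch (a hypothetical encoding added for illustration, using the three-edge fat graph of Example 2.5 below as input) shows this bookkeeping.

```python
def cycles(perm):
    """Decompose a permutation (given as a dict) into disjoint cycles."""
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        out.append(tuple(cyc))
    return out

# directed edges: 'ei' and its reverse 'Ei'
edges = ['e1', 'e2', 'e3', 'E1', 'E2', 'E3']

# sigma_1: fixed-point-free involution sending each edge to its reverse
sigma1 = {'e1': 'E1', 'E1': 'e1', 'e2': 'E2', 'E2': 'e2', 'e3': 'E3', 'E3': 'e3'}

# sigma_0: cyclic order of outgoing edges at the two vertices, (e1 e2 e3)(E1 E2 E3)
sigma0 = {'e1': 'e2', 'e2': 'e3', 'e3': 'e1', 'E1': 'E2', 'E2': 'E3', 'E3': 'E1'}
sigma0_inv = {v: k for k, v in sigma0.items()}

# boundary components correspond to the cycles of sigma_0^{-1} sigma_1 (Lemma 2.6)
boundary = cycles({e: sigma0_inv[sigma1[e]] for e in edges})
print(boundary)                 # a single cycle of length 6 -> one boundary component

V = len(cycles(sigma0))         # vertices are the cycles of sigma_0
E = len(edges) // 2             # un-directed edges
chi = V - E                     # Euler characteristic of the associated surface
genus = (2 - chi - len(boundary)) // 2
print(f"V={V}, E={E}, boundary components={len(boundary)}, genus={genus}")
```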
Now, we define fat graphs in the following. **Definition 2.2**.: A _fat graph_ is a quadruple \(\Gamma=(E,\sim,\sigma_{1},\sigma_{0})\), where 1. \(G=(E,\sim,\sigma_{1})\) is a graph. 2. \(\sigma_{0}\) is a permutation on E so that each cycle corresponds to a cyclic order on the set of oriented edges going out from a vertex. **Definition 2.3**.: A fat graph is called _decorated_ if degree of each vertex is even and at least \(4\). **Definition 2.4**.: A simple cycle in a decorated fat graph is called a _standard cycle_ if every two consecutive edges in the cycle are opposite to each other in the cyclic order on the set of edges incident at their common vertex. If a cycle is not standard, we call it as non-standard. **Surface associated to a fat graph.** Given a fat graph \(\Gamma=(E,\sim,\sigma_{1},\sigma_{0})\), we construct an oriented topological surface \(\Sigma(\Gamma)\) with boundary by following: Consider a closed disk corresponding to each vertex and a rectangle corresponding to each un-directed edge. Then identify the sides of the rectangles with the boundary of the discs according to the order of the edges incident to the vertex. See Figure 2.1 for a local picture. The boundary of a fat graph \(\Gamma\) is defined as the boundary of the surface \(\Sigma(\Gamma)\). _Example 2.5_.: Let us consider the fat graph \(\Gamma=(E,\sim,\sigma_{1},\sigma_{0})\), where * \(E=\{\vec{e}_{i},\vec{e}_{j}|i,j=1,2,3\}\), is the set of all directed edges and \(E_{1}=\{e_{1},e_{2},e_{3}\}\) is the set of all un-directed edges, where \(e_{i}=\{\vec{e}_{1},\vec{e}_{i}\},i=1,2,3\). * \(V=\{v_{1},v_{2}\}\), where \(v_{1}=\{\vec{e}_{1},\vec{e}_{2},\vec{e}_{2}\}\) and \(v_{2}=\{\vec{e}_{1},\vec{e}_{2},\vec{e}_{3}\}\), is the set of vertices. * \(\sigma_{1}(\vec{e}_{i})=\vec{e}_{i}\), \(i=1,2,3\), and * \(\sigma_{0}=(\vec{e}_{1},\vec{e}_{2},\vec{e}_{2})(\overline{e}_{1},\overline{e }_{2},\overline{e}_{3})\). The fat graph \(\Gamma\) and the associated surface \(\Sigma(\Gamma)\) is given in Figure 2.2. A simple Euler characteristic argument and topological classification of surfaces imply that \(\Sigma(\Gamma)\) is homeomorphic to a compact surface of genus one and with a single boundary component. We also observe that if the boundary of the surface is labelled with the convention as in Figure Figure 2.1. Local picture of the surface obtained from a fat graph. 2.2, the boundary corresponds to the cycles of \(\sigma_{0}^{-1}\sigma_{1}\) which is \((\vec{e}_{1},\vec{e}_{3},\vec{e}_{2},\vec{e}_{1},\vec{e}_{3},\vec{e}_{2})\). We generalize this observation in Lemma 2.6. **Lemma 2.6**.: _The number of boundary components of a fat graph \(\Gamma=(E,\sim,\sigma_{1},\sigma_{0})\) is same as the number of disjoint cycles in \(\sigma_{0}^{-1}\sigma_{1}\)._ Proof.: For proof see Lemma 2.3 of [10]. ## 3. Classification of curves on fat graphs Decorated fat graphs play an important role in our construction of fillings on surfaces. Such a graph can be written as an edge disjoint union of its standard cycles. The set of standard cycles corresponds to a set of simple closed curves that gives a filling. In this section, we classify the standard cycles of a decorated fat graph in Theorem 3.2. Before going to the theorem directly, let us consider the following motivating example. _Example 3.1_.: Consider the filling pair \((\alpha,\beta)\) on \(S_{2}\) as shown in Figure 3.1 and the corresponding fat graph \(\Gamma(\alpha,\beta)=(E,\sim,\sigma_{1},\sigma_{0})\) as described below (see Figure 3.2): 1. 
\(E=\{x_{i},x_{i}^{-1},y_{i},y_{i}^{-1}\mid i=1,\ldots,6\}\). 2. The equivalence classes of \(\sim\) are \[v_{1} =\{x_{1},y_{6}^{-1},x_{6}^{-1},y_{1}\},\] \[v_{2} =\{x_{2},y_{2}^{-1},x_{1}^{-1},y_{3}\},\] \[v_{3} =\{x_{3},y_{6},x_{2}^{-1},y_{5}^{-1}\},\] \[v_{4} =\{x_{4},y_{4},x_{3}^{-1},y_{3}^{-1}\},\] \[v_{5} =\{x_{5},y_{2},x_{4}^{-1},y_{1}^{-1}\}\text{ and }\] \[v_{6} =\{x_{6},y_{4}^{-1},x_{5}^{-1},y_{5}^{-1}\}.\] Figure 2.2. Example of fat graph and associated surface. 3. \(\sigma_{1}(x_{i})=x_{i}^{-1},\sigma_{1}(y_{i})=y_{i}^{-1},i=1,\ldots,6\). 4. \(\sigma_{0}=\prod\limits_{i=1}^{6}\sigma_{v_{i}}\), where \[\sigma_{v_{1}} =(x_{1},y_{6}^{-1},x_{6}^{-1},y_{1}),\] \[\sigma_{v_{2}} =(x_{2},y_{2}^{-1},x_{1}^{-1},y_{3}),\] \[\sigma_{v_{3}} =(x_{3},y_{6},x_{2}^{-1},y_{5}^{-1}),\] \[\sigma_{v_{4}} =(x_{4},y_{4},x_{3}^{-1},y_{3}^{-1}),\] \[\sigma_{v_{5}} =(x_{5},y_{2},x_{4}^{-1},y_{1}^{-1})\text{ and }\] \[\sigma_{v_{6}} =(x_{6},y_{4}^{-1},x_{5}^{-1},y_{5}^{-1}).\] In this graph, \(\alpha=(x_{1},\ldots,x_{6})\) and \(\beta=(y_{1},\ldots,y_{6})\) are the standard cycles. The curve \(\alpha\) is separating. We construct a \(6\times 4\) matrix \(M(\alpha,\beta)\) whose rows are the cycles of the fat graph structure which are arranged such that edges \(x_{i}\)'s occur in the first position. For instance, the first row of \(M(\alpha,\beta)\) is \((x_{1},y_{6}^{-1},x_{6}^{-1},y_{1})\). The matrix is shown in Figure 3. We call this matrix \(M(\alpha,\beta)\) as the _normal matrix_. We observe that for each \(i\), \(y_{i}\) and \(y_{i}^{-1}\) are in the same column. For any fat graph with two standard cycles, its normal matrix is similarly defined. Furthermore, it has all the properties as discussed in example 3.1. Now, we are ready to state and prove the classification theorem. **Theorem 3.2**.: _Let \(\Gamma=(E,\sim,\sigma_{1},\sigma_{0})\) be a fat graph with two standard cycles \(\alpha=(x_{1},\ldots,x_{n})\) and \(\beta=(y_{1},\ldots,y_{n})\). The cycle \(\alpha\) is separating if and only if \(y_{i}\) and \(y_{i}^{-1}\) are in the same column of the normal matrix \(M(\alpha,\beta)\), for every \(i=1,\ldots,n\)._ Proof.: Consider the surface \(\Sigma(\Gamma)\), associated to the fat graph \(\Gamma\). We construct a closed and oriented surface \(S\) from \(\Sigma(\Gamma)\) by attaching a disk along each boundary component. The simple closed curves on \(S\) corresponding to the standard cycles \(\alpha\) and \(\beta\) are again denoted by \(\alpha\) and \(\beta\) respectively. If we cut the surface along \((\alpha\cup\beta)\), then we get a finite number of polygons with labelled boundary corresponding to the cycles of the permutation \(\sigma_{0}^{-1}\sigma_{1}\). The curve \(\alpha\) is separating if and only if whenever the polygons are identified along the edges \(y_{i}\) and \(y_{i}^{-1}\), then the resulting surface is still disconnected. Figure 3.2. Fat graph corresponding to the filling pair \((\alpha,\beta)\). Figure 3.3. The normal matrix \(M(\alpha,\beta)\). (\(\Leftarrow\)) Assume that the normal matrix \(M(\alpha,\beta)\) is as in Theorem 3.2. We show that \(\alpha\) is separating. Consider the disjoint cycle representation of the permutation \(\sigma_{0}^{-1}\sigma_{1}\). We claim that the \(y\)-edges contained in each cycle are either from second or forth column of \(M(\alpha,\beta)\), but not from the both. The entries in each cycle are alternatively coming from \(x\)-edges and \(y\)-edges. 
Consider a generic cycle \((y_{i},x_{j},y_{t},\dots)\) of the permutation \(\sigma_{0}^{-1}\sigma_{1}\). To prove the claim, it suffices to show that \(y_{i}\) and \(y_{t}\) are in the same column. Suppose \(y_{i}\) is in the second column, then \(\sigma_{1}(y_{i})=y_{i}^{-1}\) is also in the second column, by assumption on matrix \(M(\alpha,\beta)\). Therefore, \(\sigma_{0}^{-1}\left(y_{i}^{-1}\right)=x_{j}\) is in first column and \(\sigma_{1}(x_{j})=x_{j}^{-1}\) in third column, by the properties of \(M(\alpha,\beta)\). Now, it follows that \(\sigma_{0}^{-1}(x_{j}^{-1})\) is in the second column (see Figure 3.4 and Figure 3.5). In case \(y_{i}\) is in forth column, a similar argument shows that \(y_{t}\) is also in forth column. Suppose \(D_{2}\) and \(D_{4}\) be the collection of the polygons whose y-edges are in second and forth column respectively. Then, both \(D_{2}\) and \(D_{4}\) are nonempty disjoint sets. Therefore, after identifying the polygons along the y-edges, we get at least two disconnected sets. Hence, \(\alpha\) is a separating curve. (\(\Rightarrow\)) The above process is reversible and this completes the proof. Figure 3.4. Normal matrix ## 4. Existence of minimal separating filling pair The goal of this section is to prove Theorem 1.2 which gives a necessary and sufficient condition for the existence of a minimally intersecting separating filling pair on \(S_{g}\). We prove Proposition 4.1 below which has a consequence giving the proof of only if part of Theorem 1.2. **Proposition 4.1**.: _Let \((\alpha,\beta)\) be a separating filling pair on \(S_{g}\) and \(S_{g}\setminus\alpha=S_{g_{1},1}\sqcup S_{g_{2},1}\), where \(g_{1}\) and \(g_{2}\) are the genera of \(S_{g_{1},1}\) and \(S_{g_{2},1}\) respectively. Then the number of connected components in \(S_{g_{1},1}\setminus\beta\) and \(S_{g_{2},1}\setminus\beta\) are equal if and only if \(g_{1}=g_{2}\)._ Proof.: (\(\Rightarrow\)) Assume that \(S_{g_{1},1}\setminus\beta\) and \(S_{g_{2},1}\setminus\beta\) both have same number of connected components, say n. The curve \(\alpha\) is separating implies that the geometric intersection number \(i(\alpha,\beta)\) is an even integer, say \(2v\). Now, the union \((\alpha\cup\beta)\) gives a cell decomposition of \(S_{g_{1},1}\) and \(S_{g_{2},1}\). In each cell decomposition, the number of \(0\)-cells is \(2v\), the number of \(1\)-cells is \(3v\) and the number of \(2\)-cells is \(n\). Euler's characteristic formula implies that \[2v-3v+n=2-2g_{1}-1\] \[\implies g_{1}=(1-n+v)/2.\] A similar calculation gives \(g_{2}=(1-n+v)/2.\) Thus, we conclude \(g_{1}=g_{2}\). (\(\Leftarrow\)) Suppose \(g_{1}=g_{2}\). Applying Euler's characteristic formula we have \(g_{1}=(1-n_{1}+v)/2\) and \(g_{2}=(1-n_{2}+v)/2\), where \(n_{i}\)'s are the number of components in \(S_{g_{i},1}\setminus\beta\), \(i=1,2\) and \(v=i(\alpha,\beta)/2\). Now, \(g_{1}=g_{2}\) implies that \(n_{1}=n_{2}\). This completes the proof. **Lemma 4.2**.: _There exists no minimally intersecting separating filling pair on \(S_{2}\)._ Proof.: The proof is by contradiction. Assume that there is a minimally intersecting separating filling pair \((\alpha,\beta)\) on \(S_{2}\). Euler's characteristic formula implies that the geometric Figure 3.5. Encircled elements lie on same cycle of \(\sigma_{0}^{-1}\sigma_{1}\). intersection number \(i(\alpha,\beta)\) between the curves \(\alpha\) and \(\beta\) is \(4\). 
The curves \(\alpha\) and \(\beta\) correspond to two standard cycles, each of length \(4\), in the decorated fat graph \(\Gamma(\alpha,\beta)\). We denote them by \(\alpha=(x_{1},x_{2},x_{3},x_{4})\) and \(\beta=(y_{1},y_{2},y_{3},y_{4})\) (see Figure 4.1). Now Theorem 3.2 implies that the edges \(y_{i}\) and \(y_{i}^{-1}\) are in the same column of the normal matrix \(M(\alpha,\beta)\). There is a unique such normal matrix up to relabelling (see Figure 4.1) and in the corresponding fat graph \(\Gamma(\alpha,\beta)\), there is a boundary component of length two which implies that the curves \(\alpha\) and \(\beta\) form a bigon on the surface \(S_{2}\). This contradicts that the curves \(\alpha\) and \(\beta\) are in minimal position. Proof of Theorem 1.2.: (\(\Rightarrow\)) Suppose \((\alpha,\beta)\) is a minimally intersecting separating filling pair on \(S_{g}\). Consider \(S_{g_{1},1},S_{g_{2},1},g_{1}\) and \(g_{2}\) are as in Proposition 4.1. Then the number of components in each of \(S_{g_{1},1}\setminus\beta\) and \(S_{g_{2},1}\setminus\beta\) is one. Therefore, by Proposition 4.1, we have \(g_{1}=g_{2}\) and this implies \(g=g_{1}+g_{2}\), an even integer. Finally, by Lemma 4.2, we have \(g\geq 4\). (\(\Leftarrow\)) Suppose \(g\geq 4\) is an even integer. We show that there is a minimal separating filling pair on \(S_{g}\). The proof is by induction on \(g\). For \(g=4\), the existence of minimally intersecting separating filing pair follows from Figure 4.2. Let \((\alpha_{g},\beta_{g})\) be a minimal separating filling pair on \(S_{g}\) (obtained by induction) and we prove the result for \(g+2\). Figure 4.1. Normal matrix \(M(\alpha,\beta)\) and the corresponding fat graph. For this, let us take a filling pair \((\alpha^{\prime},\beta^{\prime})\) as in Figure 4.3 on \(S_{2}\). Then \(S_{2}\setminus(\alpha\cup\beta)\) is a disjoint union of \(4\) disks as shown in Figure 4.4. The point A lies on all of the four polygons. Let us extract an open ball around A which contains no other intersection points. We extract another open ball around an intersection point of \(\alpha_{g}\) and \(\beta_{g}\) in a similar way. We attach these two surfaces along the boundary in such a way that the end points of the arc \(\alpha_{g}\) are identified with the end points of arc of \(\alpha^{\prime}\) and the same holds for the \(\beta\) arcs. Then we get a filling pair \((\alpha_{g+2},\beta_{g+2})\) on \(S_{g+2}\) with \(\alpha_{g+2}\) separating and with number of disks in the complements is two. These completes the proof. Figure 4.3. Filling pair on \(S_{2}\). ## 5. Counting of Mapping Class Group Orbits. The mapping class group \(\operatorname{Mod}(S_{g})\) acts on the set \(\mathcal{C}_{g}\) of all minimally intersecting separating filling pairs of \(S_{g}\) by following: \(f\cdot(\alpha,\beta)=(f(\alpha),f(\beta))\), where \(f\in\operatorname{Mod}(S_{g})\) and \((\alpha,\beta)\in\mathcal{C}_{g}\). In this section, we find an upper bound of the number of orbits of this action, in particular, we prove Theorem 1.3. ### Oriented filling pair and filling permutation Consider \((\alpha,\beta)\in\mathcal{C}_{g}\). By Euler characteristic formula, we have the geometric intersection number between \(\alpha\) and \(\beta\) is \(2g\). These intersection points decompose \(\alpha\) into \(2g\) sub-arcs. We choose an initial arc and an orientation on \(\alpha\). Now, we label the sub-arcs by \(\alpha_{1},\ldots,\alpha_{2g}\), in accordance with the orientation with \(\alpha_{1}\) is the initial arc. 
Similarly, we label the sub-arcs of \(\beta\) by \(\beta_{1},\ldots,\beta_{2g}\). A filling pair \((\alpha,\beta)\) with a chosen orientation on each of \(\alpha\) and \(\beta\) is called an _oriented filling pair_ and an oriented filling pair with a labelling, we call as a _labelled filling pair_. We define a permutation \(\phi^{(\alpha,\beta)}\) on \(\mathrm{N}_{g}=\{1,2,\ldots,8g\}\) corresponding to a labelled filling pair \((\alpha,\beta)\) as described below. Consider the ordered set \[A_{g}=\left\{\alpha_{1},\beta_{1},\ldots,\alpha_{2g},\beta_{2g},\alpha_{1}^{-1},\beta_{1}^{-1},\ldots,\alpha_{2g}^{-1},\beta_{2g}^{-1}\right\}.\] If we cut the surface \(S_{g}\) along \((\alpha\cup\beta)\), then we get two \(4g\)-gons whose sides are labelled by the members of \(A_{g}\). Now, we define, \[\phi^{(\alpha,\beta)}(i)=j;\] if the \(i^{\mathrm{th}}\) element of \(A_{g}\) is succeeded by the \(j^{\mathrm{th}}\) element of \(A_{g}\) along the clockwise direction of the \(4g\)-gons. _Example 5.1_.: Consider the labelled filling pair \((\alpha,\beta)\) as in Figure 4.2. The labelled polygons obtained by cutting the surface along \(\alpha\cup\beta\) are shown in Figure 5.1. Then, \[\phi^{(\alpha,\beta)} =c_{1}c_{2},\text{ where}\] \[c_{1} =(1,28,13,8,7,20,5,24,15,32,11,12,3,4,9,16)\text{ and}\] \[c_{2} =(17,2,19,14,25,6,29,21,18,31,22,23,26,27,30).\] Now, we study the properties of the permutation \(\phi^{(\alpha,\beta)}\) in the following proposition. **Proposition 5.2**.: _The permutation \(\phi^{(\alpha,\beta)}\) has the following properties._ 1. \(\phi^{(\alpha,\beta)}\) _is parity respecting and sends even entries to odds and vice-versa._ 2. \(\phi^{(\alpha,\beta)}\) _is a product of two disjoint_ \(4g\)_-cycles._ Figure 5.1. The labelled polygons _._ 3. _One cycle of_ \(\phi^{(\alpha,\beta)}\) _contains each element of_ \(\{1,3,\ldots,4g-1\}\) _and the other contains each element of_ \(\{4g+1,4g+3,\ldots,8g-1\}\)_._ 4. _If a cycle contains some even integer_ \(2i\) _then it contains_ \((2i+4g)(\mathrm{mod}\ 8g)\)_._ 5. _If a cycle contains an even integer of the form_ \((4k+2)\)_, then all the even integers in this cycle have this form. A similar statement is true for the integers of the form_ \(4k\)_._ 6. \(\phi^{(\alpha,\beta)}\) _satisfies the equation_ \(\phi^{(\alpha,\beta)}Q^{4g}\phi^{(\alpha,\beta)}=\tau\)_, where_ \[Q =(1,2,\ldots,8g)\text{ and }\] \[\tau =\tau_{1}\tau_{2}\tau_{3}\tau_{4},\text{ where }\] \[\tau_{1} =(1,3,\ldots,4g-1)\] \[\tau_{2} =(2,4,\ldots,4g)\] \[\tau_{3} =(8g-1,8g-3,\ldots,4g+1)\text{ and }\] \[\tau_{4} =(8g,8g-2,\ldots,4g+2).\] Proof.: Let \(P_{1}\) and \(P_{2}\) are the \(4g\)-gons obtained by cutting the surface \(S_{g}\) along \(\alpha\cup\beta\). 1. Among every two consecutive sides of the \(4g\)-gons \(P_{1}\) and \(P_{2}\), one comes from the \(\alpha\)-arcs and the other from the \(\beta\)-arcs, which implies that \(\phi^{(\alpha,\beta)}\) is pairing respecting and sends even entries to odds and vice-versa. 2. Each polygon corresponds to a cycle of \(\phi^{(\alpha,\beta)}\) and the converse is also true. Now, the statement follows from the minimality of the separating filling pair \((\alpha,\beta)\). 3. By consideration, the curve \(\alpha\) is separating, which implies that \(\alpha_{1},\ldots,\alpha_{2g}\) are in one polygon and \(\alpha_{1}^{-1},\ldots,\alpha_{2g}^{-1}\) are in the other polygon. 4. The proof follows from the fact that for each \(i\), the sides labelled by \(\beta_{i}\) and \(\beta_{i}^{-1}\) are in the same polygon. 5. 
The edges labelled by \(\beta_{i}\) and \(\beta_{i}^{-1}\), for \(i\in\{1,\dots,2g\}\) odd integers, are sides of one polygon and for even \(i\)'s, they are the sides of the other polygon (see Figure 5.1). Therefore the statement follows. 6. Let \(k\in\{1,2,\dots,8g\}\). We consider the following four cases to prove the statement. **Case 1.** Consider \(k\in\{1,2,\dots,4g\}\) is odd. Then the \(k\)-th element of \(A_{g}\) is \(\alpha_{i}\), where \(i=\frac{k+1}{2}\). We refer to Figure 5.3(a) for a local picture near the intersection point between the arcs \(\alpha_{i}\) and \(\alpha_{i+1}\). By Proposition 5.2, (4), the sides \(\beta_{j}\) and \(\beta_{j}^{-1}\) lie on the same polygon. We have \(\phi^{(\alpha,\beta)}(\alpha_{i})=\beta_{j}\), \(Q^{4g}(\beta_{j})=\beta_{j}^{-1}\) and \(\phi^{(\alpha,\beta)}(\beta_{j}^{-1})=\alpha_{i+1}\). Therefore, \(\phi^{(\alpha,\beta)}Q^{4g}\phi^{(\alpha,\beta)}(\alpha_{i})=\alpha_{i+1}\) (see Figure 5.3(b)). This shows that \(\phi^{(\alpha,\beta)}Q^{4g}\phi^{(\alpha,\beta)}(k)=\tau(k)\). **Case 2.** In this case, we consider \(k\in\{1,2,\dots,4g\}\) is even. Then the \(k\)-th element of \(A_{g}\) is \(\beta_{j}\), where \(j=\frac{k}{2}\). We refer to Figure 5.4(a) for a local picture near the intersection point between the arcs \(\beta_{j}\) and \(\beta_{j+1}\). We have \(\phi^{(\alpha,\beta)}(\beta_{j})=\alpha_{i}^{-1}\), \(Q^{4g}(\alpha_{i}^{-1})=\alpha_{i}\) and \(\phi^{(\alpha,\beta)}(\alpha_{i})=\beta_{j+1}\). Therefore, \(\phi^{(\alpha,\beta)}Q^{4g}\phi^{(\alpha,\beta)}(k)=\tau(k)\). **Case 3.** Consider \(k\in\{4g+1,4g+2,\dots,8g\}\) is odd. Then the \(k\)-th element of \(A_{g}\) is \(\alpha_{i}^{-1}\), where \(i=\frac{k-4g+1}{2}\). A similar argument as in Case 1 shows that \(\phi^{(\alpha,\beta)}Q^{4g}\phi^{(\alpha,\beta)}(k)=\tau(k)\) (see Figure 5.5). **Case 4.** Consider \(k\in\{4g+1,4g+2,\dots,8g\}\) is even. Then the \(k\)-th element of \(A_{g}\) is \(\beta_{j}^{-1}\) where \(j=\frac{k-4g}{2}\). A similar argument as in Case 2 shows that \(\phi^{(\alpha,\beta)}Q^{4g}\phi^{(\alpha,\beta)}(k)=\tau(k)\) (see Figure 5.6). Figure 5.4. (a) Local picture of \(\alpha_{i}\) on the surface. (b) Polygon containing \(\alpha_{i}\). Figure 5.5. (a) Local picture of \(\alpha_{i}\) on the surface. (b) Polygon containing \(\alpha_{i}\). Figure 5.6. (a) Local picture of \(\alpha_{i}\) on the surface. (b) Polygon containing \(\alpha_{i}\). **Note:** If a permutation \(\phi\in\Sigma_{8g}\) satisfies the properties \((1)-(6)\) of Proposition 5.2, then we call it a _filling permutation_. Given a minimally intersecting separating filling pair \((\alpha,\beta)\) on \(S_{g}\), we have \(\phi=\phi^{(\alpha,\beta)}\), a filling permutation. In Theorem 5.3, we study the converse part, in particular, we prove that every filling permutation is realized by a minimally intersecting separating filling pair. **Theorem 5.3**.: _Let \(\phi\in\sum_{8g}\) be a filling permutation. There exists a minimally intersecting separating filling pair \((\alpha,\beta)\) such that \(\phi=\phi^{(\alpha,\beta)}\)._ Proof.: Consider two disjoint \(4g\)-gons \(P_{1}\) and \(P_{2}\) and an ordered set \(A_{g}\) of \(8g\) symbols given by \[A_{g}=\left\{\alpha_{1},\beta_{1},\alpha_{2},\beta_{2},\ldots,\alpha_{2g}, \beta_{2g},\alpha_{1}^{-1},\beta_{1}^{-1},\ldots,\alpha_{2g}^{-1},\beta_{2g}^{ -1}\right\}.\] Suppose \(\phi=\phi_{1}\phi_{2}\), where \(\phi_{1},\phi_{2}\) are two disjoint cycles of length \(4g\). 
Now, we label the edges of \(P_{1}\) and \(P_{2}\) by elements of \(A_{g}\) as follows. Let \(\phi_{1}=(a_{1},\ldots,a_{4g})\). Choose an initial edge of \(P_{1}\) and choose clockwise orientation on \(P_{1}\). We label \(i^{\text{th}}\) edge of \(P_{1}\) by \({a_{i}}^{th}\) element of \(A_{g}\). Similarly, we label \(P_{2}\). Note that, the labelling of \(P_{1}\) and \(P_{2}\) are well defined up to cyclic order. The above labelling on the polygons gives a side paring on \(P_{1}\cup P_{2}\) and after identifying the sides pairwise, we obtain a closed, connected and orientable surface \(S\). Now, we compute the genus of the surface \(S\) using Euler's characteristic formula. By the construction of the surface \(S\), we have a cell decomposition, where the number of \(2\)-cells is \(2\) and the number of \(1\)-cells is \(4g\). Now, we calculate the number of \(0\)-cells. Let us begin with a vertex \(v_{1}\) of \(P_{1}\cup P_{2}\). Suppose \(\alpha_{i}\) and \(\beta_{j}\) are the incident edges at the vertex \(v_{1}\) [see Figure 5.7]. By Proposition 5.2, (4), the edge labelled by \(\beta_{j}^{-1}\) is in the same polygon. By Proposition 5.2, (6), the edge immediately succeeding \(\beta_{j}^{-1}\) is \(\alpha_{i+1}\). Therefore the vertex \(v_{1}\) is identified with the vertex \(v_{2}\) between \(\beta_{j}^{-1}\) and \(\alpha_{i+1}\). By a similar reasoning, we see that the vertex \(v_{2}\) is identified with the vertex \(v_{3}\) between the edges \(\alpha_{i+1}^{-1}\) and \(\beta_{j-1}^{-1}\) and the latter one is identified with the vertex \(v_{4}\) between the edges \(\beta_{j-1}\) and \(\alpha_{i}^{-1}\) and \(v_{4}\) is identified with \(v_{1}\) which completes the cycle. This shows that four vertices are identified to a single point. Therefore, the total number of \(0\)-cells is \(2g\). Therefore, the Euler characteristic of the surface is given by \[\chi(S)=2g-4g+2=2-2g.\] So, by classification theorem of surfaces, we have the genus of the surface to be \(g\). From the construction of the surface \(S\), it is straightforward to see that the edges \(\alpha_{1},\alpha_{2},\dots\), \(\alpha_{2g}\) and \(\beta_{1},\beta_{2},\dots,\beta_{2g}\) project to two simple closed curves on the surface \(S\), denoted by \(\alpha\) and \(\beta\) respectively. Therefore, we have \((\alpha,\beta)\) is a minimally intersecting separating filling pair of \(S\) with \(\alpha\) separating and \(\phi^{(\alpha,\beta)}=\phi\). Next, we study a necessary and sufficient condition for two minimally intersecting separating filling pairs are in the same mapping class group orbit. **Lemma 5.4**.: _Suppose \(\Gamma=(\alpha,\beta)\) and \(\tilde{\Gamma}=(\tilde{\alpha},\tilde{\beta})\) are two minimally intersecting filling pairs on \(S_{g}\) in the same \(\operatorname{Mod}(S_{g})\)-orbit. 
Then \(\phi^{(\alpha,\beta)}=\phi^{(\tilde{\alpha},\tilde{\beta})}\) modulo conjugation by permutations of the form_ \[\mu_{g}^{l}\kappa_{g}^{k}\delta_{g}^{j}\eta_{g}^{i},\quad\ l,i\in\{0,1\};\quad j,k\in\{0,1,\dots,2g-1\},\] _where,_ \[\mu_{g} =(2,4g+2)(4,4g+4)\dots(4g,8g),\] \[\kappa_{g} =(1,3,\dots,4g-1)(4g+1,4g+1,\dots,8g-1),\] \[\delta_{g} =(2,4,\dots,4g)(4g+2,4g+4,\dots,8g)\text{ and }\] \[\eta_{g} =(1,4g+1)(3,4g+3)\dots(4g-1,8g-1).\] Proof.: We consider labelling on \((\alpha,\beta)\) and \((\tilde{\alpha},\tilde{\beta})\) so that \(\alpha=(\alpha_{1},\dots,\alpha_{2g})\), \(\beta=(\beta_{1},\dots,\beta_{2g})\), \(\tilde{\alpha}=(\tilde{\alpha}_{1},\dots,\tilde{\alpha}_{2g})\) and \(\tilde{\beta}=(\tilde{\beta}_{1},\dots,\tilde{\beta}_{2g})\) (see Section 5.1). The given filling pairs are in Figure 5.7. Vertices that are identified to a single point. the same mapping class group orbit implies that there exists \(f\in\operatorname{Mod}(S_{g})\) such that \(f(\alpha,\beta)=(\tilde{\alpha},\tilde{\beta})\). Now, there are following four cases to consider. **Case 1.** Consider \(f(\alpha_{n})=\tilde{\alpha}_{n+k_{0}}\), for some \(0\leq k_{0}\leq 2g-1\) and \(f(\beta_{m})=\tilde{\beta}_{m}\), for all \(m,n=1,\ldots,2g\). In this case, \(\phi^{(\alpha,\beta)}\) and \(\phi^{(\tilde{\alpha},\tilde{\beta})}\) are conjugate by \(\kappa_{g}^{k_{0}}\). **Case 2.** Consider \(f(\beta_{m})=\tilde{\beta}_{m+j_{0}}\), for some \(0\leq j_{0}\leq 2g-1\) and \(f(\alpha_{n})=\tilde{\alpha}_{n}\) for all \(m,n=1,\ldots,2g\). Then \(\phi^{(\alpha,\beta)}\) and \(\phi^{(\tilde{\alpha},\tilde{\beta})}\) are conjugate by \(\delta_{g}^{j_{0}}\). **Case 3.** Let \(f\) preserves the orientation of \(\alpha\) but reverses that of \(\beta\). Then \(\phi^{(\alpha,\beta)}\) and \(\phi^{(\tilde{\alpha},\tilde{\beta})}\) are conjugate by \(\mu_{g}\). **Case 4.** Let \(f\) preserves the orientation of \(\beta\) but reverses that of \(\alpha\). Then \(\phi^{(\alpha,\beta)}\) and \(\phi^{(\tilde{\alpha},\tilde{\beta})}\) are conjugate by \(\eta_{g}\). The general case comprises by the above cases and hence we conclude that \(\phi^{(\alpha,\beta)}\) and \(\phi^{(\tilde{\alpha},\tilde{\beta})}\) are conjugate by \(\mu_{g}^{l}\kappa_{g}^{k}\delta_{g}^{j}\eta_{g}^{i}\) where \(l,i\in\{0,1\}\) and \(j,k\in\{0,1,\ldots,2g-1\}\). ### A lower bound of \(N(g)\) In this section, we prove the following inequality \[N(g)>\frac{\prod\limits_{k=0}^{\frac{g-4}{2}-1}(8+3k)}{4\times(2g)^{2}\times( \frac{g-4}{2})!}.\] Recall that in the proof of Theorem 1.2 (see Section 4), to construct a minimal filling pair on the surface \(S_{g+2}\) from that of \(S_{g}\), we have used a copy of a \(S_{2,1}\). We label the sub-arcs corresponding to \(\alpha^{\prime}\) and \(\beta^{\prime}\) by \(x_{1},\ldots,x_{6}\) and \(y_{1},\ldots,y_{6}\), respectively. We call this as \(Z\)-piece on which the points below are distinct: 1. initial point of \(y_{1}\), 2. initial point of \(x_{1}\), 3. end point of \(y_{6}\) and 4. end point of \(x_{6}\). For a Z-piece \(Z_{1}\), we define \(Z_{1}^{(\alpha)}\) to be the interior arcs of the separating arc of the Z-piece. Here \(Z_{1}^{(\alpha)}=\{x_{2},\ldots,x_{5}\}\). Similarly we define \(Z_{1}^{(\beta)}=\{y_{2},\ldots,y_{5}\}\). For two Z-pieces \(Z_{1}\) and \(Z_{2}\), we define \((Z_{1},Z_{2})^{\cap}=(Z_{1}^{(\alpha)}\cap Z_{2}^{(\alpha)})\cup(Z_{1}^{(\beta )}\cap Z_{2}^{(\beta)})\). By \(x_{k}(Z_{1})\), we mean the end point of the arc \(x_{k}\) in the Z-piece \(Z_{1}\). 
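Before turning to the counting argument, the two sides of Theorem 1.3 can be evaluated numerically for small even genus. The following sketch is only a convenience check with our own function names, not part of the proof; it prints the stated lower bound and the upper bound \(2(2g-2)(2g-2)!\).

```python
# Numerical evaluation of the bounds in Theorem 1.3.

from math import factorial, prod

def lower_bound(g: int) -> float:
    m = (g - 4) // 2
    return prod(8 + 3 * k for k in range(m)) / (4 * (2 * g) ** 2 * factorial(m))

def upper_bound(g: int) -> int:
    return 2 * (2 * g - 2) * factorial(2 * g - 2)

for g in (4, 6, 8, 10):
    print(f"g={g}: {lower_bound(g):.3g} <= N(g) <= {upper_bound(g):.3g}")
```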
**Lemma 5.5**.: _If \(Z_{1}\) and \(Z_{2}\) are two Z-pieces on \((\alpha,\beta)\) with \((Z_{1}^{(\alpha)}\cap Z_{2}^{(\alpha)})\neq\emptyset\), then \(Z_{1}=Z_{2}\)._ Proof.: Assume \(Z_{1}\) starts before \(Z_{2}\). **Case 1:** Consider \(x_{6}(Z_{1})=x_{3}(Z_{2})\). Then \(x_{4}*x_{5}*y_{4}^{-1}\) is a loop implies \(x_{1}*x_{2}*y_{5}^{-1}\) must be a loop which is not. **Case 2:** Consider \(x_{6}(Z_{1})=x_{2}(Z_{2})\). Then \(x_{4}*x_{5}*y_{4}^{-1}\) is a loop implies the concatenation of the \(x\)-arc before \(x_{1},x_{1}\) and an \(y\)-arc incident at the endpoint of \(x_{1}\) must be a loop, which is impossible. **Case 3:**\(x_{6}(Z_{1})=x_{4}(Z_{2})\). Then \(x_{2}*x_{3}*x_{4}*y_{2}\) is a loop which implies \(x_{4}*x_{5}*x_{6}*\tilde{y}\) is a loop, where \(\tilde{y}\) is an \(y\)-arc incident at the endpoint of \(x_{6}\). But this is not possible. **Case 4:**\(x_{6}(Z_{1})=x_{5}(Z_{2})\). Then \(x_{2}*x_{3}*y_{3}^{-1}\) is a loop which implies \(x_{1}*x_{2}*y^{\prime}\) is a loop, where \(y^{\prime}\) is an \(y\)-edge incident at the end point of \(x_{2}\). But this is not true. Therefore, \(x_{6}(Z_{1})=x_{6}(Z_{2})\) and this implies \(Z_{1}=Z_{2}\). Now, we are ready to find a lower bound for \(N(g)\). For \(g\geq 6\), we define \[\mathcal{A}_{g} =\left\{\left(s_{1},\ldots,s_{\frac{g-4}{2}}\right)|s_{i}\in \mathbb{N},s_{i}\leq 4(i+1),i=1,\ldots,\frac{g-4}{2}\text{ and }s_{1}<\cdots<s_{\frac{g-4}{2}}\right\}\text{ and }\] \[\mathcal{O}_{g} =\left\{(\alpha,\beta)|(\alpha,\beta)\text{ is an oriented minimal filling pair on }S_{g}\right\}.\] **Lemma 5.6**.: \(|\mathcal{A}_{g}|\leq|\mathcal{O}_{g}|\)_._ Proof.: To prove the lemma, we construct an injective function from \(\mathcal{A}_{g}\) to \(\mathcal{O}_{g}\). Let \(\left(s_{1},\ldots,s_{\frac{g-4}{2}}\right)\in\mathcal{A}_{g}\). Consider an oriented minimal filling pair \((\alpha(4),\beta(4))\) on \(S_{4}\) and label the vertices along \(\alpha(4)\) by \(v_{1},v_{2}\ldots,v_{8}\). At the vertex \(v_{s_{1}}\), we extract an open disk and attach a \(Z\)-piece to obtain an oriented minimal filling pair \((\alpha(6),\beta(6))\) on \(S_{6}\) whose vertices are labelled as follows: the labelling of the vertices \(v_{i},i<s_{1}\) are kept same, the vertices on the \(Z\)-piece are labeled as \(v_{s_{1}},\ldots,v_{s_{1}+4}\) and the subsequent vertices are labeled as \(v_{s_{1}+5},\ldots,v_{12}\). Next, we choose the vertex \(v_{s_{2}}\) and perform similar operation as above to obtain an oriented minimal filling pair \((\alpha(8),\beta(8))\) on \(S_{8}\). We repeat this process to get an oriented minimal filling pair \((\alpha(g),\beta(g))\) on \(S_{g}\). Now we define \(f:\mathcal{A}_{g}\longrightarrow\mathcal{O}_{g}\) by \[f(s_{1},\ldots,s_{\frac{g-4}{2}})=(\alpha(g),\beta(g)).\] We prove that \(f\) is injective. Let \(f(s_{1},\ldots,s_{\frac{g-4}{2}})=f(s^{\prime}_{1},\ldots,s^{\prime}_{\frac{g- 4}{2}})=(\alpha(g),\beta(g)).\) First, we show that \(s_{\frac{g-4}{2}}=s^{\prime}_{\frac{g-4}{2}}\). If not, suppose, without loss of generality, that \(s_{\frac{g-4}{2}}<s^{\prime}_{\frac{g-4}{2}}\). By the construction of the surface \(S_{g}\), there are two distinct \(Z\)-pieces at \(s_{\frac{g-4}{2}}^{th}\) and \(s^{\prime}_{\frac{g-4}{2}}{}^{th}\) vertices and by Lemma 5.5, they are disjoint. Now, we remove the \(Z\)-piece at \(s_{\frac{g-4}{2}}{}^{th}\) vertex and attach a disk to obtain an oriented minimal filling pair \((\alpha(g-2),\beta(g-2))\) on \(S_{g-2}\). 
We do this for the vertices corresponding to \(s_{\frac{g-4}{2}-1},\ldots,s_{1}\) and finally we get a minimal filling pair \((\alpha(4),\beta(4))\) on \(S_{4}\) and it contains a Z-piece, which is not possible. So \(s_{\frac{g-4}{2}}=s^{\prime}_{\frac{g-4}{2}}\). A similar argument shows that \(s_{i}=s^{{}^{\prime}}_{i}\) for all, \(i=1,\ldots,\frac{g-4}{2}-1\). Hence, \(f\) is injective. **Corollary 5.7**.: \[N(g)\geq\frac{\prod\limits_{k=1}^{\frac{g-4}{2}}(3k+5)}{4\times(2g)^{2}\times( \frac{g-4}{2})!}.\] Proof.: Consider pairwise distinct natural numbers \(t_{1},\ldots,t_{\frac{g-4}{2}}\) satisfying \(t_{k}\leq 4(k+1),\text{ for }k=1,\ldots,\frac{g-4}{2}\). Then there exists a permutation \(\sigma\) such that \(\left(t_{\sigma(1)},\ldots,t_{\sigma(\frac{g-4}{2})}\right)\in\mathcal{A}_{g}\). Furthermore, by consideration we have \(3k+5\) many choices for each \(t_{k}\) which implies \[|\mathcal{A}_{g}|\geq\frac{\prod\limits_{k=1}^{\frac{g-4}{2}}(3k+5)}{(\frac{g -4}{2})!}.\] Therefore, the lower bound is obtained by dividing by maximum number of twisting parameters, \[N(g)\geq\frac{|\mathcal{O}_{g}|}{4\times(2g)^{2}}\geq\frac{|\mathcal{A}_{g}|} {4\times(2g)^{2}}\geq\frac{\prod\limits_{k=1}^{\frac{g-4}{2}}(3k+5)}{4\times(2 g)^{2}\times(\frac{g-4}{2})!}.\] ## 6. Upper bound of \(N(g)\) In this section, we prove the inequality \(N(g)<2(2g-2)(2g-2)!\) in the following proposition which concludes the proof of Theorem 1.3. **Proposition 6.1**.: _For \(g\geq 4\), we have \(N(g)<2(2g-2)(2g-2)!\)._ Before going into the proof of Proposition 6.1, we prove a technical result in Lemma 6.2 which is essential for the proof of the proposition. **Lemma 6.2**.: _Let \(\phi\in\Sigma_{8g}\) be a filling permutation. Then \(\phi\) is of the form \(Q^{4g}C\), where C is a square root of \(Q^{4g}\tau\). Conversely, if \(C\) is a square root of \(Q^{4g}\tau\) with \(\phi=Q^{4g}C\) satisfying conditions \((1)-(5)\) of Proposition 5.2, then \(\phi\) is a filling permutation._ Proof.: Let \(\phi\) be a filling permutation. Then, \[\left(Q^{4g}\phi\right)^{2}=Q^{4g}\left(\phi Q^{4g}\phi\right)=Q^{4g}\tau.\] Conversely, \[\phi Q^{4g}\phi=Q^{4g}CQ^{4g}Q^{4g}C=Q^{4g}C^{2}=\tau.\] Proof of Proposition 6.1.: In late of Lemma 6.2, to prove the proposition, we find an upper bound of the number of square roots of the permutation \(Q^{4g}\tau\). We have \[Q^{4g}\tau=(1,4g+3)(3,4g+5)\cdots(4g-1,4g+1)(2,4g+4)\ldots(4g-2,8g).\] A square root \(C\) of \(Q^{4g}\tau\) is obtained by taking the transpositions of \(Q^{4g}\tau\) in pairs and interleaving them. For instance, for the pair of transpositions \((1,4g+3)\) and \((2,4g+4)\), the possible arrangements are \((1,2,4g+3,4g+4)\) and \((1,4g+4,2,4g+3)\). In each pair of transpositions, one comes from \(\{(1,4g+3),(3,4g+5),\ldots,(4g-1,4g+1)\}\) and the other from \(\{(2,4g+4),(4,4g+6),\ldots,(4g-2,8g)\}\), as \(\phi=Q^{4g}C\) is parity respecting and sends even numbers to odds. Suppose, for a suitable choice of \(C\), the permutation \(\phi\) satisfies conditions \((1)-(5)\) of Proposition 5.2 and \(\phi=\phi_{1}\phi_{2}\), where \(\phi_{1}\) and \(\phi_{2}\) are disjoint \(4g\)-cycles. Now, we have the following cases to consider. **Case 1:** The odd entries of \(\phi_{1}\) come from \(\{1,3,\ldots,4g-1\}\) and the even entries of \(\phi_{1}\) are of the form \(4k\) (following properties \((3)\) and \((5)\) of proposition 5.2). In this case, we have a unique choice for the \(4\)-cycle. 
More precisely, for the transpositions \((i,i+4g+2)\) and \((j,j+4g+2)\), the only possible \(4\)-cycle is \((i,j,i+4g+2,j+4g+2)\), where \(i\leq 4g\) is odd and \(j\leq 4g\) of the form \(4k\). Such pairs can be chosen in \((2g)!\) ways implies the number of such square roots \(C\) is bounded above by \((2g)!\). **Case 2:** The odd entries of \(\phi_{1}\) comes from \(\{1,3,\ldots,4g-1\}\) and the even entries of \(\phi_{1}\) are of the form \(4k+2\). A similar argument shows that in this case also there are at most \((2g)!\) choices. Therefore, the square root \(C\) and hence \(\phi\) has at most \(2(2g)!\) choices. Again using the conditions of Proposition 5.2, we further rule out some more choices of \(\phi\). Here, we find those \(\phi\)'s which satisfy \(\phi^{2}(1)=1\) and subtract this from the whole. First, take the transpositions \((1,4g+3)\) and \((i,i+4g+2)\), where \(i\leq 4g\) is an even integer. Then \((1,i,4g+3,i+4g+2)\) is a choice for a cycle of \(C\). Next, pair the transpositions \((i-2,4g+i)\) and \((4g-1,4g+1)\) to get another \(4\)-cycle \((4g-1,4g+i,4g+1,i-2)\) of \(C\) and choose the other \(4\)-cycles arbitrarily. As \(\phi^{2}(1)=1\), the number of filling permutations \(\phi\) is bounded above by \(2(2g)!-2g\times 2(2g-2)!=4g(2g-2)(2g-2)!\). Two minimally intersecting filling pairs are in the same \(\operatorname{Mod}(S_{g})\) orbit if and only if the corresponding filling permutations are conjugate by twisting permutations \(\mu_{g}^{l}\kappa_{g}^{k}\delta_{g}^{j}\eta_{g}^{i}\), where \(l,i\in\{0,1\},j,k\in\{0,1,\dots,2g-1\}\) and \(\mu_{g},\kappa_{g},\delta_{g},\eta_{g}\) are defined as in Lemma 5.4. As there are at least \(2g\) many twisting permutations, the number of \(\operatorname{Mod}(S_{g})\) orbits of minimally intersecting filling pair is bounded by \(\frac{4g(2g-2)(2g-2)!}{2g}=2(2g-2)(2g-2)!\). ## 7. The length function In this section, we prove Theorem 1.4. First we prove a technical lemma which deals with injectivity radius (see Section 4.1 in [5]) and length of fillings. The injectivity radius of a hyperbolic surface \(X\) is denoted by \(\operatorname{inj}(X)\). We note that the lemma is essential in proving the length function \(\mathcal{F}_{g}\) is proper. **Lemma 7.1**.: _Given \(M\in\mathbb{R}_{>0}\), there exists an \(\epsilon>0\) such that \(\mathcal{F}_{g}(X)\geq M\) if \(\operatorname{inj}(X)\leq\epsilon\)._ Proof.: Consider the width function \(w:\mathbb{R}^{+}\longrightarrow\mathbb{R}\), defined by \[w(x)=\operatorname{arcsinh}(1/\sinh(x/2)),\] as in Collar Lemma, satisfying following: for \(M\in\mathbb{R}_{>0}\), there exists an \(\epsilon>0\) such that \(w(x)\geq M\), for all \(x<\epsilon\) (for details, see Theorem 4.1.1 in [5]). Let \(X\in\mathcal{M}_{g}\) with \(\operatorname{inj}(X)\leq\frac{\epsilon}{2}\). Then there exists an essential simple closed geodesic \(\gamma\) with \(l_{X}(\gamma)\leq\epsilon\), as \(\operatorname{inj}(X)=\frac{1}{2}\text{sys}(X)\) (see Lemma 4.1.2 in [5]). By Collar's lemma, \(\gamma\) has a tubular neighbourhood of width at least \(M\). Now it follows that \(\mathcal{F}_{g}(X)\geq M\). **Lemma 7.2**.: \(\mathcal{F}_{g}\) _is proper._ Proof.: Let \(C\) be a compact set in \(\mathbb{R}\). Then there exists \(M\in\mathbb{R}_{>0}\) such that \(C\subset[-M,M]\). By Lemma 7.1, there exists an \(\epsilon>0\) such that \(\mathcal{F}_{g}(X)>M\), whenever \(\operatorname{inj}(X)<\epsilon\). 
This shows that \(\mathcal{F}_{g}^{-1}(C)\subset\mathcal{M}_{g}^{\epsilon}\), where \(\mathcal{M}_{g}^{\epsilon}\) is the \(\epsilon\)-thick part of \(\mathcal{M}_{g}\) (see Section 12.4 [6]). As \(\mathcal{F}_{g}\) is continuous and \(\mathcal{M}_{g}^{\epsilon}\) is compact, it follows that \(\mathcal{F}_{g}\) is a proper function. **Lemma 7.3**.: _The function \(\mathcal{F}_{g}\) is a topological Morse function._ Proof.: The proof follows from a similar argument as in the proof of Theorem 1.3 in [2]. **Lemma 7.4**.: \(\mathcal{F}_{g}\geq m_{g}\)_, where \(m_{g}\) is the perimeter of the regular right angled hyperbolic \(4g\)-gon._ Proof.: Let the minimum value of \(\mathcal{F}_{g}\) occurs at \(X\in\mathcal{M}_{g}\) and \((\alpha,\beta)\) be a shortest length filling pair on \(X\), where both of \(\alpha\) and \(\beta\) are geodesics. We cut the surface along \(\alpha\cup\beta\) and we obtain two hyperbolic \(4g\)-gons \(P_{1}\) and \(P_{2}\), each of area \(\pi(2g-2)\). It follows from the fact that the hyperbolic \(n\)-gon with least perimeter is regular (see Bezdek [4]), the polygons \(P_{1}\) and \(P_{2}\) are regular. The Gauss-Bonnet theorem implies that \(P_{1}\) and \(P_{2}\) are regular right angled \(4g\)-gons. Hence, the proof follows. Next, we find the growth of the set \(\mathcal{B}_{g}\) in the following lemma. **Lemma 7.5**.: _For \(g\geq 4\), \(\mathcal{B}_{g}\) is a finite set and \(|\mathcal{B}_{g}|=N(g)\)._ Proof.: By Lemma 7.2, \(\mathcal{F}_{g}\) is proper and hence \(\mathcal{B}_{g}\) is compact. All the points of \(\mathcal{B}_{g}\) are isolated as they are critical points of the Morse function \(\mathcal{F}_{g}\). Therefore, \(\mathcal{B}_{g}\) is a finite set. Now we find an one to one correspondence between the set \(\mathcal{B}_{g}\) and the collection of \(\operatorname{Mod}(S_{g})\)-orbits of minimal filling pairs. For each of \(\operatorname{Mod}(S_{g})\)-orbits, we associate an element \(X\) of \(\mathcal{B}_{g}\) by simply identifying the edges of two disjoint regular right angled \(4g\)-gons in accordance with the chosen orbit. It is straightforward to see that this association is surjective. The injectivity follows from Proposition 7.6. This completes the proof. **Proposition 7.6**.: _Let \((\alpha,\beta)\) and \((\tilde{\alpha},\tilde{\beta})\) be two minimal filling pairs on a hyperbolic surface \(X\in\mathcal{B}_{g}\) each of length \(m_{g}\). Then there exists a homeomorphism \(\phi:X\longrightarrow X\) such that \(\phi(\alpha,\beta)=(\tilde{\alpha},\tilde{\beta})\)._ Before going to the proof, we recall the notion of equivariant tiling, chamber system and Delaney-Dress symbols and their equivalence which are essential for the proof of Proposition 7.6. An _equivariant tiling_ is a pair \((\mathcal{T},\Gamma)\), where \(\mathcal{T}\) is a tiling of \(\mathbb{H}\) and \(\Gamma\) is a discrete subgroup of \(\operatorname{PSL}(2,\mathbb{R})\) satisfying \(\gamma\mathcal{T}=\{\gamma A|A\in\mathcal{T}\}=\mathcal{T}\) for all \(\gamma\in\Gamma\). Two equivariant tilings \((\mathcal{T},\Gamma)\) and \((\mathcal{T}^{\prime},\Gamma^{\prime})\) are said to be equivariantly equivalent if there exists a homeomorphism \(\phi:\mathbb{H}\to\mathbb{H}\) such that \(\phi\mathcal{T}=\mathcal{T}^{\prime}\) and \(\Gamma^{\prime}=\phi\Gamma\phi^{-1}\). To every vertex, edge and tile of \(\mathcal{T}\) choose an interior point, called a \(0\)-, \(1\)- and \(2\)-center, respectively. 
For every tile \(A\in\mathcal{T}\), join its \(2\)-center with \(0\)- and \(1\)-centers by geodesics (see Figure 7.1) which decompose the tiling \(A\) into triangles. We call each triangle a chamber and the set of all triangles, denoted by \(\mathcal{C}_{\mathcal{T}}\), is called _chamber system_. Consider \(\mathcal{D}=\mathcal{C}_{\mathcal{T}}/\Gamma\). The edge opposite the \(i\)-center of a chamber is called an \(i\)-edge. For \(T\in\mathcal{C}_{\mathcal{T}}\), the neighbour triangle that share the \(i\)-edge of \(T\), is denoted by \(\sigma_{i}(T)\). Thus we have functions \(\sigma_{i}:\mathcal{C}_{\mathcal{T}}\to\mathcal{C}_{\mathcal{T}}\) satisfying \(\sigma_{i}(\gamma T)=\gamma\sigma_{i}(T)\), for \(i=0,1,2\), \(T\in\mathcal{C}_{\mathcal{T}}\) and \(\gamma\in\Gamma\). This induces functions \(\sigma_{i}^{*}:\mathcal{D}\longrightarrow\mathcal{D}\). For \(0\leq i<j\leq 2\), we define \(m_{ij}:\mathcal{D}\longrightarrow\mathbb{N}\) with \[m_{ij}(D)=\min\left\{m\in\mathbb{N}|C(\sigma_{i}\sigma_{j})^{m}=C\text{ for all }C\in D\right\}.\] Then the system \((\mathcal{D},m):=((\mathcal{D},\mathcal{E});m_{0,1},m_{0,2},m_{1,2})\) is called a Delaney-Dress symbol if the followings hold: **DS1:**: \(m_{ij}(D)=m_{ij}(D\sigma_{i})=m_{ij}(D\sigma_{j})\) **DS2:**: \(D(\sigma_{i}\sigma_{j})^{m_{ij}(D)}=D(\sigma_{j}\sigma_{i})^{m_{ij}(D)}=D\) **DS3:**: \(m_{0,2}(D)=2\) **DS4:**: \(m_{0,1}\geq 2\) **DS5:**: \(m_{1,2}(D)\geq 3\). Two Delaney-Dress symbols \((\mathcal{D},m)\) and \((\mathcal{D}^{\prime},m^{\prime})\) are called isomorphic if and only if there exists a bijection \(\pi:\mathcal{D}\longrightarrow\mathcal{D}^{\prime}\) with \((\pi D)\sigma_{k}=\pi(D\sigma_{k})\) and \(m^{\prime}_{ij}(\pi D)=m_{ij}(D)\) for all \(D\in\mathcal{D},0\leq k\leq 2\) and \(0\leq i<j\leq 2\). For more details, we refer the reader to Section 1 of [7]. Next we will state an useful theorem (for more details, see Lemma 1.1 of [7]). **Lemma 7.7**.: _Two equivariant tilings \((\mathcal{T},\Gamma)\) and \((\mathcal{T}^{\prime},\Gamma^{\prime})\) are (equivariantly) equivalent if and only if the corresponding Delaney-Dress symbols \((\mathcal{D},m)\) and \((\mathcal{D}^{\prime},m^{\prime})\) are isomorphic._ Proof of Proposition 7.6.: Consider the filling pairs \((\alpha,\beta)\) and \((\tilde{\alpha},\tilde{\beta})\) as in the proposition. Let \(\Gamma\) be the discrete subgroup of \(\mathrm{PSL}_{2}(\mathbb{R})\) such that \(X\cong\mathbb{H}/\Gamma\). The lifts \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) of \((\alpha,\beta)\) Figure 7.1. A tile and its chambers and \((\tilde{\alpha},\tilde{\beta})\), respectively are two tilings of \(\mathbb{H}\) by regular right-angled \(4g\)-gons. Thus we have two equivariant tilings \((\mathcal{T},\Gamma)\) and \((\mathcal{T}^{\prime},\Gamma)\). Consider the chamber systems \(\mathcal{C}_{\mathcal{T}}\) and \(\mathcal{C}_{\mathcal{T}^{\prime}}\). Let \((\mathcal{D},m)\) and \((\mathcal{D}^{\prime},m^{\prime})\) be the corresponding Delaney-Dress symbols. 
Then, \[m_{0,2}(D)=2=m_{0,2}^{\prime}(D^{\prime}),\quad m_{1,2}(D)=4=m_{1,2}^{\prime}(D^{\prime})\quad\text{and}\quad m_{0,1}(D)=4g=m_{0,1}^{\prime}(D^{\prime}),\quad\text{for all }D\in\mathcal{D},\ D^{\prime}\in\mathcal{D}^{\prime}.\] As \(|\mathcal{D}|=|\mathcal{D}^{\prime}|=16g\) and the values of \(m\) and \(m^{\prime}\) are identical, the Delaney-Dress symbols \((\mathcal{D},m)\) and \((\mathcal{D}^{\prime},m^{\prime})\) are isomorphic, and hence by Lemma 7.7 (Lemma 1.1 of [7]), \((\mathcal{T},\Gamma)\) and \((\mathcal{T}^{\prime},\Gamma)\) are equivariantly equivalent. Therefore, there exists a homeomorphism \(\phi:\mathbb{H}\longrightarrow\mathbb{H}\) such that \(\phi\mathcal{T}=\mathcal{T}^{\prime}\). This homeomorphism projects to a homeomorphism \(\tilde{\phi}\) of the surface such that \(\tilde{\phi}(\alpha,\beta)=(\tilde{\alpha},\tilde{\beta})\). Proof of Theorem 1.4.: By Lemma 7.2, \(\mathcal{F}_{g}\) is a proper function, and that \(\mathcal{F}_{g}\) is a topological Morse function follows from Lemma 7.3. The inequality \(\mathcal{F}_{g}\geq m_{g}\) follows from Lemma 7.4. Finally, the fact that, for \(g\geq 4\), \(\mathcal{B}_{g}\) is a finite set with \(|\mathcal{B}_{g}|=N(g)\) follows from Lemma 7.5.
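The constant \(m_{g}\) in Theorem 1.4 is straightforward to evaluate. As a numerical cross-check (our own sketch, using the standard trigonometric formula for the side of a regular hyperbolic polygon with prescribed interior angle), the value \(4g\cosh^{-1}\left(2\left[\cos\left(\frac{2\pi}{4g}\right)+\frac{1}{2}\right]\right)\) agrees with \(4g\) times the side length of a regular right angled \(4g\)-gon.

```python
# m_g versus the perimeter of a regular right-angled hyperbolic 4g-gon.
# For a regular n-gon with interior angle pi/2, the side a satisfies
# cosh(a/2) = cos(pi/n) / sin(pi/4).

from math import acosh, cos, pi, sin

def m_g(g: int) -> float:
    return 4 * g * acosh(2 * (cos(2 * pi / (4 * g)) + 0.5))

def perimeter_right_angled(g: int) -> float:
    n = 4 * g
    side = 2 * acosh(cos(pi / n) / sin(pi / 4))
    return n * side

for g in (4, 6, 8):
    print(f"g={g}: m_g={m_g(g):.6f}, perimeter check={perimeter_right_angled(g):.6f}")
```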
2310.13963
SDSS J1619 with blue-shifted broad components in H$\alpha$ and in [O~{\sc iii}] having similar line width and velocity shifts: a recoiling SMBH candidate?
In this Letter, we report a potential candidate of recoiling supermassive black hole (rSMBH) in SDSS J1619 based on similar velocity shifts and line widths of the blue-shifted broad components in H$\alpha$ and [O~{\sc iii}] doublet. The measured line width ratio between blue-shifted broad H$\alpha$ and broad [O~{\sc iii}] line is 1.06, if compared with common values around 5.12 for normal Type-1 AGN, indicating different properties of the blue-shifted broad components in SDSS J1619 from those of normal QSOs. The virial BH mass $M_{BHr}$ derived from the broad H$\alpha$ is consistent with the mass expected from the $M_{\rm BH}-\sigma$ relation. The similar velocity shifts and line widths of the blue-shifted broad components in H$\alpha$ and [O~{\sc iii}] and the virial BH mass derived from the H$\alpha$ broad line emissions that is consistent with the mass expected from the $M_{\rm BH}-\sigma$ relation, can be explained by a rSMBH scenario. Besides the rSMBH scenario, either the similar line widths of the blue-shifted broad components in H$\alpha$ and in [O~{\sc iii}] or the consistency between the virial BH mass and the mass expected from the $M_{\rm BH}-\sigma$ relation cannot be explained by the other proposed models in SDSS J1619.
XueGuang Zhang
2023-10-21T10:40:08Z
http://arxiv.org/abs/2310.13963v1
SDSS J1619 with blue-shifted broad components in H\(\alpha\) and in [O iii] having similar line width and velocity shifts: a recoiling SMBH candidate? ###### Abstract In this Letter, we report a potential candidate of recoiling supermassive black hole (rSMBH) in SDSS J1619 based on similar velocity shifts and line widths of the blue-shifted broad components in H\(\alpha\) and [O iii] doublet. The measured line width ratio between blue-shifted broad H\(\alpha\) and broad [O iii] line is 1.06, if compared with common values around 5.12 for normal Type-1 AGN, indicating different properties of the blue-shifted broad components in SDSS J1619 from those of normal QSOs. The virial BH mass \(M_{BHr}\) derived from the broad H\(\alpha\) is consistent with the mass expected from the \(M_{\rm BH}-\sigma\) relation. The similar velocity shifts and line widths of the blue-shifted broad components in H\(\alpha\) and [O iii] and the virial BH mass derived from the H\(\alpha\) broad line emissions that is consistent with the mass expected from the \(M_{\rm BH}-\sigma\) relation, can be explained by a rSMBH scenario. Besides the rSMBH scenario, either the similar line widths of the blue-shifted broad components in H\(\alpha\) and in [O iii] or the consistency between the virial BH mass and the mass expected from the \(M_{\rm BH}-\sigma\) relation cannot be explained by the other proposed models in SDSS J1619. keywords: active galactic nuclei - emission line galaxies - supermassive black holes - quasars ## 1 Introduction A black hole (BH) can be kicked away from central region of an active galactic nuclei (AGN), due to gravitational wave carrying away linear momentum, as discussed in Merritt et al. (2006); Volonteri (2007); Blecha & Loeb (2008); Komossa & Merritt (2008); Gualandris & Merritt (2008); Blecha et al. (2016); Zakaria et al. (2017); Shen et al. (2019). The BH being kicked away is also called as gravitational wave recoiling supermassive BH (m*SMBH), leading to the known off-nucleus AGN with shifted broad emission lines due to materials in broad emission line regions (BLRs) bound to the recoiling BH. Until now, tens of AGN have been reported with blue-shifted broad lines. SDSS J0927+2943 at redshift 0.713 has been firstly reported in Komossa et al. (2008) with rSMBH expected blue-shifted velocity 2650km/s in low/high-ionization broad emission lines. However, a binary black hole (BBH) system can also lead to the blue-shifted lines in SDSS J0927+2943, as discussed in Bogdanovic et al. (2009). SDSS J1050 has been reported in Shields et al. (2009) with blue-shifted velocity 3500km/s in broad H\(\beta\), however, BLRs lying into central accretion disk (disk emitter) would be preferred in SDSS J1050 to explain the shifted broad H\(\beta\). Similar as the results in SDSS J0927+2943 and in SDSS J1050, there are blue-shifted broad lines reported in several individual AGN, such as SDSS J0956+5128 in Steinhardt et al. (2012), CXO J1015+6259 in Kim et al. (2017), SDSS J1056+5516 in Kalfour Zou et al. (2017), Mrk1018 in Kim et al. (2018), 3C186 in Chiaberge et al. (2017, 2018), J0437+2456 in Pesce et al. (2021), etc., but the rSMBH scenario cannot be accepted as the unique scenario to explain the reported blue-shifted emission lines. Meanwhile, besides reported individual AGN with blue-shifted broad emission lines, there are about 88 low redshift (\(z<0.7\)) SDSS quasars reported with blue-shifted velocities larger than 1000km/s in broad H\(\beta\) in Eracleous et al. (2012); Runnoe et al. 
(2015, 2017), and rather than the rSMBH scenario, BBH systems would be preferred in fraction of the low redshift quasars. Therefore, not only rSMBH scenario, but also BBH system, disk emitter or probable outflowing model can be applied to explain shifted broad emission lines in AGN. Currently, there are probably two main reasons which affect the plausibility of the rSMBH scenario. First, when a rSMBH is kicked away in AGN, probably not total materials in central BLRs are carried away with the rSMBH, but part of BLRs should be probably left in central region of AGN. In other words, there are two components in observed broad Balmer emission lines, one component from the BLRs bound to the rSMBH and the other component related to the materials in BLRs left in central region, leading to more complicated profiles of observed broad emission lines which not only include contributions from rSMBH expected blue-shifted broad components but also include rSMBH-independent components. How to effectively ignore effects of rSMBH-independent components is a challenge on studying properties of rSMBH expected blue-shifted broad emission features. However, considering the rSMBH-independent broad emission components coming from the left part of BLRs in central regions of AGN, it will be preferred to detect and study rSMBH-expected blue-shifted broad emission lines in Type-1.9 AGN, because rSMBH-independent broad emission components can be heavily obscured as many as possible by central dust torus. Second, as discussed in Merritt et al. (2006); Komossa & Merritt (2008); Gualandris & Merritt (2008); Blecha & Loeb (2011); Blecha et al. (2016), the rSMBH may be off-nucleus for \(10^{6-9}\) years with 1pc-1kpc distance from the center. Meanwhile, as discussed in Liu et al. (2013); Hainline et al. (2013); Fischer et al. (2018); Dempsey & Zakamska (2018); Zhang (2022a), narrow emission line regions (NLRs) of AGN extend up to 1 kpc (NLRs sizes) from the central BHs. Therefore, a \(\tau\)SMBH with bounded materials related to central BLRs can reach NLRs in AGN, leading part of emission materials in NLRs to co-move with the FSMBH, strongly indicating that narrow emission lines should have similar shifted velocities and similar line widths as those of \(\tau\)SMBH expected blue-shifted broad emission lines. Unfortunately, there are so far no reports on effects of \(\tau\)SMBH on properties of narrow emission lines from NLRs. Considering the two main reasons above, it is interesting to detect blue-shifted components both in BLRs and NLRs. Therefore, in this Letter, a special Type-1.9 AGN, SDSS J161950.67+500535.31 (\(\approx\)SDSS J1619), is reported with similar line widths and similar velocity shifts of blue-shifted broad components in both [O iii] doublet and H\(\alpha\), suggesting a \(\tau\)SMBH scenario in SDSS J1619. This Letter is organized as follows. Section 2 and 3 show the analysis of the spectrum and the necessary discussions. Section 4 shows our final conclusions. And in this Letter, the cosmological parameters of \(H_{0}~{}=~{}70\)km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{\Lambda}~{}=~{}0.7\) and \(\Omega_{m}~{}=~{}0.3\) have been adopted. ## 2 Spectroscopic results of SDSS J1619 Motivated by our previous measurements of opening angle of central dust tours in a Type-1.9 AGN with double-peaked broad H\(\alpha\) in Zhang (2022c), on studying properties of Type-1.9 AGN is one of our ongoing projects. 
Among our sample of Type-1.9 AGN, SDSS J1619 is selected as the subject of this Letter, due to its apparent blue-shifted broad components in both Balmer emission lines and forbidden [O iii] lines. SDSS spectrum (plate-mjd-fiberida=2884-54526-0145) of SDSS J1619 at \(z\sim 0.283\) is collected from SDSS DR16 (Ahumada et al., 2021) and shown in Fig. 1 with median signal-to-noise about 12. The redshift is determined by the SDSS pipeline1. Due to apparent stellar absorption features, host galaxy contributions should be firstly determined and subtracted, in order to measure properties of narrow/broad emission lines. Based on the 39 simple stellar population templates from Bruzual & Charlot (2003); Kauffmann et al. (2003), the SSP (simple Stellar Population) method discussed and accepted in Cid Fernandes et al. (2005); Cappellari (2017); Zhang (2021a,b, 2022a,b) is applied to determine the host galaxy contributions. Meanwhile, besides the stellar templates, a fourth order polynomial function is applied to describe intrinsic AGN continuum emissions. Left panels of Fig. 1 show the best descriptions to the SDSS spectrum (with emission lines being masked out) of SDSS J1619 and the corresponding line spectrum (the SDSS spectrum minus the best descriptions) through the Levenberg-Marquardt least-squares minimization technique (the known MPFIT package), with \(\chi^{2}/dof\sim 1.5\) (summed squared residuals divided by degree of freedom). Right panels of Fig. 1 shows clear descriptions to the absorption features around 4000A. The determined stellar velocity dispersion (the broadening velocity for the stellar templates) is about 165\(\pm\)29km/s. Meanwhile, accepted the SDSS pipeline determined redshift 0.283, an additional shifted velocity about 500\(\pm\)20km/s for the stellar templates is needed to describe the stellar absorption features. Footnote 1: [https://www.sdss3.org/dr8/algorithms/redshifts.php](https://www.sdss3.org/dr8/algorithms/redshifts.php) After subtractions of the stellar lights and the continuum emissions, emission lines around H\(\beta\) (from 4750 to 5150A in rest frame) and around H\(\alpha\) (from 6400 to 6700A in rest frame) can be measured, similar as what we have recently done in Zhang (2021a,b, 2022a,b,c). Two broad Gaussian functions are applied to describe broad component in H\(\beta\) (H\(\alpha\)). Two Gaussian functions plus one another Gaussian function are applied to describe the core components (with probable double-peaked features) and the component related to blue-shifted wing in each narrow emission line, including [O iii]\(\lambda\)4959, 5007A doublet, narrow H\(\beta\), [N ii]\(\lambda\)6548, 6583A doublet and narrow H\(\alpha\). When the model functions above are applied, the following restrictions are accepted. First, corresponding components of the [O iii] doublet (the [N ii] doublet, or the narrow Balmer lines) have the same redshift and the same line widths in velocity space. Second, corresponding components in the broad Balmer lines have the same redshift and the same line widths in velocity space. Third, flux ratio of the [O iii] doublet (the [N ii] doublet) is fixed to the theoretical value 3. Fourth, each Gaussian component has intensity not smaller than zero. Then, Fig. 2 shows the best descriptions to the emission lines of SDSS J1619, through the MPFIT package, with \(\chi^{2}/dof\sim 1.6\). The measured line parameters are listed in Table 1. Based on the best fitting results, two points can be found. 
First, the observed flux ratio (Balmer decrement) of broad H\(\alpha\) to broad H\(\beta\) is about 12.35\({}^{+11.12}_{-5.38}\) in SDSS J1619, much larger than the common values around 3.1 in normal Type-1 AGN as shown in Vanden Berk et al. (2001); Dong et al. (2008), indicating that SDSS J1619 can be classified as a Type-1.9 AGN. Second, blue-shifted broad components can be found in H\(\alpha\) and also in the forbidden [O iii] doublet. Moreover, due to the double-peaked features in [O iii]\(\lambda\)5007Å, it is hard to calculate the velocity shift of the broad component in [O iii]\(\lambda\)5007Å relative to the narrow component in [O iii]\(\lambda\)5007Å. Therefore, accepting the theoretical vacuum wavelengths 6564.61Å and 5008.24Å for H\(\alpha\) and [O iii]\(\lambda\)5007Å, the blue-shifted broad component in H\(\alpha\) has a shifted velocity of 690\(\pm\)240km/s (calculated from the central wavelength difference between the theoretical vacuum value and the measured value in Table 1) and a second moment of 900\(\pm\)170km/s, which are not very different from the shifted velocity 980\(\pm\)50km/s and second moment 850\(\pm\)34km/s of the blue-shifted broad component in the [O iii] doublet, after considering the uncertainties of the values. The uncertainties of the velocity shifts and the second moments are calculated from the measured uncertainties (listed in Table 1) of the central wavelengths and the second moments of the blue-shifted broad components in H\(\alpha\) and in [O iii]\(\lambda\)5007Å. Meanwhile, due to the single-peaked feature in [N ii]\(\lambda\)6583Å, relative to the central wavelength of the core component in [N ii]\(\lambda\)6583Å, a shifted broad extended component with second moment about 370\(\pm\)75km/s and velocity shift about 208\(\pm\)170km/s can be found in the [N ii] doublet, much smaller than the second moments and the velocity shifts of the blue-shifted broad components in H\(\alpha\) and in [O iii], indicating that the blue-shifted components in the [N ii] doublet have quite different kinematic properties from those of the blue-shifted broad components in H\(\alpha\) and in [O iii]. Certainly, if the theoretical vacuum wavelength 6585.27Å is accepted for [N ii]\(\lambda\)6583Å, the extended component in the [N ii] doublet should be a red-shifted component, with properties totally different from those of the blue-shifted broad components in H\(\alpha\) and in [O iii]. Therefore, the shifted broad components in the [N ii] doublet are probably due to local kinematic properties of the NLRs.

\begin{table} \begin{tabular}{l l l l} \hline \hline line & \(\lambda_{0}\) (Å) & \(\sigma\) (Å) & flux (\(10^{-17}\)erg/s/cm\({}^{2}\)) \\ \hline \hline Broad H\(\alpha\) & 6549.52\(\pm\)5.61 & 19.76\(\pm\)3.35 & 284\(\pm\)68 \\ \hline Broad H\(\beta\) & 4851.51\(\pm\)4.16 & 14.64\(\pm\)2.48 & 23\(\pm\)8 \\ \hline Narrow H\(\alpha\) & 6571.31\(\pm\)0.12 & 4.85\(\pm\)0.16 & 537\(\pm\)45 \\ & 6559.15\(\pm\)0.28 & 2.29\(\pm\)0.31 & 51\(\pm\)10 \\ \hline Narrow H\(\beta\) & 4867.64\(\pm\)0.09 & 3.59\(\pm\)0.12 & 81\(\pm\)4 \\ & 4858.63\(\pm\)0.21 & 1.70\(\pm\)0.23 & 7\(\pm\)2 \\ \hline [O iii]\(\lambda\)5007Å & 5012.63\(\pm\)0.15 & 3.89\(\pm\)0.31 & 121\(\pm\)15 \\ & 5004.44\(\pm\)0.37 & 3.25\(\pm\)0.24 & 100\(\pm\)13 \\ & 4991.86\(\pm\)0.79 & 14.22\(\pm\)0.56 & 272\(\pm\)12 \\ \hline [N ii]\(\lambda\)6583Å & 6591.80\(\pm\)0.46 & 4.55\(\pm\)0.62 & 223\(\pm\)100 \\ & 6587.23\(\pm\)3.28 & 8.18\(\pm\)1.65 & 208\(\pm\)96 \\ \hline \hline \end{tabular} \end{table} Table 1: Line parameters of each Gaussian emission component.
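The velocity shifts and second moments quoted above follow directly from the Gaussian parameters in Table 1. The short check below is our own illustration in plain Python (not from the paper); it converts a fitted central wavelength and width in Å into km/s against an adopted vacuum wavelength, propagating the quoted 1\(\sigma\) uncertainties.

```python
# Convert fitted line centers/widths (Angstrom) into velocity shifts and second moments (km/s).
C_KMS = 299792.458  # speed of light in km/s

def to_velocity(lam_obs, dlam_obs, sigma_lam, dsigma_lam, lam_vacuum):
    """Velocity shift and second moment (km/s) with simple linear error propagation."""
    shift = C_KMS * (lam_obs - lam_vacuum) / lam_vacuum
    shift_err = C_KMS * dlam_obs / lam_vacuum
    width = C_KMS * sigma_lam / lam_vacuum
    width_err = C_KMS * dsigma_lam / lam_vacuum
    return shift, shift_err, width, width_err

# Broad H-alpha component from Table 1 against the vacuum wavelength 6564.61 A:
shift, dshift, width, dwidth = to_velocity(6549.52, 5.61, 19.76, 3.35, 6564.61)
print(f"shift = {shift:.0f} +/- {dshift:.0f} km/s, sigma = {width:.0f} +/- {dwidth:.0f} km/s")
# Roughly -690 +/- 260 km/s and 900 +/- 150 km/s, close to the values quoted in the text.
```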
There are no further discussions on the shifted components in [N ii], besides the fitting results shown in Fig 2 and the line parameters listed in Table 1. Furthermore, in Fig. 3, we show the distribution of the line width ratio \(R_{A3}\) between broad H\(\alpha\) (H\(\alpha_{B}\)) and broad extended component (O3E) in [O iii] line for the 535 normal QSOs (Zhang, 2021) with a mean value of \(\log(R_{A3})\sim 0.71\pm 0.26\) (the standard deviation accepted as the uncertainty 0.26). The dashed vertical red line indicates the value for SDSS J1619 (\(\sigma_{H\alpha_{B}}/\sigma_{O3E}=900/850\sim 1.06\)). There is no QSO with \(R_{A3}\) smaller than the value for SDSS J1619, which implies that the probability of detecting QSOs with \(R_{A3}<1.06\) is lower than \(1.87\times 10^{-3}\) (1/535) and suggests that SDSS J1619 is a unique source. Finally, four velocity shifts measured for SDSS 1619 based on the SDSS redshift (\(z=0.283\)) are listed in Table 2 with clear descriptions. And the velocity shifts \(SV\) of the broad components in H\(\alpha\) and [O iii] are mainly considered in the next section. If we adopt the stellar absorption feature as a reliable redshift of SDSS J1619 (\(z=0.285\)), the measured velocity shifts in Table 2 increase (\(SV_{B,B}\sim 1080\) km/s and \(SV_{B,O3}\sim 1370\)km/s) but the line widths do not change. ## 3 Discussion Based on the best fitting results, a blue-shifted broad component can be found in H\(\alpha\). However, it is necessary to determine where do the determined blue-shifted broad component in H\(\alpha\) come from, from emission regions in NLRs or in BLRs. Adopting the intrinsic flux ratio 3.1 of broad H\(\alpha\) to broad H\(\beta\), the observed flux ratio 12.35\({}^{+11.12}_{-5.38}\) of broad H\(\alpha\) to broad H\(\beta\) in SDSS J1619 indicates severe obscurations on the broad Balmer emission lines have \(E(B-V)\sim 1.22^{+0.55}_{-0.49}\), leading the intrinsic broad blue-shifted component in H\(\alpha\) to have line flux about \(4405^{+1054}_{-3025}\times 10^{-17}\)erg/s/cm\({}^{2}\) (corresponding intrinsic line luminosity about \(11.11^{+2.66}_{-7.61}\times 10^{42}\)erg/s). If accepted the blue-shifted broad component in H\(\alpha\) tightly related to BLRs bound to the expected SMBH in SDSS J1619, based on the virialization assumption (Vestergaard, 2002; Peterson et al., 2004; Shen et al., 2011; Mejia-Restrepo et al., 2022), combining with the second moment 19.76\(\pm\)3.35\(\AA\) of the blue-shifted broad H\(\alpha\) and the improved empirical dependence (Bentz et al., 2013) of BLRs size on the intrinsic broad line luminosity (Greene & Ho, 2005), virial mass of the recoiling BH in SDSS J1619 can be estimated by \[\begin{split} M_{BHr}&=15.6\times 10^{6}(\frac{L_{H \alpha}}{10^{42}\rm{erg/s}})^{0.55}(\frac{\sigma_{H\alpha}}{1000\rm{km/s}})^{2.06}\rm{M_{\odot}}\\ &=4.75^{+2.64}_{-3.04}\times 10^{7}\rm{M_{\odot}}\end{split} \tag{1}\] with uncertainties determined by the uncertainties of line width and intrinsic line luminosity of the blue-shifted broad component in H\(\alpha\). If the blue-shifted broad component in H\(\alpha\) was not related to normal BLRs, the estimated \(M_{BHr}\) should be quite different from the BH mass expected from the \(M_{\rm BH}-\sigma\) relation in SDSS J1619. 
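Equation (1) can be evaluated directly. The snippet below is a plain-Python check of ours (not the authors' script) using the central values quoted above for the intrinsic broad H\(\alpha\) luminosity and second moment.

```python
# Virial BH mass from Equation (1): M = 15.6e6 * (L_Halpha / 1e42 erg/s)**0.55 * (sigma / 1000 km/s)**2.06 M_sun.
def virial_mass(l_halpha_erg_s, sigma_halpha_kms):
    return 15.6e6 * (l_halpha_erg_s / 1e42) ** 0.55 * (sigma_halpha_kms / 1000.0) ** 2.06

# Central values quoted in the text: L_Halpha ~ 11.11e42 erg/s, sigma_Halpha ~ 900 km/s.
print(f"M_BHr ~ {virial_mass(11.11e42, 900.0):.2e} M_sun")
# ~4.7e7 M_sun, in line with the 4.75(+2.64/-3.04)e7 M_sun reported above.
```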
The \(M_{\rm BH}-\sigma\) relation discussed in Ferrarese & Merritt (2000); Figure 1: Left panel shows the SDSS spectrum (in dark green) of SDSS J1619 and the SSP method determined best descriptions (solid red line) to the spectrum with emission lines being masked out. In top region, as shown legend in top-left corner, solid purple line shows the SSP method determined stellar fits, solid blue line shows the continuum emissions, vertical red lines from left to right mark the following emission features masked out, including [O ii]\(\lambda\)3727\(\AA\), \(\rm{H\beta}\), H\(\gamma\), [N iii]\(\lambda\)3869\(\AA\), He \(\lambda\)\(\lambda\)3891\(\AA\), Calcium K line, [Ne iii]\(\lambda\)3968\(\AA\), Calcium H line, [S ii]\(\lambda\)44070\(\AA\), H\(\delta\), H\(\gamma\), [O iii]\(\lambda\)4364\(\AA\), He \(\lambda\)\(\lambda\)5877\(\AA\) and [O i]\(\lambda\)16300, 6363\(\AA\), respectively, and the area filled by red lines around 5000\(\AA\) shows the region masked out including the emission features of probable He ii, broad and narrow H\(\beta\) and [O iii] doublet, and the area filled by red lines around 6550\(\AA\) shows the region masked out including the emission features of broad and narrow H\(\alpha\), [N ii] and [S ii] doublets. Bottom region of left panel shows the line spectrum calculated by the SDSS spectrum minus sum of the stellar lights and the continuum emissions. Right panels show the best descriptions to the absorption features around 4000\(\AA\) (rest wavelength range from 3750 to 4200\(\AA\)) (top right panel) and the corresponding line spectrum (the SDSS spectrum minus sum of the stellar lights and the continuum emissions) (bottom right panel). In right panels, line styles and symbols have the same meanings as those in the left panels. In top right panel, vertical red lines from left to right mark the following emission features of H\(\theta\), H\(\gamma\), [Ne iii]\(\lambda\)3869\(\AA\), He \(\lambda\)\(\lambda\)3891\(\AA\), Calcium K line, [Ne iii]\(\lambda\)43968\(\AA\), Calcium H line, [S ii]\(\lambda\)43968\(\AA\), Calcium H line, [S ii]\(\lambda\)4070\(\AA\), H\(\delta\). Gebhardt et al. (2000); Kormendy & Ho (2013); Batiste et al. (2017); Bennert et al. (2021) can be conveniently applied to estimate central BH mass in both quiescent galaxies and active galaxies. Moreover, as discussed in Di Matteo et al. (2005); Johansson et al. (2009) for BBH systems at sub-pc scales, central total BH mass could be also estimated by the \(M_{\rm BH}-\sigma\) relation. Therefore, it is interesting to check properties of the virial BH mass of SDSS J1619 in the plane of BH mass versus stellar velocity dispersion in Fig. 4. in order to show clearer results in Fig. 4, not only the \(M_{\rm BH}-\sigma\) relation reported in Kormendy & Ho (2013) is shown, but also the 89 quiescent galaxies from Savergman & Graham (2015) and the 29 reverberation mapped (RM) AGN from Woo et al. (2015) and the 12 tidal disruption events (TDEs) from Zhou et al. (2021) are also shown in the figure. Interestingly, in Fig. 4, \(M_{BHr}\) of SDSS J1619 is consistent with the mass expected from the \(M_{\rm BH}-\sigma\) relation, based on the measured stellar velocity dispersion 165\(\pm\)29km/s in SDSS J1619. Therefore, the blue-shifted broad component in H\(\alpha\) is from emission regions related to BLRs, and a rSMBH can be applied to explain the blue-shifted broad component in H\(\alpha\). Moreover, as discussed in Merritt et al. 
(2006); Gualandris & Merritt (2008); Komossa & Merritt (2008), materials can be bound to a rSMBH within a region with radius \(r_{k}\) given by \[r_{k}\sim 512\frac{M_{BH}}{10^{8}{\rm M_{\odot}}}\,(\frac{V_{k}}{10^{3}{\rm km/s}})^{-2}{\rm light-days} \tag{2}\] with \(M_{BH}\) and \(V_{k}\) the BH mass and the kick velocity of the rSMBH. In SDSS J1619, based on the continuum luminosity at 5100Å of about \(18.8^{+85.2}_{-14.7}\times 10^{44}{\rm erg/s}\) after considering the severe obscuration with \(E(B-V)\sim 1.22^{+0.55}_{-0.49}\), the expected BLR size is about \(184^{+261}_{-108}\) light-days. Considering the \(M_{BHr}\sim 4.75\times 10^{7}{\rm M_{\odot}}\) in SDSS J1619, \(V_{k}\sim 900\)km/s (the observed velocity shift) leads to \(r_{k}\sim 500\) light-days, significantly larger than the estimated BLR size. In other words, the materials bound to the expected rSMBH in SDSS J1619 are sufficient to be applied to estimate the virial BH mass. Meanwhile, if the new redshift 0.285 from the stellar absorption features is accepted, \(V_{k}\) should be corrected to \(V_{k}\sim 1300\)km/s, leading to \(r_{k}\sim 250\) light-days, still larger than the estimated BLR size. Therefore, considering the parsec to kilo-parsec amplitude of a rSMBH, when the rSMBH is wandering through the NLRs in SDSS J1619, the blue-shifted broad components in the [O iii] doublet naturally have the same velocity shifts and the same line widths as those of the blue-shifted broad H\(\alpha\) from the BLRs bound to the rSMBH. Before the end of this section, besides the preferred rSMBH scenario in SDSS J1619, three other possible models are briefly discussed as follows. First and foremost, double-peaked features in the [O iii]\(\lambda\)5007Å doublet are widely accepted as signs of dual core systems at kilo-pc scales (Zhou et al., 2004; Xu & Komossa, 2009; Fu et al., 2011; Wang et al., 2019). Based on the double-peaked features in [O iii]\(\lambda\)5007Å, the peak separation is about 491\(\pm\)31km/s, which would lead broad emission lines from the two central cores to have the same peak separation of 491\(\pm\)31km/s. However, the velocity shift of about 900km/s of the blue-shifted broad H\(\alpha\) in SDSS J1619 is about two times (whether \(z\sim 0.283\) or \(z\sim 0.285\) is accepted) higher than the 491\(\pm\)31km/s, indicating that the blue-shifted broad H\(\alpha\) is not related to a dual core system expected from the double-peaked [O iii] in SDSS J1619. Meanwhile, if a BBH system were assumed in SDSS J1619 without considering the double-peaked features in [O iii]\(\lambda\)5007Å, a blue-shifted broad H\(\alpha\) could be expected in SDSS J1619. However, it is hard to explain why the blue-shifted broad components in [O iii]\(\lambda\)5007Å and in H\(\alpha\) have the same line widths, since, as shown above, the probability of detecting an AGN with \(R_{A3}\leq 0.94\) is smaller than \(1.87\times 10^{-3}\). Therefore, a BBH system is disfavoured in SDSS J1619. Besides, a disk emitter (Chen & Halpern, 1989; Eracleous et al., 1995; Storchi-Bergmann et al., 2003) can be applied to explain the blue-shifted broad H\(\alpha\); however, it is also hard to explain why the blue-shifted broad H\(\alpha\) has the same line width as that of the blue-shifted broad component in [O iii]\(\lambda\)5007Å in SDSS J1619, due to the low probability \(1.87\times 10^{-3}\) of finding an AGN with \(R_{A3}\leq 0.94\). Therefore, the disk emitter model is not favoured in SDSS J1619.
Last but not the least, an outflowing model could be applied to explain the similar velocity shifts and the similar line widths of the broad components in H\(\alpha\) and [O iii] doublet, if accepted the broad components in H\(\alpha\) and in [O iii] doublet were from NLRs. If the broad components were not from BLRs but from NLRs, it is hard to expect the consistency between the virial \(M_{BHr}\) and the mass from the \(M_{\rm BH}-\sigma\) relation, while considering the probability lower around \(1.87\times 10^{-3}\) to find an AGN with \(R_{A3}\leq 0.94\). Therefore, the outflowing model is disfavoured in SDSS J1619. ## 4 Conclusions After considering the advantages of studying rSMBH in Type-1.9 AGN, similar line widths and velocity shifts of the blue-shifted broad components in H\(\alpha\) and [O iii] doublet are reported in the Type-1.9 AGN SDSS J1619. Based on the consistency between the central virial BH mass in SDSS J1619 and the BH mass expected from the \(M_{\rm BH}-\sigma\) relation, the blue-shifted broad H\(\alpha\) can be accepted to come from BLRs bound to the expected rSMBH in SDSS J1619. Meanwhile, the expected rSMBH wandering through NLRs can be naturally applied to explain the similar velocity shifts and the similar line widths of the blue-shifted broad component in H\(\alpha\) and in [O iii] in SDSS J1619. Therefore, the rSMBH scenario is preferred in SDSS J1619. ## Acknowledgements ZXG gratefully acknowledges the anonymous referee for reading our manuscript carefully and patiently, and giving us constructive comments and suggestions to greatly improve our paper. ZXG gratefully thanks the grant support from research funding by GuangXi University and the grant support from NSFC-12173020 and NSFC-12373014. The Letter has made use of the data from the SDSS projects with web site [http://www.sdss3.org/](http://www.sdss3.org/). The Letter has made use of the MPFIT package ([http://cow.physics.wisc.edu/~craigm/idl/idl.html](http://cow.physics.wisc.edu/~craigm/idl/idl.html)) written by Craig B. Markwardt. ## Data availability The data underlying this article will be shared on reasonable request to the corresponding author ([email protected]).
2306.10554
Optimal test statistic under normality assumption
The idea of an optimal test statistic in the context of simultaneous hypothesis testing was given by Sun and Tony Cai (2009) which is the conditional probability of a hypothesis being null given the data. Since we do not have a simplified expression of the statistic, it is impossible to implement the optimal test in more general dependency setup. This note simplifies the expression of optimal test statistic of Sun and Tony Cai (2009) under the multivariate normal model. We have considered the model of Xie et. al.(2011), where the test statistics are generated from a multivariate normal distribution conditional to the unobserved states of the hypotheses and the states are i.i.d. Bernoulli random variables. While the equivalence of LFDR and optimal test statistic was established under very stringent conditions of Xie et. al.(2016), the expression obtained in this paper is valid for any covariance matrix and for any fixed 0<p<1. The optimal procedure is implemented with the help of this expression and the performances have been compared with Benjamini Hochberg method and marginal procedure.
Nabaneet Das, Subir K. Bhandari
2023-06-18T13:28:40Z
http://arxiv.org/abs/2306.10554v1
# Optimal test statistic under normality assumption ###### Abstract The idea of an optimal test statistic in the context of simultaneous hypothesis testing was given by Sun and Tony Cai (2009) which is the conditional probability of a hypothesis being null given the data. Since we do not have a simplified expression of the statistic, it is impossible to implement the optimal test in more general dependency setup. This note simplifies the expression of optimal test statistic of Sun and Tony Cai (2009) under the multivariate normal model. We have considered the model of Xie et al. (2011), where the test statistics are generated from a multivariate normal distribution conditional to the unobserved states of the hypotheses and the states are i.i.d. Bernoulli random variables. While the equivalence of LFDR and optimal test statistic was established under very stringent conditions of Xie et al. (2016), the expression obtained in this paper is valid for any covariance matrix and for any fixed \(0<p<1\). The optimal procedure is implemented with the help of this expression and the performances have been compared with Benjamini Hochberg method and marginal procedure. ## 1 Introduction Dependent observations are frequently encountered in large scale multiple testing problems and they pose a major challenge because of the limitations of the traditional methods which were developed under the assumption of independence. Examples include micro-array experiments where we come across data on thousands of genes and the goal is to separate the'significant' ones which are very few in number. Analysis of false discovery rate (FDR) (Benjamini and Hochberg (1995)) have been widely used in such cases. Although the original FDR controlling procedure was developed for independent p values, Benjamini et al. (2001) showed that these p-value based procedures are adaptive to certain dependency structures. However, when the proportion of true nulls is relatively small, these procedures often exhibit undesired results (e.g.- too conservative) It can be seen that, in dealing with dependent hypotheses, the validity issue has been over emphasized and very few literature are available which actually address the issue of efficiency. Efron et al. (2001) introduced local false discovery rate (LFDR) in z-value based testing procedures and studied both size and power (Efron et al. (2007)). Efron (2007), Efron (2010) further investigated the effect of correlations on these z-value based procedures and pointed out that, root mean square (rms) of correlations is an important aspect in determining the validity of these z-value based methods. An excellent review of the whole work can be found in Efron (2012). Sun and Tony Cai (2009) took a different approach and developed an adaptive multiple testing rule for false discovery control. In their paper, they have used marginal false discovery rate (mFDR) and marginal false non-discovery rate (mFNR) in place of the traditional FDR and FNRs. However, Genovese and Wasserman (2002) have established that, under the assumption of independence, these are asymptotically the same in the sense that, mFDR = FDR + \(O(\frac{1}{\sqrt{n}})\) and mFNR = FNR +\(O(\frac{1}{\sqrt{n}})\), where \(n\) is the number of hypotheses. Such asymptotic equivalence is valid in a more general setting (Xie et al. (2011)) of short range dependency structure. 
Sun and Tony Cai (2009) established a one to one correspondence between weighted classification problem and multiple hypothesis testing problem under the monotone ratio condition (MRC) and introduced a new test statistic named local index of significance (LIS) which is optimal in the sense that the test based on this statistic minimizes the mFNR among all methods that control mFDR at a certain level of significance. The optimality of their test statistic is a remarkable development because of the following two reasons. * It does not depend on the structure of dependency of the hypotheses. * It has been established under MRC condition which is fairly general. As Sun and Tony Cai (2009) has highlighted that, the test statistics that are defined on the basis of z-values, such as local false discovery rate (Efron et al. (2001)), p-value and the weighted p-value vector (Genovese et al. (2006)) belong to the MRC class. The LIS statistic reduces to local false discovery rate (LFDR) under independence and and the optimality of LFDR based procedures of Efron (2012), Efron et al. (2001) is thus established (Sun and Cai (2007)). However, the closed form expression of this optimal statistic is usually very difficult to find and this poses a major challenge to its application in real data. Sun and Tony Cai (2009) considered the hidden Markov model (HMM) where the latent indicator variable of being non-null follows a homogeneous irreducible Markov Chain and developed a recursive method for implementation of the optimal statistic based test. Xie et al. (2011) have implemented this test under multivariate normal distribution model. However, their original claim that, the optimal LIS statistic and LFDR is asymptotically the same, only holds under very stringent conditions imposed on model parameters (Xie et al. (2016)). In this article, we have studied the same model and substantially simplified the test statistic. The reason for considering multivariate Gaussian model is its wide applicability in real life problems and the results proved in this article hold for any positive definite correlation matrix. ## 2 Oracle Decision rule for Multivariate normal model We consider testing n null hypotheses \(H_{01},...,H_{0n}\) and for \(i=1,2,..,n\) \[\theta_{i}=\left\{\begin{array}{ll}1&\mbox{if \ i-th null hypothesis is false}\\ 0&\mbox{if Otherwise}\end{array}\right.\] Let \(\mathbf{X}=(X_{1},...,X_{n})\) be a sequence of test statistics for testing \(\mathbf{H_{0}}=(H_{01},\ldots,H_{0n})\). In this paper we consider the following model. * \(\theta_{1},\ldots,\theta_{n}\stackrel{{ i.i.d.}}{{\sim}}Ber(p)\) for some \(0<p<1\) * \(\mathbf{X}|\theta\sim N_{n}(k\theta,\Sigma)\) where \(\theta=(\theta_{1},\ldots,\theta_{n})\) and \(k\neq 0\). Here \(\Sigma\) is any positive definite covariance matrix. ### Discussion on error rate criteria For any multiple testing procedure on these n hypotheses, let V,R,W,A denote the number of false rejections, no. of rejections, no. of false acceptances and no. of acceptances respectively. The false discovery rate (FDR) and marginal false discovery rate (mFDR) are defined as below. \[\text{FDR }=E\left[\frac{V}{R}\right]\quad\text{and}\quad mFDR=\frac{E[V]}{E[R]}\] These are versions of type - I error in the context of multiple testing. 
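Before turning to the type-II analogues, note that the working model above is straightforward to simulate. The sketch below is our own illustration (the threshold \(c=2\) and the equicorrelated \(\Sigma\) with \(\rho=0.5\) are arbitrary choices, not values from the paper); it draws \((\theta,X)\) and tallies \(V\) and \(R\) for a naive one-sided rule.

```python
import numpy as np

def simulate_equicorrelated(n=5000, p=0.05, k=2.5, rho=0.5, seed=0):
    """theta_i ~ Bernoulli(p) i.i.d.; X | theta ~ N_n(k * theta, Sigma), Sigma_ij = rho for i != j."""
    rng = np.random.default_rng(seed)
    theta = rng.binomial(1, p, size=n)
    # For Sigma = (1 - rho) I + rho J with rho >= 0, a single shared factor reproduces the law.
    x = k * theta + np.sqrt(rho) * rng.standard_normal() + np.sqrt(1.0 - rho) * rng.standard_normal(n)
    return theta, x

theta, x = simulate_equicorrelated()
reject = x > 2.0                                  # naive marginal rule delta_i = I(X_i > c)
v, r = int(np.sum(reject & (theta == 0))), int(np.sum(reject))
print(f"R = {r}, V = {v}, false discovery proportion = {v / max(r, 1):.3f}")
```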
And, the versions of type -II errors are defined as \[\text{FNR }=E\left[\frac{W}{A}\right]\quad\text{and}\quad mFNR=\frac{E[W]}{E[A]}\] It can be easily shown by Jensen's inequality that, \[\text{FDR (FNR) }\leq\text{ mFDR (mFNR)}\] This implies, the methods which aims to control mFDR, tend to be more conservative than the methods controlling FDR. However, Genovese and Wasserman (2002) has shown that, mFDR (mFNR) \(=\) FDR (FNR) \(+O(\frac{1}{\sqrt{n}})\) under independence. Xie et al. (2011) have established the asymptotic equivalence of mFDR (mFNR) and FDR (FNR) under short range dependency criterion. **Theorem 2.1**: _(Xie et al. (2011)) Suppose \(\mathbf{X}=(X_{1},\ldots,X_{n})\) is a sequence of random variables with same marginal density \(f\) and \(X_{i}\) and \(X_{j}\) are independent if \(|i-j|>n^{\tau}\) for some \(0\leq\tau<1\). Let, \(\hat{\delta}_{i}=I_{(S_{i}\in R}\) be a short-ranged rule to test \(H_{i0}\), in the sense that \(S_{i}\) only depends on the variables that are dependent with \(X_{i}\),_ \[S_{i}=S(X_{i-[n^{\tau}]},....,X_{i+[n^{\tau}]})\] _Further, suppose that,_ \[P(S_{i}\in R,\theta_{i}=1)\geq P(S_{i}\in R,\theta_{i}=0),\] _and,_ \[P(S_{i}\in R,\theta_{i}=1)>0\ \forall\,i=1,2,..,n.\] _Then, the FDR(FNR) of the rule \(\hat{\delta}\) can be approximated by the mFDR(mFNR) in the sense that,_ \[mFDR=FDR+O(\frac{1}{n^{1-\tau}})\quad\text{and}\quad mFNR=FNR+O(\frac{1}{n^{1 -\tau}})\] **Note :-** This asymptotic equivalence does not hold when the correlation matrix is not sparse (e.g. equi-correlated case) In this article, we have considered mFDR and mFNR and derived an optimal test statistic which minimizes mFNR among all methods controlling mFDR at a pre-specified level of significance. ### Oracle decision rule for multiple testing problem Consider the weighted classification problem with decision rule \(\delta=(\delta_{1},...,\delta_{n})\in\{0,1\}^{n}\), with \(\delta_{i}=1\) if i-th hypothesis is rejected and \(\delta_{i}=0\) otherwise. Consider the loss function \[L_{\lambda}(\delta,\theta)=\frac{1}{n}\sum_{i=1}^{n}\{\delta_{i}(1-\theta_{i} )+\lambda\theta_{i}(1-\delta_{i})\} \tag{1}\] with \(\lambda>0\) the weight for a false positive result. It is well-known that, if \(g(x|\theta_{i}=j)\) denotes the density of \(x\) when \(\theta_{i}=j\) ( \(j=0,1\)), then, the classification risk \(E[L_{\lambda}(\theta,\delta)]\) is minimized by the Bayes rule \(\delta(\Lambda,\lambda)=(\delta_{1},...,\delta_{n})\), where \[\delta_{i}=I\{\Lambda_{i}(x)=\frac{(1-p)g(x|\theta_{i}=0)}{pg(x|\theta_{i}=1) }<\lambda\} \tag{2}\] Alternatively, if the goal is to discover as many significant hypotheses as possible while incurring a relatively low proportion of false positives, we can study a multiple testing problem where the goal is find a decision rule \(\delta\) that has the smallest FNR(mFNR) amoung all FDR(mFDR) procedures at level \(\alpha\). Sun and Tony Cai (2009), Xie et al. (2011) has shown that, among all procedures controlling mFDR at level \(\alpha\), a procedure which minimizes the mFNR must be of the form \(\delta(\mathbf{T},\mathbf{c})=I_{\mathbf{T}<c\mathbf{1}}=I(T_{i}<c,\,\,\,i=1,2..,n)\) for some statistic \(\mathbf{T}\) and some real number \(c\). (Here \(\mathbf{1}\) denotes the vector with all entries equal to \(1\)) The following theorem explicate the whole idea. **Theorem 2.2**: _(_Xie et al. 
(2011) _) Consider the class of decision rules \(\mathscr{D}_{s}=\{\delta\,\,:\,\,\delta_{i}=I_{\lambda_{i}<\lambda},i=1,\ldots,n\}\) where \(\mathbf{\Lambda}=\{\Lambda_{1},\ldots,\Lambda_{n}\}\) is defined in (2) and \(\lambda\in\mathbb{R}\). Given any mFDR level \(\alpha\) and a decision rule_ \[\delta(S,R)=\{I_{S_{1}\in R_{1}},...,I_{S_{n}\in R_{n}}\}\] _with \(mFDR(\delta(S,R))\leq\alpha\). Then there exists a \(\lambda\) depending on \(\delta(S,R)\), such that, \(\delta(\Lambda,\lambda)\in\mathscr{D}_{s}\) outperforms \(\delta(S,R)\) in the sense that,_ \[mFDR(\delta(\Lambda,\lambda)\leq mFDR(\delta(S,R))\leq\alpha,\] _and_ \[mFNR(\delta(\Lambda,\lambda)\leq mFNR(\delta(S,R))\] Theorem 2.2 implies that, the optimal solution of the multiple testing problem with mFDR and mFNR as the error rate criteria, belongs to the set \(\mathscr{D}_{s}\). Instead of searching for all decision rules, one only needs to search in the collection \(\mathscr{D}_{s}\) for the optimal rule. The following result shows that, for a given \(\alpha\), the optimal rule for the multiple testing problem is unique. **Theorem 2.3**: _( Xie et al. (2011) ) Consider the optimal decision rule \(\delta(\Lambda,\lambda)\) in the weighted classification problem with the loss function (1). For any \(0<\alpha<1\), there exists a unique \(\lambda(\alpha)\), such that \(\delta\{\Lambda,\lambda(\alpha)\}\) controls the mFDR at level \(\alpha\) and minimizes the mFNR among all decision rules._ Theorem 2.3 gives us the optimal testing rule with mFDR and mFNR as the error rate criteria. It also establishes a one-to-one correspondence between the multiple testing problem and weighted classification problem. However, it is often hard to determine the \(\lambda(\alpha)\) corresponding to the given \(\alpha\). ### Implementation of the optimal test Xie et al. (2011) have provided a method to implement the optimal test of theorem 2.2. Define, \[T_{OR,i}=P(\theta_{i}=0|x)=\frac{(1-p)g(x|\theta_{i}=0)}{g(x)}\] Clearly, \(T_{OR,i}=\frac{\Lambda_{i}}{1+\Lambda_{i}}\) increases with \(\Lambda_{i}\). Thus, for a given mFDR value \(\alpha\), one can rewrite the optimal rule as \[\delta_{OR,i}=\delta(\Lambda,\lambda(\alpha))=I\left\{T_{OR,i}<\frac{\lambda( \alpha)}{1+\lambda(\alpha)}\right\}\] Let \(T_{OR,(i)}\) denote the i-th order statistic of \(T_{OR,i}\) and \(H_{0,(i)}\) be the corresponding null hypothesis \((i=1,\ldots,n)\). Then, if \(R\) denote the no. of rejections, then \[mFDR=E\left[\frac{1}{R}\sum_{i=1}^{R}T_{OR,(i)}\right]\] Then, according to the theorem 5 of Xie et al. (2011), if \(p\) and \(g\) are known, then the following method controls mFDR at level \(\alpha\) : \[\text{Reject all }H_{0,(i)}\text{ for }i=1,\ldots,k\text{ \ where }k=\max\left\{l\;:\;\frac{1}{l}\sum_{i=1}^{l}T_{OR,(i)}\leq\alpha\right\} \tag{3}\] The final oracle rule (3) consists of two steps : * Calculate the oracle statistic \(T_{OR,i}\) for \(i=1,\ldots,n\). * Rank the statistics and calculate the running averages to determine the cutoff. All hypotheses below the cutoff are rejected. However, the major difficulty associated with this optimal test is that the test statistic \(T_{OR,i}\) is often very difficult to compute. A simplified expression of this test statistic is hard to find and the model parameters are difficult to estimate under dependent models. In this article, we provide a method for implementation of the optimal test under the multivariate normal model. 
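The two steps just described, ranking the oracle statistics and growing the rejection set while the running average stays below \(\alpha\), take only a few lines. The sketch below is our own implementation of rule (3) and assumes the values \(T_{OR,i}\) have already been computed.

```python
import numpy as np

def oracle_rejections(t_or, alpha=0.05):
    """Step-down rule (3): reject H_(1),...,H_(k) with k the largest l whose running average of
    sorted oracle statistics does not exceed alpha; returns a boolean rejection vector."""
    order = np.argsort(t_or)                                    # most significant hypotheses first
    running_avg = np.cumsum(t_or[order]) / np.arange(1, t_or.size + 1)
    below = np.flatnonzero(running_avg <= alpha)
    reject = np.zeros(t_or.size, dtype=bool)
    if below.size:
        reject[order[: below[-1] + 1]] = True
    return reject
```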
Simplification of the oracle decision rule for multivariate normal model Under the model specified in section 2, the optimal test statistic can be simplified as follows. **Theorem 3.1**: _If \(\mathbf{X}\mid\theta\sim N_{n}(k\theta,\Sigma)\) and \(\theta_{1},\ldots,\theta_{n}\sim^{i.i.d.}\mbox{Ber}(p)\), then for \(i=1,\ldots,n\)_ \[P(\theta_{i}=0\mid\mathbf{X})\:=\:\frac{1}{1+\frac{pU_{i}}{1-p}}\] _Where \(U_{i}=\exp(-(\frac{k^{2}}{2}t_{i,i}-k\sum\limits_{j=1}^{n}t_{j,i}x_{j}))( \prod\limits_{j\neq i}(pe^{-k^{2}t_{j,i}}+(1-p)))\) and \(\underset{\sim}{t_{i}}=(t_{1,i},...,t_{n,i})\) is the \(i\)-th column of \(\Sigma^{-1}\)._ Proof of theorem 3.1 is given in appendix. **Remarks** :- While the equivalence of the joint conditional probability of Xie et al. (2011) and Xie et al. (2016) was established under very restrictive assumptions on the correlation matrix, theorem 3.1 sufficiently simplifies the conditional probability for any covariance matrix. The result of 3.1 enables us to implement the Oracle decision rule for any \(p,\Sigma\). We have performed extensive simulations with different combinations of \(p\) and \(\Sigma\) and compared the observed value of FDR and FNRs some of which will be discussed here. ### Simulation Studies In this section, we evaluate the performance of the oracle rule and compare with the BH procedure and the marginal procedure mentioned in Xie et al. (2011). We have evaluated the empirical FDR, FNR and also the number of rejections. In our simulations, we assumed a multivariate normal model : \[X|\theta\sim N(c\theta,\Sigma)\] where \(\theta_{i}\) follows Bernoulli(\(p\)). Under this model, the non-null distribution has mean \(c\) and \(\Sigma\) is a correlation matrix. For our simulations, we have considered \(c=2.5\) and \(\alpha=0.05\). Our objective is to assess the performance of these methods when there is sufficient deviation from independence. In the first case, we have considered equicorrelated \(\Sigma\). In all the simulations, number of hypotheses (\(n\)) have been considered to be 5000 and they are run on 10 combinations of proportion of non-null \(p=0.01,0.02,....,0.1\). In order to assess the performance under sufficient deviation from independence, seven cominations of correlation have been considered (\(\rho=0.2,0.3,0.4,0.5,0.6,0.7,0.8\)). The results suggest that, the Oracle procedure is least conservative among the three procedures in terms of FDR. It is interesting to note that, the FDR of the Oracle rule always lies within the prescribed limit of 0.05. Maintaining this upper bound on FDR, there is a substantial gain in the FNR over both BH and marginal procedure. It is interesting to note that, the marginal procedure becomes more conservative than the other two methods. However, the FNR of the marginal procedure remains similar to the BH procedure which suggests a possibility of improvement of this method and that is achieved by considering the information of joint distribution in the Oracle procedure. The conservative nature of the marginal procedure in comparison to the BH method is possibly due to the objective of controlling mFDR instead of FDR. Since the FNR remains equivalent for these two methods, careful examination of the class \(\mathscr{D}_{s}\) may provide a significantly better test statistic. 
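For reference, the expression in Theorem 3.1 can be evaluated in vectorised form. The sketch below is our own implementation of that formula as stated (the variable names and the small demonstration covariance are ours); its output can be fed to the step-down rule sketched in the previous section.

```python
import numpy as np

def oracle_statistic(x, sigma, p, k):
    """T_OR,i = P(theta_i = 0 | x) from Theorem 3.1 for X | theta ~ N_n(k * theta, Sigma)."""
    prec = np.linalg.inv(sigma)                       # Sigma^{-1}; its i-th column is t_i
    quad = 0.5 * k**2 * np.diag(prec)                 # (k^2 / 2) * t_{i,i}
    lin = k * (prec.T @ x)                            # k * sum_j t_{j,i} x_j
    # log of prod_{j != i} (p * exp(-k^2 t_{j,i}) + (1 - p)): sum over all j, then drop j = i
    log_terms = np.log(p * np.exp(-(k**2) * prec) + (1.0 - p))
    log_u = -(quad - lin) + log_terms.sum(axis=0) - np.diag(log_terms)
    return 1.0 / (1.0 + (p / (1.0 - p)) * np.exp(log_u))

# Small demonstration on an arbitrary positive definite covariance (values are illustrative only).
rng = np.random.default_rng(1)
a = rng.normal(size=(6, 6))
sigma_demo = a @ a.T + 6 * np.eye(6)
x_demo = rng.multivariate_normal(np.zeros(6), sigma_demo)
print(np.round(oracle_statistic(x_demo, sigma_demo, p=0.1, k=2.5), 3))
```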
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Marginal Procedure** & **BH procedure** & **Oracle procedure** & **p** & **Correlation** \\ \hline 0.011670306 & 0.026734704 & 0.031802998 & 0.05 & 0.7 \\ 0.011271098 & 0.022000566 & 0.036869315 & 0.05 & 0.8 \\ \hline \end{tabular} \end{table} Table 1: FDRs of the three methods (excerpt).

From the FDRs, it is clear that:
* All three methods (especially the BH procedure) become more and more conservative with increasing values of the correlation.
* The marginal procedure tends to be the most conservative of the three methods for higher correlations (i.e. \(\mbox{FDR}_{MP}\leq\mbox{FDR}_{BH}\leq\mbox{FDR}_{OP}\)). However, slight exceptions can be observed for smaller correlations and higher \(p\), where the BH procedure has a slightly higher FDR than the others.

With the above observations on FDR, it is imperative to note the FNRs of these three methods (Table 2: FNRs of the three methods). As per the optimality of the Oracle procedure, it has the lowest FNR of all. It is interesting to note that, while the marginal procedure was the most conservative in terms of FDR, its FNR is nearly equivalent (or even better in some cases) to that of the BH procedure. It is again emphasized that we are controlling the mFDR (mFNR) instead of the FDR (FNR). The results suggest that there is scope for further improvement in the class \(\mathscr{D}_{s}=\{\delta\::\:\delta_{i}=I_{\Lambda_{i}<\lambda},i=1,\ldots,n\}\) if \(\Lambda\) and \(\lambda\) can be chosen properly. Examining the FDRs and FNRs does not entirely describe how conservative a method is. We know that these methods become conservative with increasing values of the correlation. To examine this, we have also tabulated the number of rejections of the three methods for different combinations of \(p\) and correlation.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Marginal Procedure** & **BH procedure** & **Oracle procedure** & **p** & **Correlation** \\ \hline 25 & 79 & 145 & 0.03 & 0.7 \\ 24 & 95 & 155 & 0.03 & 0.8 \\ \hline 157 & 158 & 200 & 0.09 & 0.2 \\ 157 & 165 & 244 & 0.09 & 0.3 \\ \hline \end{tabular} \end{table} Table 3: No. of rejections of the three methods (excerpt).

The number of rejections for the Oracle procedure is significantly higher than for the other two methods, and hence it is the least conservative of all. However, the equicorrelated \(\Sigma\) is an unlikely scenario in real life applications.
We only considered this in order to generate a scenario which is substantially different from the independent setup and compare the performances of the methods. Now we present the results on block diagonal correlation matrix. Here we have divided the correlation matrix in four blocks of equicorrelated matrices with correlation \(0.15,0.25,0.5,0.75\). The results again suggest that, the Oracle Procedure is least conservative among the three methods in terms of FDR while maintaining the prescribed limit of \(0.05\). No. of rejections for Oracle procedure is significantly higher than the other two and the gain in power is also noteworthy. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Marginal Procedure & BH Procedure & Oracle Procedure & p & \(\rho_{1}\) & \(\rho_{2}\) & \(\rho_{3}\) & \(\rho_{4}\) \\ \hline 0.018467 & 0.036869 & 0.043467 & 0.01 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.027403 & 0.038453 & 0.04318 & 0.02 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.030884 & 0.039435 & 0.041572 & 0.03 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.032916 & 0.039906 & 0.039776 & 0.04 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.034597 & 0.040304 & 0.037964 & 0.05 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.035802 & 0.04027 & 0.03615 & 0.06 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.037047 & 0.040448 & 0.034459 & 0.07 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.038028 & 0.040457 & 0.032838 & 0.08 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.038755 & 0.040247 & 0.031275 & 0.09 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.039511 & 0.040099 & 0.029757 & 0.1 & 0.25 & 0.5 & 0.15 & 0.75 \\ \hline \end{tabular} \end{table} Table 4: FDRs in block diagonal case As mentioned earlier, Oracle procedure exploits the information of joint distribution unlike the marginal and BH procedure. The results from the simulation studies have shown a significant improvement in FNR and the no. of rejections in exchange of very little sacrifice in FDR. Hence, it is interesting to explore the class \(\mathscr{D}=\{I_{X_{i}>c}\,i=1,\ldots,n\}\) and to search for a different choice of \(c\) which can provide further improvement. Also, implementation of the optimal procedure under a more general dependency setup (e.g. m-dependent structure) is still a challenging open problem. ## 4 Annexture ### Proof of theorem 3.1 Let \(f(\mathbf{x},\theta)\) denote the value of \(N(k\theta,\Sigma)\) density at \(\mathbf{x}\). Then, \[P(\theta_{i}=0|\mathbf{x})=\frac{(1-p)E_{\theta_{0,i}}[f(\mathbf{x},\theta_{0,i})]}{E_{\theta}[f(\mathbf{x},\theta)]}\] \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Marginal Procedure & BH Procedure & Oracle Procedure & p & \(\rho_{1}\) & \(\rho_{2}\) & \(\rho_{3}\) & \(\rho_{4}\) \\ \hline 3 & 9 & 25 & 0.01 & 0.25 & 0.5 & 0.15 & 0.75 \\ 12 & 19 & 57 & 0.02 & 0.25 & 0.5 & 0.15 & 0.75 \\ 26 & 33 & 92 & 0.03 & 0.25 & 0.5 & 0.15 & 0.75 \\ 42 & 49 & 129 & 0.04 & 0.25 & 0.5 & 0.15 & 0.75 \\ 61 & 67 & 166 & 0.05 & 0.25 & 0.5 & 0.15 & 0.75 \\ 82 & 87 & 204 & 0.06 & 0.25 & 0.5 & 0.15 & 0.75 \\ 106 & 108 & 242 & 0.07 & 0.25 & 0.5 & 0.15 & 0.75 \\ 131 & 131 & 280 & 0.08 & 0.25 & 0.5 & 0.15 & 0.75 \\ 157 & 156 & 318 & 0.09 & 0.25 & 0.5 & 0.15 & 0.75 \\ 186 & 181 & 357 & 0.1 & 0.25 & 0.5 & 0.15 & 0.75 \\ \hline \end{tabular} \end{table} Table 6: No. 
of rejections in block diagonal case \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Marginal Procedure & BH Procedure & Oracle Procedure & p & \(\rho_{1}\) & \(\rho_{2}\) & \(\rho_{3}\) & \(\rho_{4}\) \\ \hline 0.00941 & 0.009047 & 0.005234 & 0.01 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.017709 & 0.017274 & 0.009103 & 0.02 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.025293 & 0.024921 & 0.012522 & 0.03 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.032339 & 0.032104 & 0.015691 & 0.04 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.038957 & 0.038926 & 0.018713 & 0.05 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.045229 & 0.045462 & 0.021645 & 0.06 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.051179 & 0.051733 & 0.024527 & 0.07 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.056849 & 0.057763 & 0.027396 & 0.08 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.062316 & 0.063647 & 0.030275 & 0.09 & 0.25 & 0.5 & 0.15 & 0.75 \\ 0.067539 & 0.069341 & 0.033173 & 0.1 & 0.25 & 0.5 & 0.15 & 0.75 \\ \hline \end{tabular} \end{table} Table 5: FNRs in block diagonal case Where \(\theta_{0,i}\) has \(0\) in it's \(i\)-th place. Observe that, \[\frac{(1-p)E_{\theta_{0,i}}[f({\bf x},\theta_{0,i})]}{E_{\theta}[f({\bf x},\theta )]}=\frac{(1-p)\sum\limits_{\theta_{0,i}}f({\bf x},\theta_{0,i})h_{0}(\theta_{0,i})}{(1-p)\sum\limits_{\theta_{0,i}}f({\bf x},\theta_{0,i})h_{0}(\theta_{0,i}) +p\sum\limits_{\theta_{1,i}}f({\bf x},\theta_{1,i})h_{1}(\theta_{1,i})}\] Where \(\theta_{1,i}\) has \(1\) in it's \(i\)-th place and \(h_{0}\) and \(h_{1}\) are the joint p.m.f.s of \(\theta\) given \(\theta_{i}=0\) and \(\theta_{i}=1\) respectively. In particular, \(\theta_{1,i}=\theta_{0,i}+e_{i}\) (\(e_{i}\) is the vector with \(1\) in the i-th place and \(0\) elsewhere) Let \(B=\sum\limits_{\theta_{1,i}}f({\bf x},\theta_{1,i})h_{1}(\theta_{1,i})\) and \(A=\sum\limits_{\theta_{0,i}}f({\bf x},\theta_{0,i})h_{0}(\theta_{0,i})\). Then, \(P(\theta_{i}=0|{\bf x})=\frac{1}{1+(1-p)A}=\) (A monotone function in \(\frac{B}{A}\)) Since \(\theta_{i}\)'s are i.i.d., we must have \(h_{0}=h_{1}\). Putting \(\theta_{1,i}=\theta_{0,i}+e_{i}\), we get, \[f({\bf x},\theta_{1,i})=f(x,\theta_{0,i})\exp(-\frac{k^{2}}{2}e_{i}^{T}\Sigma^ {-1}e_{i})\exp(k({\bf x}-k\theta_{0,i})^{T}\Sigma^{-1}e_{i})\] Let \(t_{i}=(t_{1,i},...,t_{n,i})\) be the \(i\)-th column of \(\Sigma^{-1}\). Then, \(e_{i}^{T}\Sigma^{-1}e_{i}=t_{i,i}\) and \({\bf x}^{T}\Sigma^{-1}e_{i}=\sum\limits_{j=1}^{n}t_{j,i}x_{j}\). This implies, \[f{\bf x},\theta_{1,i})=f({\bf x},\theta_{0,i})\exp(-(\frac{k^{2}}{2}t_{i,i}-k \sum\limits_{j=1}^{n}t_{j,i}x_{j}))(\exp(-k^{2}\theta_{0,i}^{T}\Sigma^{-1}e_{ i}))\] Observe that, \[\frac{\sum\limits_{\theta_{1,i}}f({\bf x},\theta_{1,i})h_{1}(\theta_{1,i})}{ \sum\limits_{\theta_{0,i}}f({\bf x},\theta_{0,i})h_{0}(\theta_{0,i})}=\frac{E_ {\theta_{1,i}}[f({\bf x},\theta_{1,i})]}{E_{\theta_{0,i}}[f({\bf x},\theta_{0,i })]}=\exp(-(\frac{k^{2}}{2}t_{i,i}-k\sum\limits_{j=1}^{n}t_{j,i}x_{j}))\frac{E_ {\theta_{0,i}}[f({\bf x},\theta_{0,i})\exp(-k^{2}\theta_{0,i}^{T}\Sigma^{-1}e _{i})]}{E_{\theta_{0,i}}[f({\bf x},\theta_{0,i})]}\] Note that, \(f({\bf x},\theta_{0,i})=g({\bf x}-\theta_{0,i})\) = A function of \(({\bf x}-k\theta_{0,i})\). 
As per our model \(({\bf X}-\theta_{0,i})\) is independent of \(\theta_{0,i}\) and hence, we can say that, \[E_{\theta_{0,i}}[f({\bf x},\theta_{0,i})\exp(-k^{2}\theta_{0,i}^{T}\Sigma^{-1 }e_{i})]=E_{\theta_{0,i}}[f({\bf x},\theta_{0,i})]C\] And thus, \[\frac{B}{A}=\exp(-(\frac{k^{2}}{2}t_{i,i}-k\sum\limits_{j=1}^{n}t_{j,i}x_{j})) E_{\theta_{0,i}}[\exp(-k^{2}\theta_{0,i}^{T}\Sigma^{-1}e_{i})]\] Note that, \(\theta_{0,i}^{T}\Sigma^{-1}e_{i}=\sum\limits_{j\neq i}\theta_{j}t_{j,i}\) and from the independence of \(\theta_{j}\)'s we can conclude that, \[E_{\theta_{0,i}}[\exp(-k^{2}\theta_{0,i}^{T}\Sigma^{-1}e_{i})]=\prod\limits_{j \neq i}E[e^{-t_{j}\theta_{j}}]=\prod\limits_{j\neq i}(pe^{-k^{2}t_{j,i}}+(1-p))\] Thus, we finally obtain a simplified expression of the optimal test statistic as the following \[P(\theta_{i}=0\mid\mathbf{X})\:=\:\frac{1}{1+\frac{pU_{i}}{1-p}}\] Where \(U_{i}=\exp(-(\frac{k^{2}}{2}t_{i,i}-k\sum\limits_{j=1}^{n}t_{j,i}x_{j}))(\prod \limits_{j\neq i}(pe^{-k^{2}t_{j,i}}+(1-p)))\) and \(\underset{\sim}{t_{i}=(t_{1,i},...,t_{n,i})}\) is the \(i\)-th column of \(\Sigma^{-1}\)
2305.01109
Leveraging covariate adjustments at scale in online A/B testing
Companies offering web services routinely run randomized online experiments to estimate the causal impact associated with the adoption of new features and policies on key performance metrics of interest. These experiments are used to estimate a variety of effects: the increase in click rate due to the repositioning of a banner, the impact on subscription rate as a consequence of a discount or special offer, etc. In these settings, even effects whose sizes are very small can have large downstream impacts. The simple difference in means estimator (Splawa-Neyman et al., 1990) is still the standard estimator of choice for many online A/B testing platforms due to its simplicity. This method, however, can fail to detect small effects, even when the experiment contains thousands or millions of observational units. As a by-product of these experiments, however, large amounts of additional data (covariates) are collected. In this paper, we discuss benefits, costs and risks of allowing experimenters to leverage more complicated estimators that make use of covariates when estimating causal effects of interest. We adapt a recently proposed general-purpose algorithm for the estimation of causal effects with covariates to the setting of online A/B tests. Through this paradigm, we implement several covariate-adjusted causal estimators. We thoroughly evaluate their performance at scale, highlighting benefits and shortcomings of different methods. We show on real experiments how "covariate-adjusted" estimators can (i) lead to more precise quantification of the causal effects of interest and (ii) fix issues related to imbalance across treatment arms - a practical concern often overlooked in the literature. In turn, (iii) these more precise estimates can reduce experimentation time, cutting cost and helping to streamline decision-making processes, allowing for faster adoption of beneficial interventions.
Lorenzo Masoero, Doug Hains, James McQueen
2023-05-01T22:28:18Z
http://arxiv.org/abs/2305.01109v1
# Leveraging covariate adjustments at scale in online A/B testing ###### Abstract Companies offering web services routinely run randomized online experiments to estimate the "causal impact" associated with the adoption of new features and policies on key performance metrics of interest. These experiments are used to estimate a variety of effects: the increase in click rate due to the repositioning of a banner, the impact on subscription rate as a consequence of a discount or special offer, etc. In these settings, even effects whose sizes are very small can have large downstream impacts. The simple difference in means estimator (Splawa-Neyman et al., 1990) is still the standard estimator of choice for many online A/B testing platforms due to its simplicity. This method, however, can fail to detect small effects, even when the experiment contains thousands or millions of observational units. As a byproduct of these experiments, however, large amounts of additional data (covariates) are collected. In this paper, we discuss benefits, costs and risks of allowing experimenters to leverage more complicated estimators that make use of covariates when estimating causal effects of interest. We adapt a recently proposed general-purpose algorithm for the estimation of causal effects with covariates to the setting of online A/B testing. Through this paradigm, we implement several covariate-adjusted causal estimators. We thoroughly evaluate their performance at scale, highlighting benefits and shortcomings of different methods. We show on real experiments how "covariate-adjusted" estimators can (i) lead to more precise quantification of the causal effects of interest and (ii) fix issues related to imbalance across treatment arms -- a practical concern often overlooked in the literature. In turn, (iii) these more precise estimates can reduce experimentation time, cutting cost and helping to streamline decision-making processes, allowing for faster adoption of beneficial interventions. ## 1 Introduction The continued growth and product improvement for online companies relies on efficiently finding new opportunities and accurately measuring the impact of decisions on customers. To estimate the causal impact of a change to a product or feature, online companies heavily rely on A/B tests (randomized controlled trials/online randomized experiments). Under minimal assumption, A/B tests are indeed guaranteed to produce unbiased estimates of the causal impact of the interventions that are being tested (Splawa-Neyman et al., 1990). In what follows, we will assume that the experimental units of interest are "customers". In practice, different experiments might be tracking different units (sellers, streamers, shopping missions, etc.). A/B tests work by randomly assigning some customers (usually half, the "treatment group") to see the new experience, while the other customers (the "control group") see the old, _status quo_ experience. Over a fixed experimental period, different relevant metrics of interest of these customers are measured and recorded. A simple way for experimenters to quantify how the change in the experience impacts customers with an A/B test is to compare the average value of a metric of interest or "key performance indicator" [KPI] across customers in the treatment group to the average value of the same metric in the control group. This "difference in means" [DIM] approach produces a simple estimate of the average causal effect of the change in the experience on the KPI in question. 
The simplicity and low marginal cost of running A/B tests with millions of customers has led them to be ubiquitous in the industry. A/B testing is used to evaluate front-end and back-end changes to search engines (Google, Bing, Yandex), online retailers (Amazon, eBay, Etsy), streaming media services (Netflix, Twitch, YouTube), social networks (Facebook, LinkedIn, Twitter), travel services (Lyft, Uber, Airbnb, Booking.com), etc. See Gupta et al. (2019) for a thorough discussion of the role and use of A/B tests in the industry. Due to the opportunity cost of experimentation time, small treatment effect sizes, and large heterogeneity amongst customers, the difference in means approach can often fail to detect effects of the intervention when they are present, even when the experiment contains thousands or millions of customers. However, large amounts of data often unrelated with the A/B test (the "covariates") are collected before and throughout the experiment about the experimental units. This abundance of data gives experimenters the potential to adopt more complex "covariate-adjusted" methods to form their estimates. In particular, any feature that is independent of the intervention (such as any measures taken prior to the experiment) can be leveraged to estimate the causal effect of interest. Resulting covariate-adjusted estimators can lead to improved, less variable, estimates of the "causal effects" of interest. It has been observed empirically that simple covariate adjusted approaches, such as the popular "CUPED" (Deng et al., 2013), can lead to significant variance reduction relative to the difference in means estimator. Furthermore, covariate adjustment has been used defensively to guard against an unlucky randomization, where the intervention may appear artificially better or worse due to luck (see Tukey (1991)). The literature related to covariate adjustment methods is continuously growing (see, e.g. Guo et al. (2021); Jin and Ba (2021) for recent contributions). In this paper, we explore benefits and costs of expanding the toolkit of experimenters by using larger sets of covariates and more complex estimators for the estimation of causal effects in the context of online A/B tests. We show in our experiments that covariate-adjusted methods can lead to non-trivial gains in terms of estimation accuracy and variance reduction. The rest of this paper is organized as follows: we introduce notation for the problem of interest in Section 2. Next, we describe in Section 3 the class of Generalized Oaxaca-Blinder Estimators [GOBEs] -- a flexible, general purpose method to produce estimators of the causal effects leveraging any arbitrary number of additional covariates. We discuss the potential use of these estimators for experimentation in Section 4, and present experimental results in Section 5. We conclude with a discussion and next steps in Section 6. ## 2 Potential outcomes, randomized experiments and causal effects The field of causal inference is a collection of theoretically sound tools, methodologies and procedures which can help practitioners answer questions about the impact of interventions they may want to implement. While making rigorous causal claims about interventions is appealing and desirable, this ability comes at the cost of collecting data through carefully designed experiments. In order for causal claims to be valid, experimenters have to make sure that the data is collected in such a way that no bias or flaw is introduced in the analysis. 
The standard approach to ensure that the data we are collecting will allow us to formulate causal claims, is to perform a _randomized controlled trial_ (RCT), or A/B test. In its simplest form, an A/B test is implemented by exposing each experimental unit to either the control (\(A\)) or treatment (\(B\)) experience _at random_. Randomization is the key technical device which allows experimenters to draw causal conclusions from the experiment. To make our discussion precise, we here adopt the causal model of potential outcomes (Splawa-Neyman et al., 1990; Rubin, 1977). In a nutshell, we assume that in an experiment in which we observe \(N\) units, every individual unit \(n\in[N]:=\{1,\ldots,N\}\) is exposed to one of \(T\geq 2\) different "treatments" or "policies". For example, the \(N\) units might be different customers in the experiment. For each _potential_ allocation of unit \(n\in[N]\) to policy \(t\), we assume that there exists a "potential" outcome \(Y_{n}(t)\). That is, each unit in the experiment is associated with a (latent) vector of potential outcomes, \([Y_{n}(0),\ldots,Y_{n}(T-1)]^{\top}\), of which only one coordinate is observed in an experiment. These outcomes are assumed to be fixed conditionally on the assignment. For simplicity in what follows we will consider the case of \(T=2\) alternative treatments, that we will simply call "control" (\(t=0\)) or "treatment" (\(t=1\)), even though our discussion naturally extends to any \(T>2\). Given these definitions, we can formally define what we mean by a "causal effect". The most important effect (or estimand) of interest, and the one we will focus on, is the average treatment effect [\(\mathrm{ATE}\)]. The \(\mathrm{ATE}\) is the (average) _causal effect_ in the population of exposing a unit to the treatment (\(t=1\)) instead of the alternative control (\(t=0\)). Often, for decision-making, we think of the treatment as an alternative policy to a standard baseline (potentially more expensive or riskier). The \(\mathrm{ATE}\) quantifies the impact on the outcome of interest of adopting this alternative strategy. Formally, \[\bar{Y}_{k}:=\sum_{n=1}^{N}\frac{Y_{n}(k)}{N},\text{ and }\mathrm{ATE}:=\bar{Y}_{1}- \bar{Y}_{0}=\sum_{n=1}^{N}\frac{[Y_{n}(1)-Y_{n}(0)]}{N}.\] The \(\mathrm{ATE}\) can not be directly computed or observed in practice, because units are either exposed to treatment or control, but never to both. RCTs or A/B tests are used to estimate the \(\mathrm{ATE}\). The fundamental mechanism underlying an A/B test is its random assignment mechanism (or triggering logic), which determines the experience to which each unit will be exposed. In the simplest case, each unit \(n\) is endowed with a binary random variable with mean \(\pi\in(0,1)\): \[J_{n}\sim\mathrm{Bernoulli}(\pi). \tag{1}\] If \(J_{n}=0\), then unit \(n\) is exposed to the control. Otherwise, if \(J_{n}=1\), the treatment experience is rendered. The \(\mathrm{ATE}\) is estimated by comparing the observed outcomes for the units in control and treatment. Denote with \(\mathcal{I}_{t}:=\{n\in[N]\ :\ J_{n}=t\}\) for the units in group \(t\in\{0,1\}\). The "difference-in-means" [DIM] estimator is simply the difference between the average outcome in each treatment group: \[\widehat{\mathrm{ATE}}_{\mathrm{DIM}}:=\left[\frac{1}{|\mathcal{I}_{1}|}\sum _{n\in\mathcal{I}_{1}}Y_{n}(1)\right]-\left[\frac{1}{|\mathcal{I}_{0}|}\sum_{ n\in\mathcal{I}_{0}}Y_{n}(0)\right]. 
The theoretical properties of this extremely simple estimator are well understood (Splawa-Neyman et al., 1990): it is unbiased, and under mild conditions it obeys a central limit theorem in large samples. See Li and Ding (2017) and the references therein for a thorough overview and discussion.

## 3 Leveraging covariates: Generalized Oaxaca-Blinder estimators

Often, when collecting data from our experiment, we have access to additional covariates measured at the unit level, hereafter denoted as \(\mathbf{z}_{n}:=[z_{n,1},\ldots,z_{n,K}]^{\top}\in\mathbb{R}^{K}\), for \(n\in[N]\) and some fixed \(K\in\mathbb{N}\). For example, when running an experiment on the engagement of customers subscribing to a video streaming service, we might have access to previous measurements of the customer activity, the longevity of the customer's account, whether they have subscribed to pay-per-view channels, etc. If these covariates are (C1) independent of the assignment variable \(J_{n}\) and (C2) correlated with the outcome variable of interest \(Y_{n}\), they can be leveraged to form covariate-adjusted estimators. See Imbens and Rubin (2015, Chapter 7) for a detailed discussion on the validity of regression adjustments in randomized experiments. In what follows, we describe a general recipe to build "covariate-adjusted" estimators. The key intuition underlying this approach is to view the estimation of the \(\mathrm{ATE}\) as a "missing data" or "imputation" problem. For each treatment \(t\), we can fit a regression model using the observed data within the group \(\mathcal{I}_{t}\), and use the regression to impute the "missing" values of units assigned to the other treatment group(s) -- \(n\in\mathcal{I}_{t}^{C}\). That is, we fit for \(t\in\{0,1\}\) a regression model \(Y_{n}\sim f_{t}(\mathbf{z}_{n};\mathbf{\theta}_{t})\) using covariates and outcomes in the corresponding treatment group, \(\mathcal{D}_{t}:=\{(Y_{n},\mathbf{z}_{n})\}_{n\in\mathcal{I}_{t}}\). Here \(\mathbf{\theta}_{t}\) is a finite dimensional parameter that characterizes the regression model (e.g., the slope and intercept of a linear regression model). Given \(\mathcal{D}_{t}\), we estimate \(\mathbf{\theta}_{t}\) by minimizing a loss function \(\mathcal{L}\) computed on \(\mathcal{D}_{t}\) and parametrized by \(\mathbf{\theta}_{t}\): \[\hat{\mathbf{\theta}}_{t}\in\arg\min_{\theta}\mathcal{L}\left\{\mathcal{D}_{t};\mathbf{\theta}\right\}. \tag{3}\] This gives us the imputation operator: \[\hat{f}_{t}(Y_{n},\mathbf{z}_{n},J_{n};\hat{\mathbf{\theta}}_{t})=\begin{cases}Y_{n}&\text{if }J_{n}=t,\\ f_{t}(\mathbf{z}_{n};\hat{\mathbf{\theta}}_{t})&\text{otherwise.}\end{cases}\] This approach induces the large class of "Generalized Oaxaca-Blinder Estimators" (GOBEs, Guo and Basse (2021)) of the type \[\widehat{\mathrm{ATE}}_{\mathcal{M}}=\frac{1}{N}\sum_{n=1}^{N}\left\{\hat{Y}_{n}(1)-\hat{Y}_{n}(0)\right\}, \tag{4}\] where \(\hat{Y}_{n}(t)=\hat{f}_{t}(Y_{n},\mathbf{z}_{n},J_{n};\hat{\mathbf{\theta}}_{t})\) and \(\mathcal{M}:=\{f_{0},f_{1}\}\) is used to emphasize the dependency of the estimator on the regression functions used. We summarize this procedure in Algorithm 1.
```
Data \(\mathcal{D}:=\{(Y_{n},\mathbf{z}_{n},J_{n})\}_{n\in[N]}\), regression models \(\mathcal{M}:=\{f_{0},f_{1}\}\), significance level \(\alpha\).
for \(t\in\{0,1\}\) do
 Let \(\mathcal{I}_{t}:=\{n\in[N]\ :\ J_{n}=t\}\) and \(\mathcal{D}_{t}:=\{(Y_{n},\mathbf{z}_{n})\}_{n\in\mathcal{I}_{t}}\).
 Estimate \(\hat{\mathbf{\theta}}_{t}\in\arg\min_{\theta}\mathcal{L}\left\{\mathcal{D}_{t};\mathbf{\theta}\right\}\) as in Equation (3).
endfor
for \(n\in[N]\) and \(t\in\{0,1\}\) do
 Impute \(\hat{Y}_{n}(t)=\hat{f}_{t}(Y_{n},\mathbf{z}_{n},J_{n};\hat{\mathbf{\theta}}_{t})\).
endfor
Compute \(\widehat{\mathrm{ATE}}_{\mathcal{M}}\) as in Equation (4); estimate \(\widehat{\mathrm{Var}}_{\mathcal{M}}\) and \(\widehat{\mathrm{CI}}_{\mathcal{M}}(\alpha)\) via a Gaussian approximation.
return \(\widehat{\mathrm{ATE}}_{\mathcal{M}},\widehat{\mathrm{CI}}_{\mathcal{M}}(\alpha)\)
```
**Algorithm 1** Generalized Oaxaca-Blinder estimation of the \(\mathrm{ATE}\)

**Difference-in-means as GOBE** Notice that the difference-in-means estimator introduced in Equation (2) can be viewed as a generalized Oaxaca-Blinder estimator. Indeed \(\widehat{\mathrm{ATE}}_{\mathrm{DIM}}\) satisfies Equation (4) for the choice \(\hat{f}_{t}(Y_{n},\mathbf{z}_{n},J_{n};\hat{\mathbf{\theta}}_{t})=\frac{1}{|\mathcal{I}_{t}|}\sum_{n^{\prime}\in\mathcal{I}_{t}}Y_{n^{\prime}}\), where we impute the missing values with the group mean, irrespective of the value of the covariates.

**Linear regression as GOBE** The standard linear-regression adjusted estimator is a GOBE. This estimator is obtained by fitting via ordinary least squares [OLS] the following regression: \[Y_{n}\sim\beta_{0}+\beta_{1}J_{n}+\mathbf{\gamma}^{\top}\mathbf{z}_{n}+\mathbf{\delta}^{\top}J_{n}(\mathbf{z}_{n}-\bar{\mathbf{z}}), \tag{5}\] and using the estimate \(\widehat{\mathrm{ATE}}_{\mathrm{LR}}:=\hat{\beta}_{1}\). Here \(\bar{\mathbf{z}}\in\mathbb{R}^{K}\) is the average for each of the \(K\) components, computed across the \(N\) units. Lin (2013) shows that asymptotically \(\widehat{\mathrm{ATE}}_{\mathrm{LR}}\) is unbiased and has smaller variance than \(\widehat{\mathrm{ATE}}_{\mathrm{DIM}}\). To see \(\widehat{\mathrm{ATE}}_{\mathrm{LR}}\) as a "GOBE", notice that it can be equivalently obtained by fitting via OLS two separate linear regressions: for \(n\in\mathcal{I}_{t},t\in\{0,1\}\), fit \(Y_{n}\sim\mathbf{\theta}_{t}^{\top}(\mathbf{z}_{n}-\bar{\mathbf{z}})\). We recover the GOBE formulation of Equation (4) by letting \[\hat{f}_{t}(Y_{n},\mathbf{z}_{n},J_{n};\hat{\mathbf{\theta}}_{t})=\begin{cases}Y_{n}&\text{if }t=J_{n}\\ \hat{\mathbf{\theta}}_{t}^{\top}\mathbf{z}_{n}&\text{if }t\neq J_{n}.\end{cases} \tag{6}\] See, e.g. Lin (2013, Lemma 3) for a proof of why Equation (6) and Equation (5) lead to the same estimator. One can also employ general non-linear regression models to perform adjustments. Guo and Basse (2021) provide conditions under which regression models produce unbiased and asymptotically normal estimates, justifying the Gaussian approximation in Algorithm 1. Building on these results, Cohen and Fogarty (2020) propose a two-step GOBE.
First a GOBE is fitted; then a second GOBE, with a linear regression model that uses the imputed values \(x_{n}:=\hat{f}_{1-J_{n}}(Y_{n},\mathbf{z}_{n},J_{n};\hat{\mathbf{\theta}}_{1-J_{n}})\) as the only covariate for each outcome \(Y_{n}\), is used to produce the final estimate. This "two-step" GOBE is asymptotically unbiased, normally distributed, and more efficient than the difference-in-means estimator. See Guo et al. (2021); Jin and Ba (2021) for other recent approaches to develop flexible models in online A/B testing.

## 4 Implementing GOBEs at scale

As already discussed in Section 1, drawing conclusions from online A/B tests can be challenging: experiments often consist of small changes related to details of the user experience. Consequently, associated effect sizes can be very small and hard to detect even in large samples. Despite being small, these can lead to large downstream impacts. Precisely for this reason, employing models that allow for precise estimates of the causal effects is important: more precise estimates of the effects of the interventions can allow practitioners to detect smaller effect sizes and crucially shorten the experimentation time needed in order to obtain a conclusive answer about the effectiveness of a treatment. Ultimately, it would be desirable to have an end-to-end automated inference engine which produces, for each experiment, the "best" possible estimate for the causal effect under study, without requiring experimenters to specify which model and covariates should be employed for this task. In practice, assessing which estimator is best is far from trivial. Indeed, while on the one hand the idea of developing ad-hoc large models with curated covariates for an individual experiment of interest seems appealing for variance reduction, on the other hand large-scale causal inference engines have to rely on estimators that perform well on average across all experiments. That is, the methods used need to be:

* Scalable: companies typically run a very large number of experiments every year, and their computational resources are limited. It is undesirable for practitioners to have to wait for their results due to long analysis run times (e.g., to solve the minimization problem in Algorithm 1).
* Reliable: the team maintaining the infrastructure is often small relative to the customer base it serves. The methods implemented need to rely on algorithmically sound routines that produce stable estimates of the causal effects of interest.
* Interpretable: the results of the experiments are used by practitioners for policy-making. It is therefore imperative that the estimates produced are transparent, easy to interpret, and do not require specialized knowledge.

For these reasons, in our experiments presented in Section 5 we only employ linear models, their regularized counterparts (LASSO, ElasticNet, Ridge and principal components regression), and one simple instance of a generalized linear model. Extending our analysis to more complicated models, and assessing their feasibility in a production setting, is part of ongoing investigations. We here describe in detail the regression models we fit to experimental data to benchmark the performances of different GOBE estimators. We have already discussed the difference in means [DIM] and simple linear regression [LR] estimators, and their characterization as GOBEs in Section 3.
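To make the recipe of Algorithm 1 concrete, the following sketch shows one way a GOBE point estimate could be computed in Python (assuming NumPy and scikit-learn are available; the names `gobe_ate`, `y`, `z`, `j` and `GroupMean` are ours, not the paper's). The difference-in-means estimator of Equation (2) is recovered by imputing with the arm-specific mean, and a linear adjustment in the spirit of Equations (5)-(6) is obtained by plugging in a linear regression.

```
import numpy as np
from sklearn.linear_model import LinearRegression

def gobe_ate(y, z, j, make_model=LinearRegression):
    """Generalized Oaxaca-Blinder point estimate of the ATE.

    y : (N,) observed outcomes
    z : (N, K) pre-experiment covariates
    j : (N,) binary treatment assignments (0 = control, 1 = treatment)
    make_model : factory returning an unfitted regression model per arm
    """
    y, z, j = np.asarray(y, float), np.asarray(z, float), np.asarray(j, int)
    y_hat = np.empty((len(y), 2))
    for t in (0, 1):
        arm = j == t
        model = make_model().fit(z[arm], y[arm])   # fit on arm t only, Equation (3)
        y_hat[:, t] = model.predict(z)             # impute Y_n(t) for all units
        y_hat[arm, t] = y[arm]                     # keep the observed outcomes
    return np.mean(y_hat[:, 1] - y_hat[:, 0])      # Equation (4)

class GroupMean:
    """Imputes with the arm-specific mean: the DIM estimator as a GOBE."""
    def fit(self, z, y):
        self.mean_ = y.mean()
        return self
    def predict(self, z):
        return np.full(len(z), self.mean_)
```

Passing `make_model=GroupMean` reproduces \(\widehat{\mathrm{ATE}}_{\mathrm{DIM}}\), while `make_model=LinearRegression` gives a linearly adjusted estimate.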
Ridge regression, LASSO and elastic net are extremely popular "regularized" counterparts of the simple linear regression model, in which the weight vector \(\mathbf{\theta}\) is "shrunk" using a penalty. Formally, given a regularization parameter \(\gamma>0\), we minimize with respect to \(\mathbf{\theta}\) the loss function \[\mathcal{L}(\mathcal{D}_{t};\mathbf{\theta}):=\sum_{n\in\mathcal{I}_{t}}\big(y_{n}-\mathbf{\theta}^{\top}\mathbf{z}_{n}\big)^{2}+\gamma\|\mathbf{\theta}\|_{\ell}^{\ell},\] where \(\ell=1\) for LASSO and \(\ell=2\) for Ridge regression (i.e., we regularize using the \(\ell_{1}\) or \(\ell_{2}\) norm). Elastic net regression is obtained by combining the \(\ell_{1}\) and \(\ell_{2}\) penalties on the regression coefficients. Formally, in this case, we minimize the loss function \[\mathcal{L}(\mathcal{D}_{t};\mathbf{\theta}):=\sum_{n\in\mathcal{I}_{t}}\big(y_{n}-\mathbf{\theta}^{\top}\mathbf{z}_{n}\big)^{2}+\gamma\lambda\sum_{k=1}^{K}|\theta_{t,k}|+\frac{\gamma(1-\lambda)}{2}\sum_{k=1}^{K}\theta_{t,k}^{2}.\] Here \(\lambda\) trades off the importance of the \(\ell_{1}\) and \(\ell_{2}\) penalties. Unlike linear regression and the simple difference in means, these regularized models crucially depend on the tuning of some regularization hyperparameter. Towards the goal of having a streamlined, automated procedure to fit these models, we adopt a standard cross-validation approach. For each model, we repeatedly minimize the objective function across a predetermined number of different values of the regularization parameters. For each of these values, we split the data in \(5\) random folds, and fit the model \(5\) times by iteratively leaving out one fold of the data. For each fold, we compute the coefficient of determination on the left-out data using the fitted coefficients, choose the optimal regularization level by picking the value that achieved the maximum average coefficient of determination across the folds, and re-fit the model using the full dataset. We also consider principal component regression [PCR] -- where we first reduce the dimensionality of the regressors using their projections onto principal components, and then use these as covariates in a linear regression -- as well as an instance of a generalized linear model using a Tweedie distribution kernel.

## 5 Experiments

### Data description

For our experiments, we consider a representative set of \(W=100\) A/B tests. These have been running in production over the course of the last two years, at different times of the year. Each experiment corresponds to a different intervention. For simplicity, in our analysis we only consider one pairwise comparison per experiment (\(T_{1}\) versus \(T_{0}\)) -- even though some experiments might have more than two treatment arms. For each experiment, we run our data analysis pipeline and compute estimates of the causal effects after collecting data for a total time of \(D\in\{7,14,21,28\}\) days. For any analysis duration, the sizes of the experiments (total number of customers in the \(T_{1}\) and \(T_{0}\) arms) vary considerably (Figure 1). For all these experiments, we track the same KPI of interest. Since the scale of this KPI varies across experiments, in our illustrations and analysis we focus on the percent ATE (or lift), which is defined as \(\mathrm{LIFT}:=\mathrm{ATE}/|\bar{Y}_{0}|\). We use as covariates less than 10 pre-exposure values of customer metrics correlated with the KPI.
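As a sketch of the fold-based hyperparameter selection described in Section 4 (assuming NumPy and scikit-learn; the Ridge model, the penalty grid and the name `fit_regularized` are illustrative choices, not the production implementation), the regularization level is picked by the average out-of-fold coefficient of determination and the model is then refit on the full arm. In the GOBE recipe, this routine would be applied separately within each treatment arm.

```
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

def fit_regularized(z, y, penalties=(0.01, 0.1, 1.0, 10.0), n_folds=5, seed=0):
    """Choose the Ridge penalty by 5-fold CV (max average R^2), then refit on all data."""
    folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    scores = []
    for gamma in penalties:
        r2 = []
        for train, test in folds.split(z):
            model = Ridge(alpha=gamma).fit(z[train], y[train])
            r2.append(r2_score(y[test], model.predict(z[test])))
        scores.append(np.mean(r2))                    # average R^2 across folds
    best = penalties[int(np.argmax(scores))]
    return Ridge(alpha=best).fit(z, y)                # refit on the full dataset
```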
### Variance reduction

Intuitively, a more precise estimator (with lower estimated variance) leads to better estimates, and directly translates into faster and better decision making (as we further discuss in Section 5.4). We estimate the performance of model \(\mathcal{M}\) in terms of precision by computing its estimated percentage variance reduction \(\widehat{\mathrm{VR}}_{\mathcal{M}}\) with respect to the baseline \(\mathrm{DIM}\). Recalling that \(\widehat{\mathrm{Var}}_{\mathcal{M}}\) is the estimated variance of the \(\mathrm{ATE}\) under model \(\mathcal{M}\) (as per Algorithm 1), we define: \[\widehat{\mathrm{VR}}_{\mathcal{M}}:=100\times\left\{1-\frac{\widehat{\mathrm{Var}}_{\mathcal{M}}}{\widehat{\mathrm{Var}}_{\mathrm{DIM}}}\right\}.\] We plot in Figure 2 the variance reduction as a function of the duration of the analysis across all the experiments in the meta-analysis. Three main findings emerge:

* As expected (see, e.g. Guo and Basse (2021, Theorem 4)), covariate adjusted estimates generally have smaller variances.
* Larger variance reduction is observed in longer analyses, which are characterized by more stable customer behavior.
* The performance observed across different covariate adjusted methods is similar.

Figure 1: Empirical distribution of the sample sizes of the experiments considered in the meta-analysis.

Figure 2: Boxplots of the estimated variance reduction across the experiments considered in the analysis. Each subplot in the figure corresponds to a different analysis duration time, and each row in the boxplot refers to a different model \(\mathcal{M}\).

We analyze in Section 5.5 the performance and computation cost of these methods in relation to the number of (potentially noisy) regressors. Next, we further try to understand the relationship between analysis duration, sample size and model precision. For a given duration of the analyses (e.g., \(D=7\) days), let \(F_{N,D}:\mathbb{N}\rightarrow[0,1]\) be the empirical cumulative distribution function [CDF] of the sample size \(N\) of the experiments after \(D\) days, and let \(F_{N,D}^{-1}:[0,1]\rightarrow\mathbb{N}\) be its inverse. E.g., \(F_{N,7}^{-1}(0.6)\) is the sample size at the 60th percentile amongst the 7-day analyses. We consider the variance reduction gains within the first quartile (experiments with sample size \(N\in[F_{N,D}^{-1}(0),F_{N,D}^{-1}(0.25))\)) and in the last quartile (experiments with sample size \(N\in[F_{N,D}^{-1}(0.75),F_{N,D}^{-1}(1))\)), as shown in Figure 3. We observe different behaviors at duration \(D=7\) and \(D=28\). Specifically, for the shorter analysis time (\(D=7\) days) variance reduction is particularly evident in _larger_ experiments. However, for \(D=28\) smaller experiments seem to be benefitting the most from covariate adjustments. More broadly, we expect that the variance reduction induced by covariate adjustments can vary with the experiment size and duration, and the choice of covariates and model used. We advise practitioners to extensively analyze their data before the experiment is run, prior to adopting a regression model.

### Robustness to chance imbalance

In an A/B test, treatment arms should be _ex-ante_ comparable. That is, by virtue of the randomization, the distribution of the covariates for the units in the treatment and control group should coincide. This is not only a property of a correctly constructed A/B test, but also a fundamental requirement that such an experiment should satisfy to yield valid inferences.
Figure 3: Boxplots of the estimated variance reduction. First row: 7-day analyses. Second row: 28-day analyses. Left subplots (A, C): \(\widehat{\mathrm{VR}}_{\mathcal{M}}\) in the smallest 25 experiments. Right subplots (B, D): \(\widehat{\mathrm{VR}}_{\mathcal{M}}\) for the largest 25 experiments. Each row in each boxplot corresponds to a different model \(\mathcal{M}\).

Consider an experiment in which the value of a given covariate \(x\) is predictive of the outcome \(y\) (e.g., units with higher \(x\) tend to have higher \(y\)). If the triggering logic systematically allocates with higher (or lower) probability units with higher value of \(x\) to the treatment, condition (C1) in Section 3 is violated. As a consequence, inferences obtained from the A/B test are going to be invalid. Even for experiments in which the triggering logic determining treatment assignments is correctly specified, however, it can be the case that in practice an experiment leads to imbalanced treatment arms. E.g., an experiment in which the triggering logic follows Equation (1) can result in an "unlucky" split of the data, in which the covariate values in the treatment arms are not comparable. Concretely, the assignment variables \(J_{1:N}:=\{J_{1},\ldots,J_{N}\}\) could define a control group \(\mathcal{I}_{0}\) containing units that have on average much higher values of the KPI of interest in the pre-experimental period with respect to \(\mathcal{I}_{1}\) (or _vice versa_). In the presence of high pre-experimental covariate imbalance, practitioners worry whether they can trust their findings. We here empirically show the following:

* Under high imbalance, the difference in means estimator can systematically lead to wrong conclusions. In other terms: \(\widehat{\mathrm{ATE}}_{\mathrm{DIM}}\) is unbiased unconditionally on the covariate imbalance, but it can be conditionally biased. See Figure 4.
* Covariate adjusted methods alleviate this concern, and are robust to pre-experimental chance imbalance. See Figure 5.

Notice: in this section, we focus on simple linear regression -- qualitative findings for other methods are similar and omitted. To get us started, we need an operational definition of imbalance to quantify the pre-experimental comparability of the control and treatment arms. Let \(x_{n}\) denote the pre-experimental value of the KPI of interest for unit \(n\), and define the imbalance parameter \(\zeta\): \[\zeta:=\zeta(x_{1:N},J_{1:N})=\bar{x}(1)-\bar{x}(0), \tag{7}\] where \(\bar{x}(t):=\sum_{n=1}^{N}x_{n}1(J_{n}=t)/\sum_{n=1}^{N}1(J_{n}=t)\) is the average value of the KPI in the pre-experimental period for units later exposed to treatment arm \(t\). Intuitively, when \(|\zeta|\) is large, the two groups (control and treatment) are not ex-ante comparable. When the imbalance is sufficiently severe, experimenters worry that the estimates might not be trustworthy. In turn, this typically leads to the necessity of re-randomizing the experiment. This causes inefficiency in the experimentation pipeline: re-randomizing is expensive, as it requires using additional computational resources and postponing launch decisions. Methods whose inferences are less sensitive to randomization bias are therefore preferable. We now show on our real data that using a covariate adjusted estimator can lead to substantially better results than the unadjusted estimator, even in the presence of severe imbalance. In turn, this reduces the need to re-randomize and the associated experimentation cost.
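As a small illustration of the imbalance diagnostic of Equation (7), a sketch assuming NumPy, with hypothetical arrays `x` for the pre-experimental KPI and `j` for the assignments:

```
import numpy as np

def imbalance(x, j):
    """Pre-experimental imbalance zeta = mean(x | treated) - mean(x | control)."""
    x, j = np.asarray(x, float), np.asarray(j, int)
    return x[j == 1].mean() - x[j == 0].mean()
```

Large values of \(|\zeta|\) flag an "unlucky" split before any outcome data are analyzed.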
#### 5.3.1 A/A analysis

To study robustness of covariate adjustments to chance imbalance, we here adopt the following "A/A" approach. Given an experiment of interest, we restrict our attention to a single arm in the experiment (e.g., control). Namely, we only consider the subset of the units \(n\in\mathcal{I}_{t}\) exposed to policy \(t\), for a fixed \(t\), and discard all the other units, together with their covariates. For notation simplicity, in what follows we denote by \(\mathcal{I}:=\mathcal{I}_{t}\) the set of units in treatment arm \(t\). After this pre-processing, we can treat the units in \(\mathcal{I}\) as if they were obtained from an A/A test. That is, if we were to randomly split them into two "fake" treatment arms, we would have by construction that the ground truth average causal effect is known and equal to \(0\). A similar experimental setup is adopted e.g. in Guo and Basse (2021, Section 4).

#### 5.3.2 Monte Carlo simulation

To assess robustness to imbalance, we adopt a Monte Carlo approach. We fix a large integer \(S\) and for each \(s=1,\ldots,S\), we randomly split \(\mathcal{I}\) into two groups, creating A/A re-randomization groups \(\mathcal{I}_{0}^{(s)},\mathcal{I}_{1}^{(s)}\) such that \(\mathcal{I}_{0}^{(s)}\cup\mathcal{I}_{1}^{(s)}=\mathcal{I}\) and \(\mathcal{I}_{0}^{(s)}\cap\mathcal{I}_{1}^{(s)}=\varnothing\). We then define the A/A arm indicator \(J_{n}^{(s)}:=1(n\in\mathcal{I}_{1}^{(s)})\), and use Algorithm 1 to fit \(\widehat{\mathrm{ATE}}_{\mathcal{M}}^{(s)}\) using data \(\mathcal{D}_{1}^{(s)}=\{Y_{\mathcal{I}},J_{\mathcal{I}}^{(s)},X_{\mathcal{I}}\}\) for all the estimators \(\mathcal{M}\) under consideration. Here \(Y_{\mathcal{I}}=\{Y_{n}\ :\ n\in\mathcal{I}\}\). That is, we fit \(\widehat{\mathrm{ATE}}\) assuming that the units in control are those with index in \(\mathcal{I}_{0}^{(s)}\), and the units in treatment are indexed by \(\mathcal{I}_{1}^{(s)}\). Importantly, notice that for every re-randomization \(s\) we induce a split-specific level of imbalance \(\zeta^{(s)}\) as per Equation (7). Moreover, by construction the ATE is \(0\), since we here let the outcome \(Y_{n}\) be fixed, regardless of the value of \(J_{n}^{(s)}\).

Figure 4: Comparing DIM and LR estimates for a single experiment for \(S=10{,}000\) re-randomizations. Each dot corresponds to a different \((\zeta^{(s)},\widehat{\mathrm{ATE}}_{\mathcal{M}})\) combination.

We summarize this procedure in Algorithm 2.

```
Data \(\mathcal{D}:=\{(y_{n},\mathbf{z}_{n})\}_{n\in\mathcal{I}}\), set of regression models \(\mathfrak{M}=\{\mathcal{M}_{1},\ldots,\mathcal{M}_{W}\}\), treatment arm \(t\).
Let \(\mathcal{I}:=\{n\in[N]\ :\ J_{n}=t\}\).
for \(s=1,\ldots,S\) do
 Split \(\mathcal{I}\) into \(\mathcal{I}_{0}^{(s)},\mathcal{I}_{1}^{(s)}\) at random such that \(\mathcal{I}_{0}^{(s)}\cup\mathcal{I}_{1}^{(s)}=\mathcal{I}\) and \(\mathcal{I}_{0}^{(s)}\cap\mathcal{I}_{1}^{(s)}=\varnothing\).
 Let \(J_{n}^{(s)}:=1(n\in\mathcal{I}_{1}^{(s)})\).
 Let \(\bar{x}(\mathcal{I}_{t}^{(s)}):=\sum_{n\in\mathcal{I}_{t}^{(s)}}x_{n}/|\mathcal{I}_{t}^{(s)}|\) and compute \(\zeta^{(s)}=\bar{x}(\mathcal{I}_{1}^{(s)})-\bar{x}(\mathcal{I}_{0}^{(s)})\).
 for \(\mathcal{M}\in\mathfrak{M}\) do
  With \(\mathcal{D}^{(s)}:=\{Y_{\mathcal{I}},J_{\mathcal{I}}^{(s)},X_{\mathcal{I}}\}\), estimate \(\widehat{\mathrm{ATE}}_{\mathcal{M}}^{(s)},\widehat{\mathrm{CI}}_{\mathcal{M}}^{(s)}(\alpha)\) using Algorithm 1.
 endfor
endfor
```
**Algorithm 2** A/A test

#### 5.3.3 Validation

Once we have performed the Monte Carlo procedure described above, we have access to an empirical bivariate distribution of the estimator \(\widehat{\mathrm{ATE}}_{\mathcal{M}}\) as a function of the imbalance level \(\zeta\). We provide a visualization of how different estimators perform in Figure 4, where Algorithm 2 has been run on the control arm of a single experiment in the meta-analysis for \(S=10{,}000\). We make a scatterplot of the imbalance level \(\zeta\) (horizontal axis) against \(\widehat{\mathrm{ATE}}_{\mathcal{M}}\) for \(\mathcal{M}\in\{\mathrm{DIM},\mathrm{LR1},\mathrm{LR}\}\) (vertical axis). Here \(\mathrm{LR1}\) signifies that we regress the KPI only against its pre-experimental value, while \(\mathrm{LR}\) uses all the available covariates. We plot on the top the marginal distribution of the imbalance level \(\zeta\) (which is, as expected, centered around \(0\)), and on the right the marginal density of each estimator \(\widehat{\mathrm{ATE}}_{\mathcal{M}}\). We also plot a solid color line that is the linear fit of \(\widehat{\mathrm{ATE}}_{\mathcal{M}}\) against \(\zeta\). It is evident from Figure 4 that \(\widehat{\mathrm{ATE}}_{\mathrm{DIM}}\) is unconditionally unbiased (its expectation over re-randomizations \(s=1,\ldots,S\) coincides with the true ATE \(0\)), but it is not unbiased conditionally on imbalance. Because the true value of the underlying average causal effect is known, this information directly translates into a joint distribution for the accuracy of the estimator as a function of the imbalance. Estimators that are less sensitive to the imbalance allow experimenters to be confident about the results obtained even when such imbalance is present. _Vice versa_, an estimator that is sensitive to the imbalance level will make experimenters doubt their findings when the pre-experimental covariates are imbalanced. In turn, stable estimators will result in more efficient experimentation pipelines, in which data from "unlucky splits" are still useful to draw conclusions about the causal effect of interest. We now introduce a number of metrics that allow us to translate this intuition into a quantitative assessment of the quality of the estimator as a function of the imbalance level. Fix a value \(\kappa\in\mathbb{N}\), and create index sets \(\mathcal{S}_{1},\ldots,\mathcal{S}_{\kappa}\), where for each \(j=1,\ldots,\kappa\), the index set \(\mathcal{S}_{j}\subset\{1,\ldots,S\}\) contains the indices associated with the values of \(\zeta^{(s)}\) within the \(100\times\frac{j-1}{\kappa}\%\) to the \(100\times\frac{j}{\kappa}\%\) quantile of the empirical distribution of \(\zeta^{(s)}\). Then, we partition the values \((\widehat{\mathrm{ATE}}_{\mathcal{M}}^{(s)},\zeta^{(s)})\) into \(\kappa\) splits of equal size (\(\kappa\)-iles), according to the (sorted) value of \(\zeta^{(s)}\). For each \(j=1,\ldots,\kappa\), let \(G_{\mathcal{M},j}:\mathbb{R}\rightarrow[0,1]\) be the empirical cumulative distribution function of the estimates \(\widehat{\mathrm{ATE}}_{\mathcal{M}}^{(s)}\) falling into the \(j\)-th bucket, and let \(G_{\mathcal{M},j}^{-1}:[0,1]\rightarrow\mathbb{R}\) be its inverse (e.g., for \(\kappa=10\), we obtain the median value of the estimated ATEs which fall between the 30th and 40th percentile via \(G_{\mathcal{M},4}^{-1}(0.5)\)).
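A compact sketch of the A/A re-randomization loop of Algorithm 2, together with the quantile bucketing just described, could look as follows (assuming NumPy; `estimators` is a hypothetical mapping from names to estimator functions such as the `gobe_ate` sketch above, and a Bernoulli(1/2) split is used for simplicity):

```
import numpy as np

def aa_rerandomize(y, z, x, estimators, n_splits=10_000, kappa=20, seed=0):
    """Re-randomize a single arm into fake treatment/control groups (true ATE = 0)."""
    rng = np.random.default_rng(seed)
    N = len(y)
    zetas = np.empty(n_splits)
    ates = {name: np.empty(n_splits) for name in estimators}
    for s in range(n_splits):
        j_fake = rng.integers(0, 2, size=N)                        # A/A split of the arm
        zetas[s] = x[j_fake == 1].mean() - x[j_fake == 0].mean()   # imbalance zeta^(s)
        for name, estimate in estimators.items():
            ates[name][s] = estimate(y, z, j_fake)                 # e.g. DIM or a GOBE
    # assign each split to one of kappa quantile buckets of the imbalance
    edges = np.quantile(zetas, np.linspace(0, 1, kappa + 1))
    buckets = np.clip(np.searchsorted(edges, zetas, side="right") - 1, 0, kappa - 1)
    return zetas, ates, buckets
```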
Within each of the \(\kappa\) buckets, we compute

* the estimated MSE in the \(j\)-th bucket: \[\widehat{\mathrm{MSE}}_{\mathcal{M},j}=|\mathcal{S}_{j}|^{-1}\sum_{s\in\mathcal{S}_{j}}\left(\widehat{\mathrm{ATE}}_{\mathcal{M}}^{(s)}-\mathrm{ATE}\right)^{2}.\] (8)
* the squared distance of the median value of \(\widehat{\mathrm{ATE}}^{(s)}_{\mathcal{M}}\) in the \(j\)-th bucket to the true value \(\mathrm{ATE}\): \[\widehat{\mathrm{Mediandist}}_{\mathcal{M},j}=(G^{-1}_{\mathcal{M},j}(1/2)-\mathrm{ATE})^{2}.\] (9)
* the excess fraction of \(\widehat{\mathrm{ATE}}^{(s)}_{\mathcal{M}}\) in the \(j\)-th bucket which underestimate or overestimate the true effect \(\mathrm{ATE}\): \[\widehat{\mathrm{Excessfrac}}_{\mathcal{M},j}=\frac{\max\{\hat{q}^{+}_{\mathcal{M},j},1-\hat{q}^{-}_{\mathcal{M},j}\}-1/2}{1/2},\] (10) where \(\hat{q}^{+}_{\mathcal{M},j}=\arg\min_{\alpha}\{G^{-1}_{\mathcal{M},j}(\alpha)\geq\mathrm{ATE}\}\) is the quantile associated with the smallest value \(\widehat{\mathrm{ATE}}^{(s)}_{\mathcal{M}}\) in the \(j\)-th bucket to be above \(\mathrm{ATE}\) and \(\hat{q}^{-}_{\mathcal{M},j}=\arg\max_{\alpha}\{G^{-1}_{\mathcal{M},j}(\alpha)\leq\mathrm{ATE}\}\) is the quantile associated with the largest value \(\widehat{\mathrm{ATE}}^{(s)}_{\mathcal{M}}\) in the \(j\)-th bucket to be below the true \(\mathrm{ATE}\).

Notice: when the distribution of the estimates is centered around the true value, Equation (10) is close to \(0\). In the presence of large conditional bias, it approaches \(1\). Since these three metrics can all be regarded as notions of loss (the lower the value, the better), and since we are interested in assessing whether a covariate adjusted method \(\mathcal{M}\) achieves lower loss than the default unadjusted method \(\mathrm{DIM}\), we also define for a method \(\mathcal{M}\) its "relative" (to \(\mathrm{DIM}\)) counterpart as \[r(\widehat{\mathrm{Metric}}_{\mathcal{M},j}):=\frac{\widehat{\mathrm{Metric}}_{\mathrm{DIM},j}-\widehat{\mathrm{Metric}}_{\mathcal{M},j}}{\widehat{\mathrm{Metric}}_{\mathrm{DIM},j}}, \tag{11}\] for \(\widehat{\mathrm{Metric}}\in\{\widehat{\mathrm{MSE}},\widehat{\mathrm{Mediandist}},\widehat{\mathrm{Excessfrac}}\}\). For these relative metrics, larger values indicate higher sensitivity of the difference in means estimator \(\widehat{\mathrm{ATE}}_{\mathrm{DIM}}\), with respect to an alternative covariate adjusted estimator \(\widehat{\mathrm{ATE}}_{\mathcal{M}}\), to the imbalance level \(\zeta^{(s)}\). We report the value attained by these relative metrics across all the \(W=100\) experiments on the control arm at analysis day \(D=7\). Specifically, for each experiment we re-run Algorithm 2 for \(S=10{,}000\) re-randomizations and compute the relative metrics across \(\kappa=20\) buckets (Figure 5). We retain for each experiment and for each bucket the median value of the relative metric, and plot a solid line connecting the median (of these medians) across the \(W\) experiments, for all imbalance quantiles \(\mathcal{S}_{j}\), for \(j=1,\ldots,\kappa\), as well as 50% centered empirical intervals through the shaded regions. We find a similar behavior across the metrics: they are close to zero when the imbalance \(|\zeta^{(s)}|\) is small (i.e., around its median value across re-randomizations).
As the absolute value of the imbalance level \(|\zeta^{(s)}|\) increases, however, the value of the relative metrics also sharply increases, indicating larger sensitivity to the imbalance of the difference in means estimator with respect to the linear adjusted estimators (either using one or many covariates).

Figure 5: Summaries of the relative counterparts (Equation (11)) of the robustness metrics introduced in Equations (8) to (10) across the \(M\) experiments at day \(7\) of the analysis. Higher values correspond to higher sensitivity of \(\mathrm{DIM}\) relative to \(\mathrm{LR},\mathrm{LR}1\). Solid line: median across experiments; shaded regions: (25% – 75%) percentiles.

We conclude by checking estimators' calibration (Figure 6). Given width \(\alpha=0.95\), we compute the fraction of times that the estimated confidence interval spans the true value for each bucket \(\mathcal{S}_{j}\): \[\widehat{\mathrm{Coverage}}_{\mathcal{M},j}(\alpha)=\frac{\sum_{s\in\mathcal{S}_{j}}\mathbb{1}\left(\mathrm{ATE}\in\widehat{\mathrm{CI}}_{\alpha}^{(s)}\right)}{|\mathcal{S}_{j}|}. \tag{12}\] Attaining the nominal coverage \(\alpha\) means that the confidence intervals are well calibrated. For this robustness metric, the performance of the difference in means estimator is less sensitive to pre-experimental imbalance than for the metrics considered in Figure 5.

Figure 6: Summaries of \(\widehat{\mathrm{Coverage}}_{\mathcal{M},j}(\alpha)\) for \(\mathcal{M}\in\{\mathrm{DIM},\mathrm{LR}1,\mathrm{LR}\}\) (vertical axis, left to right), at \(\alpha=95\%\) across \(\kappa=5\) buckets (horizontal axis). The solid line tracks the median coverage across the \(M\) experiments, and the shaded regions cover 10% – 90% percentiles across these experiments. Results are relative to the analysis at day \(D=7\). The solid black line is the target nominal value 95%.

### Impact on experimentation time

We next illustrate how smaller estimated variances can lead to shorter experimentation time. We consider a hypothesis testing framework, where \(H_{0}\) is a null hypothesis of no effect of the treatment -- \(H_{0}:\{\mathrm{LIFT}=0\}\) -- and \(H_{1}\) is a fixed alternative under which the treatment has a fixed percent effect of size \(\delta\), \(H_{1}:\{\mathrm{LIFT}=\delta\}\). Based on the data collected so far (e.g., the first \(D=7\) days of the analysis), we form a prediction on the number of future units that are going to trigger in the experiment as it progresses (adapting recent sample-size prediction methods, see Masoero et al. (2022); Richardson et al. (2022); Camerlenghi et al. (2022)). Given the hypotheses, the predictions, and the estimated variances at \(D=7\), we compute the first future day \(D^{\prime}\) at which we expect to be able to reject the null hypothesis of no effect with at least \(\pi=80\%\) power under the alternative \(H_{1}\) with a given significance \(\alpha=0.05\). As seen in Figure 7, higher precision (smaller variance) directly translates into shorter experimentation time. We see, again, very similar performance across the different covariate adjusted methods considered. We emphasize that the predictions in Figure 7 depend on a number of factors: from the properties of the experiment (e.g., the observed means and variances of the KPI), to the choice of the hypothesized fixed effect value \(\delta\). However, the trend displayed in Figure 7 -- whereby smaller estimated variances translate in higher power and hence shorter duration of the experiments -- is expected: smaller estimated variances directly translate into shorter experiments.

Figure 7: For a given number of additional experimentation days (horizontal axis), we plot the additional number of experiments (vertical axis) for which \(H_{0}\) can be rejected at significance \(\alpha=0.05\) with \(80\%\) power given \(H_{1}\) when the estimates are obtained using a model \(\mathcal{M}\) as opposed to the default \(\mathrm{DIM}\). Different subplots refer to different experimental durations.

### Making tradeoffs at scale: computation, robustness, interpretability

As already discussed in Section 4, large scale inference engines should be designed keeping in mind the constraints imposed by the scale at which they operate. We have already discussed how linear models and regularizations thereof are robust (e.g., to chance imbalance) and interpretable.
We here analyze how the computation cost and the estimation accuracy scale with the number of (noisy) additional covariates. Specifically, we test how computation and accuracy are affected by augmenting the \(K\) covariates with additional spurious covariates. To do so, we compute for each covariate \(k\) the (empirical) first moment \(\hat{\mu}_{k}\) and second moment \(\hat{\sigma}_{k}^{2}\), and draw for each \(n=1,\dots,N\) a set of spurious covariates \(\tilde{z}_{n,k}\sim\mathcal{N}(\hat{\mu}_{k},\hat{\sigma}_{k})\) i.i.d., for \(k=1,\ldots,K\). This produces an augmented set of covariates, \(\tilde{\mathbf{z}}_{n}:=[z_{n,1},\ldots,z_{n,K},\tilde{z}_{n,1},\ldots,\tilde{z}_{n,K}]^{\top}\). In our experiments, we also consider even larger sets of covariates, obtained by drawing \(L\) times from each kernel \(\mathcal{N}(\hat{\mu}_{k},\hat{\sigma}_{k})\) for every \(n=1,\ldots,N\). In the general case where we draw \(L\) spurious folds, the covariates used are: \[\begin{split}\tilde{\mathbf{z}}_{n}:=[\underbrace{z_{n,1},\ldots,z_{n,K}}_{\text{Real Covariates}},\underbrace{\tilde{z}_{n,1},\ldots,\tilde{z}_{n,K}}_{\text{First Spurious Fold}},\underbrace{\tilde{z}_{n,K+1},\ldots,\tilde{z}_{n,2K}}_{\text{Second Spurious Fold}},\\ \ldots,\underbrace{\tilde{z}_{n,(L-1)K+1},\ldots,\tilde{z}_{n,LK}}_{L\text{-th Spurious Fold}}]^{\top}\in\mathbb{R}^{(L+1)\times K}.\end{split} \tag{13}\] Equation (13) simulates a setting in which we might be using a large set of non-curated covariates, some of which are noisy and uncorrelated with the outcomes (violating condition (C2) in Section 3). We analyze in Figure 8 the computation cost of running different covariate adjusted methods, relative to the baseline \(\mathrm{DIM}\), as a function of the size of the experiment and the number of covariates used. In our experiments, even for the larger ones, the computation cost of covariate adjusted methods is moderate. We run experiments using the popular scipy python library (Virtanen et al., 2020) on a 16-core Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz. Computation cost increases at a faster rate with the sample size than with the covariates' dimensionality.

Figure 8: Log computation time (vertical axis) on four representative datasets having sample size \(N=F_{N,7}^{-1}(\alpha)\) for \(\alpha\in\{0.15,0.4,0.6,0.85\}\) (horizontal axis). In the left plot we use the default set of \(K\) covariates. In the center plot we add one fold of \(K\) spurious covariates, in the right plot we add five folds of spurious covariates as per Equation (13). In both these last cases, we repeat the fit \(N_{MC}=100\) times on \(100\) different randomly drawn noisy sets of covariates.

We next assess the robustness of different regression-adjusted methods to the presence of noisy covariates. Because the true causal effect is unknown, we treat the estimate obtained using the linear regression model on the full data as the "ground truth" -- i.e., \(\mathrm{ATE}:=\widetilde{\mathrm{ATE}}_{\mathrm{LR}}\) with data \(\{y_{1:N},J_{1:N},\mathbf{z}_{1:N}\}\).
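A sketch of the covariate augmentation of Equation (13), assuming NumPy; the moments are estimated per covariate and each of the \(L\) spurious folds is drawn independently from the corresponding Gaussian kernel (the function name `add_spurious_folds` is ours):

```
import numpy as np

def add_spurious_folds(z, L, seed=0):
    """Append L folds of K spurious Gaussian covariates to the (N, K) matrix z."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, float)
    mu, sigma = z.mean(axis=0), z.std(axis=0)         # empirical per-covariate moments
    N, K = z.shape
    folds = [rng.normal(mu, sigma, size=(N, K)) for _ in range(L)]
    return np.concatenate([z] + folds, axis=1)        # shape (N, (L + 1) * K)
```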
For a fixed \(L\geq 1\) and a large number of Monte Carlo random draws \(S\), we (i) draw spurious covariates \(\tilde{\mathbf{z}}_{1:N}\) and (ii) compute the empirical distribution of the percentage absolute difference (or error) in the estimate in the presence of spurious covariates with respect to the ground truth: \[\widehat{\mathrm{err}}^{(s)}_{w,\mathcal{M},L}:=\frac{|\widetilde{\mathrm{ATE}}^{(s)}_{w,\mathcal{M},L}-\mathrm{ATE}_{w}|}{|\mathrm{ATE}_{w}|}.\] For experiment \(w\), \(\widetilde{\mathrm{ATE}}^{(s)}_{w,\mathcal{M},L}\) is the estimate of the true \(\mathrm{ATE}_{w}\) using model \(\mathcal{M}\) on the \(s\)-th random re-draw of covariates with \(L\) spurious folds. Results are displayed in Figure 9, where we break down the distribution of the error by clustering experiments in the meta-analysis according to their sample size. Specifically, we divide experiments into four quartiles according to \(F_{N,7}\) ((A)-(D)). Within each quartile, we compute for each experiment \(w\) and for each method \(\mathcal{M}\) and fold \(L\) the median error across \(s=1,\ldots,S\) (\(\operatorname{\mathrm{med}}_{1:S}(\widehat{\mathrm{err}}_{w,\mathcal{M},L}^{(1:S)})\)). For each quartile of the sample size distribution, this procedure gives us a list of \(25\) values for each \(\mathcal{M},L\). We plot in Figure 9 the median across these 25 median errors (vertical axis) as a function of the number of spurious folds (horizontal axis) across different methods. The estimators considered are extremely robust to noise in terms of point estimates, even in the presence of several noisy covariates.

Figure 9: Median of median errors (vertical axis) as a function of the number of spurious folds (horizontal axis) at day \(D=7\) of the analysis. Each subplot refers to a different quartile of experiments according to their sample size as per \(F_{N,7}\).

Adding spurious noisy covariates also has limited impact on variance reduction gains, as shown in Figure 10. This shows us that the performance of covariate adjusted methods is reliable in the presence of noisy covariates, and the computation cost -- for a well optimized library -- is not prohibitive, even in the presence of large sample sizes.

Figure 10: Median of median variance reduction \(\widehat{\mathrm{VR}}_{\mathcal{M}}\) (vertical axis) as a function of the number of spurious folds (horizontal axis) at day \(D=7\) of the analysis. Each subplot refers to a different quartile of experiments according to their sample size as per \(F_{N,7}\).

## 6 Discussion

In this paper, we have discussed the value and potential of adopting a large class of covariate adjusted models (Generalized Oaxaca-Blinder Estimators) for the estimation of causal effects in online A/B testing. GOBEs rely on a simple but very general procedure, discussed in Algorithm 1. By leveraging additional covariates and adopting linear and non-linear regression models, we showed in Section 5, on extensive experiments on real data, that these estimators lead to precise (Section 5.2) and robust (Section 5.3) estimates of the causal effects of interest, which vastly outperform the simple difference in means estimator. Adoption of these estimators can help practitioners understand the effectiveness of the intervention being tested in shorter periods of time, cutting experimentation cost and streamlining the adoption of beneficial innovations (Section 5.4). The upfront cost to be paid in order to obtain these more precise estimates is both computational and statistical.
We discuss these drawbacks in Section 5.5, in which we analyze the performance and computation cost incurred by linear models, and regularized versions thereof, on our meta-analysis. In particular, we focus on how such performance scales with the sizes of experiments and the dimensionality of covariates. We find that for the models considered, computation cost is not prohibitive even for larger experiments, and inferences are reliable even in the presence of several spurious covariates. In light of the practical concerns and desiderata outlined in Section 4, we choose to only consider interpretable and simple linear models, and their regularized versions. We emphasize, however, that the generalized Oaxaca-Blinder framework for the estimation of causal effects introduced in Section 3 can be straightforwardly applied to complicated, non-linear regression functions (e.g., neural networks). Fitting flexible, nonlinear regression models typically involves solving a complicated, non-convex optimization problem (like the minimization problem of Equation (3)), and might require employing cross-fitting approaches like the ones discussed in Section 4 in order to tune regularization parameters. The design of paradigms to automate these procedures, and related cost-benefit analyses, is an active research area. In settings different from the one we considered, flexible non-linear methods have the potential to vastly outperform the simple linear methods considered here. See, e.g., the discussion in Guo et al. (2021). We envision a number of exciting avenues for future research. On the methodological side, Guo and Basse (2021) laid the foundations of a framework to provide provable guarantees for a large class of regression models for the estimation of the causal effects. Enlarging the class of models for which these guarantees hold is an exciting avenue for future work (Cohen and Fogarty, 2020; List et al., 2022). Additionally, simplifying the conditions necessary for these guarantees to hold, and strengthening their characterization, could further increase the popularity of these approaches. On the applied side, the development of pipelines to automate the identification of the optimal regression functions in the presence of large and heterogeneous datasets is a crucial step towards the adoption of these methods at scale.
Towards this goal, thorough investigations of the benefits, costs and risks of adopting large, flexible covariate-adjusted regression methods are exciting challenges for practitioners in the upcoming years.
2304.07889
Ontology for Healthcare Artificial Intelligence Privacy in Brazil
This article details the creation of a novel domain ontology at the intersection of epidemiology, medicine, statistics, and computer science. Using the terminology defined by current legislation, the article outlines a systematic approach to handling hospital data anonymously in preparation for its use in Artificial Intelligence (AI) applications in healthcare. The development process consisted of 7 pragmatic steps, including defining scope, selecting knowledge, reviewing important terms, constructing classes that describe designs used in epidemiological studies, machine learning paradigms, types of data and attributes, risks that anonymized data may be exposed to, privacy attacks, techniques to mitigate re-identification, privacy models, and metrics for measuring the effects of anonymization. The article concludes by demonstrating the practical implementation of this ontology in hospital settings for the development and validation of AI.
Tiago Andres Vaz, José Miguel Silva Dora, Luís da Cunha Lamb, Suzi Alves Camey
2023-04-16T21:05:46Z
http://arxiv.org/abs/2304.07889v2
# Ontology for Healthcare Artificial Intelligence Privacy in Brazil

###### Abstract

This article details the creation of a novel domain ontology at the intersection of epidemiology, medicine, statistics, and computer science. Using the terminology defined by current legislation, the article outlines a systematic approach to handling hospital data anonymously in preparation for its use in Artificial Intelligence (AI) applications in healthcare. The development process consisted of 7 pragmatic steps, including defining scope, selecting knowledge, reviewing important terms, constructing classes that describe designs used in epidemiological studies, machine learning paradigms, types of data and attributes, risks that anonymized data may be exposed to, privacy attacks, techniques to mitigate re-identification, privacy models, and metrics for measuring the effects of anonymization. The article concludes by demonstrating the practical implementation of this ontology in hospital settings for the development and validation of AI.

## 1 Introduction

The anonymity of hospital data records is a critical issue for health researchers who use data to develop Artificial Intelligence (AI), among other studies focused on data analysis. Within the scope of research carried out with databases containing hospital records, data considered anonymous may not have the statistical properties necessary to ensure anonymity (SWEENEY, 2015). This is a problem for health researchers who use data for the development of Artificial Intelligence (AI), among other studies focused on data analysis (ROCHER, 2019). To use these data in health research, we need to understand how to treat them while preserving the privacy of the people involved, mitigating the risk of re-identification in accordance with current legislation (SPENGLER, 2019). Around the world, nations have data protection laws that determine the mechanisms that need to be adopted when work involving the processing of personal data and sensitive personal data is conducted. With these laws, the need for information security, governance, education, auditing, and data processing supervision increases the costs inherent in the use of data (SULLIVAN, 2004). However, some privacy laws, including the Brazilian General Data Protection Law (LGPD) and the General Data Protection Regulation (GDPR) adopted by the European Union, establish that data, when they have or acquire the property of anonymity, are no longer subject to the regulation (BRAZIL, 2018; EU, 2016). The LGPD defines anonymized data as: "data relating to a holder who cannot be identified, considering the use of reasonable technical means available at the time of its treatment." The GDPR defines anonymization as "information that does not relate to an identified or identifiable natural person or other personal data that has been anonymized." With the possibility of anonymizing hospital data, opportunities arise, such as sharing the analyzed data with other researchers. This is done to promote the reproducibility of experiments and to provide transparency in the methods used and the results achieved, and anonymous data allow this (MARK, 2016). To treat this discourse logically, it becomes necessary to semantically represent data from anonymized hospital records through a specific ontology (ABHYANKAR, 2012). This work presents the Ontology of Brazilian Hospital Records (ORHBR).
It connects multidisciplinary concepts about privacy and data science from epidemiology, medicine, statistics, and computer science, forming an understanding of the structures necessary to describe the reasoning about anonymizing hospital data.

## 2 Methodology

To develop the ORHBR, we adopt the methodology proposed for constructing new ontologies in 7 steps. Step one defines the domain and scope, highlighting what is not part of the scope. In step 2, knowledge selection occurs, including reviewing similar ontologies and determining the extension and adaptation proposal. In step 3, the essential terms of the scope are presented, along with other standardizations in the health area outside of this specific topic. In steps 4, 5, and 6, the work process for creating classes, properties, and relations occurs, defining them individually in a specialized tool for creating ontologies. The last step is step 7, when we define an instance of the ontology (NOY, 2001).

## 3 Development

To develop the ORHBR following the proposed methodology, we performed the following steps using the protege.stanford.edu tool (Figure 1) available on the internet.

### Definition of domain and scope

The scope includes what is needed to assess whether the anonymization of a given hospital dataset (an instance of this ontology) is represented semantically appropriately. We define anonymization as preparing a data set to make it impossible to re-identify the data subject, considering the use of reasonable technical means available during processing (BRAZIL, 2018). However, other methods to protect privacy are not considered anonymization methods and are therefore outside the scope of this work, including de-identification through the removal of pre-defined items (SULLIVAN, 2004), pseudonymization by replacing the holder's identifiers with an alternative key (OLIVEIRA, 2020), and the use of cryptography (including hash functions, blockchain, and others) to make the data set secret, but which can be decrypted and identified (KRAWIEC, 2016). This ontology intends to support researchers in answering qualitative questions to characterize different clinical and epidemiological observational studies that use anonymized hospital records. The scope of patients' health is separate from the scope of this ontology, as it is not a new vocabulary of medical terms. ORHBR is also not an ontology for organizing information from computerized electronic medical record systems. It is a domain-specific ontology of anonymization methods for research that uses AI in hospitals. ORHBR was built to be agnostic, so that it can be used independently of other standards and conventions researchers may use.

### Knowledge Selection

We use the work of QUEIROZ (2016) as a reference, which presents a domain ontology for preserving privacy in data published by the Brazilian federal government to control access to information, and proposes the main classes necessary to address the topic. BATET (2011) established an ontology that defines metrics to compute semantic similarity in biomedicine, which is essential to define comparable results in this topic. LLUIS (2011) presents, in his doctoral thesis, an ontology for the statistical properties of anonymization based on data disturbance, proposing definitions of the types of possible data processing and the types of threats to which the data are exposed. A generic ontology of data types, called OntoDT, established the concept of data typing (PANOV, 2016).
The ORHBR proposal is born within the scope of the LGPD - General Data Protection Law (BRAZIL, 2018), while the reference selected for this work originated with the aim of protecting public documents of the federal government provided to the general public within the scope of the Brazilian law that regulates access to information.

### Important Terms

According to current legislation in Brazil, personal data is a term that designates information related to an identified or identifiable natural person. Sensitive personal data is a person's genetic or biometric data, information about health, racial or ethnic origin, and religious, philosophical, or political organization affiliations. Data processing defines all operations performed with personal data, including collection, reception, classification, access to data, all forms of processing, storage in different media, and disposal (BRAZIL, 2018). Identifiers are all data that directly point to the data holder without the use of other information. Indirect identifiers are data about a patient that can be found in other public information sources. The following common terms are used to designate identifying data and indirect identifiers: full name, first name, last name, address (all geographic subdivisions smaller than state, including address, county, and zip code), all data elements relating to an individual (including date of birth, date of admission, date of discharge, date of death and exact age), contacts (phone numbers, fax numbers, e-mail addresses, social networks), identification codes (social security, health plan, medical record number, bank account, credit card, certificates, invoices and serial numbers of hospital products and medical devices), URL, cookie, IP number, username, biometric identifiers (fingerprint, retina or voice), images and sounds of people (including medical diagnostic images - not limited to photographic images of the face), and biological samples collected from people and stored in biobanks and personalized medicine services, with the potential to identify through DNA, or another method, the origin of the material (SULLIVAN, 2004). Other important terms for understanding and using this ontology are established in international standards and dictionaries. We used as a reference for this work the dictionary of epidemiology written by Prof. Miguel Porta, sponsored by the International Association of Epidemiology (PORTA, 2014). In medicine, we use the concepts of SNOMED-CT, which determines international standards for medical terms. In computer science, we adopted the terms proposed by KERR (2016). In health informatics, we used the dictionary of terms, acronyms, and health organizations published by the Health Informatics and Management Systems Society (HIMSS, 2013).

### Class Creation

Classes represent the central structures of an ontology and were defined according to the needs identified during the requirements analysis to create the first instance. Next, we present the description of each class, conceptualizing the values that belong to each of their properties.

### Study Designs

Within the scope of this ontology, we identified three types of designs used in epidemiological research studies that can be performed using anonymized secondary records. We will indicate references for details on the main epidemiological designs (NUNES; CAMEY; GUIMARAES, 2013).

* Cross-sectional: an observational study examining a data set at a given time to estimate the frequency of a given event.
Exposure and outcome are collected from the database without information on dates associated with events. These studies analyze the prevalence and the association between exposure (exposed and unexposed) and the outcome class, which is usually binary, but can present different data types.
* Case-control: an observational study to compare people with a positive outcome of interest and a control group of people who do not have the outcome. It usually uses systematic data collection over extended periods. It allows for identifying risk factors in rare diseases, studying their etiology, and analyzing the odds ratio.
* Cohort: an observational study carried out to follow a group of people over time to assess the risks and benefits of using a given intervention or medication and to study the evolution and prognosis of diseases. It compares the disease incidence in the population using a ratio (relative risk) or a difference (attributable risk).

These types of studies have purposes and limitations that need to be addressed by computer scientists during experiments with AI applied to health. For example, the health data observed in an Electronic Health Record (EHR) do not record the patient's health events in their completeness and require specific analyses to circumvent possible research biases. Furthermore, these data are subject to different types of systematic errors that reflect the quality of how hospital staff uses the EHR. The other designs of epidemiological studies generally do not allow the systematic use of anonymization. Case reports describe isolated cases in detail and do not involve a data-centric analysis. Experimental studies (for example, randomized clinical trials) involve prospective follow-up of research subjects. Even if measures are taken to preserve privacy using data preparation, for security reasons related to the participants' health, most of these studies need ways of re-identifying the data subjects, making anonymization an unfeasible alternative.

### Data Type

Within the scope of anonymization, we use the expression dataset to designate structures containing rows and columns of data, also called tables, spreadsheets, or tabular data. We will call all columns variables, also called covariates or features. In order to understand the types of data in a hospital record, we classified the types of data into subclasses representing different topics.

* Nature: Qualitative (ordinal and nominal), Quantitative (discrete and continuous). (YADAV, 2019)
* Structure: Structured, semi-structured, unstructured, raw. (KIMBALL, 2011)
* Computation: int, float, char (C++ language examples) (IBM, 1993).
* Metadata: Taxonomy (labels) and dictionaries. (CHUTE, 2010)
* Dataset Digital Format: Plaintext (UTF-8, ISO-9071 and others), Proprietary, Encrypted. (IBM, 1993)
* Localization: Portuguese-BR, English-USA and others; defines the language, time zone and reference values that may vary depending on the location. (IBM, 1993)

Having presented the types of data that can be used, we can now classify a variable according to its role in anonymization, which we will call its attribute type.

### Attribute Type

The attributes define how each of the variables will be treated during the anonymization of the information.

* Identifiers: data that directly identify the natural person who is the data subject.
It includes, but is not limited to, the full name, the patient's medical record, personal document numbers, and professional license records, among others, which need to be removed during treatment for anonymization. (BRAZIL, 2018) * Indirect identifiers: data that, when combined, can reveal the identity of the data subject, but that can be treated with a privacy algorithm to implement k-anonymity. (BRAZIL, 2018) * Sensitive Personal Information: all other information about a person's health. For anonymization purposes, it can be treated with the l-diversity and t-approximation algorithms. (MACHANAVAJJHALA, 2007) From the definitions of the types of attributes, we can analyze the types of risk, the attacks, and the privacy models that can be used to mitigate the risks against privacy. ### Types of Risk The top three privacy-related risks that threaten datasets are: * Identity disclosure (re-identification): a risk of high privacy impact, since an attacker who successfully re-identifies a record learns all sensitive information about the data subject contained in the dataset. (SWEENEY, 2002) * Disclosure of attributes: a risk of intermediate impact on privacy, because when an attacker is successful, only the value of some variables in the set is disclosed, which may allow inferring information about the holders of the data contained in the set without pointing out exactly who they are. (PRASSER, 2016) * Disclosure of association: a risk of lesser impact, since the attack does not directly disclose any information from the data set itself, but allows determining whether or not the holder is within the data set. (PRASSER, 2016) ### Attack Type There are three attack models that can be used to try to identify data considered anonymous. * Journalist Model: attacks that aim to disclose the identity of a specific data subject. It uses the data linkage technique to relate indirect identifiers contained in the dataset with other public information on the internet (SWEENEY, 2015). * Prosecutor Model: attacks that aim to disclose the identity of the data subject or a specific attribute, using as prior (background) knowledge whether or not the data of interest are contained in the data set. (PRASSER, 2016) * Merchant Model: attacks with no specific target that aim to identify a large number of the data subjects existing in a set. (PRASSER, 2016) ### Privacy Model An electronic hospital record can have its re-identification risks mitigated with a privacy model: * k-Anonymity: uses grouping and suppression of indirect identifiers to ensure that an individual's data is indistinguishable from the data of at least k-1 other individuals (SWEENEY, 2002). * l-Diversity: an extension of k-anonymity insofar as it applies the protection given to indirect identifiers to the set's sensitive personal data as well, increasing the computational complexity insofar as new categories need to be defined for each current value in the health datasets. (MACHANAVAJJHALA, 2007) * t-Approximation: proposes to overcome the limitation of l-diversity by redefining the distribution of the values of each variable instead of grouping. (RAJENDRAN, 2017) * δ-Presence: can be used to protect data from membership disclosure, where a dataset reveals the probability that an individual from the population is contained in the dataset (RAJENDRAN, 2017). Other models adapt the methods presented here and will not be treated in this work (KHAN, 2021). With the privacy models defined, we can apply the related preparation techniques. 
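To make the k-anonymity model concrete before moving on to the preparation techniques, the following minimal sketch (in Python with pandas; the indirect identifiers and values are hypothetical examples, not part of ORHBR) checks whether a toy extract of hospital records satisfies k-anonymity by measuring the size of every equivalence class.

```python
# Minimal sketch: verifying k-anonymity by counting equivalence classes over
# the indirect identifiers. Column names and values are hypothetical.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, indirect_identifiers: list, k: int) -> bool:
    """A set satisfies k-anonymity if every equivalence class (records sharing
    the same indirect-identifier values) contains at least k records."""
    class_sizes = df.groupby(indirect_identifiers).size()
    return bool((class_sizes >= k).all())

records = pd.DataFrame({
    "age_group": ["60-69", "60-69", "60-69", "70-79", "70-79"],
    "sex":       ["F",     "F",     "F",     "M",     "M"],
    "diagnosis": ["I21",   "I50",   "I21",   "J18",   "I21"],  # sensitive attribute
})

print(is_k_anonymous(records, ["age_group", "sex"], k=2))  # True
print(is_k_anonymous(records, ["age_group", "sex"], k=4))  # False
```

The models above extend this check: l-diversity and t-approximation additionally constrain the sensitive values observed within each equivalence class.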
### Preparation Technique The decisive action to preserve the privacy of a data set is its preparation with one or more specific techniques used according to the selected privacy model: * Suppression: deletion of data that may indirectly identify a person. For this, algorithms are used that can entirely suppress or partially mask the observations, variables, or specific values within a data set (SAMARATI, 2012). * Grouping: classification of patients into categories (also called generalization). The grouping of values allows all patients with the same category to be classified in the same groupings. (SAMARATI, 2012) * Disturbance: inclusion of noise, intentionally modifying data to make re-identification difficult. It is used to implement the differential privacy method in large databases. Because this treatment modifies the actual data, we will not deal with these models in this work (DWORK, 2008). ### Information Metric By using the preparation techniques, we reduce the risk of re-identification but also introduce a loss of usefulness of the information, and information metrics quantify both effects. These metrics generally use the original data (input) and the anonymized data (output) to be computed. We will use the following basic definitions: * Indirect Identifiers: all variables the researcher defines as attributes that can be combined to re-identify a record. Usually, these attributes can be found publicly, such as age, gender, marital status, skin color, and level of education. * Equivalence class: records with the same values of the indirect identifiers belong to the same equivalence class (Ej). Therefore, D = E1 U ... U Ej, where j is the number of equivalence classes in the set D. * Class Equivalence Size (CES): the number of records i that belong to the same equivalence class. * Hyperparameter k: the value k is defined a priori by the researcher and is used by the anonymization algorithms to limit the minimum size of the CES in an anonymized set. The greater the value of k, the greater the privacy of a set, while k=1 means that the records are not anonymized. An original set D(\(A_{1}\),...,\(A_{p}\)) containing the indirect identifiers A = \(A_{1}\),...,\(A_{p}\) is suppressed or grouped into the anonymized set Dz(Az1,...,Azp) such that CES(Dz, Az, i) >= k for all i. With this, we can define the metrics that measure the effect of anonymization. * Individual Re-identification Risk (RR): the re-identification risk of each record i in a dataset D containing indirect identifiers A. It depends on the CES value and is calculated using the following formula (SWEENEY, 2002): \[RR(D,A,i)=\frac{1}{CES(D,A,i)}*100\] (1) * Average Re-identification Risk (Average RR): the average re-identification risk of a set D containing n records i. The Average RR of the entire set is important because the anonymization algorithms tend to keep the Individual RR close to the Maximum RR regardless of the techniques used in the preparation. The Average RR, however, is more sensitive to improvements in the way the indirect identifiers are prepared and thus better reflects gains in the privacy of a set. 
Its value depends on the value of the Individual RR and is calculated with the formula: \[AverageRR(D,A)=\frac{RR(D,A,1)+...+RR(D,A,n)}{n}\] (2) * Maximum Re-identification Risk (Maximum RR): the chance of success that an attack against privacy has of re-identifying at least one of the holders present in a data set: \[MaximumRR(D)=\frac{1}{k}*100\] (3) * Non-Uniform Entropy (NUE): compares the equivalence class sizes before and after anonymization, for the whole dataset and individually for each of the attributes. The formula assumes that the quotient is always less than or equal to 1, as the CES of record i can only increase during preparation. Thus, the negative logarithm of the ratio is always a positive number and the sum over all records shows the entropy of the attribute. The closer the value of NUE is to 100, the smaller the loss of variable information during the anonymization from D to Dz. (PRASSER; BILD; KUHN, 2016) \[NUE(D,Ap,Dz,Azp)=\left(1-\sum_{i=1}^{n}-\log\frac{CES(D,Ap,i)}{CES(Dz,Azp,i)}\right)*100\] (4) * Intensity of Generalization (IG): identifies the loss of information between the original set and the anonymized set from the number of values that were modified during anonymization (SWEENEY, 2002). Here, I is the indicator function: it assumes the value 1 when \(A_{ij}\neq Az_{ij}\) and 0 otherwise; n is the total number of rows, p is the total number of attributes, D is the original set, and Dz is the anonymized set. Therefore, IG(D, Dz) = 100 if there is no modification of the indirect identifiers. The more categorizations and deletions occur, the more the loss of information increases and the IG approaches zero. If all the values of the indirect identifiers undergo suppression or categorization, we have IG(D, Dz) = 0. We use the IG to compute the number of values that were modified in the preparation, comparing the equality of the values before and after in all the rows i and columns j of both sets, with the following formula: \[IG(D,Dz)=\left(1-\frac{\sum_{i=1}^{n}\sum_{j=1}^{p}I(A_{ij}\neq Az_{ij})}{n*p}\right)*100\] (5) * General Granularity (GG): compares the number of distinct values existing in a variable before and after anonymization to show the loss of information, where QAzp is the number of distinct values existing in Azp after preparation and QAp is the number of distinct values in Ap before preparation. The closer the value of GG is to 100, the smaller the loss of quality of information from A during anonymization. (IYENGAR, 2002) \[GG(Ap,Azp)=\frac{QAzp}{QAp}*100\] (6) * Specific purpose method: compares models derived from different ways of preparing the same data set, for example, developing a classifier with machine learning and using the results of accuracy, sensitivity, specificity, among others, to measure the effects of anonymization. (PRASSER, 2016) ### Type of Use Anonymizing a dataset impacts information loss indicators and, consequently, the results of using AI techniques. Therefore, the preparation needs to be done according to these indicators and considering the different types of use, as proposed by LEFEVRE (2008): * Linear regression analysis: involves finding a linear model that describes or predicts the value of a quantitative dependent variable as a function of the other variables in the set. Linear regression analyses can be implemented with AI using supervised machine learning, including linear regression, neural networks, regression trees, and more. 
(JAMES, 2014) * Classification: the attribution of qualitative variables that represent classes with predetermined values (also called targets, categories, dependent variables, or labels) using a systematic procedure based on the observed variables. It is a task performed in cross-sectional studies and is characterized by not using dates. AI classifier algorithms can use different learning approaches, including supervised, semi-supervised, and unsupervised. The main methods for classification include logistic regression, methods based on decision trees, neural networks, linear discriminant analysis, clustering, boosting, and support vector machines (SVM). * Information Retrieval: a selection (or query) involves a set of criteria used to filter data and define groups for a population (subpopulation). Combinations using logical operators allow the formulation of complex queries, usually implemented with Structured Query Languages (SQL). The use of natural language processing (NLP), Neural Networks (NN), Large Language Models (LLM), and other AI techniques allows the selection of information in free text and images as well. Selection tasks are required at different stages of a study. (LEFEVRE; DEWITT; RAMAKRISHNAN, 2008) * Clustering: involves recognizing, differentiating, and understanding how data can be grouped into categories. Clustering is also a type of staging used to implement anonymization, and for this reason, clustered data is also often considered anonymous data. However, some tasks derived from clustering demand specific analyses of their requirements for anonymization, such as topic classification, assignment of taxonomies, and clustering itself (which can be done using unsupervised machine learning) (IBM, 1993). There are many ways to use an anonymized dataset. We focus on those that use Machine Learning (ML) to develop algorithms that can learn from their experiences and thus improve their performance in specific tasks necessary to solve problems. These tasks can be implemented with AI in different computer programs and via programming using libraries and machine learning packages such as Scikit-Learn for Python and Stats for R. (KUHN, 2008; LORENZONI, 2019; PEDREGOSA, 2011; SANCHES, 2003) ### Creating properties With the definition of the classes and their subclasses, we can create the properties that indicate the existing relationships between the classes, answering questions posed using important terms for anonymizing hospital records, such as: To what type of risk is the dataset exposed? What is the data type of the attributes that are indirect identifiers in the set? What privacy models mitigate the risk against the anticipated types of attacks? What are the risks of re-identifying the set in case of a specific type of attack? What metrics can identify information loss in data prepared with suppression? 
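Before defining the ontology properties that answer these questions, the following fragment sketches how the re-identification risk and information loss metrics of the Information Metric class can be computed in practice. It is an illustrative Python example (using pandas), not part of ORHBR, and the column names and values are hypothetical.

```python
# Illustrative computation of RR, Average RR, Maximum RR and IG (Eqs. 1-3 and 5).
# Toy data; "age_group" and "sex" play the role of indirect identifiers.
import pandas as pd

def individual_rr(df, identifiers):
    ces = df.groupby(identifiers)[identifiers[0]].transform("size")  # CES per record
    return 100.0 / ces                                               # Eq. (1)

def average_rr(df, identifiers):
    return individual_rr(df, identifiers).mean()                     # Eq. (2)

def maximum_rr(k):
    return 100.0 / k                                                 # Eq. (3)

def intensity_of_generalization(original, anonymized, identifiers):
    changed = (original[identifiers] != anonymized[identifiers]).to_numpy().sum()
    n, p = original[identifiers].shape
    return (1 - changed / (n * p)) * 100                             # Eq. (5)

original = pd.DataFrame({"age_group": ["62", "65", "71"], "sex": ["F", "F", "M"]})
anonymized = pd.DataFrame({"age_group": ["60-69", "60-69", "70-79"], "sex": ["F", "F", "M"]})

print(round(average_rr(anonymized, ["age_group", "sex"]), 1))  # 66.7: grouping lowered the risk
print(intensity_of_generalization(original, anonymized, ["age_group", "sex"]))  # 50.0
```

Such scripts only illustrate the definitions; in practice, dedicated anonymization tools compute these metrics together with the preparation techniques.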
Answering these questions, we can define properties as functions that connect classes, for example: * has-preparation (Privacy Model, Preparation Technique): defines that a given privacy model can use certain data preparation techniques in its implementation, for example: k-Anonymity <has-preparation> Suppression, Grouping * has-measure (Data Type, Information Metric): defines, for a data type, which metric shows the loss of information during anonymization: Nominal Data Type <has-measure> NUE * has-impact (Information Metric, Task Type): defines which type of task performed by the AI can have the anonymization impact evaluated by a metric: NUE <has-impact> Classification Once the properties of the classes are defined, we can define the relationships between them and an individual (or instance) in the real world. ### Creating relationships Individuals (or objects) in real life can use the terms of this ontology and thus create the necessary relationships to handle anonymization according to the purpose of use that the datasets will have (NOY, 2001). Let us use, as an example, a researcher who will study how to develop an AI application to predict hospital mortality, classifying which patients have the highest risk of dying in the first hours of hospitalization. The first question in this relationship between the researcher and the anonymization methods must be asked to understand the study design and thus identify its properties in the ontology: what is the study design? From this answer, we use the properties that connect the classes and the properties in the anonymization domain to define a sequence of relationships that treat the subject in the context of a specific study. Thus we can define an ontology instance. ### Instance Creation The instances represent real applications of the ontology (NOY, 2001). It is possible to define a hypothetical project from the ontology to illustrate its operation. In our case, the new instance consists of 1) a cross-sectional study on hospital mortality, 2) classification as the AI technique used to identify patients at risk of death, 3) structured data of a qualitative and quantitative nature as the types of data used, 4) strings as the computational data type of the qualitative variables and floats as the computational data type of the quantitative variables, 5) the medical record variable as a direct identifying attribute, 6) the age and sex variables as indirect identifying attributes, 7) mitigating the risk of re-identification as the objective, 8) defending against journalist-type attacks, 9) adopting the k-anonymity privacy model, 10) using the suppression and grouping preparation techniques, and 11) measuring the risk of re-identification with the RR, Average RR, and Maximum RR metrics. ## 5 Discussion In their review of the topic, CHEVRIER (2019) deals with the variability observed in the terms that define the methods of preparation and in the way in which the terms de-identification and anonymization are used, emphasizing the need for objective definitions centered on the legislation to improve education and the dissemination of information on the subject (CHEVRIER, 2019). This ontology defines a formal structure to understand the risk of re-identification of hospital records, often translating different concepts between different areas, and this required making choices. 
Our expectation is not to change the culture of consolidated research areas but to help form a new community of researchers who will use hospital datasets not only to train and use AI but also to enhance the use of data in different types of epidemiological studies, hospital management, and innovation in health as a whole. There are legal provisions for the use of hospital data; one of them is the patient's informed consent, authorizing the use of their data in research, for example (BRAZIL, 2018). In order to obtain informed consent from patients, it is necessary to interact with them, and how this consent is obtained can introduce more than one type of bias into clinical research (EMAM, 2013). Anonymizing datasets at the first opportunity after collection is an alternative that removes the need to obtain this consent. However, the data processing for anonymization can interfere with the usefulness of the data and prevent the research objective from being achieved. Thus, ORHBR also describes the semantics to explain why a given research project needs to work with identified data. A limitation of our study is that it elaborates only one instance to exemplify the ontology. We understand that this is the beginning of a new field in health research, and creating other types of instances may demand new requirements. We believe that an ontology such as ORHBR can be adopted by the national authority for the protection of personal data and deposited in the Electronic Government Vocabulary and Ontology Repository, enhancing its use and maintenance by other researchers, with the aim of extending its classes to include new types of data (for example, image and sound) and types of design (for example, counterfactual studies), supporting the maintenance of an anonymization policy that enhances the use of data. ## 6 Conclusion We defined an ontology to leverage the culture of privacy in research with hospital records while keeping sight of the opportunities to solve problems of different types using AI. By adopting ORHBR, researchers from different areas can share and reuse technical knowledge about anonymizing hospital records using the same vocabulary. Using this ontology, we can quantitatively and qualitatively compare different privacy models that inform the risks and the loss of information during the anonymization process.
2306.11892
Exploring New Frontiers in Agricultural NLP: Investigating the Potential of Large Language Models for Food Applications
This paper explores new frontiers in agricultural natural language processing by investigating the effectiveness of using food-related text corpora for pretraining transformer-based language models. In particular, we focus on the task of semantic matching, which involves establishing mappings between food descriptions and nutrition data. To accomplish this, we fine-tune a pre-trained transformer-based language model, AgriBERT, on this task, utilizing an external source of knowledge, such as the FoodOn ontology. To advance the field of agricultural NLP, we propose two new avenues of exploration: (1) utilizing GPT-based models as a baseline and (2) leveraging ChatGPT as an external source of knowledge. ChatGPT has shown to be a strong baseline in many NLP tasks, and we believe it has the potential to improve our model in the task of semantic matching and enhance our model's understanding of food-related concepts and relationships. Additionally, we experiment with other applications, such as cuisine prediction based on food ingredients, and expand the scope of our research to include other NLP tasks beyond semantic matching. Overall, this paper provides promising avenues for future research in this field, with potential implications for improving the performance of agricultural NLP applications.
Saed Rezayi, Zhengliang Liu, Zihao Wu, Chandra Dhakal, Bao Ge, Haixing Dai, Gengchen Mai, Ninghao Liu, Chen Zhen, Tianming Liu, Sheng Li
2023-06-20T21:12:16Z
http://arxiv.org/abs/2306.11892v1
Exploring New Frontiers in Agricultural NLP: Investigating the Potential of Large Language Models for Food Applications ###### Abstract This paper explores new frontiers in agricultural natural language processing by investigating the effectiveness of using food-related text corpora for pretraining transformer-based language models. In particular, we focus on the task of semantic matching, which involves establishing mappings between food descriptions and nutrition data. To accomplish this, we fine-tune a pre-trained transformer-based language model, AgriBERT, on this task, utilizing an external source of knowledge, such as the FoodOn ontology. To advance the field of agricultural NLP, we propose two new avenues of exploration: (1) utilizing GPT-based models as a baseline and (2) leveraging ChatGPT as an external source of knowledge. ChatGPT has shown to be a strong baseline in many NLP tasks, and we believe it has the potential to improve our model in the task of semantic matching and enhance our model's understanding of food-related concepts and relationships. Additionally, we experiment with other applications, such as cuisine prediction based on food ingredients, and expand the scope of our research to include other NLP tasks beyond semantic matching. Overall, this paper provides promising avenues for future research in this field, with potential implications for improving the performance of agricultural NLP applications. Natural Language Processing, Language Models, ChatGPT, Food Applications, Semantic Matching ## 1 Introduction The United States Department of Agriculture (USDA) maintains the Food and Nutrient Database for Dietary Studies (FNDDS), a repository of nutrient values for foods and beverages consumed within the United States1. Additionally, extensive food policy research leverages household and retail scanner data on grocery purchases, such as the Nielsen data available through the Kilts Center for Marketing2. The integration of these databases, i.e., bridging food description from retail scanner data with the nutritional information database, is of paramount importance. This linkage can shed light on the relationship between retail food purchase patterns and community health, revealing differences between the diets of low and higher-income households across the entire diet spectrum. Consequently, it has the potential to shape future funding policies aimed at facilitating healthier food choices for low-income households. Footnote 1: [https://www.ars.usda.gov/northeast-area/beltsville-md-bhnrc/beltsville-human-nutrition-research-center/food-surveys-research-group/docs/finds/](https://www.ars.usda.gov/northeast-area/beltsville-md-bhnrc/beltsville-human-nutrition-research-center/food-surveys-research-group/docs/finds/) Footnote 2: Researcher(s)’ own analyses calculated (or derived) based in part on data from Nielsen Consumer LLC and marketing databases provided through the NielsenIQ Datasets at the Kilts Center for Marketing Data Center at The University of Chicago Booth School of Business. The conclusions drawn from the NielsenIQ data are those of the researcher(s), and do not reflect the views of NielsenIQ. NielsenIQ is not responsible for, had no role in, and was not involved in analyzing and preparing the results reported herein. In this extended version of our previous paper [1], we seek to advance and apply Natural Language Processing (NLP) techniques to facilitate a more robust linkage between these two databases. 
One common strategy to address such challenges is semantic matching, a task that identifies whether two or more elements share similar meanings. Bi-encoders, which encode two input strings in the embedding space and subsequently calculate their similarity in a supervised manner, are frequently used for this purpose. Word embedding techniques serve as excellent candidates for this task, given their extensive use and recent advancements in semantic matching [2]. Particularly, the advent of contextual word embeddings, wherein each word receives a vector representation based on its context, has led to considerable progress in numerous NLP tasks, including semantic matching. However, while word embeddings have demonstrated utility in many tasks, they are not always optimal for more complex semantic matching tasks as they may struggle with subtleties in domain-specific language and may not effectively capture broader contextual information. Transformer-based language models, e.g., BERT [3], have been widely used in research and practice to study computational linguistics and they have shown superior performance in a variety of applications including text clas sification [4, 5], question answering [6], named entity recognition [7], and many more. However, these models may not always generalize across domains when applied with default objectives, i.e., pre-training on generic corpora like Wikipedia. To address this issue, previous work has attempted to incorporate domain-specific knowledge into the language model through different strategies. One of the prominent approaches in the biomedical domain is BioBERT [8], a BERT-based language model pretrained on a large corpus of biomedical literature. Motivated by the impressive performance of BioBERT, we use a large corpus of agricultural literature to train a language model for agricultural applications from scratch. The trained model will be further fine-tuned by the downstream tasks. Another method to incorporate domain knowledge into the language model is to use an external source of knowledge bases such as a knowledge graph (KG). Knowledge graphs [9, 10, 11, 12, 13, 14] are densely interconnecting (Web-scale) directed multi-graphs containing data across various domains. It includes rich sources of information that are carefully curated around objects called entities and their relations. A basic building block of a KG is called a triple which consists of a subject, a predicate, and an object. Previous work has attempted to inject triples into the sentences [15] for language model pretraining. However, injecting triples can introduce noise to the sentences which will mislead the underlying text encoder. To address this issue, we propose to add \(n\) entities from an external knowledge source (i.e., a knowledge graph) based on similarity that can be obtained by various methods such as entity linking. This not only enhances the semantic space but also confines the vocabulary within the domain. In our extended study, we provide empirical evidence showing how modifying \(n\) can impact the performance of the downstream task both quantitatively and qualitatively. In this paper, we map retail scanner data, referred to as Nielsen data, to USDA descriptions by utilizing semantic matching. This process is conceptualized as an "answer selection" problem. In our unique framing, we treat Nielsen product descriptions as questions and USDA descriptions as potential answers, seeking to link each Nielsen product with the most suitable USDA description. 
This is different from traditional answer selection, where each question has a unique, limited set of answers. Here, we have a vast, shared pool of USDA descriptions serving as potential answers. To support our answer selection system, we use a pre-trained language model, enriching both the Nielsen and USDA descriptions by incorporating external knowledge during the fine-tuning phase. In this extended version, we further enhance our approach by leveraging the power of GPT-based large language models (LLMs). We utilize these LLMs to augment the answer selection process, similar to the concept of entity linking. Additionally, we explore the utilization of GPT-based LLMs as an independent baseline for comparison. By incorporating these advancements, we aim to explore alternative approaches and methodologies to further investigate the mapping process of food descriptions in retail scanner data to the nutritional information database. By incorporating GPT-based large language models into our analysis, we seek to provide valuable insights and novel perspectives that can contribute to the understanding of this mapping task. We thus make the following key contributions in this paper: * We collect a large-scale corpus of agricultural literature with more than 300 million tokens. This domain corpus has been instrumental to fine-tune generic BERT into AgriBERT. * We propose a knowledge graph-guided approach to augment the dataset for the answer selection component. We inject related entities into the sentences before the fine-tuning step. * We investigate the integration of GPT-based large language models to gain new insights in the mapping of food descriptions in retail scanner data to the nutritional information database. * AgriBERT substantially outperforms existing language models on USDA datasets in the task of answer selection. We plan to release our datasets to the community upon publication. The rest of the paper is organized as follows: in the next section, we discuss related works in language modeling in specific domains. Next, in Section 3.3, we describe our proposed approach to train a language model in the agricultural domain and discuss how we inject external knowledge in the fine-tuning step. In Section 5.7, we introduce different datasets including our corpus for training a language model, the external sources of knowledge, and finally the food dataset to evaluate our language model. We conclude our paper in Section 6. ## 2 Related Works ### _Pre-trained Language Models_ In NLP, Pre-trained language models learn from large text corpora and build representations beneficial for various downstream tasks. In recent years, there are two successive generations of language models. Earlier models, such as Skip-Gram [16] and GloVe [17], primarily focus on learning word embeddings from statistical patterns, semantic similarities, and syntactic relationships at the word level. With this first group of language embedding methods, polysemous words are mapped to the same representation, regardless of word contexts. For example, the word "bear" in "I see a bear" and "Rising car sales bear witness to population increase in this area" will not be distinguishable in the vector space. A later group of models, however, recognizes the importance of textual contexts and aims to learn context-dependent representations at the sentence level or higher. For example, CoVe [18] utilizes an LSTM model trained for machine translation to encode contextualized word vectors. 
Another popular model, Bidirectional Encoder Representations from Transformers (BERT) [3] is based on bidirectional transformers and pre-trained with Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks, both ideal training tasks for learning effective contextual representations from unlabelled data. It delivers exceptional performance and can be easily fine-tuned for downstream tasks. BERT has enjoyed wide acceptance from the NLP community and practitioners from other domains [19, 20, 21]. In particular, domain experts can build domain-specific BERT models [22, 23, 24] that cater to specific environments and task scenarios [25, 26, 27]. ### _Large Language Models_ The NLP field has witnessed the very recent rise of large language models (LLM), boasting parameter counts exceeding 100 billion. GPT-3 [28], a large scale model with 175 billion parameters, introduced a new paradigm for NLP. Although LLMs are still based on the Transformer architecture, there is an enormous increase in the training data, parameter count and overall model dimensions. The increased scale of LLMs enable powerful zero-shot and few-shot learning capabilities [28, 29] through in-context learning [28] that does not involve gradient updates. In addition, LLMs demonstrate emergent abilities (e.g., reasoning, mathematical capabilities) [30, 31] that are not available in smaller pre-trained models such as BERT and its variants. For example, GPT-4 delivers expert level performance on a wide range of examinations and benchmarks including USMLE [32, 33], LSAT [32] or highly specialized domain evaluations such as radiation oncology physics exams [34]. Some prominent examples of LLMs include Bloom [35], OPT [36], LLAMA [37], ChatGPT [29], GPT-4 [32] and Palm 2 [38]. Indeed, the game of ChatGPT has revolutionized and popularized NLP, leading to new research [39, 40, 41, 42] and applications [43, 44] with LLMs as foundational models [45]. ### _Domain-Specific Language Models_ BERT has become a fundamental building block for training task-specific models. It can be further extended with domain-specific pre-training to achieve additional gains over general-purpose language models. Prior work has shown that language models perform better when the source and target domains are highly relevant [45, 46, 29, 24, 47, 8, 29]. In other words, pre-training BERT models with in-domain corpora can significantly improve overall performance on a wide variety of downstream tasks [24]. There is also a correlation between a model's performance and the extent of domain-specific training [24]. In particular, Gu et al. [24] note that training models from scratch (i.e., not importing pre-trained weights from the original BERT model [3] or any other existing BERT-based models) is more effective than simply fine-tuning an existing BERT model with domain-specific data. In this paper, agricultural text such as food-related research papers is considered in-domain while other sources such as Wikipedia and news corpus are regarded as out-domain or general domain. Our primary approach is in line with training-from-scratch with in-domain data. ### _Augmenting Pre-trained Language Models_ Data augmentation refers to the practice of increasing training data size and diversity without collecting new data [48]. Data augmentation aims to address practical data challenges related to model training. 
It is applicable to scenarios such as training with low-resource languages [49], rectifying class imbalance [50], mitigating gender bias [51], and few-shot learning [52, 41]. Some data augmentation methods incorporate knowledge infusion. For example, Feng et al. [53] used WordNet [54] as the knowledge base to replace words with synonyms, hyponyms and hypernyms. Another study [55] extracts confusion sets from the Aspell spell checker to perform synthetic data generation in an effort to enhance the training data, which consists of erroneous sentences used for training a neural grammar correction model. However, there is limited research on the efficacy of applying data augmentation to large pre-trained language models [48]. In fact, some data augmentation methods have been found to have limited benefit for large language models [56, 48]. For example, EDA [57], which consists of 4 operations (synonym replacement, random insertion, random swap, and random deletion), provides minimal performance enhancement for BERT [3] and RoBERTa. Nonetheless, researchers [48] advocate for more work to explore scenarios in which data augmentation is effective for large pre-trained language models because some studies [58] demonstrate results contrary to the claims of [56]. Newly released language models, such as ChatGPT and GPT-4, demonstrate impressive language understanding and reasoning abilities, effectively executing tasks such as grammatical corrections, rephrasing, and text generation. Capitalizing on these advancements, Dai et al. [41] proposed a ChatGPT-based text data augmentation method that generates high-quality augmented data by rephrasing the original text into multiple semantically similar variants with different representations and styles for training the BERT model on a downstream classification task. Experiments conducted on Amazon customer reviews, medical symptom descriptions, and the PubMed 20K [59] datasets have shown significant improvements in text classification. However, as ChatGPT is pre-trained solely on general domain data, it lacks pre-training knowledge in the agriculture domain. Therefore, it may encounter difficulties generating high-quality augmented text data specific to the agriculture domain. In this study, we investigate the effectiveness of data augmentation with knowledge infusion and apply our method to the answer selection task scenario. We find that our method significantly improves semantic matching performance. \begin{table} \begin{tabular}{l l l} \hline \hline Product Description & USDA Description & Label \\ \hline domino white sugar granulated 1lb & salsa, red, commercially-prepared & False \\ domino white sugar granulated 1lb & cookie-cisp & False \\ domino white sugar granulated 1lb & sugar, white, granulated or lump & True \\ \hline \hline \end{tabular} \end{table} TABLE I: An example of how we propose to extend the dataset. ### _Answer Selection_ Answer selection refers to the task of finding the correct answer among a set of candidate answers for a specific question. For example, given the question "What is the capital of France?", a solution to this task is required to select the correct answer among the following choices: * A) Paris is the capital of France. * B) Paris is the most populous city in France. * C) London and Paris are financial hubs in Europe. In this case, the first answer should be selected. It is clear that matching words or phrases is not sufficient for this task. 
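To illustrate the ranking view of this task with the bi-encoder style of scoring described earlier, the short sketch below embeds the question and the three candidate answers with a generic pre-trained sentence-transformers model and orders the candidates by cosine similarity. This is an illustration only; the model name is an assumption, and it is not the fine-tuned cross-encoder pipeline used later in this paper.

```python
# Hedged sketch: scoring candidate answers with a bi-encoder (sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic pre-trained bi-encoder (assumed)

question = "What is the capital of France?"
candidates = [
    "Paris is the capital of France.",
    "Paris is the most populous city in France.",
    "London and Paris are financial hubs in Europe.",
]

q_emb = model.encode(question, convert_to_tensor=True)
c_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(q_emb, c_emb)[0]  # one similarity score per candidate

for cand, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {cand}")
```

Note that simple word overlap would not separate the first two candidates, which is why ranking by learned relevance scores is needed.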
A common approach is to formulate this problem as a ranking problem such that candidate answers are assigned ranking scores based on their relevance to the question. Earlier work primarily relies on feature engineering and linguistic information [60]. However, the advancement of deep learning introduces powerful models [61, 62] that outperform traditional methods without the need of manual efforts or feature engineering. In this study, our goal is to establish valid mappings between food descriptions and nutrition data. We formulate this task as an answer selection problem and demonstrate the superiority of our method over baselines. ## 3 AgriBERT: Training and Methodology ### _Domain Specific Language Model_ Language models are powerful tools for a variety of NLP applications. Given a particular task in a specific domain, a language model becomes more effective if it is trained on corpora that contain a large amount of text in the same domain. Such practices have already been adopted by various domains in the literature, where BioBERT [8] and FinBERT [63] are successful examples of training domain-specific language models in biomedical and financial domains respectively. Motivated by the fact that there is a lack of corpora or pre-trained models in the agricultural domain, in order to produce vocabulary and word embeddings better suited for this domain than the original BERT, we build upon previous research and collect 46,446 articles related to food and agriculture which contains more than 300 million tokens. We then use this corpus to train a BERT model from scratch (more details about the dataset are provided in Section 5.1.1). We train our model using the standard procedure of masked language modeling, which involves masking a certain fraction of words in a sentence and requiring the model to predict the masked words based on other words in the sentence. This approach enables the model to learn meaningful sentence representations and can be used for various NLP applications in the agricultural field. ### _Answer Selection Problem Definition_ The evaluation of trained language models requires downstream tasks in the domain. For instance, most biomedical language models are evaluated on named entity recognition tasks with medical datasets. Semantic matching, as a technique to determine whether two sentences have similar meanings, has important practical values in various domains, e.g., agriculture. Due to the lack of benchmark NLP datasets in the agriculture domain, we construct our own agriculture benchmark dataset and evaluate our model on the semantic matching task. More specifically, in this work, we formulate this task as an answer selection problem, where we assign all USDA descriptions (answers) to each Nielsen product description (question) and select \(R\) random incorrect USDA description per product description as negative samples where \(R<<D\), and \(D\) is the total number of USDA descriptions. Table I provides an example of how we extend the dataset when \(R=2\). Let \(q\) denote a question and \(a\) denote an answer. We can formulate the answer selection task as follows: \[S(q,a)=\text{AgriBERT}(q,a), \tag{1}\] where the function \(S\) outputs a score (a probability) for each pair of question and answer. Then, the selected answer \(a^{*}\) for a question \(q\) is: \[a^{*}=\arg\max_{a\in R}S(q,a). 
\tag{2}\] ### _Knowledge Infused Finetuning_ As discussed in Section 2.5, prior research has demonstrated successful integration of relevant information from external knowledge sources to enhance the performance of downstream tasks, such as incorporating facts (i.e., curated triples extracted from a knowledge graph in the form of (entity, relation, entity)) from knowledge graphs [15], or injecting refined entities extracted from text into a knowledge graph [64]. Fig. 1: The overall framework of AgriBERT, which is trained on agriculture literature from scratch. AgriBERT is evaluated on the answer selection task. The answer selection component has two inputs: a question and an answer, and before we input them into the framework, we add new entities to them from an external source of knowledge such as Wikidata or FoodOn. The output of the framework is a score (a probability) which is used for ranking the answers. In our setting, where answer selection is the task at hand and the training dataset is limited in size, we propose augmenting both the questions and answers with external knowledge to improve the performance of the answer selection module. Discovering relevant external knowledge can be accomplished via different mechanisms, such as entity linking [65, 66, 67, 68], querying [69], calculating similarity [70], etc. In this paper, we propose utilizing entity linking and querying for obtaining pertinent external knowledge. In entity linking, the goal is to identify and associate all the named entities in a text with corresponding entities in a knowledge graph. There exist many mature entity linking tools such as the Stanford-UBC Entity Linking tool [71], DBpedia Spotlight [72], TAGME [73], etc. This practice has proved to be an effective solution in our scenario. However, one drawback of this approach is that the entity recognition and entity linking algorithms are typically trained on general text corpora like Wikipedia, which may not align well with our specific requirements, so it is essential to reconfigure these tools to suit our purposes rather than relying on them as they are. Hence, we consider a domain-specific knowledge graph, e.g., FoodOn [74], and obtain new knowledge by querying it using the keywords in the text of question-answer pairs. We simply append new entities to the end of questions or answers. More details about this knowledge graph will be provided in Section 5.1.1. Figure 1 illustrates our proposed framework. We can modify Equation 1 to account for the new entities as follows: \[S(q,a)=\text{AgriBERT}(q\parallel E_{q},a\parallel E_{a}), \tag{3}\] where \(E_{q}\) and \(E_{a}\) represent the entities enriched from external sources such as Wikidata or FoodOn for the question and answer, respectively, and \(\parallel\) is the concatenation operator. Please refer to Algorithm 1 for the detailed procedure of mapping retail scanner data to USDA descriptions using AgriBERT and external knowledge. ``` 0: Nielsen product descriptions \(Q\), 0: USDA descriptions \(A\), 0: number of incorrect descriptions per product \(R\), 0: pre-trained AgriBERT model, 0: external knowledge sources. 
1:for each product description \(q\in Q\)do 2: Select \(R\) samples from \(A\) as negative samples 3: for each random USDA description \(a\in R\)do 4:\(E_{q}\leftarrow\) obtain entities from external sources for \(q\) 5:\(E_{a}\leftarrow\) obtain entities from external sources for \(a\) 6:\(S(q,a)\leftarrow\) AgriBERT(\(q\parallel E_{q},a\parallel E_{a}\)) 7:endfor 8:\(a^{*}\leftarrow\arg\max_{a\in A}S(q,a)\) 9:endfor ``` **Algorithm 1** Knowledge Infused Answer Selection ## 4 Enhancing AgriBERT We introduce an innovative approach aimed at further improving the performance of our AgriBERT model. Recognizing that effective language model training relies heavily on the quality and diversity of training data, we turn to Generative Pre-trained Transformers (GPT) [75] to augment our dataset. Specifically, we engineer creative and diverse prompts to guide GPT in generating text samples that closely align with the agricultural domain. This process not only enhances the variety of our training data but also provides AgriBERT with more nuanced and domain-specific contexts to learn from. Through this data augmentation technique, we aim to make AgriBERT more robust and versatile, improving its ability to generalize across various agricultural text processing tasks. To enhance the quality and diversity of our training data, we utilize ChatGPT (gpt-3.5-turbo) [76] to guide the enrichment process of our dataset through two carefully designed prompts. The first prompt instructs ChatGPT to "Expand the semantic space of the query \(q\) by generating \(d\) related words." The aim here is to explore and expand the semantic field around a given query to include a broader range of related concepts and keywords. This strategy not only enhances the diversity of our data, but also enriches the context around each query, enabling AgriBERT to better learn and understand the agricultural terminology. The generation of related words effectively provides a more expansive representation of the query. The second prompt instructs ChatGPT to "Rephrase \(q\)". This approach provides a valuable method for data augmentation, as it encourages linguistic diversity and challenges the model to understand the same query expressed in various ways. It can help the model to become more resilient to a range of phrasing and wordings, thus improving its robustness and versatility. By learning to understand the same question posed in multiple different ways, AgriBERT becomes more adept at generalizing its knowledge to new, unseen queries, which is a crucial aspect of performance in various agricultural text processing tasks. Together, these two prompts provide an effective method for diversifying and enriching our training dataset, and we hypothesize that their application will result in improved performance of our AgriBERT model. Table II illustrates several examples to show how these two prompts are used to expand the semantic space and rephrase a given input sentence. ## 5 Experiment ### _Datasets_ We employ several datasets for training the language models, evaluating the trained language model, and augmenting the downstream dataset. In this section, we briefly introduce these datasets. #### 5.1.1 Language Model Pre-Training Datasets Our main dataset is a collection of 46,446 food- and agricultural-related journal papers. We downloaded published articles from 26 journals and converted the pdf files to text format to be used in the masked language modeling task. 
We also cleaned the dataset by removing URLs, emails, references, and non-ASCII characters. In order to compare the contributions of different components of our model, we consider a secondary dataset for training (WikiText-103) that contains articles extracted from Wikipedia. This dataset also retains numbers, case, and punctuation, which is similar to our dataset described above. Statistics of the two datasets are provided in Table III. We also include the Penn Treebank dataset [77], which is another common dataset for the task of language modeling, as a reference. #### 5.1.2 Answer Selection Dataset We use two different data sources for this part. First, we use the consumer panel product data from the Nielsen Homescan data. Nielsen provides very granular data on food purchases from stores at the product barcode or Universal Product Code (UPC) level, with detailed attributes for each UPC, including the UPC description. While scanner data come with some nutrition-related product attribute variables, this information is not sufficient to examine nutritional quality. To address this issue, we link product-level data from Nielsen with the USDA Food Acquisition and Purchase Survey (FoodAPS), which supplements scanner data with detailed nutritional information. The survey contains detailed information about the food purchased or otherwise acquired for consumption during a seven-day period by a nationally representative sample of 4826 US households. FoodAPS matched 32,000+ barcodes with Food and Nutrient Database for Dietary Studies (FNDDS) food codes of high quality. The linked data set has UPC descriptions for each product and the corresponding FNDDS food code. In addition, the final data set has the full information needed to construct diet quality indexes to evaluate the healthfulness of overall purchases. #### 5.1.3 External Source of Knowledge We use the FoodOn knowledge graph for question-answer augmentation purposes. FoodOn is formatted in the Web Ontology Language (OWL) [78]. The OWL ontology provides a globally unique identifier (URI) for each concept, which is used for lookup services and facilitates the query processing system. Most of FoodOn's core vocabulary comes from transforming LanguaL, a mature and popular food indexing thesaurus [74]. That is why FoodOn is a unique and valuable resource for enhancing our language model. ### _Metrics_ Since the output of the evaluation task on the answer selection dataset is a ranked list of answers per question, we require metrics that take into account the order of results. That is why we propose to use precision@1 (P@1) and Mean Average Precision (MAP). **P@1** answers the following question: "Is the top-ranked item in the returned list a relevant item?" If the top-ranked item is relevant, \(P@1\) is 1, and if it is not relevant, \(P@1\) is 0. To calculate \(P@1\), we sort the selected answers based on the final similarity scores and count how many times the top answer is correctly selected. \[P@1=\sum_{i=1}^{|N|}\mathbb{1}\left[rank_{a_{i}}=1\right],\] where N is the set of all questions. Note that, while this metric provides valuable insights into the performance of the ranking system at the top of the list, it does not consider the relevance of lower-ranked items. Thus we also calculate and report Mean Average Precision (MAP). **MAP** measures the percentage of relevant selected answers; it takes into account the precision at every rank in the list where a relevant item is found. 
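Both metrics admit a short computational sketch. The fragment below uses hypothetical toy rankings and the conventional reading of average precision, in which the precision values are averaged over the ranks that hold relevant answers, as described above and formalized just below.

```python
# Hedged sketch of the evaluation metrics: P@1 counts questions whose top-ranked
# answer is relevant; MAP averages, per question, the precision at every rank
# that holds a relevant answer, then averages over questions.
def p_at_1(ranked_relevance):
    # ranked_relevance: one list of 0/1 labels per question, ordered by model score
    return sum(rels[0] for rels in ranked_relevance)

def average_precision(rels):
    hits, precisions = 0, []
    for i, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)   # precision P(i) at a relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(ranked_relevance):
    return sum(average_precision(r) for r in ranked_relevance) / len(ranked_relevance)

# Two toy questions: the first has its correct USDA description at rank 1,
# the second only at rank 3.
rankings = [[1, 0, 0], [0, 0, 1]]
print(p_at_1(rankings))                              # 1
print(round(mean_average_precision(rankings), 3))    # 0.667
```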
This is in contrast to \(P@1\), which only considers the precision of the top-ranked item. By doing so, MAP values the ranking system's ability to rank relevant items higher. Given a ranked list of selected answers per question we mark them as relevant if they are correctly selected and calculate AP as follows: \[\text{AP}=\frac{1}{n}\sum_{i=1}^{n}(P(i)\times\text{rel}(k)),\] where n is the set of all selected answers, \(\text{rel}(k)\in\{0,1\}\) indicates if the answer is relevant or not, and \(P(i)\) is the precision at \(i\) in the ranked list. Once we obtain AP for each question we can average across all questions to find MAP: \[\text{MAP}=\frac{1}{|N|}\sum_{q=1}^{|N|}\text{AP(q)},\] where \(N\) is the set of all questions. ### _Baselines_ To study the impact of different language model pre-training datasets and strategies, different answer-select datasets, we conduct a series of ablation studies. We consider the following scenarios: \begin{table} \begin{tabular}{l l l} \hline \hline **Input** & **Prompt 1:** Expand the semantic space & **Prompt 2:** Rephrase \\ \hline nestle 100 grand milk choc caramel bar & sweet, chocolate, candy & The Nestle 100 Grand Milk Chocolate Caramel Bar measures 1.5oz and contains caramel. \\ \hline three musketeers milk chocolat nougat plastic & confectionery, packaging, snack & A 6-count package of 12.78oz milk chocolate nougat bars from Three Musketeers, contained in a plastic bag. \\ \hline twisted tea hard iced tea malt beverage malt & brewing, alcohol, carbonation & This box contains a long-necked bottle of Twisted Tea, which is a 5\% alcohol malt beverage with a total volume of 144 ounces. \\ \hline \hline \end{tabular} \end{table} TABLE II: Sample inputs and their outputs for both prompts in data augmentation \begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & Articles & Tokens & Words & Size \\ \hline Penn Treebank & - & 887,521 & 10,000 & 10MB \\ WikText-103 & 28,475 & 103,227,021 & 267,735 & 0.5GB \\ Agriculture corpus & 46,446 & 311,101,592 & 2,394,343 & 4.0GB \\ \hline \hline \end{tabular} \end{table} TABLE III: Basic statistics of our dataset compared with two benchmark datasets in the standard language modeling field. * **kNN:** We compute the embeddings3 of the Nielsen product descriptions and USDA descriptions. For each vector belonging to the product description embedding space, we find the most similar vector from the USDA description embedding space. This naive approach is effective if the number of unique USDA descriptions is small. However, this does not hold in our case (Row 1). Footnote 3: We use sentence-transformer library for this task: [https://github.com/UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers) * **BERTP:** We use the pre-trained BERT model without any modification (Row 2). * **BERTP:** We further finetune the pre-trained BERT using WikiText-103 (Row 3) and Agricultural Corpus (Row 6). * **BERT* on WikiText-103:** We train the BERT model from scratch using WikiText-103 with additional enhancements by connecting the existing entities to Wikidata (Row 4) and FoodOn knowledge graph (Rows 5). * **BERT* on Agricultural Corpus:** We train the pre-trained BERT model from scratch using our Agriculture corpus (Rows 7). * **BERT* + Entity Linking:** We train the BERT model from scratch using Agricultural Corpus with additional enhancement by connecting the existing entities to Wikidata (Row 8), and Food on considering different numbers of entities (Rows 9-11). 
* **BERT* + ChatGPT:** We train the BERT model from scratch using Agricultural Corpus with additional enhancement by manually defined prompts p1 and p2 (Rows 12-13). ### _Experimental Settings_ Once the extended dataset is generated, we can apply any answer selection method to the dataset. There are a number of studies in the literature on this topic, including COALA [61], CETE [62], MTQA [79], and many more, among which CETE is considered state-of-the-art in the answer selection task by the ACL community4. CETE implements a transformer-based encoder (e.g., BERT) to encode the question and answer pair into a single vector and calculates the probability that a pair of question/answer should match or not5. Footnote 4: Reported here: [https://aclweb.org/aclwiki/Question_Answering_State_of_the_art](https://aclweb.org/aclwiki/Question_Answering_State_of_the_art)) Footnote 5: The code for this study is open source and available for public use: [https://github.com/tahmedge/CETE-LREC](https://github.com/tahmedge/CETE-LREC) For the entity linking process, we use the implementation proposed by Wu et al. [80], called BLINK. BLINK is an entity linking Python library that uses Wikipedia as the target knowledge base. Moreover, in order to send efficient SPARQL queries to the FoodOn knowledge graph, we use ROBOT, a tool for working with Open Biomedical Ontologies6. Additionally, since we aim at simulating a setting where the number of labeled training samples is small, we use 20% of the dataset for training and the remaining 80% for the test. We believe this is a more realistic scenario in real-world applications. Footnote 6: Find it here: [https://github.com/ontology/robot](https://github.com/ontology/robot) ### _Results_ As Table V presents, not surprisingly, the best performance is obtained when the language model is trained on the agricultural corpus. We summarize the main observations as follows: In terms of Mean Average Precision (MAP), the BERT model trained from scratch (BERT*) on the Agricultural Corpus outperforms all other models, achieving the highest MAP of 44.21 (entry 7). This suggests that the BERT model significantly benefits from domain-specific training data. It is also notable that BERT* with Entity Linking (EL) using Wikidata performed nearly as well with a MAP of \begin{table} \begin{tabular}{l l l} \hline \hline Sentence & Wikidata entity & FoodOn entity \\ \hline \hline nestle nido powder infant formula & nestle & rice powder \\ aunt jemima frozen french toast breakfast entree & aunt jemima & frozen dairy dessert \\ woody hickory barbecue cooking sauce & woody’s chicago style & hickory nut \\ sour punch sour watermelon fruit chew straw & sour punch & sour milk beverage \\ philly steak frozen beef sandwich steak & philly steaks & wagyu steak \\ yoptal original rfg harvest peach yogurt low fat & yoptalit & creamy sald dressing \\ \hline \hline \end{tabular} \end{table} TABLE IV: Examples to demonstrate the quality of added entities to the text of product descriptions. For Wikidata we use entity linking and we present here the top linked entity (highest confidence score). For FoodOn we use SPARQL to query the ontology and the first outcome is listed here. 
\begin{table} \begin{tabular}{l|l l l l} \hline \hline \# & Training Dataset & Model & MAP & P@1 \\ \hline 1 & - & kNN & 26.70 & 14.49 \\ 2 & - & BERTP & 27.77 & 10.88 \\ \hline 3 & WikiText-103 & BERTP & 28.03 & 11.12 \\ 4 & WikiText-103 & BERT+EL (Wikidata) & 27.36 & 10.09 \\ 5 & WikiText-103 & BERT+FoodOn (n=1) & 28.78 & 24.83 \\ \hline 6 & Agricultural Corpus & BERTP & 29.72 & 12.71 \\ 7 & Agricultural Corpus & BERT* & **44.21** & 22.72 \\ 8 & Agricultural Corpus & BERT+EL (Wikidata) & 42.33 & 21.52 \\ 9 & Agricultural Corpus & BERT+FoodOn (n=1) & 31.54 & 47.89 \\ 10 & Agricultural Corpus & BERT+FoodOn (n=3) & 30.65 & 49.80 \\ 11 & Agricultural Corpus & BERT+FoodOn (n=5) & 29.91 & **49.98** \\ 12 & Agricultural Corpus & BERT+ChatGPT (p1) & 32.03 & 48.78 \\ 13 & Agricultural Corpus & BERT+ChatGPT (p2) & 30.19 & 46.91 \\ \hline \hline \end{tabular} \end{table} TABLE V: Test performances of all models trained on different datasets for the task of answer selection. For the kNN model we use sentence-transformers to compute embeddings. EL stands for Entity Linking and bold numbers indicate the best performance. BERTP indicates a pre-trained BERT and BERT* means training a BERT model from scratch. \(n\) indicates the number of external entities we include in the text of questions and answers when using the FoodOn KG. p1 and p2 indicate which prompt we use to instruct ChatGPT to enrich the training data. 42.33 (entry 8). The least-performing model in terms of MAP is the kNN model with a MAP of 26.70 (entry 1). Regarding the Precision at rank 1 (P@1), the BERT models trained from scratch with data augmentation using FoodOn (with n=5) and ChatGPT prompt 1 (p1) show high performance with P@1 scores of 49.98 (entry 11) and 48.78 (entry 12), respectively. This shows that enhancing the training data with additional semantic information from FoodOn and GPT-generated prompts improves the model's ability to select the correct answer at the top rank. Interestingly, even though the BERT model trained from scratch on the Agricultural Corpus achieved the highest MAP, it did not achieve the highest P@1. This suggests that while this model is good at ranking relevant answers highly across all retrieved answers, it may not always select the correct answer as the top-ranked answer. By comparing rows 5 and 9, we can see that training on the Agricultural Corpus led to a significant improvement in both the MAP and P@1 scores. This suggests that the Agricultural Corpus is a more suitable training dataset for this task compared to WikiText-103. We also investigate the number of external entities that we include in the text of questions and answers, as shown by entries 9, 10, and 11 in Table V. We can see that increasing the number of external entities \(n\) leads to a decrease in the MAP score and an increase in P@1. This suggests that incorporating related entities from a relevant knowledge source helps to find the correct match about 50% of the time, but it sometimes misleads the answer selection module and ranks the correct match lower, which decreases the MAP score. Table IV provides some examples of sentences augmented with the Wikidata and FoodOn knowledge sources. As this table presents, linking the food description to Wikidata entities can easily go wrong. For instance, in the first three rows, the food descriptions are linked to brand names7. In contrast, this does not happen when we query the FoodOn KG, as the entities are purely food related. 
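For illustration, candidate FoodOn entities whose labels mention a token from a product description can be retrieved with a generic SPARQL pattern along the lines of the sketch below. The endpoint URL, the example token, and the client code are our own placeholders: in the actual pipeline the ontology is queried with ROBOT, and only the first returned entity is attached to the text.

```python
# Minimal sketch; assumes a local SPARQL endpoint serving the FoodOn ontology.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?label WHERE {
  ?entity rdfs:label ?label .
  FILTER(CONTAINS(LCASE(STR(?label)), "caramel"))
} LIMIT 5
"""

endpoint = SPARQLWrapper("http://localhost:3030/foodon/sparql")  # placeholder endpoint
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["entity"]["value"], row["label"]["value"])
```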
Additionally, FoodOn entities contain food-related adjectives such as frozen, creamy, etc., which help in matching the food descriptions to nutrition data. Footnote 7: [https://en.wikipedia.org/wiki/Aunt_Jemima](https://en.wikipedia.org/wiki/Aunt_Jemima) Moreover, data augmentation using either FoodOn (a domain-specific knowledge graph) or ChatGPT (a powerful language model) yields comparable results. This resemblance reinforces the recent discourse highlighting the conceptual similarities between knowledge graphs and language models. Knowledge graphs, like FoodOn, provide structured and explicit knowledge which can be selectively used to augment the data, aiding in particular tasks. On the other hand, language models, such as ChatGPT, implicitly capture a vast amount of knowledge from the data they are trained on, effectively serving as a form of knowledge base themselves. This connection is further explored by Petroni et al. [81], which investigates the feasibility of language models as a source of knowledge. The authors suggest that the knowledge present in the parameters of a language model can be effectively queried, much like a traditional knowledge graph, underlining the potential of language models to serve as knowledge bases. In our case, the performance similarity of FoodOn and ChatGPT data augmentation techniques indicates that both explicit, structured knowledge and implicit, context-rich knowledge can contribute to the enhancement of domain-specific language model performance. This suggests that combining structured knowledge graphs with richly trained language models could unlock further advancements in model performance and capability. ### _GPT-based Baseline_ In this subsection, we present an exploratory investigation that focuses on the task of answer selection utilizing the capabilities of the GPT-3.5 language model. Given a query and a set of candidate documents, the objective of the task is to identify the document that is most relevant to the query. For the purpose of this study, we have chosen a subset of 100 queries to analyze the performance of GPT-3.5 in this context. It is important to note that the scale of evaluation is limited when compared to other baselines, such as the AgriBERT language model, which was tested against the entire dataset. While our analysis may not be as comprehensive as those conducted with other models, it offers valuable insights into the model's potential for this task. Our intention is not to evaluate the efficacy of GPT-3.5, but rather to highlight the potential of GPT-3.5 in scenarios where it could be deployed independently. To facilitate this investigation, we design a specific prompt aimed at guiding the GPT-3.5 model's response. The prompt first presents a 'Given' section, which includes a specific query and a list of documents. The 'Query' in this context is a product description. The 'Documents' section lists ten potential options drawn from USDA descriptions (see Table I). The 'Task' section provides a clear instruction to the model, asking it to "Rank the documents in order of relevance to the given query". This simple, direct instruction enables the model to understand the requested operation, that is, to assess each document in terms of its relevance to the query and provide a ranking based on this assessment. 
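To make the protocol concrete, the following is a minimal, self-contained sketch (the helper names and the sample reply string are ours, not part of the original pipeline) of how such a prompt can be assembled and how the comma-separated ranking returned by GPT-3.5 can be scored with the P@1 and AP measures defined earlier; the exact layout of the prompt is shown next.

```python
def build_prompt(query, documents):
    """Assemble the ranking prompt described above; the layout mirrors the example below."""
    doc_lines = "\n".join(f"{i + 1}: {d}" for i, d in enumerate(documents))
    return ("Given:\nQuery: " + query + "\nDocuments:\n" + doc_lines +
            "\nTask: Rank the documents in order of relevance to the given query "
            "(no explanation required).")

def score_ranking(reply, relevant_index):
    """P@1 and AP for one query, given the comma-separated ranking in the model reply."""
    ranking = [int(tok) for tok in reply.split(",")]
    p_at_1 = 1.0 if ranking[0] == relevant_index else 0.0
    # with a single relevant document per query, AP reduces to 1 / (rank of that document)
    ap = 1.0 / (ranking.index(relevant_index) + 1)
    return p_at_1, ap

# stand-in reply; a real run would obtain this string from the GPT-3.5 API
print(score_ranking("9, 1, 10, 2, 6, 4, 7, 8, 5, 3", relevant_index=10))  # (0.0, 0.333...)
```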
The structure of the prompt is as follows: Given: Query: domino white sugar granulated llb Documents: 1: salsa, red, commercially-prepared 2: cookie-crisp 10: sugar, white, granulated or lump Task: Rank the documents in order of relevance to the given query (no explanation required). Here is a sample output: 9, 1, 10, 2, 6, 4, 7, 8, 5, 3. We can easily parse the response and obtain MAP and P@1 for 1,000 randomly selected queries as follows: MAP=73.33, P@1=60.12. GPT-3.5, even without being fine-tuned for the agricultural domain, outperforms the AgriBERT model, suggesting the strength of large, general-purpose language models in the answer selection task. The higher MAP score of 73.33 for GPT-3.5 compared to AgriBERT's 29.91 suggests that GPT-3.5 is consistently ranking the correct document higher in the list across all queries. This could potentially be attributed to the larger size and diverse training data of GPT-3.5. On the other hand, the smaller difference in P@1 scores suggests that both models are fairly competitive when it comes to ranking the most relevant document at the top. This could mean that both models are capable of identifying the most relevant document, but GPT-3.5 may be better at providing a more accurate overall ranking. ### _Experiment: Cuisine Prediction_ In this section, we investigate the performance of AgriBERT in the task of cuisine prediction. The experimental results are based on a unique dataset provided by Yummly8, which was originally featured in a Kaggle competition9. The task aims at classifying recipes into different cuisines such as Greek, Indian, Italian, and more, using their ingredient lists. Essentially, we treat this task as a sentence classification task in which we use different sentence representation learning methods to generate feature representations of different recipes and then feed these representations into a classifier to classify the given recipes into different cuisines. Footnote 8: [https://www.yummly.com/](https://www.yummly.com/) Footnote 9: [https://www.kaggle.com/competitions/whats-cooking/overview/description](https://www.kaggle.com/competitions/whats-cooking/overview/description) However, it is important to note that we were unable to use the Kaggle test set for evaluation purposes due to the unavailability of label information. Therefore, we split the provided training set, reserving 20% of it as our test set. This approach means that our results are not directly comparable with the Kaggle leaderboard since we used a different test set. In the experiment, we leverage the domain-specific knowledge and contextual understanding of agricultural text offered by AgriBERT to predict the cuisine category of recipes. We then compare its performance with other feature representations and classifiers commonly used in natural language processing. Table VI compares the performances of different combinations of feature representations and classifiers. We use four feature representations: 1. **TF-IDF**: a traditional and simple approach for text vectorization based on term frequency and inverse document frequency. 2. **agri-sentence-transformer**: agri-sentence-transformer is a BERT-based language model further pre-trained from the checkpoint of SciBERT. This model was trained on a balanced dataset composed of both scientific and general works in the agriculture domain, encompassing knowledge from different areas of agriculture research and practical knowledge. 
The corpus contains 1.2 million paragraphs from the National Agricultural Library (NAL) in the US, and 5.3 million paragraphs from books and common literature from the Agriculture Domain. 3. **all-mpnet-base-v2**: a variant of the MPNet architecture, a transformer-based model developed by Microsoft Research. The 'all-MPNet-base-v2' configuration includes 12 layers, with a hidden size of 768, and 12 attention heads. 4. **AgriBERT**: the version of AgriBERT that trains a BERT architecture from scratch (Row 7). We evaluate the performance of these feature representations on five different classifiers such as logistic regression, support vector machine (SVM), decision tree, random forest, and multilayer perceptron (MLP). Although the choice of classifier is not our focus, it's interesting to see how different feature representations perform with the same classifier. To begin, the TF-IDF feature representation, a traditional and simple approach for text vectorization, appears to perform reasonably well with the MLP Classifier, yielding an F1 score of 70.76%. However, its performance seems to drop significantly with the other classifiers, suggesting that it may not be the most robust choice of feature representations across various classification methods. In terms of agri-sentence-transformer, despite this domain-specific knowledge, the model performs best with the Support Vector Machine (SVM) classifier, achieving an F1 score of 65.87%. Interestingly, it does not outperform the general-purpose PLM, all-mpnet-base-v2, in the task of cuisine prediction. Finally, the most effective among all feature representations is AgriBERT. Regardless of the choice of classifier, AgriBERT consistently exhibits superior performance compared to the other models, achieving its highest F1 score of 79.63% with the SVM classifier. This implies that AgriBERT is notably adept at identifying the essential features needed for cuisine prediction, thereby substantiating the effectiveness of its design and training approach. Building on the demonstrated effectiveness of our AgriBERT model in both cuisine prediction and semantic matching tasks, we propose that AgriBERT is well-suited for a \begin{table} \begin{tabular}{l c c c c c} \hline \hline Feature & Logistic Regression & SVM & Decision Tree & Random Forest & MLP \\ \hline TF-IDF & 57.67\% & 47.64\% & 28.25\% & 41.93\% & 70.76\% \\ agri-sentence-transformer & 64.34\% & 65.87\% & 27.71\% & 42.55\% & 61.37\% \\ all-mpnet-base-v2 & 66.29\% & 71.10\% & 43.00\% & 57.77\% & 73.51\% \\ AgriBERT & **77.52\%** & **79.63\%** & **61.99\%** & **73.18\%** & **75.79\%** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Performance comparison of various feature representations and classifiers in the cuisine prediction task. The values in the table represent the F1 scores achieved by each combination. It is observed that AgriBERT consistently outperforms other models, demonstrating its effectiveness in this task. wider range of applications within the food and agriculture domains. Its ability to capture and understand nuanced relationships within agricultural text positions it as a robust tool for various tasks. These could encompass areas such as ingredient substitution, recipe recommendation, food-related sentiment analysis, and more. The strong performance of AgriBERT on diverse tasks indicates its generalizability, making it a promising resource for future research and application development in the food and agriculture sectors. 
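For completeness, a minimal sketch of the feature-extraction-plus-classifier pipeline behind Table VI is given below, using the general-purpose all-mpnet-base-v2 encoder and a few placeholder recipes; plugging in AgriBERT instead would additionally require pooling its token embeddings into one vector per recipe.

```python
# Minimal sketch: sentence embeddings + SVM for cuisine prediction (placeholder data).
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

train_recipes = ["romaine lettuce black olives feta cheese garbanzo beans",
                 "soy sauce ginger scallions sesame oil rice vinegar",
                 "tortillas black beans cilantro lime jalapeno",
                 "basil olive oil parmesan cheese spaghetti garlic"]
train_labels = ["greek", "chinese", "mexican", "italian"]

encoder = SentenceTransformer("all-mpnet-base-v2")   # general-purpose baseline from Table VI
X_train = encoder.encode(train_recipes)              # one dense feature vector per recipe

clf = SVC().fit(X_train, train_labels)
print(clf.predict(encoder.encode(["feta cheese cucumber olive oil oregano"])))
```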
## 6 Conclusion In this paper, we present a language model called AgriBERT that can facilitate NLP tasks in the food and agricultural domain. AgriBERT is a BERT model trained from scratch with a large corpus of academic journals in the agriculture field. We evaluate our language model on two tasks: semantic matching and cuisine prediction. The semantic matching task aims at matching two databases of food descriptions (the Nielsen database and the USDA database). We reformulated the problem as an answer selection task and used our language model as the backbone of a generic answer selection module to find the best match. Before feeding the pairs of questions and answers to the model, we augmented them with external entities obtained from the FoodOn knowledge graph, a domain-specific ontology in the field of food. We showed that the inclusion of external knowledge can help boost the performance in terms of the stricter P@1 measure, but it lowers the performance in terms of mean average precision. Cuisine prediction is the task of classifying recipes into different cuisines. We show that, compared with various existing language representation learning methods including the state-of-the-art all-mpnet-base-v2 model, our AgriBERT achieves the best performance with different classifiers. As a future direction, we plan to investigate more sophisticated approaches for incorporating external knowledge, such as refining the knowledge before including it in the text. Another future research direction is to fine-tune existing large language foundation models such as InstructGPT [82], ChatGPT, StableLM10, and so on, on our collected agriculture corpus and investigate their performances on the given agriculture tasks [47]. Footnote 10: [https://github.com/Stability-AI/StableLM](https://github.com/Stability-AI/StableLM)
2305.16313
Hybrid symmetry class topological insulators
Traditional topological materials belong to different Altland-Zirnbauer symmetry classes (AZSCs) depending on their non-spatial symmetries. Here we introduce the notion of hybrid symmetry class topological insulators (HSCTIs): A fusion of two different AZSC topological insulators (TIs) such that they occupy orthogonal Cartesian hyperplanes and their universal massive Dirac Hamiltonian mutually anticommute. The boundaries of HSCTIs can also harbor TIs, typically affiliated with an AZSC different from the parent ones. As such, a fusion between planar quantum spin Hall and vertical Su-Schrieffer-Heeger insulators gives birth to a three-dimensional HSCTI, accommodating quantum anomalous Hall insulators and quantized Hall conductivity on the top and bottom surfaces. We extend this construction to encompass crystalline HSCTI and topological superconductors, and beyond three dimensions. Possible (meta)material platforms to harness HSCTIs are discussed.
Sanjib Kumar Das, Bitan Roy
2023-05-25T17:59:22Z
http://arxiv.org/abs/2305.16313v1
# Hybrid symmetry class topological insulators ###### Abstract Traditional topological materials belong to different Altland-Zirnbauer symmetry classes (AZSCs) depending on their non-spatial symmetries. Here we introduce the notion of hybrid symmetry class topological insulators (HSTIs): A fusion of two different AZSC topological insulators (TIs) such that they occupy orthogonal Cartesian hyperplanes and their universal massive Dirac Hamiltonian mutually anticommute. The boundaries of HSCTIs can also harbor TIs, typically affiliated with an AZSC different from the parent ones. As such, a fusion between planar quantum spin Hall and vertical Su-Schrieffer-Heeger insulators gives birth to a three-dimensional HSCTI, accommodating quantum anomalous Hall insulators and quantized Hall conductivity on the top and bottom surfaces. We extend this construction to encompass crystalline HSCTI and topological superconductors, and beyond three dimensions. Possible (meta)material platforms to harness HSCTIs are discussed. _Introduction._ Twenty first century physics thus far is heavily influenced by the topological classification of quantum materials [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. It roots back in the discovery of quantum Hall states in the 1980s [11; 12; 13]. Over the time, topological insulators (TIs), featuring building bulk but gapless boundaries via a bulk-boundary correspondence, emerged as the most prominent representative of topological phases of matter. It turned out that they can be grouped into ten Altland-Zirnbauer symmetry classes (AZSCs), depending on their non-spatial symmetries (time-reversal, particle-hole and sublattice or chiral) [14; 15; 16]. AZSCs also encompass thermal topological insulators or superconductors [17; 18; 19]. Subsequent inclusion of crystal symmetries in the classification scheme immensely diversified the landscape of topological phases that ultimately gave birth to topological quantum chemistry, nowadays routinely employed to identify topological crystals in nature [20; 21; 22; 23; 24; 25; 26; 27; 28]. In this realm, the following quest fuels our current venture. _Can a hybridization_ (defined shortly) _between two topological insulators from different AZSCs foster_ (possibly new) _topology?_ We offer an affirmative answer to this question by introducing the notion of hybrid symmetry class topological insulators (HSCTIs). Notice that all TIs (electrical or thermal) from AZSCs can be modeled by Dirac Hamiltonian with a momentum-dependent Wilson-Dirac mass [15; 16], manifesting band inversion around a time reversal invariant momentum (TRIM) point in the Brillouin zone (BZ). A hybridization between two TIs from distinct AZSCs occurs when they occupy orthogonal Cartesian hyperplanes and their band-inverted massive Dirac Hamiltonian mutually anticommute. We show that the resulting HSCTI can nurture emergent topology. The construction of HSCTIs is showcased here from its simplest possible incarnation in three dimensions, stemming from the hybridization between a two-dimensional (2D) \(xy\) planar quantum spin Hall insulator (QSHI) and a one-dimensional (1D) \(z\)-directional Su-Schrieffer-Heeger insulator (SSHI). The top and the bottom surfaces of the resulting three-dimensional (3D) HSCTI then support 1D edge states of opposite chiralities, producing integer quantized Hall conductivity (in units of \(e^{2}/h\)) of opposite signs on these two surfaces. 
Thus, a 3D HSCTI is distinct from the presently known strong \(Z_{2}\) and higher-order TIs respectively supporting gapless surface states on six faces of a cube (first-order) [29] and \(z\)-directional hinge modes along with \(xy\) surface states (second-order) [30] or eight corner modes (third-order) [31; 32; 33]. See Fig. 1. It is also distinct from 3D axion insulators [34; 35; 36], displaying a non-quantized surface Hall conductivity. _3D HSCTI._ To arrive at the model Hamiltonian for the 3D HSCTI, consider first the Bloch Hamiltonian [5] \[h^{xy}_{\text{QSHI}}=d_{1}(\mathbf{k})\Gamma_{1}+d_{2}(\mathbf{k})\Gamma_{2}+d_{3}( \mathbf{k})\Gamma_{3}. \tag{1}\] The components of the \(\mathbf{d}\)-vector for now are chosen to be \(d_{1}(\mathbf{k})=t\sin(k_{x}a)\), \(d_{2}(\mathbf{k})=t\sin(k_{y}a)\) and \(d_{3}(\mathbf{k})=m_{0}+t_{0}[\cos(k_{x}a)+\cos(k_{y}a)]\). Here \(a\) is the lattice spacing. The hopping parameter \(t\) is set to be unity. Mutually anti-commuting Hermitian \(\Gamma\) matrices are \(\Gamma_{j}=\sigma_{3}\tau_{j}\) for \(j=1,2,3\). The Pauli matrices \(\{\tau_{\mu}\}\) (\(\{\sigma_{\mu}\}\)) operate on the orbital (spin) degrees of freedom with \(\mu=0,\cdots,3\). Then the above model describes a QSHI in the \(xy\) plane within the parameter regime \(-2<m_{0}/t_{0}<2\), featuring counter-propagating helical edge modes for opposite spin projections (class AII). When topological \(h^{xy}_{\text{QSHI}}\) is Figure 1: Normalized local density of states for (a) surface localized chiral edge states of HSCTI, (b) surface states of a first-order TI [29], (c) \(z\)-directional hinge and \(xy\) surface state of a second-order TI [30], and (d) eight corner modes of a third-order TI [31; 32]. The lattice models for (b), (c) and (d) are discussed in the Supplemental Material [33]. implemented on a 3D cubic lattice without any tunneling in the \(z\) direction, it supports a column of edge modes occupying the \(xz\) and \(yz\) planes. See Fig. 2(ai). Next consider a second Bloch Hamiltonian [37; 38; 39] \[h^{z}_{\rm SSHI}=d_{4}(\mathbf{k})\Gamma_{4}+d_{5}(\mathbf{k})\Gamma_{5}, \tag{2}\] where \(d_{4}(\mathbf{k})=t_{1}\sin(k_{z}a)\) and \(d_{5}(\mathbf{k})=m_{z}+t_{z}\cos(k_{z}a)\). We set \(t_{1}=1\). If \(\Gamma_{4}\) and \(\Gamma_{5}\) are anticommuting Pauli matrices, \(h^{z}_{\rm SSHI}\) describes a \(z\)-directional SSHI (class BDI). Within the parameter range \(|m_{z}/t_{z}|<1\), it supports topological zero energy modes, localized at its two ends. If we place such \(z\)-directional topological SSHIs on the sites of a square lattice on the \(xy\) plane without any coupling between them, the resulting system features a collection of end point zero energy modes that occupies the entire top and bottom \(xy\) surfaces. See Fig. 2(aii). With the ingredients in hand, we now announce the Bloch Hamiltonian for a 3D HSCTI, given by \[h^{\rm 3D}_{\rm HSCTI}=h^{xy}_{\rm QSHI}+h^{z}_{\rm SSHI}, \tag{3}\] where now \(\Gamma_{4}=\sigma_{1}\tau_{0}\) and \(\Gamma_{5}=\sigma_{2}\tau_{0}\), that together with \(\Gamma_{1}\), \(\Gamma_{2}\) and \(\Gamma_{3}\) constitute a set of five four-component Hermitian matrices, satisfying the Clifford algebra \(\{\Gamma_{j},\Gamma_{k}\}=2\delta_{jk}\). Here \(\delta_{jk}\) is the Kronecker delta function. Thus, \(h^{xy}_{\rm QSHI}\) and \(h^{z}_{\rm SSHI}\) anticommute with each other (hybridization). The \(z\)-directional SSHI acts as a mass for the edge modes of the \(xy\) planar QSHI and vice versa. 
Then one component of \(h^{\rm 3D}_{\rm HSCTI}\) gaps out the topological modes of the other, except where both of them support topological gapless modes, namely along the edges on the top and bottom surfaces. See Fig. 2(aiii). But, the bulk is an insulator. Therefore, we realize a 3D TI by hybridizing two TIs, living on orthogonal Cartesian hyperplanes and belonging to different AZSCs, that manifests a bulk-boundary correspondence: a HSCTI. _Symmetries._ The model Hamiltonian for a 3D HSCTI \(h^{\rm 3D}_{\rm HSCTI}\) breaks (1) the time-reversal (\(\mathcal{T}\)) symmetry generated by \(\mathcal{T}=\sigma_{2}\tau_{1}\mathcal{K}\), where \(\mathcal{K}\) is the complex conjugation, and (2) the parity (\(\mathcal{P}\)) symmetry, generated by \(\Gamma_{3}\) with \(\mathcal{P}:\mathbf{k}\rightarrow-\mathbf{k}\). But, it preserves the composite \(\mathcal{PT}\) symmetry that guarantees a two-fold degeneracy of the conduction and valence bands of \(h^{\rm 3D}_{\rm HSCTI}\), respectively determined by the eigenspectra \(\pm E(\mathbf{k})\), where \(E(\mathbf{k})=[\alpha(\mathbf{k})]^{1/2}\) and \(\alpha(\mathbf{k})=d_{1}^{2}(\mathbf{k})+\cdots+d_{5}^{2}(\mathbf{k})\). Notice that \(h^{\rm 3D}_{\rm HSCTI}\), involving all _five_ mutually anticommuting four-component Hermitian \(\Gamma\) matrices, does not possess the sublattice or chiral symmetry, generated by a unitary operator that anticommutes with it. Rather it enjoys an anti-unitary particle-hole symmetry, generated by \(\mathcal{A}=\sigma_{0}\tau_{1}\mathcal{K}\), such that \(\{h^{\rm 3D}_{\rm HSCTI},\mathcal{A}\}=0\)[40]. _Phase diagram._ In the \((m_{0}/t_{0},m_{z}/t_{z})\) plane, a 3D HSCTI occupies a rectangular region bounded by \(|m_{0}/t_{0}|<2\) and \(|m_{z}/t_{z}|<1\), where both the parent insulators are topological. See Fig. 2(b). Furthermore, this topological regime fragments into two sectors for \(-2<m_{0}/t_{0}<0\) and \(0<m_{0}/t_{0}<2\), when the band Figure 2: (a) Normalized local density of states for the (i) edge modes of decoupled QSHIs for \(m_{0}/t_{0}=1\), (ii) endpoint modes for decoupled SSHIs for \(m_{z}/t_{z}=0\), and (iii) chiral edge modes of HSCTI for \(m_{0}/t_{0}=1\) and \(m_{z}/t_{z}=0\) [same as Fig. 1(a)]. (b) Phase diagram of HSCTI in the \((m_{0}/t_{0},m_{z}/t_{z})\) plane. The translationally active (inert) phase supports (is devoid of) dislocation defect modes. (c) Melting of (i) chiral edge modes [same as (aiii)] by tuning (ii) \(m_{0}/t_{0}\) to \(1.75\) for a fixed \(m_{z}/t_{z}=0\) and (iii) \(m_{z}/t_{z}\) to \(0.75\) for fixed \(m_{0}/t_{0}=1.0\). Figure 3: Layer resolved (in the \(z\) direction) band structure of a 3D HSCTI with \(k_{y}\) as a good quantum number and \(L_{x}=L_{z}=20\) for \(m_{z}/t_{z}=0\), and \(m_{0}/t_{0}=-1.0\) (upper panel) and \(m_{0}/t_{0}=1.0\) (lower panel). Here blue (brown) and green indicate states localized near the left (right) edge and in the bulk of the system. Therefore, the top and bottom layers support counter-propagating chiral edge modes, while other layers are devoid of gapless states (such as the middle one). inversion of the underlying QSHI takes place near the \(\Gamma=(0,0)\) point (\(\Gamma\) phase) and \(\mathrm{M}=(1,1)\pi/a\) point (M phase) of a 2D square lattice BZ, respectively [41]. Discolation lattice defects are instrumental in distinguishing these two regimes about which more in a moment. A HSCTI can be pushed out of the topological regime by tuning \(m_{0}/t_{0}\) or \(m_{z}/t_{z}\) or both. 
As the ratio \(m_{0}/t_{0}\) is tuned from the topological toward trivial regime, the edge modes living on the opposite sides of the top or bottom surfaces start to hybridize, as shown in Fig. 2(cii). By contrast, as we tune \(m_{z}/t_{z}\) out of the topological regime the edge modes residing on the top and bottom surfaces mix through four side surfaces of the cube, as shown in Fig. 2(ciii). Once the system becomes a trivial insulator, there is no topological boundary modes. _Bulk-boundary correspondence._ The nature of the edge modes of the 3D HSCTI on the top and bottom surfaces can be anchored from the effective surface Hamiltonian. For simplicity, we consider a semi-infinite system with a hard-wall boundary at \(z=0\). When the region \(z<0\) (\(z>0\)) is occupied by HSCTI (vacuum), the surface at \(z=0\) represents the top one. By contrast, when the region \(z>0\) (\(z<0\)) is occupied by HSCTI (vacuum) the \(z=0\) surface corresponds to the bottom one. A straightforward calculation, shown in the Supplemental Material [33], leads to the following surface Hamiltonian \[H_{\mathrm{surface}}^{\mathrm{top/bottom}}=d_{1}(\mathbf{k})\beta_{1}+d_{2}(\mathbf{k })\beta_{1}\mp d_{3}(\mathbf{k})\beta_{3}, \tag{4}\] where \(\mathbf{k}=(k_{x},k_{y})\). The newly introduced Pauli matrices \(\{\beta_{\mu}\}\) operate on the space of two zero energy top/bottom surface states. With the chosen form of the \(\mathbf{d}\)-vector, this Hamiltonian mimics the Qi-Wu-Zhang model for a square lattice quantum anomalous Hall insulator (QAHI) [42]. Therefore, the top and bottom surfaces of the 3D HSCTI harbor two-dimensional QAHIs with opposite first Chern numbers. On each surface the \(\mathcal{T}\) symmetry is broken. In addition, they also break the \(\mathcal{P}\) symmetry, under which the top and bottom surfaces switch, as they foster QAHIs of opposite Chern numbers. Boundaries of a 3D HSCTI this way manifest the conserved composite \(\mathcal{PT}\) symmetry of its bulk. Finally notice that \(H_{\mathrm{surface}}^{\mathrm{top/bottom}}\) belongs to class A, a distinct AZSC from its parent QSHI (class AII) and SSHI (class BDI). As the top and bottom surfaces host QAHIs of opposite Chern numbers, they feature counter-propagating chiral edge states. See Fig. 3. We consider a semi-infinite system with \(k_{y}\) as good quantum number, and finite extensions in the \(x\) and \(z\) directions with open boundary conditions. For every \(z\), we compute the band structure of a 3D HSCTI. Inside the topological regime of HSCTI, the top and bottom surfaces indeed feature counter-propagating edge modes crossing the zero energy at \(k_{y}=0\) (\(\pm\pi/a\)), when the underlying QSHI is in the \(\Gamma\) (M) phase. On the other hand, the middle layer is devoid of any chiral edge state. Surface localized chiral edge states also manifest through the layer-resolved integer quantized charge Hall conductivity (\(\sigma_{xy}\)), which we discuss next. _Hall effect._ To compute the layer-resolved Hall conductivity in a 3D HSCTI, we consider a six-terminal Hall bar geometry. See Fig. 4(a). All the voltage and current leads are one-layer thick. An electrical current \(I_{\mathrm{el}}\) is passed between the leads \(\mathrm{L}_{1}\) and \(\mathrm{L}_{4}\). Then a transverse or Hall voltage develops between the leads \(\mathrm{L}_{2}\) and \(\mathrm{L}_{6}\), and \(\mathrm{L}_{3}\) and \(\mathrm{L}_{5}\). 
We numerically compute the Hall resistance \(R_{xy}^{\mathrm{el}}=(V_{2}+V_{3}-V_{5}-V_{6})/(2I_{\mathrm{el}})\) using Kwant by attaching all the leads to a specific layer [43; 44]. Here \(V_{j}\) is the voltage at the \(j\)th lead (\(\mathrm{L}_{j}\)). The Hall conductivity is given by \(\sigma_{xy}=\left(e^{2}/h\right)\left(R_{xy}^{\mathrm{el}}\right)^{-1}\). The results are shown in Fig. 4(b). It shows that \(\sigma_{xy}\) is quantized (in units of \(e^{2}/h\)) on the top and bottom surfaces, where they have opposite signs. It can be anchored by computing the first Chern number (\(C\)) of the surface Hamiltonian [Eq. (4)] as \(\sigma_{xy}=Ce^{2}/h\). On any other layer \(\sigma_{xy}=0\). The overall sign of \(\sigma_{xy}\) flips between the \(\Gamma\) and M phases of the parent QSHI. Additional details of this computation are presented in the Supplemental Material [33]. _Topological invariant._ At the TRIM points \(d_{1}(\mathbf{k})=d_{2}(\mathbf{k})=d_{4}(\mathbf{k})=0\), and \(h_{\mathrm{HSCTI}}^{\mathrm{3D}}=\Gamma_{3}d_{3}(\mathbf{k})+\Gamma_{5}d_{5}(\mathbf{ k})\) can be brought into a block diagonal form after a suitable unitary rotation with \(\Gamma_{4}=\sigma_{3}\tau_{1}\) and \(\Gamma_{5}=\sigma_{3}\tau_{2}\). We then define a quantity \(\hat{\phi}_{\mathbf{k}}=\phi_{\mathbf{k}}/|\phi_{\mathbf{k}}|\), where \(\phi_{\mathbf{k}}=\tan^{-1}[d_{5}(\mathbf{k})/d_{3}(\mathbf{k})]\). By construction, \(\hat{\phi}_{\mathbf{k}}=\pm 1\). The system then describes a TI only when \[\Phi_{z}=\hat{\phi}_{j,k_{z,1}^{\star}}\,\hat{\phi}_{j,k_{z,2}^{\star}}=-1\text { and }\Phi_{xy}=\prod_{j}\hat{\phi}_{j,k_{z,i}^{\star}}=-1, \tag{5}\] Figure 4: (a) Six-terminal setup for the layer-resolved Hall conductivity. (b) Layer-resolved electrical Hall conductivity (\(\sigma_{xy}\)) for HSCTI, showing its integer quantization (in units of \(e^{2}/h\)) on the top and bottom surfaces with opposite signs. (c) Same as (b), but for crystalline HSCTI with a parent QSHI featuring band inversion at the X and Y points of a 2D BZ. (d) Thermal Hall conductivity (\(\kappa_{xy}\)) for thermal HSCTI (superconductor), showing its half-integer quantization (in units of \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\)) at temperature \(T=0.01\) only the top and bottom surfaces with opposite signs. for \(i=1\) and \(2\), where \(\mathbf{k}_{z}^{\star}=(0,\pi/a)\), \(j=\Gamma,\mathrm{M},\mathrm{X},\mathrm{Y}\) are the TRIM points of a 2D BZ with \(\mathrm{X}=(1,0)\pi/a\) and \(\mathrm{Y}=(0,1)\pi/a\). When \(\Phi_{z}=-1\), the \(z\)-directional SSHI features band inversion along \(k_{z}\) at all the TRIM points on the 2D BZ. On the other hand, when \(\Phi_{xy}=-1\) the planar QSHI features band inversion at odd number of TRIM points on the 2D BZ for \(k_{z}=0\) and \(\pi/a\). The TRIM band inversion point of the 2D BZ can be identified from the Pfaffian invariant [6]. On the other hand, if \(\Phi_{z}\) or \(\Phi_{xy}\) becomes \(+1\), the system is a trivial insulator or crystalline HSCTI, which we discuss next. _Crystalline HSCTI._ With \(d_{1}(\mathbf{k})=S_{x}+C_{x}S_{y}\), \(d_{2}(\mathbf{k})=S_{y}+S_{x}C_{y}\) and \(d_{3}(\mathbf{k})=m_{0}-2t^{\prime}+t_{0}(C_{x}+C_{y})+2t^{\prime}(C_{x}C_{y})\), where \(S_{j}=\sin(k_{j}a)\) and \(C_{j}=\cos(k_{j}a)\), the band inversion occurs at the X and Y points of the 2D BZ for \(|m_{0}/t_{0}|>2\) and \(t^{\prime}/t_{0}>m_{0}/(4t_{0})\), yielding a crystalline QSHI protected by the four-fold rotational (\(C_{4}\)) symmetry [21]. 
The resulting HSCTI is then also protected by the \(C_{4}\) symmetry. The layer-resolved Hall conductivity \(\sigma_{xy}=\pm 2e^{2}/h\) on the top and bottom surfaces, respectively, as they host two counter-propagating chiral edge states. See Fig. 3(c). In this phase \(\Phi_{xy}=+1\) as the band inversion for the QSHI occurs at an even number of TRIM points in the 2D BZ. But, the Pfaffian at the X and Y points are \(-1\), protected by the \(C_{4}\) symmetry [21]. _Lattice defects._ When the band inversion of the underlying QSHI occurs at a finite TRIM point (\(\mathbf{K}_{\mathrm{inv}}^{\mathrm{QSHI}}\)), the 3D HSCTI becomes translationally active. Dislocation lattice defects, created by breaking the local translational symmetry in the bulk of a crystal, are instrumental to identify them in terms of topological modes bound to their cores. A screw dislocation fosters gapless modes only when the associated Burgers vector (\(\mathbf{b}\)) pierces gapless surfaces [45; 46; 47]. As all the surfaces of a 3D HSCTI are gapped, screw dislocations do not host any metallic defect modes. A line of edge dislocation is characterized by \(\mathbf{b}\) and the stacking direction (\(\hat{\mathbf{s}}\)). Only when \(\hat{\mathbf{s}}=\hat{z}\) and \(\mathbf{b}=a\hat{x}\) or \(a\hat{y}\), the Burgers vector points toward gapless chiral edge states on the top and bottom surfaces. Once a line of atoms is removed, counter-propagating chiral edge states appear at the newly created edges on these two surfaces. See Fig. 5(a). Upon reconnecting these edges a 3D edge dislocation is created through the Volterra cut-and-paste procedure, and the edge modes hybridize. When \(\mathbf{K}_{\mathrm{inv}}^{\mathrm{QSHI}}\cdot\mathbf{b}=\pi\) (modulo \(2\pi\)) [41; 45; 46; 47; 48; 49; 50; 51; 52], as is the case when the QSHI resides in the M or XY phase, the nontrivial \(\pi\) hopping phase around the defect core binds surface localized zero energy modes. See Fig. 5(b). _Superconductivity._ With a suitable \(\Gamma\) matrix representation, \(h_{\mathrm{HSCTI}}^{\mathrm{3D}}\) can describe a hybrid symmetry class topological superconductor. For example, when \(\Gamma_{1}=\eta_{1}\sigma_{0}\), \(\Gamma_{2}=\eta_{2}\sigma_{3}\) and \(\Gamma_{3}=\eta_{3}\sigma_{0}\), where the set of Pauli matrices \(\{\eta_{\mu}\}\) operates on the Nambu or particle-hole indices, \(H_{\mathrm{QSHI}}^{\mathrm{xy}}\) describes a \(p_{x}\pm ip_{y}\) paired state (class DIII), occupying the \(xy\) plane and stacked in the \(z\) direction. Now \(t\) represents the pairing amplitude and \(d_{3}(\mathbf{k})\) gives rise to a cylindrical Fermi surface when \(|m_{0}/t_{0}|<2\). In this basis \(H_{\mathrm{SSHI}}^{z}\) describes a \(z\)-directional Kitaev chain of Majorana Fermions (class D or BDI) that couples the layers of \(p_{x}\pm ip_{y}\) superconductors. Physically, \(d_{4}(\mathbf{k})\) (\(d_{5}(\mathbf{k})\)) describes a \(p_{z}\)-wave (\(\mathcal{PT}\) symmetry breaking extended \(s\)-wave) pairing for \(\Gamma_{4}=\eta_{2}\sigma_{1}\) and \(\Gamma_{5}=\eta_{2}\sigma_{2}\). On the top and bottom surfaces, such a topological paired state supports 2D thermal Hall insulators of opposite Chern numbers. Their edge modes are constituted by counter propagating chiral Majorana fermions on opposite surfaces, each of which yields a half-quantized thermal Hall conductivity (\(\kappa_{xy}\)) in units of \(\kappa_{0}\) at small temperature (\(T\to 0\)), where \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\). 
Layer resolved numerical computation of \(\kappa_{xy}\) in the six-terminal Hall bar geometry confirms this outcome and shows that it is indeed of opposite signs on the top and bottom surfaces. See Fig. 4(d). Details of this computation is shown in the Supplemental Material [33]. The edge dislocations with \(\mathbf{b}=a\hat{x}\) or \(a\hat{y}\) and \(\hat{\mathbf{s}}=\hat{z}\), in such a paired state support surface localized endpoint Majorana modes. _Discussions & outlooks._ We outline a general principle of realizing HSCTs from two distinct parent TIs that occupy orthogonal Cartesian hyperplanes and belong to different AZSCs. Explicitly discussed HSCTI, obtained via a hybridization between \(xy\) planar QSHIs (class AII) and \(z\)-directional SSHIs (class BDI), possesses a bulk topological invariant and manifests bulk-boundary correspondence by harboring surface QAHIs (class A), leaving its fingerprint on chiral edge states and layer-resolved quantized Hall effect. Our proposal thereby offers a unique approach to vision TIs at the boundaries of an even higher-dimensional HSCTI. For example, following the same principle a 3D class AII TI can be found on the boundary of a four-dimensional HSCTI, built from 3D class CII and 1D class BDI TIs, which we show in the Sup Figure 5: (a) A \(z\)-directional Volterra cut of a line of atoms in a 3D HSCTI crystal that creates new edges on \(xy\) plane, supporting counter-propagating chiral edge modes only on the top and bottom surfaces. When these edges are pasted to create a line of edge dislocations, the edge modes hybridize and produce zero energy surface bound defect modes when the parent QSHI is in the M phase, for example. (b) Normalized local density of states for such defect modes with a edge (anti)dislocation pair and periodic boundary conditions in the \(x\) and \(y\) directions, for \(m_{0}/t_{0}=1\) and \(m_{z}/t_{z}=0\). plemental Material [33]. This route is distinct from the "dimensional reduction" of constructing a \(d\)-dimensional TI from a fixed AZSC \(d+1\)-dimensional one [53]. We also extend the jurisdiction of this proposal to systems, where at least one of the constituting parent TIs is protected by crystalline symmetry and to encompass superconducting states, featuring half-quantized surface thermal Hall conductivity. Existence of a plethora of strong and crystalline topological phases of matter (insulators and superconductors) of various dimensions and symmetries in nature [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28] should therefore open an unexplored territory of HSCTIs (electrical and thermal) that in principle can be realized in quantum crystals and engineered in classical metamaterials. As \(d_{5}(\mathbf{k})\) breaks both \(\mathcal{P}\) and \(\mathcal{T}\) symmetries, layered magnetic materials with columnar antiferromagnetic order in the stacking direction constitute an ideal platform to harness the candidate 3D HSCTI. With the recent discovery of (anti)ferromagnetic TI MnBi\({}_{2}\)Te\({}_{4}\)[54; 55; 56; 57] (possibly axionic), we are optimistic that HSCTI can be found in some available or newly synthesized quantum materials using the existing vast dictionary of magnetic materials [58], guided by topological quantum chemistry [24; 25; 26; 27]. 
When such materials (once found) are doped, they can harbor thermal HSCTI or superconductors from local or on-site pairings, as by now it is well established that the local paired states often (if not always) inherit topology from parent normal state electronic bands, even when it is a trivial insulator in the presence of a Fermi surface, but in terms of neutral Majorana fermions [59; 60; 61; 62; 63]. We leave this topic for a future investigation. As the model Hamiltonian for HSCTI is described in terms of only nearest-neighbor hopping amplitudes, it can be emulated in classical metamaterials, among which topoelectric circuits [64; 65; 66] and mechanical lattices [67; 68; 69] are the two most prominent ones. In both setups existence of chiral edge modes of 2D TIs has been experimentally demonstrated from the unidirectional propagation of a weak (low energy) disturbance only along their edges [65; 67; 68; 69]. Finally, we note that topological defect modes have been experimentally observed in quantum crystals [70; 71; 72] and mechanical lattices [73]. Thus, our predicted counter-propagating chiral edge modes on the opposite surfaces of the 3D HSCTI and surface localized dislocation bound states should be within the reach of currently available experimental facilities. _Acknowledgments_. S.K.D. was supported by a Startup grant of B.R. from Lehigh University. B.R. was supported by NSF CAREER Grant No. DMR- 2238679. We thank Daniel J. Salib for useful discussion.
2304.01546
Nuclear mass table in density functional approach inspired by neutron-star observations
Background: Nuclear energy-density functional (EDF) approach has been widely used to describe nuclear-matter equations of state (EoS) and properties of finite nuclei. Recent advancements in neutron-star (NS) observations have put constraints on the nuclear EoS. The Korea-IBS-Daegu-SKKU (KIDS) functional has been then developed to satisfy the NS observations and applied to homogeneous nuclear matter and spherical nuclei. Purpose: We examine the performance of the KIDS functional by calculating the masses and charge radii of even-even nuclei towards the drip lines. Method: The Kohn-Sham-Bogoliubov equation is solved by taking into account the axial deformation. Results: The root-mean-square deviation of the binding energy and the charge radius for the KIDS functional is 4.5--5.1 MeV and 0.03--0.04 fm, which is comparable to that for existing EDFs. The emergence and development of nuclear deformation in open-shell nuclei are well described. The location of the neutron drip line is according to the nuclear-matter parameter characterizing the low-mass NS. Conclusions: The NS-observation-inspired EDF offers a reasonable reproduction of the structures of finite nuclei. A future global optimization including more nuclear data will give better accuracy and high predictive power of neutron-rich nuclei.
Hana Gil, Nobuo Hinohara, Chang Ho Hyun, Kenichi Yoshida
2023-04-04T05:45:26Z
http://arxiv.org/abs/2304.01546v1
# Nuclear mass table in density functional approach inspired by neutron-star observations ###### Abstract **Background:** Nuclear energy-density functional (EDF) approach has been widely used to describe nuclear-matter equations of state (EoS) and properties of finite nuclei. Recent advancements in neutron-star (NS) observations have put constraints on the nuclear EoS. The Korea-IBS-Daegu-SKKU (KIDS) functional has been then developed to satisfy the NS observations and applied to homogeneous nuclear matter and spherical nuclei. **Purpose:** We examine the performance of the KIDS functional by calculating the masses and charge radii of even-even nuclei towards the drip lines. **Method:** The Kohn-Sham-Bogoliubov equation is solved by taking into account the axial deformation. **Results:** The root-mean-square deviation of the binding energy and the charge radius for the KIDS functional is 4.5-5.1 MeV and 0.03-0.04 fm, which is comparable to that for existing EDFs. The emergence and development of nuclear deformation in open-shell nuclei are well described. The location of the neutron drip line is according to the nuclear-matter parameter characterizing the low-mass NS. **Conclusions:** The NS-observation-inspired EDF offers a reasonable reproduction of the structures of finite nuclei. A future global optimization including more nuclear data will give better accuracy and high predictive power of neutron-rich nuclei. ## I Introduction A microscopic construction of the nuclear equation of state (EoS) and its determination from nuclear experiments have been an interdisciplinary issue between nuclear physics and astrophysics [1; 2; 3]. Thanks to the advance in astronomical observations, data on the properties of a neutron star (NS) have become more rich, diverse, and precise [4; 5; 6; 7; 8; 9; 10]. One can see some attempts to determine the EoS directly from the observations by using machine learning [11; 12; 13; 14]. Bridging such recent data and nuclear structure properties is a new challenge in nuclear physics. A desired model, which can describe both infinite nuclear matter and finite nuclei, must be able not only to reproduce the existing data accurately but also to be systematic at improving its predictive power and flexible to the addition of residual forces. As an intermediate step, the KIDS (Korea-IBS-Daegu-SKKU) density functional was applied to constrain the EoS by using state-of-the-art data on X-ray sources, low-mass X-ray binaries, and gravitational waves [15; 16; 17]. The symmetry energy thus determined is consistent with the results in the literature, and the ranges of the uncertainty could be reduced non-negligibly. In subsequent work, an effect of the symmetry energy has been explored in finite nuclei by considering the Nd isotopes in the neutron-rich region [18]. Used in the work were four models, KIDS-A, B, C, and D, which have distinctive stiffness of the symmetry energy. The models agree well with the data on binding energy, charge radius, and quadrupole deformation. On the other hand, predictions in the neutron-rich regions such as the neutron skin thickness and the neutron drip line show a strong dependence on the symmetry energy. For a better understanding of the behavior of the model and the effect of the symmetry energy, extension of the analysis to the whole nuclear landscape is in due order. 
In the present work, we investigate the properties of even-even nuclei for the atomic number \(Z\) from 8 to 110, and for the neutron number \(N\) from the proton drip line to \(N=3Z\). Major concerns are 1) How well the KIDS model thus constructed by using the NS data works across the entire range of the nuclear chart, and 2) To explore the dependence on the symmetry energy and the ranges of its uncertainty of the KIDS model constrained by the modern NS data. We analyze the results for the binding energy, charge radius, quadrupole deformation, and neutron drip line. The mean accuracy of the model is similar to that of existing Skyrme models, but it is obviously inferior to the globally fitted mass models. The result opens a challenge to the KIDS density functional whether it can achieve an accuracy comparable to the globally fitted mass models. The paper is organized in the following order. In Sec. II, we briefly introduce the models. Some details about the numerical calculations are also summarized. In Sec. III, we present the results and discuss them. We summarize the work in Sec. IV. ## II Model In this work, we use four KIDS functionals, KIDS-A, KIDS-B, KIDS-C, and KIDS-D. There are nine parameters in the functional; seven parameters of the functional are adjusted to three nuclear saturation properties (the saturation density \(\rho_{0}=0.16\,\mathrm{fm}^{-3}\), the binding energy per nucleon 16 MeV, and the incompressibility \(K_{0}=230\)-260 MeV) and \(R_{1.4}=11.8\)-12.5 km where \(R_{1.4}\) denotes the radius of a NS with the mass \(1.4M_{\odot}\); see Table 1 for the nuclear-matter parameters. The remaining two are fitted to six nuclear data (the energy per particle and the charge radius of \({}^{40}\)Ca, \({}^{48}\)Ca, and \({}^{208}\)Pb). Two more parameters are added in the pairing functional for neutrons and protons. The pairing parameters are adjusted to the three-point formula for the odd-even staggering centered at \({}^{156}\)Dy. The total number of parameters in each model is 11, and their numerical values can be found in Ref. [18]. The KIDS models are implemented in the hfbtho code [19] to solve the Kohn-Sham-Bogoliubov equation taking the axial deformation into account. The calculations are performed in the \(N_{\mathrm{max}}=20\) full spherical oscillator shells. To find the global-minimum solution, we start calculations from the initial configurations with the mass quadrupole deformation \(\beta_{2}=0\) and \(\pm 0.3\). The quasiparticle (qp) states are truncated according to the equivalent single-particle energy cutoff at 60 MeV. The KIDS functionals have been applied to the quasielastic scattering of an electron and a neutrino off a nucleus [20; 21; 22]. It is shown that the cross-section is sensitive to and depends critically on the in-medium effective mass of the nucleon. The result demonstrates that the lepton-nucleus reactions can provide a unique channel to probe the nuclear dynamics and nuclear matter properties. The model gives the neutron skin thickness of \({}^{48}\)Ca and \({}^{208}\)Pb consistent with the measurement from the dipole polarizability and interaction cross section [23]. ## III Results and discussion ### Binding energy Figure 1 shows the calculated binding energies subtracted by the experimental data [24]. Here, the binding energy in Fig. 1 is given by \[\mathrm{BE}(N,Z)=NM_{n}+ZM_{p}-M(N,Z), \tag{1}\] where \(M_{n}\) and \(M_{p}\) are neutron and proton masses, and \(M(N,Z)\) is the nuclear mass. 
The models KIDS-A-D show very similar patterns. Black lines denote the isotopic chains of O, Ca, Ni, Sn, and Pb. Since the models are fitted to the binding energy of \({}^{40}\)Ca, \({}^{48}\)Ca, and \({}^{208}\)Pb, two downward spikes in the line of Ca and \(N=126\) in the line of Pb are very close to zero. It is notable that decreasing and increasing behaviors are mixed in an isotopic chain for \(N\lesssim 30\), but for \(N\gtrsim 30\) each line shows a simply-increasing behavior. In the light \(N=Z\) nuclei between Ne and Ar, the calculation underestimates the binding energy, which results in the cusp. In heavier nuclei with \(N>30\), the cusp appears at the magic numbers. Slopes of the isotopic lines for a given nuclide look similar among the models KIDS-A-D. In Ref. [18], we investigated the origin of the increasing behavior of the residual as the neutron number increases in the isotopic chain of Nd. We found that by controlling the slope parameter \(L\) of the symmetry energy, it is possible to make the line of the Nd isotopes either flat or stiff. We thus expect that we can construct a mass model based on a KIDS-EDF with appropriate parameters. We estimate the accuracy of the model in two ways. One is to evaluate the standard root-mean-square deviation (RMSD) defined by \[\mathrm{RMSD}(O)=\sqrt{\langle(O_{\mathrm{th}}-O_{\mathrm{exp}})^{2}\rangle}, \tag{2}\] where \(O\) denotes an observable. Another measure is a comparison in the logarithmic scale defined by \[R(O)=100\times\ln\left(\frac{O_{\mathrm{th}}}{O_{\mathrm{exp}}}\right). \tag{3}\] Table 2 summarizes the RMSD and \(R\) values of the KIDS model. The number of data included in the evaluation is 632, 630, 628, and 623 for the KIDS-A, KIDS-B, \begin{table} \begin{tabular}{c c c c c} Model & KIDS-A & KIDS-B & KIDS-C & KIDS-D \\ \hline RMSD(\(E\)) & 4.50 & 4.86 & 4.70 & 5.08 \\ \(\langle R(E)\rangle\) & 0.011 & \(-\)0.073 & \(-\)0.127 & \(-\)0.211 \\ RMSD(\(R_{c}\)) & 0.0323 & 0.0356 & 0.0345 & 0.0384 \\ \(\langle R(R_{c})\rangle\) & \(-\)0.232 & \(-\)0.411 & \(-\)0.457 & \(-\)0.602 \\ \end{tabular} \end{table} Table 2: RMSD and \(R\) value of the KIDS models. RMSD(\(E\)) and RMSD(\(R_{c}\)) are in the unit of MeV and fm, and \(\langle R(E)\rangle\) and \(\langle R(R_{c})\rangle\) are dimensionless. KIDS-C, and KIDS-D models, respectively. RMSD(\(E\)) values of the KIDS models are in the range 4.5-5.1 MeV. The accuracy is similar to that of the SLy4 model (4.8 MeV), but larger than globally fitted mass models, e.g., 1.45 MeV of UNEDF0 [25]. We evaluate the average of \(R\) values over the results. Mean values of \(R(E)\) in Table 2 indicate that the calculated binding energy deviates from the experimental data by 0.011%, \(-0.073\)%, \(-0.127\)% and \(-0.211\)% on the average for the KIDS-A, KIDS-B, KIDS-C, and KIDS-D models, respectively. 
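For reference, the two accuracy measures of Eqs. (2) and (3) amount to the following short routine; the numbers below are placeholders for illustration only, not values from Table 2.

```python
import numpy as np

def rmsd(o_th, o_exp):
    """Root-mean-square deviation of Eq. (2)."""
    o_th, o_exp = np.asarray(o_th), np.asarray(o_exp)
    return np.sqrt(np.mean((o_th - o_exp) ** 2))

def mean_r(o_th, o_exp):
    """Average of R(O) = 100 x ln(O_th / O_exp) of Eq. (3), in percent."""
    return np.mean(100.0 * np.log(np.asarray(o_th) / np.asarray(o_exp)))

# placeholder binding energies in MeV (illustration only)
be_exp = np.array([127.6, 342.1, 641.9, 1636.4])
be_th = np.array([125.9, 344.8, 638.2, 1641.0])
print(rmsd(be_th, be_exp), mean_r(be_th, be_exp))
```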
### Charge radius Figure 2 shows the difference of charge radii \(r_{\rm ch}^{\rm th}-r_{\rm ch}^{\rm exp}\). The charge radius is evaluated as \[r_{\rm ch}=\sqrt{r_{p}^{2}+0.64\ {\rm fm}^{2}}, \tag{4}\] where \(r_{p}^{2}\) is the expectation value on the HFB vacuum of a point proton radius, and the contribution from the finite proton size is included [19]. Table 2 summarizes the RMSD and \(R\) values for the charge radius. The number of data included in the evaluation is 344, 344, 343, and 343 for the KIDS-A, KIDS-B, KIDS-C, and KIDS-D models, respectively. The experimental data are given in Refs. [26, 27]. The deviation 0.032-0.038 fm is comparable to 0.032 fm obtained with the DRHBc calculation [28]. We find notable deviations in several regions. Except for light \(N\sim Z\) nuclei, one sees an appreciable discrepancy in the \(Z\sim 40,84\), and 96 isotopes. The charge radius of the Sr, Zr, and Mo isotopes around \(N=60\) is calculated to be smaller than the measured one. This is because the calculations produce a spherical or weakly-deformed configuration for the ground state, while a largely-deformed configuration is suggested ex Figure 1: Binding energy residuals between the KIDS results and experiment for 632, 630, 628, and 623 even-even nuclei for the KIDS-A, KIDS-B, KIDS-C, and KIDS-D models, respectively. The residuals are calculated only for the bound nuclei in each model among the available experimental data. Black lines denote the results for O, Ca, Ni, Sn, and Pb isotopes, respectively. perimentally [29; 30]. The calculated charge radius of the Po isotopes with \(N=108\) and 110 is also smaller than the measured value. The calculation produces a tiny oblate deformation with \(|\beta_{2}|<0.1\) whereas the SLy4 model predicts the oblate deformation with \(|\beta_{2}|\sim 0.2\)[31]. On the other hand, the calculated charge radius of the Cm isotopes with \(N=146\)-152 is larger than the measured one. The deviation is similar to the calculation with the PC-PK1 model [28]. ### Deformation Stepping away from the magic numbers, the deformation appears and then develops by increasing the neutron or proton number. Existing EDF calculations have described well the evolution of deformation [31; 32; 28]. The present KIDS functionals, where deformed nuclei are not included in determining the coupling constants, also describe the onset and the development of deformation including the sign of quadrupole deformation, as shown in Fig. 3. The deformation parameter here is defined as \[\beta_{2,p}=\sqrt{\frac{\pi}{5}}\frac{Q_{2,p}}{r_{p}^{2}}, \tag{5}\] where \(Q_{2,p}\) is the quadrupole moment of protons. A gradual change of deformation is produced not only in the Nd isotopes, which we have already discussed in our previous study [18], but in other lanthanides. The disappearance of the magic numbers and the appearance of new magic numbers in exotic nuclei have been discussed both experimentally and theoretically [33]. The KIDS models give a spherical configuration for the neutron-rich nuclei with \(N=20\) as other EDF models do [31; 28; 32] in contrast to the measurements. For \(N=28\), the KIDS models describe well the evolution of shape from the oblate to prolate deformations toward a neutron drip line as in Refs. [34; 35]. In a neutron-deficient side, an oblate configuration appears at \(N=28\), and the KIDS-A and B models produce the oblate deformation in \({}^{56}\)Ni. 
A possible deformation of the Sn isotopes in the very neutron-rich region around \(N=100\) is predicted by some calculations [31; 32]. However, the present calculation using the KIDS models predicts the spherical shape for all the Sn isotopes. The KIDS-A–D models predict the breaking of the \(N=50\) and \(82\) spherical magic numbers near the drip line, while the SLy4 gives the spherical configuration [31]. An oblate configuration appears in the ground state at \(N\sim 50\), and a prolate configuration shows up at \(N\sim 82\). The magnitude of deformation is the largest, and the deformed region is wide, in the KIDS-A model. As an example, we show the potential energy surface (PES) of the doubly-magic nucleus \({}^{78}\)Ni in Fig. 4. The ground state is soft against the quadrupole deformation with the KIDS models compared with the SLy4 functional. The KIDS-A model gives the softest PES.

Figure 3: Calculated quadrupole deformation \(\beta_{2,p}\) for bound nuclei obtained by employing the KIDS-A–D models.

Figure 4: Calculated PES of \({}^{78}\)Ni using the KIDS-A–D models. The result obtained by using the SLy4 functional is included.

### Drip line According to Fig. 3, the rare-earth nuclei with \(60\leq Z\leq 70\) and \(90\leq N\leq 108\) are prolately deformed. Thus, the structure change is gradual, and one can expect to see a global isospin dependence. Then, we show in Fig. 5 two-neutron separation energies \(S_{2n}\). The KIDS models overestimate the experimental values of \(S_{2n}\) and show a smooth decrease as the neutron number increases. Note that the drip line has been confirmed up to the Ne isotopes [38]. The present KIDS models overestimate the position of the drip line even in the light region. Therefore, the neutron drip line of the rare-earth isotopes predicted by the KIDS models would be located on a more neutron-rich side than its actual location in nature. In our previous analysis of the Nd isotopes, we found that the isotopic dependence of \(S_{2n}\) is grouped into two: Group 1 and Group 2 [18]; the KIDS-A–D models belong to Group 2. For the \(Z\sim 70\) isotopes, they show a similar isotopic dependence among Group 2. As the proton number decreases, one can see that KIDS-C and D give a lower \(S_{2n}\) value than KIDS-A and B do in neutron-rich nuclei. Figure 6 depicts the predictions for the neutron and proton drip lines. The drip line is determined by looking at the sign change of the chemical potential. In spite of the uncertainty of the symmetry energy, the proton drip lines are similarly predicted by the models. When the drip positions differ between the models, the neutron numbers differ only by two at most. Therefore, the models practically predict identical results for the proton drip line. The reason why the proton drip line is insensitive to the symmetry energy is analyzed well with the semi-empirical mass formula in Ref. [39]. The KIDS models also agree well with the experimental data. The neutron drip line is located in a more neutron-rich region in the order KIDS-A \(\gtrsim\) KIDS-B \(>\) KIDS-C \(\gtrsim\) KIDS-D. The order is the same as the magnitude of the parameter \(\eta_{\tau}=(-K_{\tau}L^{5})^{1/6}\), which has been proposed for characterizing the structure of low-mass neutron stars [40]. The \(\eta_{\tau}\) value is 89.8, 80.7, 78.6, and 66.0 MeV for KIDS-A, B, C, and D, respectively.
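The ordering quoted above follows directly from the definition of \(\eta_{\tau}\); the sketch below simply evaluates \(\eta_{\tau}=(-K_{\tau}L^{5})^{1/6}\) for placeholder values of \(K_{\tau}\) and \(L\), since the actual nuclear-matter parameters of KIDS-A–D are not listed in this section.

```python
def eta_tau(k_tau, slope_l):
    """eta_tau = (-K_tau * L^5)^(1/6) in MeV, with K_tau < 0 and L given in MeV (Ref. [40])."""
    return (-k_tau * slope_l ** 5) ** (1.0 / 6.0)

# Hypothetical (K_tau, L) pairs in MeV, for illustration only.
for k_tau, slope_l in [(-420.0, 70.0), (-380.0, 60.0), (-350.0, 50.0)]:
    print(k_tau, slope_l, round(eta_tau(k_tau, slope_l), 1))
```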
Looking into more detail, one can see some features specific to each model. As a general trend, the difference in the position of the neutron drip line between models becomes large with the increase of the proton number. However, in certain intervals, the difference in the neutron number between the models becomes four or less. Those small uncertainty regions are located at \(Z=\) (8, 10, 12, 14), (22, 24), (44, 46, 48), and (68, 70, 72, 74), corresponding to \(N\sim 28\), 64, 126, and 184. Small uncertainty regions are also obtained in the other EDFs. In Ref. [41], very small uncertainties occur at \(N=126\) and 184, and the reason is attributed to the spherical shell closure at these magic numbers. We obtain similar small uncertainties around \(N=28\) and 64, as well as \(N=126\) and 184.

Figure 5: Two-neutron separation energy \(S_{2n}\) of the rare earth isotopes with \(Z=60\)–70. The experimental data for JYFLTRAP and CARIBU are obtained from [36; 37], respectively.

## IV Summary The KIDS-A–D functionals were constructed based on the NS observations. It is noted that both nuclear matter properties and nuclear data are used in determining the model parameters, and the number of nuclear data used is much smaller than in the conventional models. The motivation for this different fitting scheme is to obtain a framework that is accurate and applicable to both infinite nuclear matter and finite nuclei. We have employed these functionals, augmented by the mixed-type pairing-density functional, to describe the properties of even-even nuclei across the nuclear chart. We have analyzed the results for the total binding energy, charge radius, quadrupole deformation, and the neutron drip line. The root-mean-square deviation for about 600 nuclei from the AME2020 data is as large as \(\sim 5\) MeV, which is comparable to widely-used nuclear EDFs but is larger than recently developed mass models. We have obtained an accuracy for the charge radii similar to that of the existing functionals. The appearance and development of nuclear deformation in open-shell nuclei are well described even though no deformed nuclei were considered in the construction of the KIDS functional. We have found that the location of the neutron drip line follows the ordering of the nuclear-matter parameter \(\eta_{\tau}=(-K_{\tau}L^{5})^{1/6}\) characterizing the low-mass NS. The similarity of the RMSD values to the widely-used models indicates that a unified description of nuclear matter and finite nuclei is accessible in the KIDS framework. The results pose a challenge to the KIDS functional as to whether it can achieve an accuracy comparable to the existing globally-fitted EDF models. ###### Acknowledgements. This work was supported by the NRF research Grants (No. 2018R1A5A1025563 and No. 2023R1A2C1003177), JSPS KAKENHI (Grants No. JP19K03824, No. JP19K03872, No. JP19KK0343, and No. JP20K03964), and the JSPS/NRF/NSFC A3 Foresight Program "Nuclear Physics in the 21st Century."
2301.02849
Temperature and density effects on the two-nucleon momentum correlation function from excited single nuclei
Two-nucleon momentum correlation functions are investigated for different single thermal sources at given initial temperature $(T)$ and density $(\rho)$. To this end, the space-time evolutions of various single excited nuclei at $T$ $= 1 - 20$ $MeV$ and $\rho$ = 0.2 - 1.2 $\rho_0$ are simulated by using the thermal isospin-dependent quantum molecular dynamics $(ThIQMD)$ model. Momentum correlation functions of identical proton-pairs ($C_{pp}(q)$) or neutron-pairs ($C_{nn}(q)$) at small relative momenta are calculated by $Lednick\acute{y}$ and $Lyuboshitz$ analytical method. The results illustrate that $C_{pp}(q)$ and $C_{nn}(q)$ are sensitive to the source size ($A$) at lower $T$ or higher $\rho$, but almost not at higher $T$ or lower $\rho$. And the sensitivities become stronger for smaller source. Moreover, the $T$, $\rho$ and $A$ dependencies of the Gaussian source radii are also extracted by fitting the two-proton momentum correlation functions, and the results are consistent with the above conclusions.
Ting-Ting Wang, Yu-Gang Ma, De-Qing Fang, Huan-Ling Liu
2023-01-07T13:30:58Z
http://arxiv.org/abs/2301.02849v1
Temperature and density effects on the two-nucleon momentum correlation function from excited single nuclei ###### Abstract Two-nucleon momentum correlation functions are investigated for different single thermal sources at given initial temperature (\(T\)) and density (\(\rho\)). To this end, the space-time evolutions of various single excited nuclei at \(T=1-20~{}MeV\) and \(\rho=0.2\) - \(1.2~{}\rho_{0}\) are simulated by using the thermal isospin-dependent quantum molecular dynamics (\(ThIQMD\)) model. Momentum correlation functions of identical proton-pairs (\(C_{pp}(q)\)) or neutron-pairs (\(C_{nn}(q)\)) at small relative momenta are calculated by \(Lednicky\) and \(Lyuboshitz\) analytical method. The results illustrate that \(C_{pp}(q)\) and \(C_{nn}(q)\) are sensitive to the source size (\(A\)) at lower \(T\) or higher \(\rho\), but almost not at higher \(T\) or lower \(\rho\). And the sensitivities become stronger for smaller source. Moreover, the \(T\), \(\rho\) and \(A\) dependencies of the Gaussian source radii are also extracted by fitting the two-proton momentum correlation functions, and the results are consistent with the above conclusions. ## I Introduction The properties of nuclear matter are among the most interesting topics in heavy-ion physics [1; 2; 3; 4], and much work has been done around zero temperature, including on the nuclear equation of state (\(EOS\)). However, the studies on properties of nuclear matter at finite temperatures are relatively limited. Many previous works mainly focus on the temperature dependence of hot nuclear matter and the nuclear liquid-gas phase transition (\(LGPT\)) [5; 6; 7; 8; 9; 10; 11; 12; 13; 14], the ratio of shear viscosity to entropy density (\(\eta/s\)) [15; 16; 17; 18; 19], as well as the nuclear giant dipole resonance [20; 21; 22], etc. Among the above works, the relationship between the phase transition temperature and the source size has been investigated [5]. In Ref. [5], the finite-size scaling effects on nuclear liquid-gas phase transition probes are investigated by studying de-excitation processes of thermal sources with the isospin-dependent quantum molecular dynamics model (IQMD). Several probes, including the total multiplicity derivative, the second moment parameter, the intermediate-mass-fragment multiplicity, Fisher's power-law exponent, as well as the nuclear Zipf's-law exponent of Ma [9], were explored, and the phase transition temperatures were then obtained. Recently, deep neural networks have also been used to determine the nuclear liquid-gas phase transition [23] and to estimate the temperature of excited nuclei by the charge multiplicity distribution of emitted fragments [24]. The latter work proposed that the charge multiplicity distribution can be used as a thermometer of heavy-ion collisions. Considering that the intermediate state at high temperature and density in the evolution process of nuclear reactions cannot be directly measured, one usually explores the properties of nuclear matter and the dynamical description of heavy-ion collisions through the analysis of the final-state products. As is well known, the two-particle momentum correlation function in the final state has been extensively used as a probe of the space-time properties and characteristics of the emission source [25; 26; 27]. The two-proton momentum correlation function has been explored systematically in many experiments as well as with different models; several reviews can be found in Refs. [28; 29; 30; 31].
In various studies on the momentum correlation function, the impacts of the impact parameter, the total momentum of nucleon pairs, the isospin of the emission source, the nuclear symmetry energy, the nuclear equation of state (\(EOS\)) as well as the in-medium nucleon-nucleon cross section have been discussed in the literature [32; 33; 34; 35; 36; 37; 38]. Moreover, nuclear structure effects were also carefully investigated, such as the effects of the binding energy and separation energy of the nucleus [39], the density distribution of valence neutrons in neutron-rich nuclei [40], as well as the high-momentum tail of the nucleon momentum distribution [41], etc. The two-proton momentum correlation function was also constructed in few-body reactions as well as in collisions induced by \(\alpha\)-clustered nuclei [42; 43; 44; 45; 46]. In addition, the momentum correlation function between two light charged particles also offers a unique tool to investigate the dynamical expansion of the reaction zone [38]. Here we extend the momentum correlation method of final-state interaction to study the space-time information of finite-temperature nuclear systems with different initial densities. The purpose of the present paper is to systematically investigate the relationship between two-particle momentum correlation functions and system parameters, such as the source temperature, density, and system size, in the framework of the thermal isospin-dependent quantum molecular dynamics (\(ThIQMD\)) model [14; 17; 5]. In addition, the Gaussian source radii are quantitatively extracted by fitting the momentum correlation function distributions under a Gaussian source assumption. In this article, the evolution process of excited nuclear sources at given initial temperatures varying from \(1\)\(MeV\) to \(20\)\(MeV\) is studied. The present work selects six different nuclear systems with a similar ratio of neutron to proton numbers, \(i.e.\), \(N/Z\sim 1.3\), which include the \((A,Z)=(36,15)\), \((52,24)\), \((80,33)\), \((100,45)\), \((112,50)\), and \((129,54)\) nuclei. Then, the Lednick\(\acute{y}\)-Lyuboshitz theoretical approach [47] is applied for calculating the two-particle momentum correlation functions, which are constructed based on the phase-space information from the evolution process of single excited nuclear sources given by the \(ThIQMD\) model. The rest of this article is organized as follows. In Section \(II\), we first describe the thermal isospin-dependent quantum molecular dynamics model [14; 17] and then briefly introduce the momentum correlation technique using the \(Lednicky\) and \(Lyuboshitz\) analytical formalism. In Section \(III\), we show the results of the \(ThIQMD\) plus the \(LL\) method for the source-temperature dependence of the two-particle momentum correlation function. The two-particle momentum correlation functions of different system sizes at different initial densities are systematically discussed. A detailed analysis of the extracted Gaussian source radii is presented for different source temperatures and densities. Furthermore, the momentum correlation function of two neutrons is also analyzed. Finally, Section \(IV\) gives a summary of the paper. ## II Models and Formalism ### The ThIQMD Model In this paper, the thermal isospin-dependent Quantum Molecular Dynamics transport model is used as the event generator, which has been applied successfully to study the \(LGPT\)[5; 24]. In the following discussion, we introduce this model briefly.
As well known, isospin-dependent Quantum Molecular Dynamics (\(IQMD\)) model was used to describe the collision process between two nuclei. The Quantum Molecular Dynamics transport model is a \(n\)-body transport theory, which describes heavy-ion reaction dynamics from intermediate to relativistic energies [48; 49; 50; 51]. In the present work, we use a single excited source in the \(ThIQMD\) which is different from the traditional \(IQMD\). Usually, the ground state of the initial nucleus is considered to be \(T=0\)\(MeV\) in the traditional \(IQMD\) model. However, the \(ThIQMD\) model developed by Fang, Ma, and Zhou in Ref. [17] is used to simulate single thermal source at different temperatures and densities. The main parts of \(QMD\) transport model include the following issues: the initialization of the projectile and the target, nucleon propagation under the effective potential, the collisions between the nucleons in the nuclear medium and the Pauli blocking effect. In the \(ThIQMD\), instead of using the Fermi-Dirac distribution for \(T=0\)\(MeV\) with the nucleon's maximum momentum limited by \(P_{p}^{i}(\vec{r})=\hbar\left[3\pi^{2}\rho_{i}(\vec{r})\right]^{1/3}\), the initial momentum of nucleons is sampled by the Fermi-Dirac distribution at finite temperature: \[n\left(e_{k}\right)=\frac{g\left(e_{k}\right)}{e^{\frac{e_{k}-\mu_{i}}{T}}+1}, \tag{1}\] where the kinetic energy \(e_{k}=\frac{p^{2}}{2m}\), \(p\) and \(m\) is the momentum and mass of the nucleon, respectively. \(g\left(e_{k}\right)=\frac{V}{2\pi^{2}}\left(\frac{2m}{\hbar^{2}}\right)^{\frac {3}{2}}\sqrt{e_{k}}\) represents the state density with the volume of the source \(V=\frac{4}{3}\pi r^{3}\) where \(r=r_{V}A^{\frac{1}{3}}\) (\(r_{V}\) is a parameter to adjust the initial density). In addition, the chemical potential \(\mu_{i}\) is determined by the following equation: \[\frac{1}{2\pi^{2}}\left(\frac{2m}{\hbar^{2}}\right)^{\frac{3}{2}}\int_{0}^{ \infty}\frac{\sqrt{e_{k}}}{e^{\frac{e_{k}-\mu_{i}}{T}}+1}de_{k}=\rho_{i}. \tag{2}\] where \(i=n\) or \(p\) refer to the neutron or proton. In the \(ThIQMD\) model, the interaction potential is also represented by the form as follows: \[U=U_{Sky}+U_{Coul}+U_{Yuk}+U_{Sym}+U_{MDI}, \tag{3}\] where \(U_{Sky}\), \(U_{Coul}\), \(U_{Yuk}\), \(U_{Sym}\), and \(U_{MDI}\) are the density-dependent Skyrme potential, the Coulomb potential, the surface Yukawa potential, the isospin asymmetry potential, and the momentum-dependent interaction, respectively. Among these potentials, the Skyrme potential, the Coulomb potential and the momentum-dependent interaction can be written as follows: \[U_{Sky}=\alpha(\frac{\rho}{\rho_{0}})+\beta(\frac{\rho}{\rho_{0}})^{\gamma}, \tag{4}\] where \(\rho\) and \(\rho_{\circ}\) are total nucleon density and its normal value at the ground state, \(i.e.\), \(0.16\)\(fm^{-3}\), respectively. The above parameters \(\alpha\), \(\beta\), and \(\gamma\) with an incompressibility parameter \(K\) are related to the nuclear equation of state [52; 53; 54; 55; 56; 57; 58]. \[U_{Sym}=C_{sym}\frac{\left(\rho_{n}-\rho_{p}\right)}{\rho_{0}}\tau_{z}, \tag{5}\] \[U_{Coul}=\frac{1}{2}\left(1-\tau_{z}\right)V_{c}, \tag{6}\] where \(\rho_{n}\) and \(\rho_{p}\) are neutron and proton densities, respectively, \(\tau_{z}\) is the \(z\)-th component of the isospin degree of freedom for the nucleon, which equals \(1\) or \(-1\) for a neutron or proton, respectively, and \(C_{sym}\) is the symmetry energy coefficient. 
\(U_{Coul}\) is the Coulomb potential where \(V_{c}\) is its parameter for protons. \[U_{MDI}=\delta\cdot\ln^{2}\left(\epsilon\cdot\left(\Delta p\right)^{2}+1\right) \cdot\frac{\rho}{\rho_{0}}, \tag{7}\] where \(\Delta p\) is the relative momentum, and \(\delta\) and \(\epsilon\) can be found in Refs. [48; 49]. The values of the above potential parameters are all listed in Table 1. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(\alpha\) & \(\beta\) & \(\gamma\) & \(K\) & \(\delta\) & \(\epsilon\) \\ (\(MeV\)) & (\(MeV\)) & & (\(MeV\)) & (\(MeV\)) & ((\(GeV/c\))\({}^{-2}\)) \\ \hline \(-390.1\) & 320.3 & 1.14 & 200 & 1.57 & 500 \\ \hline \hline \end{tabular} \end{table} Table 1: The values of the interaction potential parameters. ### Lednick\(\acute{y}\) and Lyuboshitz analytical formalism Next, we briefly review the method for the two-particle momentum correlation function proposed by Lednick\(\acute{y}\) and Lyuboshitz [47; 59; 60]. The momentum correlation technique in nuclear collisions is based on the following principle: when two particles are emitted at small relative momentum, their momentum correlation is determined by the space-time characteristics of the production processes owing to the effects of quantum statistics (\(QS\)) and final-state interactions (\(FSI\)) [61; 62]. Therefore, the two-particle momentum correlation function can be expressed through a square of the symmetrized Bethe-Salpeter amplitude averaged over the four coordinates of the emitted particles and the total spin of the two-particle system, which represents the continuous spectrum of the two-particle state. In this theoretical approach, the final-state interaction of the particle pairs is assumed to be independent of the production process. According to the conditions in Ref. [63], the correlation function of two particles can be written as: \[\mathbf{C}\left(\mathbf{k}^{*}\right)=\frac{\int\mathbf{S}\left(\mathbf{r}^{ *},\mathbf{k}^{*}\right)\left|\Psi_{\mathbf{k}^{*}}\left(\mathbf{r}^{*} \right)\right|^{2}d^{4}\mathbf{r}^{*}}{\int\mathbf{S}\left(\mathbf{r}^{*}, \mathbf{k}^{*}\right)d^{4}\mathbf{r}^{*}}, \tag{8}\] where \(\mathbf{r}^{*}=\mathbf{x}_{1}-\mathbf{x}_{2}\) is the relative distance of the two particles in the pair rest frame (\(PRF\)) at their kinetic freeze-out, \(\mathbf{k}^{*}\) is half of the relative momentum between the two particles in the \(PRF\), \(\mathbf{S}\left(\mathbf{r}^{*},\mathbf{k}^{*}\right)\) is the probability to emit a particle pair with given \(\mathbf{r}^{*}\) and \(\mathbf{k}^{*}\), _i.e._, the source emission function, and \(\Psi_{\mathbf{k}^{*}}\left(\mathbf{r}^{*}\right)\) is the equal-time (\(t^{*}=0\)) reduced Bethe-Salpeter amplitude, which can be approximated by the outer solution of the scattering problem in the \(PRF\)[64; 65]. This approximation is valid under the condition \(\left|t^{*}\right|\ll m\left(r^{*}\right)^{2}\), which is well fulfilled for sufficiently heavy particles like protons or kaons and reasonably fulfilled even for pions [59]. In this limit, the asymptotic solution of the wave function of the two charged particles approximately takes the form: \[\Psi_{\mathbf{k}^{*}}\left(\mathbf{r}^{*}\right)=e^{i\delta_{c}} \sqrt{A_{c}\left(\lambda\right)}\times\\ \left[e^{-i\mathbf{k}^{*}\mathbf{r}^{*}}F\left(-i\lambda,1,i \xi\right)+f_{c}\left(k^{*}\right)\frac{\tilde{G}\left(\rho,\lambda\right)}{r ^{*}}\right]. \tag{9}\] In the above equation, \(\delta_{c}=arg\)\(\Gamma\left(1+i\lambda\right)\) is the Coulomb \(s\)-wave phase shift with \(\lambda=\left(k^{*}a_{c}\right)^{-1}\) where \(a_{c}\) is the two-particle Bohr radius, \(A_{c}\left(\lambda\right)=2\pi\lambda\left[\exp\left(2\pi\lambda\right)-1 \right]^{-1}\) is the Coulomb penetration factor, and its positive (negative) value corresponds to the repulsion (attraction).
\(\tilde{G}\left(\rho,\lambda\right)=\sqrt{A_{c}\left(\lambda\right)}\left[G_{0 }\left(\rho,\lambda\right)+iF_{0}\left(\rho,\lambda\right)\right]\) is a combination of regular (\(F_{0}\)) and singular (\(G_{0}\)) \(s\)-wave Coulomb functions [60; 59]. \(F\left(-i\lambda,1,i\xi\right)=1+\left(-i\lambda\right)\left(i\xi\right)/1!^{2}+\left(-i\lambda\right)\left(-i\lambda+1\right)\left(i\xi\right)^{2}/2! ^{2}+\cdots\) is the confluent hypergeometric function with \(\xi=\mathbf{k}^{*}\mathbf{r}^{*}+\rho\), \(\rho=k^{*}r^{*}\). \[f_{c}\left(k^{*}\right)=\left[K_{c}\left(k^{*}\right)-\frac{2}{a_{c}}h\left( \lambda\right)-ik^{*}A_{c}\left(\lambda\right)\right]^{-1} \tag{10}\] is the \(s\)-wave scattering amplitude renormalizied by the long-range Coulomb interaction, with \(h\left(\lambda\right)=\lambda^{2}\sum_{n=1}^{\infty}\left[n\left(n^{2}+\lambda ^{2}\right)\right]^{-1}-C-\ln\left[\lambda\right]\) where \(C=0.5772\) is the Euler constant. \(K_{c}\left(k^{*}\right)=\frac{1}{f_{0}}+\frac{1}{2}d_{0}k^{*^{2}}+Pk^{*^{4}}+\cdots\) is the effective range function, where \(d_{0}\) is the effective radius of the strong interaction, \(f_{0}\) is the scattering length and \(P\) is the shape parameter. The parameters of the effective range function are important parameters characterizing the essential properties of the \(FSI\), and can be extracted from the correlation function measured experimentally [38; 65; 66; 67]. For \(n\)-\(n\) momentum correlation functions which include uncharged particle, only the short-range particle interaction works. For \(p\)-\(p\) momentum correlation functions, both the Coulomb interaction and the short-range particle interaction dominated by the \(s\)-wave interaction are taken into account. ## III Analysis and discussion Within the framework of the thermal isospin-dependent quantum molecular dynamics model [17; 5; 14], the two-particle momentum correlation functions are calculated by using the phase-space information from the freeze-out stage of the excited nuclear source at an initial temperature varying from 1 \(MeV\) to 20 \(MeV\) and/or density varying from \(\rho=0.2\rho_{0}\) to \(1.2\rho_{0}\). This work performs calculations for thermal source systems with different mass including \(\left(A,Z\right)~{}=~{}(36,15),(52,24),(80,33),(100,45),(112,50)\), and \((129,54)\). We firstly calculated the proton-proton momentum correlation function \(C_{pp}(q)\) for finite-size systems at temperatures ranging from 1 to 20 \(MeV\). In Fig. 1, the results of \(C_{pp}(q)\) for temperature of 2, 4, 6, 8, 10 and 12 \(MeV\) at different values of density (\(0.2\rho_{0}\) - \(1.2\rho_{0}\)) are presented. The proton-proton momentum correlation function exhibits a peak at relative momentum \(q\) = 20 \(MeV\)/\(c\), which is due to the strong final-state \(s\)-wave attraction together with the suppression at lower relative momentum as a result of Coulomb repulsion and the antisymmetrization wave function between two protons. The shape of the two-proton momentum correlation functions is consistent with many previous experimental data in heavy-ion collisions, eg. Ref. [68]. For protons which are emitted from the lower temperature (\(T<8~{}MeV\)) source in Fig. 1 (a)-(c), the general trend is very similar. 
The figure shows that \(C_{pp}(q)\) increases as \(\rho\) increases for fixed \(T\) (\(T<8~{}MeV\)). The increase of the density indicates that the geometrical size becomes smaller for a source with fixed neutron and proton numbers, which makes the strength of the momentum correlation function stronger. Finally, the \(p\)-\(p\) momentum correlation function becomes almost one at \(q>60~{}MeV/c\). For larger \(T\) (\(T>8\)\(MeV\)) in Fig. 1 (d)-(f), the difference of \(C_{pp}(q)\) between different densities becomes smaller. From Fig. 1, it is found that \(C_{pp}(q)\) remains almost the same above \(T=8\)\(MeV\) for different densities, and the \(p\)-\(p\) momentum correlation function becomes almost unique above approximately \(q=30~{}MeV/c\). It indicates that the emitted protons are not affected by the change of density when the source temperature is beyond a certain value (\(T\approx 8~{}MeV\) in the present work). In order to understand which one of the two factors (\(i.e.\), temperature and density) has the larger influence, the two-particle momentum correlation in Fig. 2 is plotted by exchanging the two input parameters. From Fig. 2, we can intuitively observe the dependence of the two-particle momentum correlation on the source temperature. The dependence of \(C_{pp}(q)\) on the source temperature is stronger than that on density. In other words, \(C_{pp}(q)\) is more sensitive to \(T\) than to the density \(\rho\). In addition, for larger \(\rho\) from Fig. 2 (a) to (f), the difference of \(C_{pp}(q)\) between different densities becomes bigger. Next, we explore whether the phenomenon exists in momentum correlation functions for uncharged-particle pairs. Fig. 3 presents the neutron-neutron momentum correlation functions (\(C_{nn}(q)\)) for temperatures of 2, 4, 6, 8, 10 and 12 \(MeV\) at different values of density, respectively. The neutron-neutron momentum correlation function peaks at \(q\approx 0~{}MeV/c\), caused by the \(s\)-wave attraction. Although \(C_{nn}(q)\) has a different shape compared with the \(p\)-\(p\) momentum correlation function, it has a similar dependence on the source temperature and density. The similar trends in \(C_{pp}(q)\) and \(C_{nn}(q)\) show the close emission mechanisms in the evolution process. Fig. 4 shows the results of a larger system at different source temperatures and densities, and a similar behavior of \(C_{pp}(q)\) is demonstrated. We also observe that the proton-proton momentum correlation in the larger-size system (\((A,Z)=(129,54)\)) in Fig. 4 becomes weaker in comparison with the smaller-size source (\((A,Z)=(36,15)\)) in Fig. 1. In view of the above phenomenon, Fig. 5 describes the relationship between system size and momentum correlation function in more detail. The decrease of \(C_{pp}(q)\) with increasing system size for a fixed value of \(T\) or \(\rho\) can be clearly seen in Fig. 5 (g), which is consistent with the previous results of Gaussian sources [37; 38; 69].
Figure 1: The proton-proton momentum correlation function (\(C_{pp}(q)\)) at different densities (\(i.e.\), 0.2\(\rho_{0}\), 0.4\(\rho_{0}\), 0.6\(\rho_{0}\), 0.8\(\rho_{0}\), 1.0\(\rho_{0}\), and 1.2\(\rho_{0}\)) for the smaller nucleus (\(A\)=36, \(Z\)=15) with fixed source-temperatures \(T=2~{}MeV\) (a), \(4~{}MeV\) (b), 6 \(MeV\) (c), 8 \(MeV\) (d), 10 \(MeV\) (e) and 12 \(MeV\) (f), respectively. The freeze-out time is taken to be 200 \(fm/c\).

In Fig. 5 (a)-(i), with larger temperature or lower density, the difference of \(C_{pp}(q)\) between different \(T\) or \(\rho\) becomes smaller, respectively. The Gaussian source radii are extracted for further discussion later in this article. From the above plots, we can extract \(C_{max}(q)\), \(i.e.\), the maximum value of \(C_{pp}(q)\), as well as the full width at half maximum (\(FWHM\)) of the \(C_{pp}(q)\) distribution, \(i.e.\), at \(C_{pp}(q)=[C_{max}(q)-1]/2\). The source-temperature \(T\) dependence of \(C_{max}(q)\) and \(FWHM\) for the proton-proton momentum correlation function at different densities is given in Fig. 6. As shown in Fig. 6 (a) and (b), both \(C_{max}(q)\) and \(FWHM\) decrease gradually with increasing \(T\). In addition, both of them increase gradually with density. At high temperature, the change of \(C_{max}(q)\) and \(FWHM\) is very small and is not plotted in the figure. Of course, the behavior of \(C_{max}(q)\) and \(FWHM\) with \(T\) and \(\rho\) can also be clearly seen in Fig. 2, and \(C_{max}(q)\) and \(FWHM\) are generally inversely proportional to the Gaussian radius \(r_{0}\), as shown later. Similarly, the system-size \(A\) dependence of \(C_{max}(q)\) and \(FWHM\) for the proton-proton momentum correlation function at \(T=2~{}MeV\) and \(\rho=0.6\rho_{0}\) is shown in Fig. 7. The dependence of \(C_{max}(q)\) and \(FWHM\) on the system size \(A\) is quite similar to the temperature dependence in Fig. 6. The \(C_{max}(q)\) and \(FWHM\) values become smaller for larger systems. Fig. 8 shows the source-temperature, density, and system-size dependence of the Gaussian radii extracted from two-particle momentum correlation functions, where panels (a) and (b) are results with the smaller source size and the larger source size, respectively. The radii are extracted by a Gaussian source assumption, \(i.e.\), \(S(r)\approx\exp[-r^{2}/(4r_{0}^{2})]\), where \(r_{0}\) is the Gaussian source radius from the proton-proton momentum correlation functions. The theoretical calculations for \(C_{pp}(q)\) were performed by using the \(Lednicky\) and \(Lyuboshitz\) analytical method. The best-fitting radius is judged by finding the minimum of the reduced chi-square between the \(ThIQMD\) calculations and the Gaussian source assumption. Since the effect of the strong \(FSI\) scales as \(f_{c}\left(k^{*}\right)/r^{*}\) in Eq. (9), one may read the sensitivity of the correlation function to the temperature \(T\), density \(\rho\) and mass number \(A\) from their effects on the Gaussian radius \(r_{0}\). One may observe a linear dependence on these parameters up to \(T\approx 8~{}MeV\) and then a loss of sensitivity in a plateau region at higher temperatures in Fig. 8. As the density decreases, the decreasing speed of the Gaussian radius of the small system is larger than that of the larger system. Fig. 9 shows how the Gaussian radius of the different system sizes varies with the temperature in panels (a)-(c) or the density in panels (d)-(f).
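Before turning to Fig. 9 in more detail, a minimal sketch of the radius-extraction step described above is given below; `cpp_model(q, r0)` stands for the Lednick\(\acute{y}\)–Lyuboshitz correlation function of a Gaussian source with radius `r0` (not implemented here), and `cpp_sim` and `sigma` denote hypothetical simulated values and their uncertainties.

```python
import numpy as np

def best_gaussian_radius(q, cpp_sim, sigma, cpp_model, radii):
    """Scan trial radii r0 and return the one minimizing the reduced chi-square
    between the simulated C_pp(q) and the Gaussian-source model prediction."""
    ndof = max(len(q) - 1, 1)  # one fitted parameter (r0)
    chi2_red = np.array([np.sum(((cpp_sim - cpp_model(q, r0)) / sigma) ** 2) / ndof
                         for r0 in radii])
    i_best = int(np.argmin(chi2_red))
    return radii[i_best], chi2_red[i_best]

# Example call (all inputs hypothetical):
# r0_best, chi2 = best_gaussian_radius(q, cpp_sim, sigma, cpp_model, np.linspace(1.0, 6.0, 101))
```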
Figure 3: The neutron-neutron (\(n\)-\(n\)) momentum correlation functions (\(C_{nn}(q)\)) in the same conditions as Fig. 1.

Figure 4: Same as Fig. 1, but for a larger system (\(A\)=129, \(Z\)=54).

Figure 5: \(C_{pp}(q)\) of different source size systems at fixed temperatures (i.e., from left column to right column, they correspond to \(T=2\), \(4\) and \(6\)\(MeV\), respectively) or fixed densities (i.e., from top row to bottom row, they correspond to \(\rho=0.2\rho_{0}\), \(0.6\rho_{0}\), \(1.0\rho_{0}\), respectively).

The Gaussian source radius is consistent with the system size, \(i.e.\), at higher temperature or larger density, the differences of the Gaussian source radii between different system sizes are bigger in the low-density and low-temperature region, but the difference almost disappears in the opposite conditions. In other words, the sensitivity of the source radii to the system size seems to be different in different regions of temperature and density. For example, the sensitivity is better in the region of lower \(T\) and higher \(\rho\) (Fig. 9(b) and (c)), or it is better in the higher \(T\) region for the lower \(\rho\) (Fig. 9(a)), or it is better in the higher \(\rho\) region for the lower \(T\) (Fig. 9(d)). From the above discussion, it is demonstrated that the strength of the two-particle momentum correlation function is affected by the source temperature, density, and system size. The two-particle momentum correlation function strength is larger for a single source with lower temperature, higher density, or smaller mass number, as shown in Figs. 1-5. Otherwise, the strength becomes smaller. To some extent, the strong correlation between two particles is mainly caused by their close positions in phase space, in both coordinate and momentum. Varying only one of the three condition parameters (temperature, density, and system size): a lower temperature means a smaller momentum space, a higher density means a smaller coordinate space, and a small system size also means a smaller coordinate space to keep the density fixed compared with a large system size. The dependencies of the two-particle momentum correlation function strength on the source temperature, density, and system size could be explained by the change of the phase-space sizes. Two particles emitted from a small phase space will have a strong correlation, and those from a large phase space will have a weak correlation. For example, the increase of the \(C_{pp}(q)\) strength with the increase of the density for a fixed system size could be explained by the decrease of the coordinate space, as shown in Fig. 1 (a). And the small \(C_{pp}(q)\) strength at temperatures higher than \(8~{}MeV\) could be caused by the large momentum space compared with lower temperatures, as shown in Fig. 1 (d-f). The decrease of the \(C_{pp}(q)\) strength with the increase of the system size for a fixed density could also be explained by the increase of the coordinate space, as shown in Fig. 5 (g). Thus it is concluded that the phase-space size for the emitted nucleons has a strong effect on the strength of the two-particle momentum correlation function, which can also be seen in the extracted Gaussian radii shown in Fig. 8.

Figure 6: Source-temperature \(T\) dependencies of \(C_{max}(q)\) (a) and of \(FWHM\) (b) of \(C_{pp}(q)\) distributions at different densities (\(0.2\rho_{0}\) - \(1.2\rho_{0}\)) for the (\(A=35\), \(Z=16\)) system.

Figure 8: Gaussian source radius as a function of temperature at different densities (\(\rho=0.2\rho_{0}\), \(0.4\rho_{0}\), \(0.6\rho_{0}\), \(0.8\rho_{0}\), \(1.0\rho_{0}\), \(1.2\rho_{0}\)) for a fixed source size. Panel (a) and (b) correspond to the smaller source size with (\(A=36\), \(Z=15\)) and the larger source size with (\(A=129\), \(Z=54\)), respectively.

## IV Summary In summary, the two-particle momentum correlation functions for single excited sources are investigated using the Lednick\(\acute{y}\) and Lyuboshitz analytical formalism with the phase-space information at the freeze-out stage for different initial temperatures and densities in the framework of the \(ThIQMD\) transport approach.
We mainly performed a series of studies focusing on the varied effects of the source temperature, density, and system size on the two-particle momentum correlation functions. The results reflect that the shape of the two-proton momentum correlation function is in accordance with the previous experimental data in heavy-ion collisions [68]. At the same time, the trend of the relationship between the two-proton momentum correlation and the system size is consistent with previous simulations [37; 38; 69]. At low source temperature, a larger density makes the two-particle momentum correlation stronger. However, at higher source temperature, the effect almost disappears. Both proton-proton correlations and neutron-neutron correlations show similar responses to temperature and density. This work also shows that the emission source is not much influenced by density above a certain temperature for a single excited source. In the same way, the emission source is only weakly influenced by temperature below a given density for a single excited source. In short, the dependence of the two-particle momentum correlation function on the source temperature, density, and system size could be explained by the change of the coordinate and/or momentum phase-space sizes. In the end, the Gaussian radii are extracted to explore the emission source sizes in single excited systems. The Gaussian radii become larger in the larger systems. The dependence of the extracted Gaussian radius on source temperature and density is consistent with the behavior of the two-proton momentum correlation function as discussed in the text. ###### Acknowledgements. This work was supported in part by the National Natural Science Foundation of China under contract Nos. 11890710, 11890714, 11875066, 11925502, 11961141003, 11935001, 12147101 and 12047514, the Strategic Priority Research Program of CAS under Grant No. XDB34000000, National Key R&D Program of China under Grant No. 2016YFE0100900 and 2018YFE0104600, Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, and the China PostDoctoral Science Foundation under Grant No. 2020M681140.
2302.04968
Elementary Proof of QAOA Convergence
The Quantum Alternating Operator Ansatz (QAOA) and its predecessor, the Quantum Approximate Optimization Algorithm, are among the most widely used quantum algorithms for solving combinatorial optimization problems. However, as there is yet no rigorous proof of convergence for the QAOA, we provide one in this paper. The proof involves retracing the connection between the Quantum Adiabatic Algorithm and the QAOA, and naturally suggests a refined definition of the `phase separator' and `mixer' keywords.
Lennart Binkowski, Gereon Koßmann, Timo Ziegler, René Schwonnek
2023-02-09T22:57:59Z
http://arxiv.org/abs/2302.04968v1
# Elementary Proof of QAOA Convergence ###### Abstract The Quantum Alternating Operator Ansatz (QAOA) and its predecessor, the Quantum Approximate Optimization Algorithm, are among the most widely used quantum algorithms for solving combinatorial optimization problems. However, as there is yet no rigorous proof of convergence for the QAOA, we provide one in this paper. The proof involves retracing the connection between the Quantum Adiabatic Algorithm and the QAOA, and naturally suggests a refined definition of the 'phase separator' and 'mixer' keywords. ## I Introduction In the current era of gate-based noisy quantum computers, the class of _variational quantum algorithms_ (VQAs) is at the center of research. First and foremost, the _quantum approximate optimization algorithm_[1] receives enormous scientific as well as industrial attention. Like many other VQAs, it is developed for the purpose of solving _combinatorial optimization problems_ (COPs) (maximize \(f:\{0,1\}^{n}\to\mathbb{R}\) subject to some constraints) on quantum computers with the aid of classical optimizers. It is, to some extent, a discretized and gate-based version of the _quantum adiabatic algorithm_ (QAA, [2]) which is itself a continuous-time algorithm. The QAA and the closely related _quantum annealing_[3] rely on slowly evolving a quantum system (resp. some external parameters) in order to transition a well-known initial state into some state representing an optimal solution. Due to their analog structure, they are not executable on gate-based architectures, but on _quantum annealers_ (see [4] for an overview) which constitute the second large family of quantum computer architectures. In its original formulation, the quantum approximate optimization algorithm is only suited for unconstrained problems. A common technique for enlarging its scope to constrained problems is _softcoding_ the constraints. That is, the constraints enter the objective function as additional terms, penalizing infeasible inputs. However, for several instances, this approach was observed to produce unfavorable output distributions which suffer from poor optimization quality or feasibility violation (see, e.g., [5, 6, 7]). In order to improve the treatment of constrained problems, Hadfield et al. extended the quantum approximate optimization algorithm to the _quantum alternating operator ansatz_ (QAOA, [8]) which also allows for _hardcoding_ the constraints. That is, the objective function is left unchanged and feasibility preservation is instead enforced strictly. In a nutshell, a QAOA-circuit consists of parametrized _phase separator_ gates \(U_{\text{P}}\) and _mixer_ gates \(U_{\text{M}}\). Both types of gates should preserve feasibility such that - in an ideal setting - feasible states are mapped to feasible states again. Classically and iteratively optimizing the circuit parameters then should yield a good approximation of an optimal solution. This heuristic argument goes through only if every feasible state can be reached: The QAOA-circuit is, given the right parameter values, able to (approximately) produce every feasible state. Typically, the reachability of feasible states only depends on the properties of the mixer \(U_{\text{M}}\). A more or less rigorous proof of why the quantum approximate optimization algorithm should converge for every (unconstrained) COP with only one optimal solution was already given in [1].
This sketch of a proof, in turn, builds on the close connection to the QAA and the underlying principle of adiabatic evolution/quantum annealing (see [9] for mathematical treatment). However, neither is the proof carried out in great mathematical detail, nor does it attempt to be as general as possible. Moreover, since the QAOA comprises similar principles as the quantum approximate optimization algorithm, it stands to reason to extend this result, once suitably formalized, to the QAOA; a task that, surprisingly, has not yet been tackled. With this paper we address this issue and come up with refined definitions for the phase separator and mixer gates which make the connection to the quantum approximate optimization algorithm more visible. First, we prove the convergence of the QAA with suitable initial Hamiltonian and initial state in Section III. The proof is built on the aforementioned proof sketch in [1]. We extract the underlying principles and already obtain a precise definition for a _mixer Hamiltonian_. However, by invoking a version of the adiabatic theorem without _gap_ condition_, we obtain a more general result which does not require the considered optimization problem to have only one single optimal solution. Second, we prove the convergence of the QAOA with suitable initial state in Section IV. For this, we generalize all the properties of the original mixer proposed in the quantum approximate optimization algorithm. We define our versions of _simultaneous_ and _sequential mixers_ which directly make use of the just generalized properties. The convergence proof is then built on the convergence of the QAA instance which admits the respective mixer Hamiltonian as initial Hamiltonian. The underlying idea is again due to Farhi et al., but suitably generalized for constrained problems and sequential mixers. ## II Preliminaries ### Combinatorial Optimization Problems In the following, we restrict to maximization problems, as minimization tasks may be considered analogously. This choice of the optimization direction simply allows us to state the convergence proofs more compactly. A generic COP of _size_\(N\) is of the form \[\max_{\mathbf{z}\in S}f(\mathbf{z}),\quad S\subseteq Z(N), \tag{1}\] where \(Z(N)\) denotes the set of bit strings of length \(N\), \(f:Z(N)\to\mathbb{R}\) is the _objective function_, and \(S\subseteq Z(N)\) is the set of _feasible bit strings_ or the _solution set_. The problem is called _unconstrained_ if \(S=Z(N)\). Moreover, we denote the set of all solutions maximizing \(f\) by \(S_{\max}\). ### Problem Encoding on Quantum Computers In order to treat a COP with the help of quantum computers, the problem first has to be translated into a quantum-mechanical language. The standard encoding procedure identifies each bit string \(\mathbf{z}\) with a computational basis state \(|\mathbf{z}\rangle\) of the \(N\)-qubit space \(\mathcal{H}\coloneqq\mathbb{C}^{2^{N}}\). The classical objective function \(f\) is further considered as an objective Hamiltonian \(C\) via \[C\coloneqq\sum_{\mathbf{z}\in Z(N)}f(\mathbf{z})\,|\mathbf{z}\rangle\!\langle\mathbf{z}|\;. \tag{2}\] In this setting, the (optimal) solution bit strings span the (_optimal_) _solution space_ \[\mathcal{S}_{\max}\coloneqq\mathrm{span}\{|\mathbf{z}\rangle\,:\,\mathbf{z}\in S_{ \max}\}\subseteq\mathcal{S}\coloneqq\mathrm{span}\{|\mathbf{z}\rangle\,:\,\mathbf{z} \in S\}\subseteq\mathcal{H}. \tag{3}\] The maximization task is now equivalent to finding a computational basis state in \(\mathcal{S}_{\max}\). 
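To make the encoding concrete, the following minimal Python sketch (the objective function and its feasible set are hypothetical toy choices, not taken from the paper) builds the diagonal objective Hamiltonian of Eq. (2) and reads off \(\mathcal{S}_{\max}\) as the computational basis states attaining the maximum of \(f\) over \(S\), as in Eq. (3).

```python
import itertools
import numpy as np

N = 3  # number of bits / qubits

def f(z):
    """Hypothetical toy objective on bit strings z in {0,1}^N."""
    return z[0] + z[1] - 2 * z[0] * z[2]

# Hypothetical feasible set S: bit strings with even parity.
S = [z for z in itertools.product((0, 1), repeat=N) if sum(z) % 2 == 0]

# Objective Hamiltonian C = sum_z f(z) |z><z| as a diagonal 2^N x 2^N matrix (Eq. (2)).
dim = 2 ** N
C = np.zeros((dim, dim))
for z in itertools.product((0, 1), repeat=N):
    idx = int("".join(map(str, z)), 2)  # computational-basis index of |z>
    C[idx, idx] = f(z)

# Optimal feasible bit strings; their basis states span S_max (Eq. (3)).
f_max = max(f(z) for z in S)
S_max = [z for z in S if np.isclose(f(z), f_max)]
print(S_max)
```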
By construction, \(\mathcal{S}_{\max}\) is the eigenspace of \(C|_{\mathcal{S}}\) corresponding to its largest eigenvalue. In the following, we will slightly relax the quantum optimization task as we will consider any highest energy state of \(C|_{\mathcal{S}}\) an optimal solution. ### Quantum Adiabatic Algorithm In a nutshell, the continuous-time _quantum adiabatic algorithm_ (QAA, [2]) tackles the eigenstate search via quasi-adiabatic evolution of an initial state \(|\iota\rangle\) with respect to a time-dependent Hamiltonian \(H(t)\) which interpolates between an initial Hamiltonian \(H_{\mathrm{I}}\) and the objective Hamiltonian \(C\). In case of a maximization task, \(|\iota\rangle\) should be a highest energy state of \(H_{\mathrm{I}}\). The interpolating Hamiltonian is typically given by the convex combination [10] \[H(t)=H_{\mathrm{lin}(H_{\mathrm{I}},C)}(t)\coloneqq(1-t)H_{\mathrm{I}}+tC, \quad t\in[0,1]. \tag{4}\] The evolution speed is controlled via a parameter \(T>0\): The actual time evolution is with respect to \(H(s/T)\), \(s\in[0,T]\). The intuition behind the QAA is that evolving a highest energy state of \(H(0)\) sufficiently slowly (i.e., \(T\gg 1\)) yields a highest energy state of \(C\) if the energy levels stay separated. Mathematical rigor is granted by the adiabatic theorems (see Section III). ### Quantum Approximate Optimization Algorithm The _quantum approximate optimization algorithm_[1] can, in some sense, be seen as a discrete version of the QAA with fixed initial state \[\ket{+}\coloneqq\frac{1}{\sqrt{2^{N}}}\sum_{\mathbf{z}\in\mathbb{Z}(N)}\ket{\mathbf{z}} \tag{5}\] and initial Hamiltonian \[B\coloneqq\sum_{n=1}^{N}\sigma_{x}^{(n)}. \tag{6}\] Note that \(\ket{+}\) is the non-degenerate highest energy state of \(B\). \(B\) and \(C\) are incorporated into parametrized gates: \[U_{B}(\beta)\coloneqq e^{-i\beta B}=\prod_{n=1}^{N}e^{-i\beta\sigma_{x}^{(n)} }\quad\text{and}\quad U_{C}(\gamma)\coloneqq e^{-i\gamma C}. \tag{7}\] Specifying a _depth_\(p\in\mathbb{N}\), the parametrized trial states are constructed via \[\ket{\vec{\beta},\vec{\gamma}}\coloneqq V(\vec{\beta},\vec{\gamma})\ket{+} \coloneqq\left(\prod_{q=1}^{p}U_{B}(\beta_{q})U_{C}(\gamma_{q})\right)\ket{+} \tag{8}\] In an iterative process, the parameters are updated by a classical optimization rule in order to maximize the expectation value \[F_{p}(\vec{\beta},\vec{\gamma})=\bra{\vec{\beta},\vec{\gamma}}C\ket{\vec{ \beta},\vec{\gamma}}. \tag{9}\] Measuring the final outcome \(\ket{\vec{\beta}_{\text{opt}},\vec{\gamma}_{\text{opt}}}\) in the computational basis then yields a distribution of optimal solution approximations. ### Quantum Alternating Operator Ansatz Building on the ideas of the quantum approximate optimization algorithm, the _quantum alternating operator ansatz_ (QAOA, [8]) extends its design to general constrained problems. Given a COP with objective Hamiltonian \(C\) and solution space \(\mathcal{S}\), the parametrized gate \(U_{B}(\beta)\) is substituted with problem-specific'mixer' gates. For simplicity, we will focus on the case where the same mixers are used in every iteration. Thereby, we can collect them again in a single mixer gate \(U_{\text{M}}(\beta)\). It is demanded to fulfill two important properties: * Feasibility preservation: For all parameter values \(\beta\in\mathbb{R}\)\(U_{\text{M}}(\beta)(\mathcal{S})\subseteq\mathcal{S}\) should hold. 
* Full mixing of solutions: For all feasible computational basis states \(\ket{\mathbf{z}},\ket{\mathbf{z}^{\prime}}\in\mathcal{S}\), there should exist a power \(r\in\mathbb{N}\) and a parameter value \(\beta\in\mathbb{R}\) so that \(\bra{\mathbf{z}}U_{\text{M}}^{*}(\beta)\ket{\mathbf{z}^{\prime}}\neq 0\). Furthermore, the parametrized gate \(U_{C}(\gamma)\) could be replaced by a more general 'phase separator' gate \(U_{\text{P}}(\gamma)\) which resembles the classical objective function's behavior. In order to be more concrete, we will further focus on the case where \(U_{\text{M}}(\beta)\) and \(U_{\text{P}}(\gamma)\) are given by (products of) exponentials of Hamiltonians. The correct definition of \(U_{\text{M}}(\beta)\) follows naturally from the following convergence considerations and is given in Section IV. We define the phase separator already now: **Definition 1**.: Given a COP with solution space \(\mathcal{S}\) and optimal solution space \(\mathcal{S}_{\max}\), a Hamiltonian \(H\) is called a _phase separator Hamiltonian_ iff it fulfills the following two conditions: 1. \(H\) is diagonal in the computational basis. 2. The eigenspace of \(H|_{\mathcal{S}}\) corresponding to its largest eigenvalue is \(\mathcal{S}_{\max}\). The corresponding (_parametrized_) _phase separator_ is given by \[U_{\text{P}}(H,\gamma)\coloneqq e^{-i\gamma H}. \tag{10}\] ## III Convergence proof for the QAA We first examine the convergence behavior of the QAA. Although originally stated for unconstrained problems, we can easily extend the idea to a COP with a non-trivial solution space \(\mathcal{S}\): The initial Hamiltonian \(H_{\mathrm{I}}\) should preserve feasibility, i.e., \(H(\mathcal{S})\subseteq\mathcal{S}\), and the initial state \(\ket{\iota}\) should lie within \(\mathcal{S}\). In addition, we substitute the objective Hamiltonian \(C\) with a more general phase separator Hamiltonian \(H_{\mathrm{P}}\) which trivially preserves feasibility. Then, for every \(t\in[0,1]\), the time evolution with respect to \(H_{\mathrm{lin}(H_{\mathrm{I}},H_{\mathrm{P}})}(t)\) applied to \(\ket{\iota}\) will give again a feasible state. Thus, we effectively restrict ourselves to the subspace \(\mathcal{S}\subseteq\mathcal{H}\). The underlying concept of the QAA is captured by the adiabatic theorem. For our analysis, we use a more general version than Farhi et al. did in [2]. **Theorem 2** (Adiabatic Theorem, [11]).: _Let \(\{H(t)\,:\,0\leq t\leq 1\}\subseteq\mathcal{L}(\mathcal{H})\) be a family of self-adjoint operators such that \(H(\,\cdot\,)\in C^{2}([0,1],\mathcal{L}(\mathcal{H}))\). For \(T>0\), let \(\tilde{U}_{T}\) be the solution of_ \[\frac{\mathrm{d}}{\mathrm{d}\,s}\tilde{U}_{T}(s)=-iH(s/T)\tilde{U}_{T}(s), \quad 0\leq s\leq T;\quad\tilde{U}_{T}(0)=\mathds{1} \tag{11}\] _and set \(U_{T}(t):=\tilde{U}_{T}(tT)\), \(0\leq t\leq 1\). Let \(\lambda(t)\) be an eigenvalue of \(H(t)\), respectively, with corresponding spectral projection \(\mathds{P}(t)\). Furthermore, let \(P\in C^{2}([0,1],\mathcal{L}(\mathcal{H}))\) such that for every \(0\leq t\leq 1\), \(P(t)\) is a projection with \(H(t)P(t)=\lambda(t)P(t)\). In addition, \(P(t)=\mathds{P}(t)\) should hold for almost all \(t\in[0,1]\). 
Then_ \[\lim_{T\to\infty}(\mathds{1}-P(t))U_{T}(t)P(0)=0 \tag{12}\] _uniformly in \(t\) in \([0,1]\)._ Theorem 2 essentially states that, in the adiabatic limit, starting within (a subspace of) the eigenspace of \(H(0)\) corresponding to the eigenvalue \(\lambda(0)\), one stays within the eigenspace of \(H(t)\) corresponding to the eigenvalue \(\lambda(t)\), \(0\leq t\leq 1\), if one follows the time evolution generated by \(H\), and the curve of spectral projections \(P\) can be \(C^{2}\)-continued through all potential level crossings. In contrast, Farhi et al. used a version of the adiabatic theorem that prohibits any level crossing (see [12]). A sketch of a convergence proof for the QAA was given in [1] as an intermediate step to argue the convergence of the quantum approximate optimization algorithm. Besides the adiabatic theorem, the proof is mainly based on the Perron-Frobenius Theorem. First recall the definition of irreducibility in the context of matrices. **Definition 3**.: A matrix \(A\in\mathbb{C}^{n\times n}\) is called _irreducible_ iff there are no proper \(A\)-invariant coordinate subspaces of \(\mathbb{C}^{n}\). That is, the only coordinate subspaces left invariant by \(A\) are \(\{0\}\) and \(\mathbb{C}^{n}\). **Theorem 4** (Perron-Frobenius).: _Let \(A\in\mathbb{C}^{n\times n}\) be component-wisely non-negative and irreducible. Then \(A\) admits a non-degenerate largest eigenvalue._ The crucial observation is that the matrix representation of the initial Hamiltonian (6) in the computational basis fulfills both requirements of the Perron-Frobenius Theorem. As this will also play an essential role throughout our convergence proof, we use these very properties for giving a first definition of a _mixer_. **Definition 5**.: A Hamiltonian \(B\in\mathcal{L}(\mathcal{H})\) is called a _mixer_ for a COP with solution space \(\mathcal{S}\) iff \(B(\mathcal{S})\subseteq\mathcal{S}\) and \(B|_{\mathcal{S}}\in\mathcal{L}(\mathcal{S})\) is component-wise non-negative and irreducible in the computational basis. The idea is now to apply the Perron-Frobenius Theorem to the linear interpolation \(H_{\mathrm{lin}(B,C)}(t)\) at every time \(0\leq t<1\) to conclude the existence of an eigenvalue curve \(\lambda_{\mathrm{max}}\) that connects both the largest eigenvalues of \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(0)=B|_{\mathcal{S}}\) and \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(1)=C|_{\mathcal{S}}\). For this, we need the following immediate result which can be proven quite easily. **Corollary 6**.: _Let \(A\in\mathbb{C}^{n\times n}\) be diagonal and let \(B\in\mathbb{C}^{n\times n}\) be irreducible. Then also \(A+B\) is irreducible._ **Theorem 7** (Convergence of QAA).: _Consider a COP with solution space \(\mathcal{S}\subseteq\mathcal{H}\), optimal solution space \(\mathcal{S}_{\mathrm{opt}}\subseteq\mathcal{S}\), and phase separator Hamiltonian \(C\). If \(B\in\mathcal{L}(\mathcal{H})\) is a mixer Hamiltonian in the sense of Definition 5 and \(\ket{\iota}\in\mathcal{S}\) is a highest energy state of \(B|_{\mathcal{S}}\), then_ \[\lim_{T\to\infty}U_{T}(1)\ket{\iota}\in\mathcal{S}_{\mathrm{opt}}, \tag{13}\] _where \(U_{T}\) is the quasi-adiabatic evolution w.r.t. to the linear interpolation between \(B\) and \(C\)._ In the following proof, we directly identify all appearing operators \(\mathcal{L}(\mathcal{S})\) with their matrix representation in the computational basis. 
Proof.: Denote by \(\lambda_{\max}(t)\) the largest eigenvalue of \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(t)\), for \(0\leq t\leq 1\), respectively. Let \(0\leq t_{0}<1\). Since \(B|_{\mathcal{S}}\) is irreducible, so is \((1-t_{0})B|_{\mathcal{S}}\). As \(C\) is diagonal, also \(H_{\mathrm{lin}(B,C)}(t_{0})|_{\mathcal{S}}=(1-t_{0})B|_{\mathcal{S}}+t_{0}C|_{ \mathcal{S}}\) is irreducible by Corollary 6. W.l.o.g. assume that \(C\) has non-negative spectrum [13]. Then, \(B|_{\mathcal{S}}\) as well as \(C|_{\mathcal{S}}\) are component-wisely non-negative. In summary, \(H_{\mathrm{lin}(B,C)}(t_{0})|_{\mathcal{S}}\) is component-wisely non-negative and irreducible. According to the Perron-Frobenius Theorem, \(\lambda_{\max}(t_{0})\) is non-degenerate. Furthermore, the mapping \[H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}:\mathbb{R}\to\mathcal{L}(\mathcal{S}), \quad t\mapsto H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(t)=(1-t)B|_{\mathcal{S}}+ tC|_{\mathcal{S}} \tag{14}\] is analytic and \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(t)\) is symmetric for all \(t\in\mathbb{R}\). Let \(L\) denote the discrete set of level crossings/eigenvalue splittings of \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}\). According to [14, Theorem 6.1], the instantaneous eigenvalues of \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(t)\), \(t\in\mathbb{R}\), can be sorted as \(\{\lambda_{m}(t)\,:\,1\leq m\leq M\}\), \(M\leq 2^{N}\}\), such that \([t\mapsto\lambda_{m}(t)]\in C^{\,\omega}(\mathbb{R},\mathbb{R})\) and for the corresponding spectral projections \(\mathds{P}_{m}(t)\), it holds that \([t\mapsto\mathds{P}_{m}(t)]\in C^{\,\omega}\big{(}\mathbb{R}\setminus L, \mathcal{L}(\mathcal{S})\big{)}\), for every \(1\leq m\leq M\). Furthermore, the spectral projections have removable singularities in \(L\), i.e. there exist analytic continuations \(P_{m}\), defined on whole \(\mathbb{R}\), such that \(P_{m}(t)=\mathds{P}_{m}(t)\) for \(t\in\mathbb{R}\setminus L\), for all \(1\leq m\leq M\). By continuity, these continuations are themselves orthogonal projections with constant rank and fulfill \[H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(t)P_{m}(t)=\lambda_{m}(t)P_{m}(t)\] for all \(t\in\mathbb{R}\). W.l.o.g. assume \(\lambda_{1}(0)=\lambda_{\max}(0)\). Since \(\lambda_{\max}(t_{0})\) remains non-degenerate for \(0\leq t_{0}<1\), it follows that \(\lambda_{1}\equiv\lambda_{\max}\) on \([0,1)\) and by continuity of \(\lambda_{1}\) that \(\lambda_{1}\equiv\lambda_{\max}\) on \([0,1]\). In addition, the corresponding spectral projection \(\mathds{P}_{1}\) is well-defined on \([0,1)\). Therefore, its continuation \(P_{1}\) fulfills all properties necessary to apply Theorem 2, i.e. (12) holds. Since \(P_{1}(0)=\mathds{P}_{1}(0)\), one especially obtains that \[0=\lim_{T\to\infty}(\mathds{1}-P_{1}(1))U_{T}(1)P_{1}(0)\ket{ \iota}=\lim_{T\to\infty}(\mathds{1}-P_{1}(1))U_{T}(1)\ket{\iota}\] \[\Leftrightarrow \lim_{T\to\infty}U_{T}(1)\ket{\iota}=P_{1}(1)\lim_{T\to\infty}U_{ T}(1)\ket{\iota}.\] Since \(P_{1}(1)\) is a projection with \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}(1)P_{1}(1)=\lambda_{\max}(1)P_{1}(1)\), one concludes (13). Following the above proof, one realizes that the eigenvalue curve \(\lambda_{\max}\) does not cross any other eigenvalue curve of \(H_{\mathrm{lin}(B,C)}|_{\mathcal{S}}\) except, possibly, at \(t=1\). In [1], even a level crossing at \(t=1\) is avoided by assuming that the COP only has one optimal solution, implying that \(\lambda_{\max}(1)\) is non-degenerate. 
However, by invoking a more general version of the adiabatic theorem, we were able to get rid of this assumption. ## IV Convergence proof for the QAOA We next examine the convergence behavior of the QAOA which contains the quantum approximate optimization algorithm as a special case. Its ingredients are basically the same as for our generalized version of the QAA. However, the decomposition of the mixer Hamiltonian into local Hamiltonians is extremely valuable from an application-oriented point of view and is also introduced by the QAOA. In the spirit of Definition 5, we propose the following adaptation of Hadfield et al.'s definition. **Definition 8**.: Given a COP with solution space \(\mathcal{S}\), a family of Hamiltonians \(\{B_{i}\}_{i\in I}\subset\mathcal{L}(\mathcal{H})\) is called a _mixing family_ iff for every \(i\in I\), \(B_{i}(\mathcal{S})\subseteq\mathcal{S}\), \(B_{i}|_{\mathcal{S}}\) is component-wise non-negative in the computational basis, and any coordinate subspace of \(\mathcal{S}\) that is left invariant under every \(B_{i}\) is already trivial. That Definition 8 really is a decomposed version of Definition 5 can be argued as follows: Consider the matrix representation of each of the operators \(B_{i}|_{\mathcal{S}}\) in the computational basis as adjacency matrix of a graph whose vertices are identified with feasible computational basis states. Starting from the graph resembled by \(B_{1}\), adding another operator \(B_{i}\) corresponds to adding edges represented by non-zero entries of \(B_{i}\)'s matrix representation. The actual weights (i.e., values of the entries) are not important, but the condition of component-wise non-negativity implies that no entries are cancelled during the summation, that is, the edge set of the graph \(G_{I}\) with adjacency matrix \[B_{I}|_{\mathcal{S}},\quad B_{I}\coloneqq\sum_{i\in I}B_{i} \tag{15}\] really is the union of all the edge sets of the graphs \(G_{i}\) with respective adjacency matrix \(B_{i}|_{\mathcal{S}}\), \(i\in I\). The imposed condition of triviality of mutual invariant coordinate subspaces then is equivalent to the fact that \(G_{I}\) is fully connected which, in turn, is equivalent to its adjacency matrix being irreducible. Thus, we have concluded **Proposition 9**.: _Given a COP with solution space \(\mathcal{S}\), a family of Hamiltonians \(\{B_{i}\}_{i\in I}\subset\mathcal{L}(\mathcal{H})\) is a mixing family iff \(B_{I}\) is a mixer Hamiltonian._ Utilizing our definition of a mixing family, we now introduce our version of'simultaneous' and'sequential' mixers. **Definition 10**.: Let \(\mathsf{H}=\{H_{i}\}_{i\in I}\subset\mathcal{L}(\mathcal{H})\) be a mixing family for a given COP. The corresponding (_parametrized_) _simultaneous mixer_ is defined as \[U_{\mathrm{M},0}(\mathsf{H},\beta)\coloneqq e^{-i\beta\sum_{i\in I}H_{i}}. \tag{16}\] Specifying a permutation \(\sigma\in S(I)\), the corresponding (_parametrized_) _sequential mixer_ is defined as \[U_{\mathrm{M},\sigma}(\mathsf{H},\beta)\coloneqq\prod_{i\in I}e^{-i\beta H_{ \sigma(i)}}. \tag{17}\] From their definition it immediately follows that both (16) and (17) fulfill the original QAOA demands: feasibility preservation and full mixing of solutions. However, due to our refined definition, we can now extend the sketch of a convergence proof in [1] to the general QAOA setting. The procedure is as follows: Figure 1: The eigenvalue curve \(\lambda_{\max}\) stays separated from all the other eigenvalue curves for \(0\leq t<1\). 
If the corresponding COP has exactly one optimal solution the separation extends to \(t=1\) (left plot). However, if the COP has multiple optimal solutions \(\lambda_{\max}\) intersects with at least one other eigenvalue curve at \(t=1\) (right plot). 1. discretize the quasi-adiabatic time evolution \(U_{T}\) 2. decompose \(H_{\mathrm{lin}(B,C)}\) using a (multivariate) Lie product formula 3. exploit the convergence of the corresponding QAA instance We start with a simple statement about the distance of products of operators with factors being close together. **Lemma 11**.: _For \(\varepsilon>0\) and \(m\in\mathbb{N}\), let \(\{V_{j}\}_{j=1}^{m},\{W_{j}\}_{j=1}^{m},\subset\mathcal{L}(\mathcal{H})\) be families of unitary operators so that_ \[\left\|V_{j}-W_{j}\right\|<\varepsilon \tag{18}\] _holds for all \(j\in[m]\). Then the following estimate is valid:_ \[\left\|\prod_{j=1}^{m}V_{j}-\prod_{j=1}^{m}W_{j}\right\|<(1+ \varepsilon)^{m}-1. \tag{19}\] Proof.: Since (18) holds, one can find linear operators \(R_{j}\in\mathcal{L}(\mathcal{H})\) with \(\left\|R_{j}\right\|\leq 1\) and \(V_{j}=W_{j}+\varepsilon R_{j}\) for each \(j\in[m]\), respectively. (19) clearly holds for \(m=1\). Therefore, it remains to show that if (19) holds for an \(m\in\mathbb{N}\), then it also holds for \(m+1\): \[\left\|\prod_{j=1}^{m+1}V_{j}-\prod_{j=1}^{m+1}W_{j}\right\| =\left\|\left(\prod_{j=1}^{m}V_{j}\right)V_{m+1}-\left(\prod_{j=1 }^{m}W_{j}\right)W_{m+1}\right\|\] \[=\left\|\left(\prod_{j=1}^{m}V_{j}\right)\left(W_{m+1}+\varepsilon R _{m+1}\right)-\left(\prod_{j=1}^{m}W_{j}\right)W_{m+1}\right\|\] \[=\left\|\left(\prod_{j=1}^{m}V_{j}\right)\left(\mathds{1}- \varepsilon R_{m+1}W_{m+1}^{*}\right)-\left(\prod_{j=1}^{m}W_{j}\right)\right\|\] \[\leq\left\|\prod_{j=1}^{m}V_{j}-\prod_{j=1}^{m}W_{j}\right\|+ \left\|\left(\prod_{j=1}^{m}V_{j}\right)\varepsilon R_{m+1}W_{m+1}^{*}\right\|\] \[<(1+\varepsilon)^{m}-1+\varepsilon\] \[\leq(1+\varepsilon)^{m}-1+\varepsilon(1+\varepsilon)^{m}=(1+ \varepsilon)^{m+1}-1.\] **Theorem 12** (Convergence of QAOA).: _Consider a COP with solution space \(\mathcal{S}\subseteq\mathcal{H}\), optimal solution space \(\mathcal{S}_{\mathrm{opt}}\subseteq\mathcal{S}\), phase separator Hamiltonian \(C\), and mixing family \(\{B_{i}\}_{i\in I}\). Let \(U_{\mathrm{P}}\) and \(U_{\mathrm{M}}\) be the corresponding phase separator and (simultaneous or sequential) mixer. Furthermore, let \(\left|\iota\right>\in\mathcal{S}\) be a highest energy state of \(B_{I}|_{\mathcal{S}}\). Then, for every \(\varepsilon>0\), one can choose finitely many parameters \(\vec{\beta}\) and \(\vec{\gamma}\) such that_ \[\mathrm{dist}(\left|\vec{\beta},\vec{\gamma}\right>,\mathcal{S}_{ \mathrm{opt}})<\varepsilon, \tag{20}\] _where_ \[\left|\vec{\beta},\vec{\gamma}\right>\coloneqq V(\vec{\beta}, \vec{\gamma})\left|\iota\right>\coloneqq\left(\prod_{q}U_{\mathrm{M}}(\beta_{ q})U_{\mathrm{P}}(\gamma_{q})\right)\left|\iota\right>. \tag{21}\] Proof.: Let \(U_{T}\), \(T>0\), denote the quasi-adiabatic evolution w.r.t. \(H_{\mathrm{lin}(B_{I},C)}\). By Proposition 9, \(B_{I}\) is a mixer Hamiltonian in the sense of Definition 5. Therefore, for any \(\varepsilon>0\), Theorem 7 implies the existence of a \(T>0\) so that \[\left\|(\mathds{1}-P_{1}(1))U_{T}(1)\left|\iota\right>\right\|< \frac{\varepsilon}{2},\] where \(P_{1}\) is the \(C^{2}\)-continuation of the curve of spectral projections onto the highest energy eigenspaces of \(H_{\mathrm{lin}(B_{I},C)}|_{\mathcal{S}}\). W.l.o.g. 
assume that \(\mathrm{dim}(\mathcal{S})>1\) as the statement would be trivial otherwise. Then, \(\alpha\coloneqq\left\|\mathds{1}-P_{1}(1)\right\|_{\mathcal{L}(\mathcal{S})}>0\) since \(P_{1}(1)\) has rank one by continuity. Discretizing the quasi-adiabatic time evolution \(U_{T}(1)\) yields the existence of an \(m\in\mathbb{N}\) such that \[\left\|\prod_{j=1}^{m}e^{-iH_{\text{lin}(B_{I},C)}\left(j\frac{x}{m}\right)j\frac {x}{m}}-U_{T}(1)\right\|<\frac{\varepsilon}{4\alpha}. \tag{22}\] In the following, set \[W_{j}\coloneqq e^{-iH_{\text{lin}(B_{I},C)}\left(j\frac{x}{m}\right)j\frac{x} {m}}\] and distinguish between the two possibilities to choose a mixer. Simultaneous mixer:The Lie product formula implies that for all \(j\in[m]\), there exist \(n_{j}\in\mathbb{N}\) such that for all \(\tilde{n}\geq n_{j}\) it holds that \[\left\|\left(e^{-i\frac{1-j\frac{x}{m}}{n}j\frac{x}{m}B_{I}}e^{-i\frac{\left( j\frac{x}{m}\right)^{2}}{n}C}\right)^{\tilde{n}}-W_{j}\right\|<\sqrt{\frac{ \varepsilon}{4\alpha}+1}-1, \tag{23}\] respectively. Taking \(n\coloneqq\max\{n_{j}\,:\,j\in[m]\}\), this estimate holds for all \(j\in[m]\) and (especially) \(\tilde{n}=n\). Sequential mixer:W.l.o.g. choose the permutation \(\sigma=\text{id}_{I}\). The multivariate Lie product formula (15, Problem IX.8.5) implies that for all \(j\in[m]\), there exist \(n_{j}\in\mathbb{N}\) so that for all \(\tilde{n}\geq n_{j}\) it holds that \[\left\|\left(\left(\prod_{i\in I}e^{-i\frac{\left(1-j\frac{x}{m}\right)}{n}j \frac{x}{m}B_{i}}\right)e^{-i\frac{\left(j\frac{x}{m}\right)^{2}}{n}C}\right) ^{\tilde{n}}-W_{j}\right\|<\sqrt[n]{\frac{\varepsilon}{4\alpha}+1}-1, \tag{24}\] respectively. In both cases, choose \(q=n^{m}\) parameter values \(\vec{\beta}=(\vec{\beta}_{1},\ldots,\vec{\beta}_{m})\) and \(\vec{\gamma}=(\vec{\gamma}_{1},\ldots,\vec{\gamma}_{m})\) as \[\left(\vec{\beta}_{j}\right)_{k} =\frac{1-j\frac{tT}{m}}{n}j\frac{tT}{m}\] \[\left(\vec{\gamma}_{j}\right)_{k} =\frac{\left(j\frac{tT}{m}\right)^{2}}{n}\] for all \(k\in[n]\) and all \(j\in[m]\). Then, by construction, (23) and (24) translate into \[\left\|V(\vec{\beta}_{j},\vec{\gamma}_{j})-W_{j}\right\|<\sqrt[n]{\frac{ \varepsilon}{4\alpha}+1}-1.\] Thus, by (22) and Lemma 11, it follows that \[\left\|V(\vec{\beta},\vec{\gamma})-U_{T}(t)\right\| =\left\|\prod_{j=1}^{m}V(\vec{\beta}_{j},\vec{\gamma}_{j})-U_{T} (t)\right\|\] \[\leq\left\|\prod_{j=1}^{m}V(\vec{\beta}_{j},\vec{\gamma}_{j})- \prod_{j=1}^{m}W_{j}\right\|+\left\|\prod_{j=1}^{m}W_{j}-U_{T}(t)\right\|\] \[<\frac{\varepsilon}{4\alpha}+\frac{\varepsilon}{4\alpha}=\frac{ \varepsilon}{2\alpha}.\] In summary, it follows that \[\left\|(\mathds{1}-P_{1}(1))\left|\vec{\beta},\vec{\gamma}\right\|\right\| =\left\|(\mathds{1}-P_{1}(1))V(\vec{\beta},\vec{\gamma})\left| \iota\right\|\right\|\] \[<\left\|\mathds{1}-P_{1}(1)\right\|_{\mathcal{L}(\mathcal{S})} \frac{\varepsilon}{2\alpha}+\frac{\varepsilon}{2}=\varepsilon.\] Then, \(\text{im}(P_{1}(1))\subseteq\mathcal{S}_{\text{opt}}\) proves the assertion. Conclusion and Outlook In this paper we presented an elementary proof for the convergence of the QAOA. This proof can be regarded as a discretized and carefully extended version of the Adiabatic Theorem, building on the ideas of Farhi et al. Beside another core theorem (Perron-Frobenius), this extension is merely based on elementary matrix inequalities. 
Most importantly, our proof builds on fewer assumptions (multiple optimal solutions are allowed) and extends to non-trivial feasibility structures (\(\mathcal{S}\subsetneq\mathcal{H}\)). Furthermore, the proof canonically gave rise to refined definitions of the QAOA-mixer and QAOA-phase separator concepts. Most notably, exactly the same notions arise when properly recreating classical feasibility symmetries within the framework of QAOA (see [16]). This strongly indicates that the definitions we gave in this paper optimally capture the overall principle the QAOA is based on. We essentially showed that irreducibility and component-wise non-negativity of the mixer Hamiltonian \(B\), restricted to the feasible subspace \(\mathcal{S}\), are sufficient criteria for the convergence of the QAA and the QAOA. Moreover, one can readily verify that irreducibility is also a necessary condition in the following sense: Given an arbitrary initial state \(\ket{\iota}\) and the existence of a non-trivial \(B|_{\mathcal{S}}\)-invariant coordinate subspace, there always exists an objective Hamiltonian \(C\) such that the QAA and the QAOA will not be able to approximate any state in \(\mathcal{S}_{\mathrm{opt}}\) to arbitrary precision. On the other hand, the condition that \(B|_{\mathcal{S}}\) should be component-wise non-negative is not necessary. In our convergence proof, we imposed this condition in order to apply the Perron-Frobenius Theorem. However, there also exist more general versions of this theorem (see, e.g., [17]) which substitute this condition and irreducibility with the more general properties of preserving a given cone and permuting its faces, respectively. Unfortunately, the cones in question are merely given by all the orthants in \(\mathcal{S}\) since every coordinate subspace of \(\mathcal{S}\) should be represented on their faces. This yields again, up to some additionally allowed matrix signatures, the same conditions. Therefore, we do not see many possibilities for relaxing the assumptions made in Theorem 7 and Theorem 12. An interesting and still open question is whether one can also characterize the rate of convergence of the QAA for the case of multiple optimal solutions. In this case, the spectral gap necessarily vanishes for \(t\to 1\) (see Figure 1), but stays strictly positive throughout the interval \([0,1)\). That is, even though a level crossing occurs, it only happens once and at a predictable time. There are some results on the rate of convergence in the Adiabatic Theorem which are valid for all kinds of (allowed) level crossings (see, e.g., [11; 18]). Fine-tuning these results with respect to the particular situation of the QAA promises to be an insightful future project.

###### Acknowledgements.
We thank Tim Heine, Lauritz van Luijk, Tobias J. Osborne, Christoph Pohl, Antonio Rotundo, Martin Steinbach, and Reinhard F. Werner for helpful discussions. GK acknowledges financial support by the DAAD and IIT Indore (Kapil Ahuja) for a guest stay. RS acknowledges financial support by the Quantum Valley Lower Saxony and by the BMBF project ATIQ. LB and TZ acknowledge financial support by the BMBF project QuBRA.
2304.11755
Control of Discrete-Time LTI Systems using Stochastic Ensemble Systems
In this paper, we study the control properties of a new class of stochastic ensemble systems that consists of families of random variables. These random variables provide an increasingly good approximation of an unknown discrete, linear-time invariant (DLTI) system, and can be obtained by a standard, data-driven procedure. Our first result relates the reachability properties of the stochastic ensemble system with that of the limiting DLTI system. We then provide a method to combine the control inputs obtained from the stochastic ensemble systems to compute a control input for the DLTI system. Later, we deal with a particular kind of stochastic ensemble system generated from realizing Bernoulli random variables. For this, we characterize the variance of the computed state and control. We also do the same for a situation where the data is updated sequentially in a streaming fashion. We illustrate the results numerically in various simulation examples.
Nirabhra Mandal, Mohammad Khajenejad, Sonia Martinez
2023-04-23T21:47:10Z
http://arxiv.org/abs/2304.11755v1
# Control of Discrete-Time LTI Systems using Stochastic Ensemble Systems ###### Abstract In this paper, we study the control properties of a new class of stochastic ensemble systems that consists of families of random variables. These random variables provide an increasingly good approximation of an unknown discrete, linear-time invariant (DLTI) system, and can be obtained by a standard, data-driven procedure. Our first result relates the reachability properties of the stochastic ensemble system with that of the limiting DLTI system. We then provide a method to combine the control inputs obtained from the stochastic ensemble systems to compute a control input for the DLTI system. Later, we deal with a particular kind of stochastic ensemble systems generated from realizing Bernoulli random variables. For this, we characterize the variance of the computed state and control. We also do the same for a situation where the data is updated sequentially in a streaming fashion. We illustrate the results numerically in various simulation examples. Approximate reachability, sample reachability, stochastic ensemble system. ## 1 Introduction Ensemble control, _i.e._, investigating the ability of steering and manipulating an entire ensemble of (partially) unknown systems in a desired and optimal manner, has emerged from several science and engineering applications in recent years, _e.g_, coordination of the movement of flocks in biology [1], manipulation of spin ensembles in nuclear magnetic resonance [2], [3], or desynchronization of pathological neurons in the brain in neuroscience [4]. **Literature Review** Motivated by this, a huge body of seminal work has been done on the analysis of (deterministic) ensemble controllability [5], [6], [7] and synthesis of optimal ensemble controls [8], [9], [10] by developing new analytical and numerical methods, which have received increasing attention due to their application to robotics [7], energy systems [11, 12] and quantum control [13, 14]. However, the traditional ensemble control problem consists of driving a collection of initial states of a continuum of systems to a set of final states with the same control input [15], which could be potentially conservative. The problem of reachability for finite-dimensional linear time-varying systems has been studied in [16, 17], but here the system varies in a continuum and not in a countable set. On the other hand, the reachability properties of bilinear ensemble systems are studied in [18]. In [8], the authors explore the problem of optimal control for stochastic linear systems. The stochastic nature comes from additive noise to the dynamics. Reference [11] solves an ensemble control problem for devices with cyclic energy consumption patterns by utilizing techniques from the Markov Decision Process framework. Finally, the work [12] looks into a similar problem but with uncertain dynamics and uses techniques from stochastic and distributionally robust optimization in the process. However, in all the aforementioned work, either the control signal needs to be unique, or a continuum of ensemble (and partially unknown) systems is required to exist. This work aims to bridge this gap. **Contributions** Unlike ensemble control, we let the system parameters take values in a countable set. Moreover, we do not restrict the control to be unique but allow every sample run of the system to employ a different control function. 
This being said, we consider a so-called class of stochastic ensemble systems, which arise from the approximation of systems whose exact parameters are unknown. A known approach to obtain good parametric models consists of learning the distribution of for the model whose realizations correspond to a system approximation. As more data becomes available, the distributions become more accurate, resulting in a stochastic ensemble approximation. This enables us to i) study the reachability properties of a new class of stochastic ensemble systems, ii) provide a procedure for combining controls from such systems, iii) compute a control for the limiting DLTI system, iv) characterize the variance of the state and control of a stochastic ensemble system produced by means of a Bernoulli distribution, and v) derive an improved result by means of least squares error minimization. As conclusions of these main contributions, we also can vi) compute the variance of the state and control of a stochastic ensemble system that is obtained in an streaming fashion. We illustrate our results in numerical examples. **Notations** We denote the set of real numbers, the set of non-negative real numbers, the set of integers, and the set of non-negative integers using \(\mathbb{R}\), \(\mathbb{R}_{\geq 0}\), \(\mathbb{Z}\) and \(\mathbb{Z}_{\geq 0}\), respectively. We let \(\mathbb{R}^{n}\) (similarly \(\mathbb{Z}_{\geq 0}^{n}\)) be the Cartesian product of \(\mathbb{R}\) (similarly \(\mathbb{Z}_{\geq 0}\)) with itself. \(\mathbb{R}^{n\times m}\) denotes the set of real matrices of order \(n\times m\). The \(i^{\text{th}}\) component of a vector \(\mathbf{v}\in\mathbb{R}^{n}\) is denoted by \(\mathbf{v}_{i}\) and the \(ij^{\text{th}}\) entry of a matrix \(\mathbf{M}\in\mathbb{R}^{n\times m}\) is denoted by \(\mathbf{M}_{ij}\). For a set of square matrices \(\{\mathbf{M}^{(s)}\}_{i\in\mathbb{Z}}\), the product operator is used to denote multiplication from right to left, _i.e._\(\prod_{i=1}^{n}\mathbf{M}^{(s)}:=\mathbf{M}^{(n)}\cdots\mathbf{M}^{(1)}\). For a vector \(\mathbf{v}\in\mathbb{R}^{n}\), \(\mathbf{v}\geq 0\) denotes a component-wise inequality. We let \(\text{abs}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be a function that produces a vector with component-wise absolute values from the input vector, _i.e._\(\text{abs}(\mathbf{v})_{i}=|\mathbf{v}_{i}|\in\mathbb{R}\), \(\forall\,i\in\{1,\dots,n\}\). We denote the empty set using \(\varnothing\). ## 2 Problem Formulation Consider a discrete-time, unknown, linear time-invariant (LTI) system of the form \[\mathbf{x}(k+1)=\mathbf{A}\mathbf{x}(k)+\mathbf{B}\mathbf{u}(k), \tag{1}\] where \(\mathbf{x}\in\mathbb{R}^{n}\) is the state, \(\mathbf{x}(0)=\mathbf{x}_{0}\) and \(\mathbf{x}_{f}\) are the initial and desired final states, respectively, \(\mathbf{u}\in\mathbb{R}^{m}\) is the control input, \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and \(\mathbf{B}\in\mathbb{R}^{n\times m}\). The system matrices \(\mathbf{A}\) and \(\mathbf{B}\), as well as the initial state \(\mathbf{x}_{0}\) are unknown, but it is possible to construct increasingly good approximations of these system parameters using some known approach, e.g., any data-driven or Machine Learning-based method. This approximation is assumed to be done through the following newly defined class of _stochastic ensemble systems_, for which we aim to characterize its reachability properties. **Definition 2.1**: _(Stochastic ensemble system). 
Consider sequences of independent random variables \(\mathcal{X}_{0}:=\{\mathbf{x}_{0}^{(s)}\in\mathbb{R}^{n}\}_{s\in\mathbb{Z}_{ \geq 0}}\), \(\mathcal{A}:=\{\mathbf{A}^{(s)}\in\mathbb{R}^{n\times n}\}_{s\in\mathbb{Z}_{ \geq 0}}\) and \(\mathcal{B}:=\{\mathbf{B}^{(s)}\in\mathbb{R}^{n\times m}\}_{s\in\mathbb{Z}_{ \geq 0}}\) such that \(\mathcal{A},\mathcal{B},\) and \(\mathcal{X}_{0}\) are all independent of each other. For every \(\sigma\in\mathbb{Z}_{\geq 0}\), a stochastic ensemble system is a linear, time varying (LTV) system of the form:_ \[\widehat{\mathbf{x}}(\sigma,k\!+\!1) =\!\widehat{\mathbf{A}}(s_{A}(\sigma,k))\,\widehat{\mathbf{x}}( \sigma,k)\!+\!\widehat{\mathbf{B}}(s_{B}(\sigma,k))\,\widehat{\mathbf{u}}( \sigma,k) \tag{2a}\] \[\widehat{\mathbf{x}}(\sigma,0) =\mathbf{x}_{0}^{(s_{0}(\sigma))}\in\mathcal{X}_{0}\,, \tag{2b}\] _where, the indicator functions \(s_{A}:\mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0},\quad s_{B}: \mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}\) and \(s_{0}:\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}\) are used to index different realizations of the random variables in \(\mathcal{X}_{0}\), \(\mathcal{A}\), and \(\mathcal{B}\), respectively. Thus, \(\widehat{\mathbf{A}}(s_{A}(\sigma,k))\) (similarly \(\widehat{\mathbf{B}}(s_{B}(\sigma,k))\)) represents a realization of \(\mathbf{A}^{(s_{A}(\sigma,k))}\in\mathcal{A}\) (respectively \(\mathbf{B}^{(s_{B}(\sigma,k))}\in\mathcal{B}\)) for each \(\sigma\) and \(k\). \(\bullet\)_ We use the terms random variable and realization interchangeably. The \(\sigma\) parameter is used to distinguish between different runs of the system. Also note that since the initial state and the system matrices in (1) are realizations of different random variables, each state \(\widehat{\mathbf{x}}(\sigma,k)\) is also a realization of some random variable. Moreover, we need to formally define the notion of _stochastic ensemble approximation_. **Definition 2.2**: _(Stochastic ensemble approximation). Let \(\mathbf{M}\in\mathbb{R}^{n\times m}\). Consider a sequence of independent random variables \(\mathcal{M}:=\{\mathbf{M}^{(s)}\}_{s\in\mathbb{Z}_{\geq 0}}\) such that \(\exists B\in\mathbb{R}\) such that \(\|\mathbf{M}^{(s)}\|\leq B\), \(\forall s\in\mathbb{Z}_{\geq 0}\). Then, \(\mathcal{M}\) is a stochastic ensemble approximation of \(\mathbf{M}\) if and only if_ \[\forall\varepsilon>0,\ \mathbb{P}(\|\mathbf{M}-\sum_{s=1}^{d}w_{s}\mathbf{M}^{(s)} \|_{p}>\varepsilon)\to 0\ \text{as}\,d\to\infty,\] _with respect to some \(p-\)norm \(\|\cdot\|_{p}\) and for some \(w_{s}\)'s of the form \([w_{1},\cdots,w_{d}]^{\top}\in\mathcal{S}^{d}:=\{\mathbf{y}\in\mathbb{R}^{d} \,|\,\mathbf{1}^{\top}\mathbf{y}=1,\mathbf{y}\geq 0\}\) for each \(d\in\mathbb{Z}_{\geq 0}\). \(\bullet\)_ Furthermore, we assume the following. **Assumption 2.1**: _There exists a stochastic ensemble system in the form of (1) that consists of stochastic ensemble approximations of the original system (1) as defined below._ Being concerned with meeting certain reachability requirements for (1), next we provide a formal definition for the notion of approximate reachability. **Definition 2.3**: _(Approximate reachability of final state from an initial state). Let \(\mathcal{X}_{0}\), \(\mathcal{A}\), and \(\mathcal{B}\) be stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{A}\), and \(\mathbf{B}\) respectively. Consider the stochastic ensemble system in (1) and let \(\mathbf{x}_{f}\in\mathbb{R}^{n}\). 
Suppose there exists a fixed time \(K\in\mathbb{Z}_{\geq 0}\) and for any realization of the random variables \(\widehat{\mathbf{x}}(\sigma,0),\widehat{\mathbf{A}}(s_{A}(\sigma,k)),\) and \(\widehat{\mathbf{B}}(s_{B}(\sigma,k))\) there exist controls \(\widehat{\mathbf{u}}(\sigma,0),\cdots,\widehat{\mathbf{u}}(\sigma,K-1)\) that drive the realization \(\widehat{\mathbf{x}}(\sigma,0)\) through the trajectory \(\widehat{\mathbf{x}}(\sigma,k)\), \(\forall k\in\{1,\dots,K\}\) under the dynamics (1), i.e._ \[\widehat{\mathbf{x}}(\sigma,K)=\prod_{k=0}^{K}\widehat{\mathbf{A} }(s_{A}(\sigma,k))\,\widehat{\mathbf{x}}(\sigma,0) \tag{3}\] \[+\sum_{k=0}^{K-1}[\prod_{i=k+1}^{K}\widehat{\mathbf{A}}(s_{A}( \sigma,i))]\widehat{\mathbf{B}}(s_{B}(\sigma,k))\widehat{\mathbf{u}}( \sigma,k)\,.\] _Then \(\mathbf{x}_{f}\) is said to be approximately reachable from \(\mathbf{x}_{0}\) if and only if \(\{\widehat{\mathbf{x}}(\sigma,K)\}_{\sigma\in\mathbb{Z}_{\geq 0}}\) is a stochastic ensemble approximation of \(\mathbf{x}_{f}\). \(\bullet\)_ Further, we need to formally introduce the notion of sample reachability as follows. **Definition 2.4**: _(Sample reachability of final state from an initial state in unit time). Let \(\mathcal{X}_{0}\), \(\mathcal{X}_{f}\), and \(\mathcal{A}\) be stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{x}_{f}\) and \(\mathbf{A}\) respectively, and \(s_{f}:\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}\) be an indicator function that denotes the realizations of the random variables in the set \(\mathcal{X}_{f}:=\{\mathbf{x}_{f}^{(s)}\in\mathbb{R}^{n}\}_{s\in\mathbb{Z}_{ \geq 0}}\). Consider the stochastic ensemble system in (2.2) with \(\widehat{\mathbf{B}}(k)=\mathbf{B}\). Suppose there exists a control \(\{\widehat{\mathbf{u}}(\sigma,0)\}_{\sigma\in\mathbb{Z}_{\geq 0}}\) for each \(\sigma\in\mathbb{Z}_{\geq 0}\) that drives \(\widehat{\mathbf{x}}(\sigma,0)=\mathbf{x}_{0}^{(s_{0}(\sigma))}\) to \(\widehat{\mathbf{x}}(\sigma,K)=\mathbf{x}_{f}^{(s_{f}(\sigma))}\) under the dynamics (2.2), in one (\(K=1\)) time step, i.e._ \[\mathbf{x}_{f}^{(s_{f}(\sigma))}=\widehat{\mathbf{A}}(s_{A}(\sigma,1))\ \mathbf{x}_{0}^{(s_{0}(\sigma))}+\mathbf{B}\,\widehat{\mathbf{u}}(\sigma,0)\,. \tag{2.4}\] _Then \(\mathbf{x}_{f}\) is sample reachable from \(\mathbf{x}_{0}\) in unit time. \(\bullet\)_ Note that the control \(\widehat{\mathbf{u}}(\sigma,0)\) is chosen in order to drive \(\mathbf{x}_{0}^{(s_{0}(\sigma))}\) to \(\mathbf{x}_{f}^{(s_{f}(\sigma))}\) in the previous definition. Our problem of interest can cast as follows. **Problem 2.1**: _Given the aforementioned setup and Assumption 2.1, characterize the relation between approximate reachability under the stochastic ensemble system (2.2) and the standard reachability under the LTI system (2.1). Moreover, find controls that drive \(\mathbf{x}_{0}\) to \(\mathbf{x}_{f}\) under (2.1) using the stochastic ensemble system (2.2)._ We conclude this section by highlighting the difference between our notion of stochastic ensemble system with that of (deterministic) ensemble control. **Remark 2.1**: _(Comparison with ensemble control)._ The stochastic ensemble system in (2.2) has a similar structure to ensemble systems [5]. However, the \(\sigma\) that parameterizes the system takes discrete values. Moreover, \(\widehat{\mathbf{A}}\) and \(\widehat{\mathbf{B}}\) do not vary continuously with respect to \(\sigma\). 
We also allow the controls for each \(\sigma\) to be different, i.e., we do not seek one single control that steers the ensemble system between points of interest. Finally, if a system is ensemble controllable, then similar results to the ones in this paper can be given, generalizing ensemble control systems and their objective. \(\bullet\) ## 3 Control using Stochastic Ensemble Systems In this section, we first show that reachability under the DLTI system (2.1) is sufficient for approximate reachability1 under the stochastic ensemble systems. Footnote 1: Not to be misinterpreted as the reachability of autonomous systems, studied e.g., in the authors’ previous works [19, 20]. Recall that we use \(\sigma\in\mathbb{Z}_{\geq 0}\) to index the runs of the system. Moreover, \(s_{0}(\sigma)\) (respectively \(s_{A}(\sigma,k)\), \(s_{B}(\sigma,k)\)) indexes the realization of the initial state (respectively of matrices \(\mathbf{A}\) and \(\mathbf{B}\) at the \(k^{\text{th}}\) time step) used in (2.2). In the sequel, we omit the \(\sigma\) for the sake of brevity wherever there is no confusion. Now we are ready to state the result. **Lemma 3.1**: _(Sufficient condition for approximate reachability using state reachability). Let \(\mathcal{X}_{0}\), \(\mathcal{A}\), and \(\mathcal{B}\) be stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{A}\), and \(\mathbf{B}\) respectively. Suppose that \(\mathbf{x}_{f}\) is reachable from \(\mathbf{x}_{0}\) in \(K\in\mathbb{Z}_{\geq 0}\) time steps under the dynamics (2.1). Then, \(\mathbf{x}_{f}\) is approximately reachable from \(\mathbf{x}_{0}\) under the stochastic ensemble system in (2.2)._ Since \(\mathcal{X}_{0},\mathcal{A}\), and \(\mathcal{B}\) are stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{A}\), and \(\mathbf{B}\) respectively \(\exists B_{\mathcal{X}_{0}},B_{\mathcal{A}}\), and \(B_{\mathcal{B}}\) such that \(\|\mathbf{x}_{0}^{(s)}\|\leq B_{\mathcal{X}_{0}}\), \(\|A^{(s)}\|\leq B_{\mathcal{A}}\), and \(\|B^{(s)}\|\leq B_{\mathcal{B}}\), \(\forall s\in\mathbb{Z}_{\geq 0}\). Now, by the hypothesis, there exists controls \(\mathbf{u}(0),\cdots,\mathbf{u}(K-1)\) that drive \(\mathbf{x}_{0}\) to \(\mathbf{x}_{f}\) under the dynamics (2.1). Moreover since \(K\) is finite, all the controls can be bounded by a constant \(B_{\mathbf{u}}\). We show that \(\widehat{\mathbf{u}}(\sigma,k)=\mathbf{u}(k)\), \(\forall k\in\{1,\ldots,K-1\}\) proves the claim. Let \(\varepsilon>0\). If we use the controls \(\mathbf{u}(0),\cdots,\mathbf{u}(K-1)\) in (2.3), we obtain \[\begin{array}{l}\mathbf{y}(1)\!-\!\sum_{s_{0}=1}^{N}w_{s_{0},1}\widehat{ \mathbf{x}}(s(1))\!\!=\!\!\widehat{\mathbf{A}}(s_{A}(1))(\mathbf{x}_{0}\!-\!\! \sum_{s_{0}=1}^{N}w_{s_{0},1}\mathbf{x}_{0}^{(s_{0})})\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\widehat{\mathbf{B }}(s_{B}(1))(\mathbf{u}(0)-\sum_{s_{0}=1}^{N}w_{s_{0},1}\mathbf{u}(0)),\end{array}\] at \(k=1\) starting at \(\mathbf{x}_{0}^{(s_{0})}\) and for \([w_{s_{0},1}]=[w_{1,1},\cdots,w_{N,1}]\in\mathcal{S}^{N}\) which makes \(\mathcal{X}_{0}\) a stochastic ensemble approximation of \(\mathbf{x}_{0}\) and \(\mathbf{y}(1)=\widehat{\mathbf{A}}(s_{A}(\sigma,1))\mathbf{x}_{0}+\widehat{ \mathbf{B}}(s_{B}(\sigma))\mathbf{u}(0)\). Note that here we are using the weights that make \(\mathcal{X}_{0}\) a stochastic ensemble approximation throughout and we use the independence properties of the random variables to separate the product and the sum. 
Now, since \(\sum_{s_{0}=1}^{N}w_{s_{0},1}\mathbf{u}(0)=\mathbf{u}(0)\), then \[\|\mathbf{y}(0)\!-\!\sum_{s_{0}=1}^{N}\!\!\!u\!\!\!\!\!\!\!\!\!\!w_{s_{0},1} \widehat{\mathbf{x}}(s(1)\!)\!\!\|\!\leq\!\!B_{\mathcal{A}}\!\|\mathbf{x}_{0}\! -\!\!\sum_{s_{0}=1}^{N}\!\!\!w_{s_{0},1}\mathbf{x}_{0}^{(s_{0})}\|. \tag{3.5}\] Next, by defining \(\mathbf{z}(1)=\widehat{\mathbf{A}}(s_{A}(1))\mathbf{x}_{0}+\mathbf{B}\mathbf{u}(0)\) and considering a \([w_{s_{B}(1),2}]=[w_{1,2},\cdots,w_{N,2}]\in\mathcal{S}^{N}\), \(\mathcal{B}\) becomes a stochastic ensemble approximation of \(\mathbf{B}\). Then, by similar arguments as before \[\begin{array}{l}\|\mathbf{z}(1)-\sum_{s_{B}(1)=1}^{N}\!\!\!w_{s_{B}(1),2} \,\mathbf{y}(1)\|\leq\\ B_{\mathbf{u}}\|\mathbf{B}-\sum_{s_{B}(1)=1}^{N}\!\!\!w_{s_{B}(1),2}\mathbf{B}( s_{B}(1))\|.\end{array} \tag{3.6}\] Finally, for \(\mathbf{x}(1)=\mathbf{A}\mathbf{x}_{0}+\mathbf{B}\mathbf{u}(0)\), we have \[\begin{array}{l}\|\mathbf{x}(1)-\sum_{s_{A}(1)=1}^{N}w_{s_{A}(1),3} \mathbf{z}(1)\|\\ \leq\|\mathbf{A}-\sum_{s_{A}(1)=1}^{N}\!\!\!w_{s_{A}(1),3}\widehat{\mathbf{A }}(s_{A}(1))\|\!\|\mathbf{x}(0)\|,\end{array} \tag{3.7}\] using \([w_{s_{A}(1),2}]=[w_{1,2},\cdots,w_{N,2}]\in\mathcal{S}^{N}\) which makes \(\mathcal{A}\) a stochastic ensemble approximation of \(\mathbf{A}\). Then, using the triangle inequality of the norms and (3.5)-(3.7), we obtain \[\begin{array}{l}\|\mathbf{x}(1)-\sum_{s_{0}=1}^{N}\!\!\!w_{s_{0},1}\widehat{ \mathbf{x}}(\sigma,1)\|\leq\\ \alpha[\|\mathbf{x}_{0}\!-\!\sum_{s_{0}=1}^{N}\!\!\!w_{s_{0},1}\mathbf{x}_{0}^{(s_ {0})}\|\!+\!\|\mathbf{B}\!-\!\sum_{s_{B}(1)=1}^{N}\!\!\!w_{s_{B}(1),2}\mathbf{B}( s_{B}(1))\|\\ +\|\mathbf{A}-\sum_{s_{A}(1)=1}^{N}\!\!\!w_{s_{A}(1),3}\widehat{\mathbf{A}}(s_ {A}(1))\|\!\|,\end{array}\] where \(\alpha=\max\{B_{\mathcal{A}},B_{\mathbf{u}},\|\mathbf{x}(0)\|\}\). Hence, \[\begin{array}{l}\mathbb{P}\left(\|\mathbf{x}(1)-\sum_{s Thus, \(\{\widehat{\mathbf{x}}(s(1))\}_{s(1)\in\mathbb{Z}_{\geq 0}}\) forms a stochastic ensemble approximation of \(\mathbf{x}(1)\). Thus, using the fact that \(\{\mathbf{x}_{0}^{(s_{0})}\}_{s_{0}\in\mathbb{Z}_{\geq 0}}\), \(\{\widehat{\mathbf{A}}(s_{A}(1))\}_{s_{A}(1)\in\mathbb{Z}_{\geq 0}}\), and \(\{\widehat{\mathbf{B}}(s_{B}(1))\}_{s_{B}(1)\in\mathbb{Z}_{\geq 0}}\) are stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{A}\), and \(\mathbf{B}\) respectively, we have shown that \(\{\widehat{\mathbf{x}}(s(1))\}_{s(1)\in\mathbb{Z}_{\geq 0}}\) forms a stochastic ensemble approximation of \(\mathbf{x}(1)\). Next using the fact that \(\{\widehat{\mathbf{x}}(s(1))\}_{s(1)\in\mathbb{Z}_{\geq 0}}\), \(\{\widehat{\mathbf{A}}(s_{A}(2))\}_{s_{A}(2)\in\mathbb{Z}_{\geq 0}}\), and \(\{\widehat{\mathbf{B}}(s_{B}(2))\}_{s_{B}(2)\in\mathbb{Z}_{\geq 0}}\) are stochastic ensemble approximations of \(\mathbf{x}(1)\), \(\widehat{\mathbf{A}}\), and \(\mathbf{B}\) respectively we can show that \(\{\widehat{\mathbf{x}}(s(2))\}_{s(2)\in\mathbb{Z}_{\geq 0}}\) forms a stochastic ensemble approximation of \(\mathbf{x}(2)\). By induction on \(\mathbf{x}(1),\cdots,\mathbf{x}(K)\), the proof can be completed. \(\blacksquare\) Note that the previous lemma states that the desired state \(\mathbf{x}_{f}\) is reachable from \(\mathbf{x}_{0}\) only if \(\mathbf{x}_{f}\) is approximately reachable from \(\mathbf{x}_{0}\). Also, note that we deal with a much weaker version of reachability here. All we require is that \(\mathbf{x}_{f}\) be in the reachable subspace from \(\mathbf{x}_{0}\). 
We do not require the whole \(\mathbb{R}^{n}\) to be reachable. Next, by using the notion of sample reachability (Definition 2.4), we synthesize a control input sequence that drives the system in (2.1) from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{f}\). **Lemma 3.2**: _(Approximation of control of LTI systems using stochastic ensemble systems). Consider the system in (2.1) with initial state \(\mathbf{x}_{0}\) and desired final state \(\mathbf{x}_{f}\). Let \(\mathcal{X}_{0}\), \(\mathcal{X}_{f}\) and \(\mathcal{A}\) be stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{x}_{f}\) and \(\mathbf{A}\) respectively. Next consider the stochastic ensemble system in (2.2) with \(\widehat{\mathbf{B}}(k)=\mathbf{B}\), \(\forall k\in\{1,\ldots,K-1\}\). Suppose that \(\mathbf{x}_{f}\) is sample reachable from \(\mathbf{x}_{0}\) in unit time. Let \(\{\widehat{\mathbf{u}}(\sigma,0)\}_{\sigma\in\mathbb{Z}_{\geq 0}}\) be as in Definition 2.4. Then for each \(\varepsilon>0\),_ \[\lim_{N\to\infty}\mathbb{P}(\|\mathbf{B}\mathbf{v}-\mathbf{B}\|_{\sigma=1}^{N }w_{\sigma}\widehat{\mathbf{u}}(\sigma,0)\|>\varepsilon)\to 0 \tag{3.8}\] _with \(w_{\sigma}\)'s of the form \([w_{1},\cdots,w_{N}]^{\top}\in\mathcal{S}^{N}\) for each \(N\in\mathbb{Z}_{\geq 0}\) and with \(\mathbf{B}\mathbf{v}\) satisfying_ \[\mathbf{x}_{f}=\mathbf{A}\mathbf{x}_{0}+\mathbf{B}\,\mathbf{v}\,. \tag{3.9}\] _\({}_{\blacksquare}\)_ Proof: We do this in a very similar way to the proof of Lemma 3.1. So we skip most of the details due to lack of space. Consider an \(\varepsilon>0\). Then, consider the state evolution equations \[\mathbf{x}_{f}^{(s_{f})} =\widehat{\mathbf{A}}(s_{A}(1))\mathbf{x}_{0}+\mathbf{B}\mathbf{u }^{\prime}(0)\,, \tag{3.10a}\] \[\mathbf{x}_{f} =\widehat{\mathbf{A}}(s_{A}(1))\mathbf{x}_{0}+\mathbf{B}\mathbf{u }^{\prime\prime}(0)\,. \tag{3.10b}\] Note that (3.10a) has been formulated by choosing controls such that they drive the initial condition \(\mathbf{x}_{0}\) (and not the realizations) to the final state \(\mathbf{x}_{f}^{(s_{f})}\). Similarly (3.10b) has been formulated by choosing controls such that they drive the initial condition \(\mathbf{x}_{0}\) to the final state \(\mathbf{x}_{f}\) (and not the realizations). Then, using very similar arguments as in the proof of Lemma 3.1 it is possible to show that the controls \(\mathbf{u}^{\prime}\), and \(\mathbf{u}^{\prime\prime}\) exist and can be attained by taking the weighted sum of the controls in (2.4). Moreover, using the triangle inequality of the norms and probability theory, it can be shown that \[\mathbb{P}(\|\mathbf{B}\mathbf{v}-\mathbf{B}\mathbf{\sum}_{s_{0}= 1}^{N}w_{s_{0},1}\widehat{\mathbf{u}}(s_{0})\|<\varepsilon)\geq\] \[\mathbb{P}(\|\mathbf{A}-\mathbf{\sum}_{s_{A}(1)=1}^{N}w_{s_{A}(1 ),3}\widehat{\mathbf{A}}(s_{A}(1))\|<\frac{\varepsilon}{3\beta})+\] \[\mathbb{P}(\|\mathbf{x}_{f}-\mathbf{\sum}_{s_{f}=1}^{N}w_{s_{f},2 }\mathbf{x}_{f}^{(s_{f})}\|<\frac{\varepsilon}{3\beta})+\] \[\mathbb{P}(\|\mathbf{x}_{0}-\mathbf{\sum}_{s_{0}=1}^{N}w_{s_{0},1 }\mathbf{x}_{0}^{(s_{0})}\|<\frac{\varepsilon}{3\beta})\to 1\text{ as }N\to\infty\,,\] where \(\beta=\max\{B_{\mathcal{A}},1,\|\mathbf{x}_{0}\|\}\). \(\blacksquare\) The previous result states that if \(\mathbf{x}_{f}\) is sample reachable from \(\mathbf{x}_{0}\) in unit time, \(\mathbf{x}_{f}\) is also reachable under (2.1). It is worthwhile to note that we are not approximating \(\mathbf{B}\) using a stochastic ensemble approximation. 
If we did, then essentially we would introduce convolution-like sums which makes the problem more complicated. Further, if \(\mathbf{x}_{f}\) is reachable from \(\mathbf{x}_{0}\) only in multiple time steps, then the matter of non-unique control sequences poses another problem. Note that the claim in Lemma 3.2 works for any sequence of random variables that form stochastic ensemble approximations as defined in Definition 2.2. In the next section we study stochastic ensemble systems that are generated from sampling the DLTI system parameters in (2.1). ## 4 On Stochastic Ensemble Systems Generated through Sampling Here, we deal with a particular kind of stochastic ensemble system whose realizations are produced from samples of (2.1). This corresponds to the following scenario. Consider the individual components of the states to be nodes and the \(\mathbf{A}\) matrix describing the interconnection between them. Suppose each node is aware of which neighbors are antagonistic and which neighbors are cooperative, but is unaware of the absolute magnitude of influence of its neighbors. The realizations of \(\widehat{\mathbf{A}}\) are hence obtained based on the relative order of influence of other nodes on a particular node. In such a scenario, we assume that the stochastic ensemble system is produced in the following way. Suppose the vectors and matrices are transformed into probability mass functions over an underlying sample space. This mass function produces samples from the sample space which correspond to the realizations of the stochastic ensemble system. In particular, we first define a function \(\gamma:\mathbb{R}^{n}\to\mathbb{R}\), as \(\gamma(\mathbf{w}):=\mathbf{1}^{\top}\mathrm{abs}(\mathbf{w})\), and with a slight abuse of notation, we consider \(\gamma:\mathbb{R}^{n\times m}\to\mathbb{R}\) as \(\gamma(\mathbf{M}):=\max_{j\in\{1,\ldots,m\}}\mathbf{1}^{\top}\mathrm{abs}( \mathbf{M}_{\star j})\). This helps us in describing a _vector sampling scheme_ procedure through the following lemma, whose proof omitted since it is trivial. **Lemma 4.1**: _(Vector sampling scheme). Consider a vector \(\mathbf{w}\in\mathbb{R}^{n}\setminus\{\mathbf{0}\}\). Then, let \(\overline{\mathbf{w}}:=(1/\gamma(\mathbf{w}))\mathrm{abs}(\mathbf{w})\). Consider \(\overline{\mathbf{w}}\) as a mass function on the sample space \(\Omega=\{1,\cdots,n\}\) and let \(\{s_{1},\cdots,s_{N}\}\) be \(N\) samples of \(\Omega\). To each sample \(s_{i}\), associate a vector \(\mathbf{s}^{(i)}\) as_ \[[\mathbf{s}^{(i)}]_{j}:=\begin{cases}\gamma(\mathbf{w}),&\text{if}\,j=s_{i} \text{ and }[\mathbf{w}]_{j}>0;\\ -\gamma(\mathbf{w}),&\text{if}\,j=s_{i}\text{ and }[\mathbf{w}]_{j}<0;\\ 0,&\text{otherwise}\,.\end{cases}\] _Then, the mean of the random variable corresponding to the \(i^{th}\) component of \(\sum_{i=1}^{N}\mathbf{s}^{(i)}\) is \(\mathbf{w}_{i}\). Finally, \(\lim_{N\to\infty}\mathbb{P}(\|\mathbf{w}-\frac{1}{N}\sum_{i=1}^{N}\mathbf{s}^ {(i)}\|>\varepsilon)\to 0\), for each \(\varepsilon>0\)._ Note that for matrices, the same procedure can be performed via a similar transformation by considering each row (or column) separately as vectors. The stochastic ensemble system generated in this process has nice properties related to the system (1). First, it retains the sparsity structure of the DLTI system (1). Moreover, since each component is a Bernoulli random variable, using known properties we can provide convergence rates for each realization. 
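A minimal sketch of this sampling scheme, together with the averaged one-step control of Lemma 3.2, is given below. The concrete vectors and matrices are hypothetical placeholders (in practice \(\mathbf{x}_{0}\), \(\mathbf{x}_{f}\), and \(\mathbf{A}\) would not be known exactly and only their realizations would be available), the matrix is sampled column by column as one possible reading of the matrix extension mentioned above, and the Moore-Penrose pseudoinverse is used as one convenient way to pick a control satisfying (2.4):

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma(w):
    """gamma(w) = 1^T abs(w) for a vector w, as in Lemma 4.1."""
    return np.sum(np.abs(w))

def sample_realizations(w, N, rng):
    """Vector sampling scheme of Lemma 4.1: N singleton-support realizations
    whose component-wise mean is w."""
    g = gamma(w)
    idx = rng.choice(len(w), size=N, p=np.abs(w) / g)
    S = np.zeros((N, len(w)))
    S[np.arange(N), idx] = g * np.sign(w[idx])
    return S

def sample_matrix(M, rng):
    """One matrix realization, sampling each column separately (one possible
    reading of the column-wise extension of Lemma 4.1)."""
    return np.column_stack([sample_realizations(M[:, j], 1, rng)[0]
                            for j in range(M.shape[1])])

# Hypothetical system data, used here only to generate the realizations.
x0 = np.array([1.0, -2.0, 0.5])
xf = np.array([0.2, 0.1, -0.3])
A = np.array([[0.9, 0.1, 0.0],
              [-0.2, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, -0.5]])

N = 5000
X0 = sample_realizations(x0, N, rng)
Xf = sample_realizations(xf, N, rng)

# One-step control per realization (Definition 2.4 with B known); the
# pseudoinverse is one convenient way to pick a solution.
Bpinv = np.linalg.pinv(B)
U = np.array([Bpinv @ (Xf[s] - sample_matrix(A, rng) @ X0[s]) for s in range(N)])

u_bar = U.mean(axis=0)                    # uniform average of the controls, as in (3.8)
v = Bpinv @ (xf - A @ x0)                 # a control satisfying (3.9)
print(np.linalg.norm(B @ (v - u_bar)))    # shrinks as N grows, cf. Lemma 3.2
```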
Next, we analyze the variance of the average control computed considering stochastic ensemble approximations produced as in Lemma 4.1. The next result follows from Hoeffding's inequality. **Lemma 4.2**: _(Variance of average control). Let \(\mathcal{X}_{0}\), \(\mathcal{X}_{f}\) and \(\mathcal{A}\) be stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{x}_{f}\) and \(\mathbf{A}\) respectively generated using Lemma 4.1. Suppose \(\{\widehat{\mathbf{u}}(\sigma,0)\}_{\sigma\in\mathbb{Z}_{\geq 0}}\) satisfy the hypothesis of Lemma 3.2 and let \(\mathbf{v}\) be as in (3.9). Then for each \(\varepsilon>0\),_ \[\mathbb{P}(\|\mathbf{B}(\mathbf{v}-\frac{1}{N}\sum_{\sigma=1}^{N} \mathbf{u}^{(s)})\|>e)\leq 2C[\exp(-\frac{2N\varepsilon^{2}}{9g^{2}\gamma( \mathbf{x}_{f})^{2}})\] \[+\exp(-\frac{2N\varepsilon^{2}}{9g^{2}\gamma(\mathbf{x}_{0})^{2 }})+\exp(-\frac{2N\varepsilon^{2}}{9g^{2}\gamma(\mathbf{A})^{2}})],\] _where \(C\) is a constant dependent only on \(n\)._ First we characterize the convergence rates of the realizations of \(\mathcal{X}_{0},\mathcal{X}_{f}\), and \(\mathcal{A}\) using the procedure in Lemma 4.1. Note that \[\|\mathbf{x}_{0}-\frac{1}{N}\sum_{s_{0}=1}^{N}\mathbf{x}_{0}^{(s_ {0})}\|\!\leq\!\!\eta(n)\max_{i}|[\mathbf{x}_{0}]_{i}-\frac{1}{N}\sum_{s_{0}= 1}^{N}[\mathbf{x}_{0}^{(s_{0})}]_{i}|,\] \[\|\mathbf{x}_{f}-\frac{1}{N}\sum_{s_{f}=1}^{N}\mathbf{x}_{f}^{(s_ {f})}\|\!\leq\!\!\eta(n)\max_{i}|[\mathbf{x}_{f}]_{i}-\frac{1}{N}\sum_{s_{f}= 1}^{N}[\mathbf{x}_{f}^{(s_{f})}]_{i}|, \tag{4.11}\] where \(\eta(n)\) is a constant dependent on \(n\) such that \(\|\mathbf{w}\|\leq\eta(n)\|\mathbf{w}\|_{\infty}\), \(\forall\mathbf{w}\in\mathbb{R}^{n}\). Moreover, \[\|\mathbf{A}-\frac{1}{N}\sum_{s_{A}(1)=1}^{N}\widehat{\mathbf{A}}(s_ {A}(1))\|\leq\] \[\mu(n)\max_{j\in\{1,\ldots,n\}}\sum_{i\in\{1,\ldots,n\}}|[\mathbf{ A}]_{ij}-\frac{1}{N}\sum_{s_{A}(1)=1}^{N}[\widehat{\mathbf{A}}(s_{A}(1))]_{ij}|, \tag{4.12}\] where \(\mu(n)\) is a constant dependent on \(n\) such that \(\|\mathbf{M}\|\leq\mu(n)\|\mathbf{M}\|_{\infty}\), \(\forall\mathbf{M}\in\mathbb{R}^{n\times n}\). Next, by using Hoeffding's inequality [21] and exploiting the Bernoulli nature of the random variables, the right hand side of the inequalities in (4.11) and (4.12) can be bounded in probability for all \(i\in\{1,\ldots,n\}\) and for each \(\varepsilon>0\) as, \[\mathbb{P}(|[\mathbf{x}_{0}]_{i}-\frac{1}{N}\sum_{s_{0}=1}^{N}[ \mathbf{x}_{0}^{(s_{0})}]_{i}|\!>\!\varepsilon)\leq 2\exp(-\frac{2N \varepsilon^{2}}{\gamma(\mathbf{x}_{0})^{2}})\,,\] \[\mathbb{P}(|[\mathbf{x}_{f}]_{i}-\frac{1}{N}\sum_{s_{f}=1}^{N}[ \mathbf{x}_{f}^{(s_{f})}]_{i}|\!>\!\varepsilon)\!\!\leq\!\!2\exp(-\frac{2N \varepsilon^{2}}{\gamma(\mathbf{x}_{f})^{2}})\,,\] \[\mathbb{P}(|\mathbf{A}_{ij}-\frac{1}{N}\sum_{s_{A}(1)=1}^{N}[ \widehat{\mathbf{A}}(s_{A}(1))]_{ij}|\!>\!\varepsilon)\!\!\leq\!\!2\exp(-\frac {2N\varepsilon^{2}}{\gamma(\mathbf{A})^{2}})\] Applying triangle inequality returns the results. We can use the proof technique here to also characterize the rate of convergence in Lemma 3.1 considering stochastic ensemble approximations produced as in Lemma 4.1. **Lemma 4.3**: _(Variance of state trajectory). Let \(\mathcal{X}_{0}\), \(\mathcal{A}\) and \(\mathcal{A}\) be stochastic ensemble approximations of \(\mathbf{x}_{0}\), \(\mathbf{A}\) and \(\mathbf{B}\) respectively, generated using Lemma 4.1. 
Then \(\mathbf{x}(K)\) in Lemma 3.1 satisfies, for each \(\varepsilon>0\),_ \[\mathbb{P}(\|\mathbf{x}(K)-\frac{1}{N}\sum_{\sigma=1}^{N}\widehat{ \mathbf{x}}(\sigma,K)\|>\varepsilon)\leq 2C(e^{c_{1}}+e^{c_{2}}+e^{c_{3}}),\] _where \(c_{1}=-\frac{2N\varepsilon^{2}}{9\|\mathbf{M}\|^{2}\gamma(\mathbf{B})^{2}}\), \(c_{2}=-\frac{2N\varepsilon^{2}}{9\|\mathbf{M}\|^{2}\gamma(x_{0})^{2}}\), \(c_{3}=-\frac{2N\varepsilon^{2}}{9\|\mathbf{x}_{0}\|^{2}\gamma(\mathbf{A})^{2}}\) and \(C\) only depends on \(n\)._ The proof follows the same arguments as in the proofs of Lemmas 3.1 and 4.2. ### Sample Averaging using Least Squares Error Minimization In the previous section, we assigned uniform weights to each realization in order to produce the stochastic ensemble approximation. For each sample run of the process, we can make this averaging method better by introducing a least square error minimization with the samples produced from the vector sampling scheme in Lemma 4.1. Since the weights associated with the optimization problem are restricted to be in the simplex \(\mathcal{S}^{n}\) (as per Definition 2.2), we approach this problem in two different ways. We describe each of them next and compare their performances later. **Accumulated Least Squares Error (ALSE) Minimization** In this approach, we take into account all the realizations of the stochastic ensemble approximations \(\mathcal{X}_{0},\mathcal{X}_{f}\), and \(\mathcal{A}\) all at once. Then we compute the associated averaging weights from least squares error minimization and finally use these weights for the computed control. Next, we provide a closed form solution to the least squares error minimization problem by exploiting the entries of the vectors in Lemma 4.1. The samples produced in Lemma 4.1 have a very particular structure to them. In fact, they have a non-zero entry in one component and have _zeros_ everywhere else (_i.e._ their support is a singleton set). Hence, the whole problem boils down to adjusting the weights individually for the components in order to minimize the norm distance of the weighted sum of the samples and the vector. We provide this in the next result. **Lemma 4.4**: _(Least square error problem solution for singleton support samples). Consider a vector \(\mathbf{w}\in\mathbb{R}^{n}\setminus\{\mathbf{0}\}\). Let \(\overline{\mathbf{w}}:=(1/\gamma(\mathbf{w}))\mathrm{abs}(\mathbf{w})\) and suppose \(\{\mathbf{s}^{(1)},\cdots,\mathbf{s}^{(N)}\}\) is a set of \(N\) sample vectors produced using the sampling scheme in Lemma 4.1. Let \(\mathbf{S}\) be the matrix whose \(i^{th}\) column is \(\mathbf{s}^{(i)}\). Then a solution to_ \[\min_{\mathbf{y}\in S^{N}}\;\|\mathbf{S}\mathbf{y}-\mathbf{w}\|_{2}^{2}\,, \tag{4.13}\] _is given by \(\mathbf{y}^{*}\) with \([\mathbf{y}^{*}]_{j}=\alpha_{j}/n_{j}\), where \(\alpha_{j}=|[\mathbf{w}]_{k}|\) with \(k\) such that \([\mathbf{s}^{(j)}]_{k}\neq 0\) and \(n_{j}=|\{i\in\{1,\ldots,N\}\,|\,[\mathbf{s}^{(i)}]_{k}\neq 0\}|\)._ Note that the problem is convex in \(\mathbf{y}\) and that the set \(\{1,\ldots,n\}\) can be split into two disjoint sets \(\mathcal{D}_{1}:=\{p\in\{1,\ldots,n\}\,|\,\exists\,q\,\mathrm{s.t.}\,[\mathbf{ s}^{(q)}]_{p}\neq 0\}\) and \(\mathcal{D}_{2}:=\{1,\ldots,n\}\setminus\mathcal{D}_{1}\). 
Now, since the support of \(\mathbf{s}^{(i)}\) is a singleton, the problem can be rewritten as \[\min_{\mathbf{y}\in S^{N}}\;\sum_{i\in\mathcal{D}_{1}}(\sum_{j\in s(s)}[ \mathbf{s}^{(j)}]\mathbf{y}_{j}-\mathbf{w}_{i})^{2}+\sum_{i\in\mathcal{D}_{2 }}\mathbf{w}_{i}^{2}:J(\mathbf{y})\,,\] where \(s(i):=\{p\in\{1,\ldots,N\}\,|\,[\mathbf{s}^{(p)}]_{i}\neq 0\}\). It is easy to see that for the solution \(\mathbf{y}^{*}\) in the claim, the value of the cost function is \(J(\mathbf{y}^{*})=\sum_{i\in\mathcal{D}_{2}}\mathbf{w}_{i}^{2}\). As this is the minimum value of \(J(\cdot)\), the proof is complete. It is worth noting that if the set \(\mathcal{D}_{2}\) in the proof of Lemma 4.4 becomes empty, then \(J(\mathbf{y}^{*})=0\). This means that if the samples are rich enough so that all the non-zero entries of the original vector appear at least once in the set \(\{\mathbf{s}^{(1)},\cdots,\mathbf{s}^{(N)}\}\), then the weights assigned using the least squares problem recreates the original vector perfectly. This does not translate to the case for matrices as then the associated weights to the matrix samples are coupled using multiple non-zero entries across the matrix, as illustrated in Section 5. **Streaming Least Squares Error (SLSE) Minimization** In this case, the controls are attained from sequentially changing the realizations of the vectors and matrices. For the uniform weights case, the solution can be made better by adding more samples. For the ALSE minimization case, the weights have to be determined beforehand and then the controls computed, since we do not assign the same weights to each sample. Moreover, as the number of samples increases, the weights assigned to the previous samples need to be recomputed in order to satisfy (4.13). However for the SLSE minimization case, the weights to the controls can be updated sequentially in order to construct a sub-optimal solution to (4.13). Essentially, suppose \(\mathbf{E}_{1}\) and \(\mathbf{E}_{2}\) are estimates of \(\mathbf{E}^{*}\), with \(\mathbf{E}_{1},\mathbf{E}_{2}\) and \(\mathbf{E}^{*}\) being comparable entities from \(\{\mathcal{X}_{0},\mathcal{X}_{f},\mathcal{A}\}\), _i.e._ they are arbitrary elements in the same sequence. Then we solve for \[\min_{(w_{1},w_{2})\in S^{2}}\|w_{1}\mathbf{E}_{1}+w_{2}\mathbf{E}_{2}-\mathbf{ E}^{*}\|_{2}^{2}\,. \tag{4.14}\] The computed controls are updated using the same weights. We explain this in Algorithm 4.1. **Algorithm 4.1**: Sequential control computation with least square error estimation ``` 1:\(\mathcal{X}_{0}\), \(\mathcal{X}_{f}\), \(\mathcal{A}\), \(\mathbf{x}_{0}\), \(\mathbf{x}_{f}\), \(\mathbf{A}\),\(\mathbf{B}\), errorBound 2:control \(\mathbf{v}(k)\) 3:error\(\leftarrow\infty\); \(\quad\widehat{\mathbf{x}}_{0}\leftarrow\widehat{\mathbf{x}}(\sigma,0)\in \mathcal{X}_{0}\); 4:\(\widehat{\mathbf{x}}_{f}\leftarrow\mathbf{x}_{f}(s_{f}(\sigma))\in\mathcal{X} _{f}\); \(\quad\widehat{\mathbf{A}}\leftarrow\widehat{\mathbf{A}}(s_{A}(\sigma,1))\in \mathcal{A}\); 5:\(\mathbf{v}\leftarrow\widehat{\mathbf{v}}\) such that \(\widehat{\mathbf{x}}_{f}=\widehat{\mathbf{A}}\widehat{\mathbf{x}}_{0}+ \widehat{\mathbf{B}}\widehat{\mathbf{v}}\) 6:whileerror \(>\) errorBound do 7: Randomly choose \(i\in\{1,2,3\}\) 8:if\(i=1\)then 9: update \(\widehat{\mathbf{x}}_{0}\) 10:\((w_{1}^{*},w_{2}^{*})\leftarrow\) solution of (4.14) with \(\mathbf{E}_{1}=\) old \(\widehat{\mathbf{x}}_{0}\), \(\mathbf{E}_{2}=\) new \(\widehat{\mathbf{x}}_{0}\) and \(\mathbf{E}^{*}=\mathbf{x}_{0}\). 
11:elseif\(i=2\)then 12: update \(\widehat{\mathbf{x}}_{f}\) 13:\((w_{1}^{*},w_{2}^{*})\leftarrow\) solution of (4.14) with \(\mathbf{E}_{1}=\) old \(\widehat{\mathbf{x}}_{f}\), \(\mathbf{E}_{2}=\) new \(\widehat{\mathbf{x}}_{f}\) and \(\mathbf{E}^{*}=\mathbf{x}_{f}\). 14:elseif\(i=3\)then 15: update \(\widehat{\mathbf{A}}\) 16:\((w_{1}^{*},w_{2}^{*})\leftarrow\) solution of (4.14) with \(\mathbf{E}_{1}=\) old \(\widehat{\mathbf{A}}\), \(\mathbf{E}_{2}=\) new \(\widehat{\mathbf{A}}\) and \(\mathbf{E}^{*}=\mathbf{A}\). 17:endif 18:\(\mathbf{v}^{\prime}\leftarrow\widehat{\mathbf{v}}\) such that \(\widehat{\mathbf{x}}_{f}=\widehat{\mathbf{A}}\widehat{\mathbf{x}}_{0}+ \widehat{\mathbf{B}}\widehat{\mathbf{v}}\) 19:error \(\leftarrow\)\(\|(w_{1}^{*}-1)\mathbf{v}+w_{2}^{*}\mathbf{v}^{\prime}\|\) 20:\(\mathbf{v}\gets w_{1}^{*}\mathbf{v}+w_{2}^{*}\mathbf{v}^{\prime}\) 21:endwhile ``` We conclude this section by comparing the averaging methods provided earlier. ### Comparison between the Averaging Methods Since the weights obtained in the least squares error minimization are obtained from an optimization problem, we can compare their convergence rate with the uniform averaging scheme. **Lemma 4.5**: _(Comparison between averaging methods). Suppose \(\{\mathbf{E}_{i}\}_{i=1}^{N}\) are estimates of \(\mathbf{E}^{*}\), with \(\{\mathbf{E}_{i}\}_{i=1}^{N}\) and \(\mathbf{E}^{*}\) being comparable entities from \(\{\mathcal{X}_{0},\mathcal{X}_{f},\mathcal{A}\}\). Let \(\mathbf{F}_{\mathrm{uniform}}:=(1/N)\sum_{i=1}^{N}\mathbf{E}_{i}\). Let \(\mathbf{F}_{\mathrm{ALSE}}:=\sum_{i=1}^{N}w_{i}^{*}\mathbf{E}_{i}\) with \([w_{1}^{*},\cdots,w_{N}^{*}]\) being a solution of_ \[\min_{[w_{1},\cdots,w_{N}]\in S^{N}}\|\mathbf{E}^{*}-\sum_{i=1}^{N}w_{i} \mathbf{E}_{i}\|_{2}^{2}\,. \tag{4.15}\] _Finally, let \(\mathbf{F}_{\mathrm{SLSE}}:=\mathbf{F}(N)\) where \(\mathbf{F}(N)\) is a solution to the difference equation \(\mathbf{F}(n)=v_{1}^{*}\mathbf{F}(n-1)+v_{2}^{*}\mathbf{F}(n-2)\) starting from \(\mathbf{F}(1)=\mathbf{E}_{1},\mathbf{F}(2)=\mathbf{E}_{2}\) and where \(v_{i}^{*}\) comes from Algorithm 4.1. Then,_ \[\|\mathbf{E}^{*}\!-\!\mathbf{F}_{\mathrm{ALSE}}\!\|\!\leq\!\!\|\mathbf{E}^{*}\!- \!\mathbf{F}_{\mathrm{SLSE}}\!\|\!\!\leq\!\!\|\mathbf{E}^{*}\!-\!\mathbf{F}_{ \mathrm{uniform}}\|\,. \tag{4.16}\] Proof.: Note that \([v_{1}^{*},\cdots,v_{N}^{*}]\) is a feasible solution to (4.15). This results in the first inequality. The second inequality holds since \((1/N)\mathbf{1}\) is a feasible solution to the optimization problem in Algorithm 4.1. ## 5 Simulations and Analysis In this section, we provide simulation examples to validate our results in two scenarios. In the first case, we verify Lemma 4.5 in a power systems example. In the second case, we study the error in the computed state for a system with large dimensions. We use CVX MATLAB toolbox [22] to solve the optimization problems. ### Error in Computed Control Here, we take an example system in the form of (2.1) from [23] using MATLAB's power toolbox. The system dimensions are \(n=10\) and \(m=5\) (with \(\mathbf{A}\in\mathbb{R}^{n\times n}\), \(\mathbf{B}\in\mathbb{R}^{n\times m}\)) and we choose the initial condition randomly. First, we verify the ordering in estimation errors for the different averaging methods in Figure 1. This is in accordance with (4.16). 
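The ordering in (4.16) can also be illustrated on a small vector-estimation example: realizations are drawn with the sampling scheme of Lemma 4.1, and the ALSE weights are obtained by solving the simplex-constrained problem (4.15) numerically. The sketch below uses `cvxpy` as a stand-in for the CVX toolbox mentioned above, and the vector `w_true` is hypothetical data:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)

def sample_realizations(w, N, rng):
    # Vector sampling scheme of Lemma 4.1 (singleton-support realizations).
    g = np.sum(np.abs(w))
    idx = rng.choice(len(w), size=N, p=np.abs(w) / g)
    S = np.zeros((N, len(w)))
    S[np.arange(N), idx] = g * np.sign(w[idx])
    return S

w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])      # hypothetical E*
N = 40
S = sample_realizations(w_true, N, rng)

# Uniform averaging (F_uniform in Lemma 4.5).
f_uniform = S.mean(axis=0)

# ALSE weights: solve the simplex-constrained problem (4.15) numerically.
y = cp.Variable(N, nonneg=True)
cp.Problem(cp.Minimize(cp.sum_squares(S.T @ y - w_true)),
           [cp.sum(y) == 1]).solve()
f_alse = S.T @ y.value

print("uniform error:", np.linalg.norm(w_true - f_uniform))
print("ALSE error:   ", np.linalg.norm(w_true - f_alse))
# Since the uniform weights are feasible for (4.15), the ALSE error can never
# exceed the uniform error, in line with (4.16); it drops to (numerically)
# zero once every non-zero coordinate of E* has been hit by some sample, as
# discussed after Lemma 4.4.
```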
Moreover, note that in Figure 1(a), it is possible to obtain _zero_ relative error for the initial condition estimation using ALSE Minimization and SLSE Minimization as per the discussion following Lemma 4.4. Next, we deal with the errors in the computed control and the computed state. The goal is to drive the state close to the origin in one time step. The actual control that does this was computed by taking \(\mathbf{u}_{\mathrm{actual}}=\mathbf{B}^{\dagger}(-\mathbf{A}\mathbf{x}_{0})\), where \(\mathbf{B}^{\dagger}\) denotes the Moore-Penrose pseudoinverse. Then the three different controls \(\mathbf{u}_{\mathrm{computed}}\) were computed using the methods listed in this paper (_i.e._ uniform average, ALSE minimization and SLSE minimization). The comparison of the relative error in computed control with respect to the number of samples is shown in Figure 2(a). Note that the \(\mathbf{u}_{\mathrm{computed}}\) for the ALSE minimization case was computed after considering all the samples. We also computed the relative error in the computed state with respect to the number of samples and present the results in Figure 2(b). It is worth mentioning that the errors in the computed control and the computed states do not always follow the same ordering as the estimation errors of the initial condition and the system matrix.

Figure 1: Relative error in estimation versus number of samples for the example in Section 5.1. (a) Relative error in the estimated initial condition \(\mathbf{x}_{0}\). (b) Relative error in the estimated system matrix \(\mathbf{A}\).

### Error in Computed State

Here, we simulate our algorithms in a trajectory tracking problem for a system with large dimensions. Notice that the sample averaging method works for one time step controls only. Thus, it can be used to track a predefined trajectory of states. For this example, we take the system dimensions as \(n=100\) and \(m=80\) and generate \(\mathbf{A}\in\mathbb{R}^{n\times n}\), \(\mathbf{B}\in\mathbb{R}^{n\times m}\), \(\mathbf{x}(0)\in\mathbb{R}^{n}\), \(\mathbf{x}(1)\in\mathbb{R}^{n}\), and \(\mathbf{x}(2)\in\mathbb{R}^{n}\) randomly. The matrices \(\mathbf{A}\) and \(\mathbf{B}\) are used as the system matrices and \(\mathbf{x}(0)\) is used as the initial condition. The states \(\mathbf{x}(1)\) and \(\mathbf{x}(2)\) are used as the reference trajectory. Similar to the previous case, for each \(t\in\{0,1\}\), the control for each realization \(\mathbf{A}^{(s)}\), \(\mathbf{x}^{(s)}(t)\), and \(\mathbf{x}^{(s)}(t+1)\) is computed as \(\mathbf{u}_{\mathrm{computed}}=\mathbf{B}^{\dagger}(\mathbf{x}^{(s)}(t+1)-\mathbf{A}^{(s)}\mathbf{x}^{(s)}(t))\). The relative error in the computed states with respect to the number of samples is given in Figure 3. Notice that (similar to what we observed in Section 5.1) the ordering given in Lemma 4.5, which holds for the estimation errors of the initial condition and the system matrix, may not hold for the computed states, as shown in Figure 3. Also, the errors in Figure 3 are much larger than the errors in Figure 2. This is because the dimensions in this case are much larger than in the previous case. Using the idea of convergence of the computed control in Lemma 3.2, we can give similar results for the convergence of the computed states, omitted due to the lack of space.

## 6 Conclusion

In this paper, we studied the reachability of a new class of stochastic ensemble systems, which are not required to have a continuum of ensembles. We provided a necessary condition for reachability under multiple time steps for the limiting DLTI system using approximate reachability for the stochastic ensemble system. We also provided a sufficient condition that averages the control obtained from the sample reachability condition for the ensemble system to find a control for the DLTI system. We provided convergence rates when the stochastic ensemble systems are produced using sampling. In the future, we will extend the sufficient condition to multi-time and continuous-time cases, and will study stochastic ensemble approximations of nonlinear systems.
2305.06100
Effect of spatially oscillating field on Schwinger pair production
The effect of spatially oscillating fields on electron-positron pair production is studied numerically and analytically when the work done by the electric field over its spatial extent is smaller than twice the electron mass. For large spatial scales, we further explain the characteristics of the position and momentum distributions via the tunneling time, the tunneling distance, and the energy gap between the positive and negative energy bands in the Dirac vacuum. Our results show that the maximum reduced particle number is about five times larger than the maximum number obtained for a non-oscillating field. Moreover, the pair production results obtained via the Dirac-Heisenberg-Wigner formalism can also be calculated using the local density approximation and an analytical approximation method when the spatial oscillating cycle number is large. Finally, in the case of a large spatial scale field, the position distribution of the created particles can be interpreted in terms of the tunneling time.
Orkash Amat, Li-Na Hu, Mamat Ali Bake, Melike Mohamedsedik, B. S. Xie
2023-05-10T12:41:43Z
http://arxiv.org/abs/2305.06100v1
# Effect of spatially oscillating field on Schwinger pair production ###### Abstract Effect of spatially oscillating fields on the electron-positron pair production is studied numerically and analytically when the work done by the electric field over its spatial extent is smaller than twice the electron mass. Under large spatial scale, we further explain the characteristics of the position and momentum distribution via tunneling time, tunneling distance and energy gap between the positive and negative energy bands in the Dirac vacuum. Our results show that the maximum reduced particle number is about five times by comparing to maximum number for non-oscillating field. Moreover, the pair production results via Dirac-Heisenberg-Wigner formalism can be also calculated by using local density approximation and analytical approximation method when spatial oscillating cycle number is large. Moreover, in case of large spatial scale field, the position distribution of created particles could be interpreted by the tunneling time. Introduction Schwinger effect is one of the fascinating nonperturbative phenomena in quantum electrodynamics (QED)[1; 2; 3; 4; 5]. This effect has not been yet observed directly in the laboratory as the critical field strength \(E_{cr}=m^{2}c^{3}/e\hbar\approx 1.3\times 10^{16}\mathrm{V/cm}\) (corresponding laser intensity is about \(4.3\times 10^{29}\mathrm{W/cm^{2}}\), where \(m\) and \(-e\) are the electron mass and charge) is not feasible so far[6; 7]. With the rapid development of the laser technology, however, the forthcoming laser intensity[8; 9] is expected to reach \(10^{24}-10^{26}\mathrm{W/cm^{2}}\) that has raised hopes of observing pair production in the future [10]. The effects of spacetime-dependent inhomogeneous fields on the pair production is an interesting issue for the studies in the strong field QED. We know that the different shapes of the spatial or temporal part of field have different effects on the pair production, e.g., temporal Sauter envelope[11], spatial Sauter envelope[12], temporal super-Gaussian envelope[13], spatial Gaussian envelope[14] etc. Besides, the influence of different field parameters are also important, e.g., frequency chirp effect[15; 16; 17] and phase effect[18]. For the non-plane wave background field, there are many studies for either some simple spatial inhomogeneous fields like of the cosine, Sauter and Gaussian shapes[12; 13; 14; 15; 16; 17] or some time-dependent fields with complicated temporal shapes [19; 20; 21; 22]. In Ref.[23; 24], the pair production in spatially oscillating fields is investigated, but spatial part is the cosine or sine function since effects of more complex spatial oscillation fields on pair production is still facing a theoretical challenging. For example, the momentum spectrum of the created pairs need more detailed examination for those complicated spatially oscillating inhomogeneous field [25; 26; 27]. Among them there is a special case, in which the work done by the electric field over its spatial extent (we denote it \(W\)) may be smaller than twice of the electron mass. In this special case, however, the previous studies [28; 29; 30; 31] are focused mainly on the fields with the Gaussian-like shapes. To our knowledge, an effect of more complicated spatially oscillating inhomogeneous field on the pair production is still lacking enough research, therefore, it is necessary to study the effect of spatially oscillating field in the specific \(W<2mc^{2}\). 
Under such circumstance, we intend to study and answer some of involving problems. For example, what will happen to the distribution of pair production? Will the local density approximation (LDA) be applied to it? Is there any connection to existed rigorous solution analytically? and so on. On the other hand, the tunneling picture has played a key role to the vacuum pair pro duction beside of the multiphoton mechanism for the temporal oscillating field when \(\hbar\omega\leqslant 2mc^{2}\), where \(\omega\) is the field frequency [28], it has been seen from the study for some interesting fields [32]. Thus, it would be also helpful to understand the results of present research from the view point of tunneling time and distance. Motivated by the factors mentioned above, therefore, in this paper, we investigate the vacuum pair production when \(W<2mc^{2}\). First we give a simple perspective picture which reveals mainly the tunneling process. We further demonstrate its correctness for the pair production via different numerical approaches and analytical approximation. We interpret why and how the electron-positron pair can leave out from the vacuum. Moreover, we present and discuss the exact tunneling distance and time of the created particle, which is employed to understand the characteristics of the position and momentum distribution of created pairs. Finally, we show a relationship between the position distribution and tunneling time by employing the worldline instanton (WI) approach. It is necessary to give a simple introduction to some of the widely used and powerful tools that can calculate particle distribution in spatial oscillating inhomogeneous fields. Among many methods to investigate pair production, as far as we know, some important ones include the WI technique [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48], the real time Dirac-Heisenberg-Wigner (DHW) formalism [49; 50; 51; 52; 53], the computational quantum field theory [54; 55; 56; 57; 58], the imaginary time method [59], the quantum Vlasov equation (QVE) [60; 61; 62; 63; 64], the Wentzel-Kramers-Brillouin (WKB) approach [65; 66; 67; 68], scattering matrix approach [69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81], and so on. In particular, the DHW formalism allows us to investigate the pair production for any background field, not limited to plane wave [13]. On the other hand, due to the complex nature of the pair production under spacetime-dependent field, one can find only a few analytical results for simple background modes [82; 83; 84; 85; 32]. Hence, we have to adopt numerical methods to study various natures of pair production. In present work, we will use the DHW formalism as numerical approach. The paper is organized as follows. In Sec. II, we briefly introduce the vacuum description of pair production process in spatial oscillating field. In Sec. III, numerical and analytical approaches are introduced. In Sec. IV, we derive the analytical solution of the tunneling time for large spatial scale via the WIF. In Sec. V, we give and discuss our numerical and analytical results, and interpret the momentum and position distributions of created pairs. Meanwhile, we show the relation between the tunneling time and position distribution, and explain the position distribution. Finally, the summary is given briefly in Sec. VI. We use the natural units (\(\hbar=c=1\)), throughout this paper, and express all quantities in terms of the electron mass \(m\). 
## II Pair production process in spatial oscillating field

In the Dirac semiclassical picture of the vacuum, an electron-positron pair can emerge from the vacuum after the particle jumps from a negative high-energy state at \(x_{+}=x+\Delta x\) to a positive low-energy state at \(x_{-}=x-\Delta x\) when the work \(W=e\int\mathbf{E}\cdot d\mathbf{x}<2mc^{2}\), as shown in Fig. 1. The electron position distribution depends on the center-of-mass coordinate \(x\) [49]; thus, we consider the above transition probability of the particle at \(x\) in order to interpret the position and momentum distributions of the electron. Moreover, we can obtain the transition (tunneling) distance \(d=x_{+}-x_{-}=2\Delta x\), see Fig. 1. Note that this quantum jumping process includes the tunneling process under an external spatially oscillating spacetime-dependent field. The energy gap between the two energy bands, \(\Delta E\), is transformed into the created pair's energy. From energy conservation of this process, \(\Delta E=E_{e^{-}}+E_{e^{+}}\), we can find the general relativistic relation under the pure external field (which does not include the ponderomotive force [31; 39; 29]) as

\[\left(\frac{\Delta E}{2}\right)^{2}=m_{*}^{2}+q^{2}, \tag{1}\]

where \(m_{*}\) is the effective mass [86] and \(q\) denotes a characteristic residual kinetic momentum of the outgoing particles from the tunneling process. According to our new perspective picture, the electron-positron pair can escape successfully from the vacuum by jumping from the negative high-energy state to the positive low-energy state when \(W<2mc^{2}\). This is why the electron-positron pairs can get out of the vacuum. In order to understand this better, we consider an example of a spacetime-dependent, spatially oscillating electric field. In our case, we ignore particle momenta orthogonal to the dominant field direction. We choose \(A^{\mu}\left(x,t\right)=\left(\phi\left(x\right)f(t),0,0,0\right)\). The electric field can be written in the following form

\[E\left(x,t\right)=\varepsilon E_{cr}g(x)f(t), \tag{2}\]

where \(g(x)=\cos\left(kx\right)e^{-\frac{x^{2}}{2\lambda^{2}}}\) and \(f(t)=\mathrm{sech}^{2}\left(\omega t\right)\), \(\varepsilon\) is the peak field strength, \(E_{cr}\) denotes the critical field strength, \(\lambda\) is the spatial scale, \(k\) is the spatial wave number, and \(\sigma_{\lambda}=k\lambda\) is the spatial oscillating cycle number. Accordingly, the corresponding scalar potential is

\[\phi(x)=-\frac{\sqrt{\pi}\lambda}{\sqrt{8}}e^{-\frac{k^{2}\lambda^{2}}{2}}\left(\mathrm{erf}\left(\frac{x-ik\lambda^{2}}{\sqrt{2}\lambda}\right)+\mathrm{erf}\left(\frac{x+ik\lambda^{2}}{\sqrt{2}\lambda}\right)\right). \tag{3}\]

To compare fields with different \(k\), \(\lambda\) and \(\omega\) on an equal footing, we consider external fields with the same total energy in spacetime. We define \(\varepsilon_{0}=\varepsilon=0.5\) for the \(k=0\) case. The total energy of the external field in the same 2-space volume \(V_{2}\) can be written as

\[\mathcal{E}=\frac{V_{2}}{2}\int\int E^{2}(x,t)dxdt=\text{constant}. \tag{4}\]

If we fix \(\omega\) throughout this work, we can obtain the peak field strength for different \(k\) as

\[\varepsilon=\sqrt{\frac{2}{1+e^{-k^{2}\lambda^{2}}}}\varepsilon_{0}. \tag{5}\]

In Fig. 2, areas A and B represent the \(W\geqslant 2mc^{2}\) and \(W<2mc^{2}\) cases, respectively.
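As a quick numerical check of the equal-energy normalization in Eqs. (4)-(5), the short Python sketch below integrates \(g^{2}(x)\) for the Gaussian-modulated cosine profile and compares the resulting peak field strength with the closed form \(\varepsilon=\sqrt{2/(1+e^{-k^{2}\lambda^{2}})}\,\varepsilon_{0}\). This is our own illustrative script (natural units; the parameter values are chosen only for the check), not code from the paper.

```python
# Sketch: verify the equal-energy normalization (5) for E(x,t) = eps * E_cr * g(x) * f(t).
import numpy as np

def g(x, k, lam):
    return np.cos(k * x) * np.exp(-x**2 / (2.0 * lam**2))

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def eps_closed_form(k, lam, eps0=0.5):
    # Eq. (5): chosen so that the spatial energy integral of eps^2 * g^2 matches the k = 0 case.
    return np.sqrt(2.0 / (1.0 + np.exp(-(k * lam)**2))) * eps0

def eps_numerical(k, lam, eps0=0.5):
    x = np.linspace(-12.0 * lam, 12.0 * lam, 200001)
    I_k = trapezoid(g(x, k, lam)**2, x)
    I_0 = trapezoid(g(x, 0.0, lam)**2, x)
    return np.sqrt(I_0 / I_k) * eps0

for k, lam in [(0.0, 10.0), (0.1, 10.0), (0.1, 300.0), (0.3, 10.0)]:
    print(k, lam, eps_closed_form(k, lam), eps_numerical(k, lam))
# Both columns agree; for k * lam >> 1 the peak strength saturates at sqrt(2) * eps0.
```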
Note that the vacuum pair production in the area A is well understood in many previous works [29; 30; 31], but the pair creation in area B need to further study. Hence, in the present paper we would like to focus specifically in the area B to investigate the vacuum pair production when \(W<2mc^{2}\). ## III Numerical and analytical methods ### DHW formalism For studying the electron-positron pair production of vacuum in the background fields we write the Lagrangian [30] \[\mathcal{L}\Big{(}\Psi,\bar{\Psi},A\Big{)}=\frac{i}{2}\Big{(}\bar{\Psi}\gamma^ {\mu}\mathcal{D}_{\mu}\Psi-\bar{\Psi}\mathcal{D}_{\mu}^{i}\gamma^{\mu}\Psi \Big{)}-m\bar{\Psi}\Psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \tag{6}\] Figure 1: Plot for the Dirac vacuum under high spatial oscillating spacetime-dependent electric field, where \(E_{\pm}(x)=\pm mc^{2}+\phi(x)\) when \(W<2mc^{2}\). The magenta and blue lines denote the \(E_{+}(x)\) (minimum positive energy band) and \(E_{-}(x)\) (maximum negative energy band) for the same center-of-mass coordinate \(x\) respectively. where \(\mathcal{D}_{\mu}=\left(\partial_{\mu}+ieA_{\mu}\right)\) is the covariant derivative and correspondingly \(\mathcal{D}_{\mu}^{\dagger}=\left(\overset{\leftarrow}{\partial_{\mu}}-ieA_{ \mu}\right)\). In order to describe the dynamics of the particles, we need the Dirac equation \[\left(i\gamma^{\mu}\partial_{\mu}-e\gamma^{\mu}A_{\mu}(r)-m\right)\Psi(r)=0, \tag{7}\] and the adjoint Dirac equation \[\bar{\Psi}(r)\left(i\gamma^{\mu}\overset{\leftarrow}{\partial_{\mu}}+e \gamma^{\mu}A_{\mu}(r)+m\right)=0. \tag{8}\] The Dirac spinors \(\Psi\), \(\bar{\Psi}\) and the vector potential \(A^{\mu}\left(r\right)\) are the main ingredients in the DHW formalism. Note that, the background is considered to be a classical one. Further, we introduce the density operator as [30] \[\hat{\mathcal{C}}_{\alpha\beta}\left(r,s\right)=\mathcal{U}\left(A,r,s\right) \,\left[\bar{\Psi}_{\beta}\left(r-s/2\right),\Psi_{\alpha}\left(r+s/2\right) \right], \tag{9}\] where \(r\), \(s\) denotes the center-of-mass and the relative coordinate of two particles. The Wilson line factor is used to make the density operator gauge invariant under the \(U(1)\) gauge: \[\mathcal{U}\left(A,r,s\right)=\exp\left(\mathrm{i}es\int_{-1/2}^{1/2}d\xi A \left(r+\xi s\right)\right). \tag{10}\] In order to perform numerical calculations, we use the DHW as the powerful tool in our study due to it allows us to investigate vacuum pair production for inhomogeneous field [49]. This method is well applied to the case of one spatial dimension, in which there are only four Wigner components, \(\mathbb{S}\), \(\mathbb{V}_{0}\), \(\mathbb{V}_{x}\) and \(\mathbb{P}\) for electric field \(E(x,t)\). The DHW equations of motion in this case of 1+1 can be written as [29] \[D_{t}\mathbb{S}-2p_{x}\mathbb{P}=0, \tag{11}\] \[D_{t}\mathbb{V}_{0}+\partial_{x}\mathbb{V}_{x}=0,\] (12) \[D_{t}\mathbb{V}_{x}+\partial_{x}\mathbb{V}_{0}=-2\mathbb{P},\] (13) \[D_{t}\mathbb{P}+2p_{x}\mathbb{S}=2m\mathbb{V}_{x}, \tag{14}\] where \[D_{t}=\partial_{t}+e\int_{-1/2}^{1/2}d\xi E_{x}(x+i\xi\partial_{p_{x}},t) \partial_{p_{x}}. \tag{15}\] In order to perform simulation more conveniently, we can define 4 Wigner components as \(\mathbb{W}_{0}=\mathbb{S}\), \(\mathbb{W}_{1}=\mathbb{V}_{0}\), \(\mathbb{W}_{2}=\mathbb{V}_{x}=\mathbb{V}\) and \(\mathbb{W}_{3}=\mathbb{P}\). 
So from the initial conditions given by the vacuum solution for single particle \[\mathbb{S}_{vac}\left(p_{x}\right)=-\frac{2}{\sqrt{1+p_{x}^{2}}},\ \mathbb{V}_{vac} \left(p_{x}\right)=-\frac{2p_{x}}{\sqrt{1+p_{x}^{2}}}, \tag{16}\] we have the modified Wigner components as \[\mathbb{W}_{i}^{v}=\mathbb{W}_{i}-\mathbb{W}_{i,vac}. \tag{17}\] Finally, we can calculate the particle number density at asymptotic times \(t_{f}\rightarrow+\infty\) \[n\left(x,p_{x},t\rightarrow+\infty\right)=\frac{\mathbb{S}^{v}+p_{x}\mathbb{ W}_{x}^{v}}{\sqrt{1+p_{x}^{2}}}. \tag{18}\] The momentum and position distributions are given by \[n\left(p_{x},t\rightarrow+\infty\right) =\int\frac{dx}{2\pi}n\left(x,p_{x},t\rightarrow+\infty\right), \tag{19}\] \[n\left(x,t\rightarrow+\infty\right) =\int dp_{x}n\left(x,p_{x},t\rightarrow+\infty\right). \tag{20}\] And the total particle number is readily got as \[N(t\rightarrow+\infty)=\int\mathrm{d}p_{x}n\left(p_{x},t\rightarrow+\infty \right). \tag{21}\] It is worthy to point out that for the convenient comparison we should cope with the reduced quantities \(\bar{n}(p_{x},t\rightarrow+\infty)=n(p_{x},t\rightarrow+\infty)/\lambda\) and \(\bar{N}(t\rightarrow+\infty)=N(t\rightarrow+\infty)/\lambda\) under the same energy. ### Lda When the spatial variation scale is much larger than the Compton wavelength \(\lambda\gg\lambda_{C}\), the vacuum pair production can be locally described by fixing point \(x\)[29]. If we assume \(E\left(x,\ t\right)=\varepsilon E_{cr}g\left(x\right)h\left(t\right)\) and can \(\varepsilon\left(x\right)=\varepsilon E_{cr}g\left(x\right)\) as effective field strength, where \(h\left(t\right)\) is an arbitrary time-dependent function. We can obtain momentum and positron distributions by summing results for homogeneous fields with different field strengths given as [29] \[\bar{n}_{loc}\left(p_{x},t\rightarrow+\infty\right) =\int\frac{dx}{2\pi}\bar{n}_{loc}\left(\varepsilon\left(x\right) \left|p_{x},t\rightarrow+\infty\right), \tag{22}\] \[\bar{n}_{loc}\left(x,t\rightarrow+\infty\right) =\int dp_{x}\bar{n}_{loc}\left(\varepsilon\left(x\right)\left|p_ {x},t\rightarrow+\infty\right). \tag{23}\] \(\bar{n}_{loc}\left(\varepsilon\left(x\right)\left|p_{x},t\rightarrow\infty\right)\) can be found by using quantum kinetic theory at any fixed point \(x\) for a time-depend electric field \(E\left(t\right)=\varepsilon\left(x\right)h\left(t\right)\)[29]. We choose time-dependent spatial homogeneous DHW method to find one particle distribution [90], and calculate the LDA result. 
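For the spatially homogeneous building block that the LDA requires, the 1+1 DHW system (11)-(14) reduces to ordinary differential equations along the characteristics \(dp_{x}/dt=eE(t)\), with the vacuum initial conditions (16) and the asymptotic density (18). Below is a rough, unoptimized sketch of that reduction for a single Sauter pulse in natural units (\(m=e=1\), \(E_{cr}=1\)); the mild parameter values are our own choice to keep the integration cheap, and this is not the solver used in the paper.

```python
# Sketch: one-particle distribution for a homogeneous Sauter pulse E(t) = eps * sech^2(w t)
# from the 1+1 DHW equations (11)-(14) along characteristics (illustrative only, m = e = 1).
import numpy as np
from scipy.integrate import solve_ivp

eps, omega = 0.3, 0.5        # weak, fast pulse so the ODE integration stays cheap (assumed values)
t0, t1 = -60.0, 60.0

def kinetic_p(t, q):
    # dp/dt = e E(t)  =>  p(t) = q + (eps/omega) * (tanh(omega t) + 1), so p(t0) ~ q
    return q + (eps / omega) * (np.tanh(omega * t) + 1.0)

def n_out(q):
    def rhs(t, y):
        S, V, P = y
        p = kinetic_p(t, q)
        # Eqs. (11), (13), (14) with all x-derivatives dropped; V_0 decouples (Eq. (12)).
        return [2.0 * p * P, -2.0 * P, 2.0 * V - 2.0 * p * S]
    p_in = kinetic_p(t0, q)
    Om_in = np.sqrt(1.0 + p_in**2)
    y0 = [-2.0 / Om_in, -2.0 * p_in / Om_in, 0.0]      # vacuum solution, Eq. (16)
    sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12)
    S, V, _ = sol.y[:, -1]
    p_out = kinetic_p(t1, q)
    Om_out = np.sqrt(1.0 + p_out**2)
    S_v = S + 2.0 / Om_out                              # Eq. (17): subtract vacuum components
    V_v = V + 2.0 * p_out / Om_out
    return (S_v + p_out * V_v) / Om_out                 # Eq. (18)

for q in (-1.0, -0.5, 0.0):
    print(q, n_out(q))        # small occupation numbers n(q) for this weak pulse
```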
The one-particle momentum distribution function \(n(\mathbf{p},t)\) can be obtained by solving the following ten ordinary differential equations and the nine auxiliary quantities \(\mathsf{V}_{i}(\mathbf{p},t):=\mathbb{V}_{i}(\mathbf{p},t)\), \(\mathsf{A}_{i}(\mathbf{p},t):=\mathbb{A}_{i}(\mathbf{p},t)\) and \(\mathsf{T}_{i}(\mathbf{p},t):=\mathbb{T}_{i}(\mathbf{p},t)\)[90]: \[\dot{n} =\frac{e}{2\Omega}\ \mathbf{E}\cdot\mathsf{V}, \tag{24}\] \[\dot{\mathsf{V}} =\frac{2}{\Omega^{3}}\left(\left(e\mathbf{E}\cdot\mathbf{p} \right)\mathbf{p}-e\Omega^{2}\mathbf{E}\right)\left(n-1\right)-\frac{\left(e \mathbf{E}\cdot\mathsf{V}\right)\mathbf{p}}{\Omega^{2}}-2\mathbf{p}\times \mathsf{A}-2m\mathsf{T},\] \[\dot{\mathsf{A}} =-2\mathbf{p}\times\mathsf{V},\] \[\dot{\mathsf{T}} =\frac{2}{m}\left[m^{2}\mathsf{V}+\left(\mathbf{p}\cdot\mathsf{V }\right)\mathbf{p}\right].\] Initial condition values are selected as \(n(\mathbf{p},-\infty)=\mathsf{V}(\mathbf{p},-\infty)=\mathsf{A}(\mathbf{p},- \infty)=\mathsf{T}(\mathbf{p},-\infty)=0\) in order to perform the calculation. We can further obtain the one-particle momentum distribution \(n(\mathbf{p},t)\). It is worthy to note that, in our simulation, the Runge-Kutta method of 8(5, 3) order is used in order to avoid unphysical results during numerical calculation, in which we used \(\mathrm{RelTol}=\mathrm{AbsTol}=10^{-10}\) (where we have specified a relative RelTol as well as an absolute error tolerance AbsTol). In order to calculate the various distribution with high accuracy, the lattice sizes have been set to \(N_{x}\times N_{p_{x}}=8192\times 4096\) and \(N_{x}\times N_{p_{x}}=16384\times 4096\) for low and high spatial oscillating field. Note that \(\mathbf{p}\) (\(t=-\infty\)) \(=\mathbf{p}\) (\(t=+\infty\)) \(=\mathbf{q}\) in the time-depended Sauter pulse. ### Analytical approximation for large spatial scale The result can be obtained analytically by replacing the field strength \(\varepsilon\) in the analytical one-particle distribution solution with an effective field strength \(\varepsilon\left(x\right)=\varepsilon E_{cr}g(x)\) in spacetime-depended field when the field spatial scale is large. For example, we can get the analytical solution explicitly for \(E\left(x,t\right)=\varepsilon E_{cr}g\left(x\right)\operatorname{sech}^{2}\left( \omega t\right)\) by replacing \(\varepsilon\) in the QVE solution for \(E\left(t\right)=\varepsilon\operatorname{sech}^{2}\left(\omega t\right)\)[89] with \(\varepsilon\left(x\right)\) as (refer to Eq. (99) of Ref. [84]) \[n\left(x,p_{x},t\rightarrow+\infty\right)=\frac{2\sinh\left(\frac{\pi}{2 \omega}\big{[}\frac{2e}{\omega}\varepsilon\left(x\right)+\tilde{Q}\left(x \right)-Q\left(x\right)\big{]}\right)\sinh\left(\frac{\pi}{2\omega}\big{[} \frac{2e}{\omega}\varepsilon\left(x\right)-\tilde{Q}\left(x\right)+Q\left(x \right)\big{]}\right)}{\sinh\left(\frac{\pi}{\omega}\tilde{Q}\left(x\right) \right)\sinh\left(\frac{\pi}{\omega}Q\left(x\right)\right)}, \tag{25}\] where \[\tilde{Q}\left(x\right)=\sqrt{m^{2}+p_{\perp}^{2}+\left(p_{x}+\frac{2e}{ \omega}\varepsilon\left(x\right)\right)^{2}}, \tag{26}\] \[Q\left(x\right)=\sqrt{m^{2}+p_{\perp}^{2}+\left(p_{x}-\frac{2e}{\omega} \varepsilon\left(x\right)\right)^{2}}. \tag{27}\] In the following we denote this treatment as the analytical approximation (AA) for large spatial scale approach. 
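Once a homogeneous-field distribution \(n(\varepsilon\,|\,p_{x})\) is available (from the quantum kinetic system above, from the 1+1 reduction sketched earlier, or from the closed form (25)), the LDA and AA reductions are just weighted \(x\)-integrals. The sketch below wires up that bookkeeping for Eqs. (22)-(23); to stay self-contained it uses a crude stand-in for \(n(\varepsilon\,|\,p_{x})\), a locally-constant-field-style exponential of our own making that is purely for illustration and must be replaced by the actual homogeneous solution in any real calculation.

```python
# Sketch: LDA/AA reduction of Eqs. (22)-(23) given a homogeneous-field distribution n(eps | px).
import numpy as np

def eps_local(x, k=0.1, lam=300.0, eps=0.5 * np.sqrt(2.0)):
    return eps * np.cos(k * x) * np.exp(-x**2 / (2.0 * lam**2))   # eps(x) = eps * E_cr * g(x), E_cr = 1

def n_hom_toy(eps_abs, px, omega=0.1):
    # Stand-in ONLY: Schwinger-like local factor with an ad hoc momentum profile.
    # Replace with the homogeneous DHW/QKT solution (or Eq. (25)) for quantitative work.
    eps_abs = np.maximum(eps_abs, 1e-12)
    return np.exp(-np.pi / eps_abs) * np.exp(-(px / (2.0 * eps_abs / omega))**2)

def lda_reduce(n_hom, x, px, lam):
    X, PX = np.meshgrid(x, px, indexing="ij")
    n_xp = n_hom(np.abs(eps_local(X)), PX)
    dx, dpx = x[1] - x[0], px[1] - px[0]
    n_p = n_xp.sum(axis=0) * dx / (2.0 * np.pi)     # Eq. (22): momentum distribution
    n_x = n_xp.sum(axis=1) * dpx                    # Eq. (23): position distribution
    N = n_p.sum() * dpx
    return n_p / lam, n_x, N / lam                  # reduced quantities n(p)/lambda and N/lambda

x = np.linspace(-1200.0, 1200.0, 4001)
px = np.linspace(-15.0, 15.0, 601)
n_p_red, n_x, N_red = lda_reduce(n_hom_toy, x, px, lam=300.0)
print("reduced total particle number:", N_red)
```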
## IV Tunneling time for large spatial scale To interpret explicitly the features of the position distribution, one needs to introduce tunneling time for spacetime-depended inhomogeneous field when the spatial variation scale is much larger than the Compton wavelength \(\lambda\gg\lambda_{C}\) (slowly-varying-envelope approximation). The relationship between tunneling time in the Minkowski space and Euclidean space has been investigated in Ref. [32]. \(T_{t}\) is the tunneling time for any spacetime-depended inhomogeneous fields, \(x_{\pm}\) are the classical turning points, \(x_{4}^{min}\) and \(x_{4}^{max}\) are maximum and minimum of the fourth WI path \(x_{4}\) in the Euclidean space, \(\phi(x)\) is the potential of the field. From Ref. [32], we can know the definition of the tunneling time, the time taken by the particle from \(x_{-}\) to \(x_{+}\) in the barrier region is tunneling time (quantum tunneling time). Further, we can achieve the tunneling time easily by performing Wick-rotation in order to simplify the path integral via \(x_{4}=it\), see Refs. [12]. The tunneling time can be written as [32] \[T_{t}=2\left(x_{4}^{max}-x_{4}^{min}\right). \tag{28}\] If we choose the single-pulse time-dependent electric background \[E(t)=E\mathrm{sech}^{2}(\omega t). \tag{29}\] The WI paths can be obtained in the scalar or spinor QED [12] \[x_{3}(u) =\frac{m}{eE}\frac{1}{\gamma\sqrt{1+\gamma^{2}}}\mathrm{arcsinh} \left[\gamma\cos\left(2\pi u\right)\right], \tag{30}\] \[x_{4}(u) =\frac{m}{eE}\frac{1}{\gamma}\arcsin\left[\frac{\gamma}{\sqrt{1+ \gamma^{2}}}\sin\left(2\pi u\right)\right], \tag{31}\] where \(u=\tau/T\), in which \(\tau\) and \(T\) denote the proper-time and period of the WI path, \(\gamma=m\omega/eE\). Due to lack of analytical solution, it is hard to obtain the analytical solution of the tunneling time directly under spacetime-depended inhomogeneous field. When the spatial variation scale is much larger than the Compton wavelength \(\lambda\gg\lambda_{C}\), we can use spatial slowly-varying-envelope approximation. Thus, the tunneling time for spacetime-dependent field can be locally described by replacing the \(\varepsilon\) in the analytical tunneling time solution under time-dependent field with \(\varepsilon(x)=\varepsilon E_{cr}g(x)\). The tunneling time for our field could be obtained as \[T_{t}\left(x\right)=\frac{2}{\omega}\mathrm{arcsin}\left[\frac{ \gamma(x)}{\sqrt{1+\gamma^{2}(x)}}\right], \tag{32}\] where \[\gamma(x)=\frac{m\;\omega}{e|\varepsilon\left(x\right)|}. \tag{33}\] Note that the tunneling time was obtained by using the WI technique [12], and obviously it is based on the Bohm viewpoint [91]. ## V Results Now, we begin to prove the correctness of our perspective picture by adopting numerical and analytical methods. Here we use the DHW formalism, LDA and AA approaches. The reduced total particle number achieved by the DHW, LDA and AA approaches is shown as Fig. 3. Our results show that the maximum reduced particle number is about five times by comparing to that of [29], meanwhile, the maximum number corresponds to the parameter regime that belongs to area A. Interestingly, however, the particle number when \(W<2mc^{2}\) is still larger than normal case (\(k=0\)) in Fig. 3. On the other hand, the particle number when \(k\neq 0\) is always larger than that when \(k=0\), by the way, in which and area A, we would recover the result to the Fig.2 of Ref. [29]. 
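The local tunneling-time estimate of Eqs. (32)-(33), which is used below to interpret the position distribution, is simple to evaluate for the field profile of Eq. (2). The short sketch does this in natural units (\(m=e=1\), \(E_{cr}=1\)); the parameter values follow the example discussed around Figs. 4-5, and the algebraic rewriting of \(\gamma/\sqrt{1+\gamma^{2}}\) is ours, used only to avoid dividing by zero at the nodes of the field.

```python
# Sketch: local tunneling time T_t(x) of Eqs. (32)-(33) for the field profile of Eq. (2).
import numpy as np

def tunneling_time(x, k=0.1, lam=300.0, eps=0.5 * np.sqrt(2.0), omega=0.1):
    eps_x = eps * np.cos(k * x) * np.exp(-x**2 / (2.0 * lam**2))   # effective field strength eps(x)
    # gamma / sqrt(1 + gamma^2) rewritten as omega / sqrt(eps_x^2 + omega^2) (m = e = 1),
    # which stays finite at the zeros of the field where gamma(x) diverges.
    ratio = omega / np.sqrt(eps_x**2 + omega**2)
    return (2.0 / omega) * np.arcsin(ratio)

x = np.linspace(-300.0, 300.0, 2001)
Tt = tunneling_time(x)
# Minima of T_t sit near x = M*pi/k and maxima near x = (1/2 + M)*pi/k, mirroring Fig. 5;
# the maxima approach pi/omega where the local field vanishes.
print(Tt.min(), Tt.max(), np.pi / 0.1)
```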
Furthermore, the total particle distributions and maxima obtained from the three different approaches are approximately the same for appropriate \(\lambda\). For example, when \(k=0.1m\), the results of the three approaches are approximately the same for \(5m^{-1}\leqslant\lambda\leqslant+\infty\). This means that the spatial oscillating effect on the pair production can be tender when spatial oscillating cycle number is larger than two so that the LDA and AA are valid. Although the approximate same reduced total particle number is obtained by the different three methods of DHW, LDA and AA, it does not mean that the created pair experiences the same physical process. To see their differences, the momentum distribution is plotted in Fig. 4(a). While different approaches have different momentum distributions, they have almost the same area, which leads to the same total particle numbers approximately. Now we can interpret it by using our perspective picture mentioned in Sec. II, for example, the momentum corresponding to the maximum of the momentum distribution. As shown in Fig. 4(b), one notes that the electron in the negative energy state jumps from the point A to the point B, and during this process, the energy gap \(\Delta E\) between A and B points would transform the energy to the created pair. At the same time, the transition probability of the electron is the largest because the transition probability is proportional to the energy gap \(\Delta E\) and inversely proportional to the transition, i.e. tunneling, distance Figure 3: Plot of the reduced total particle number for the DHW (solid lines), LDA (dashed lines) and AA (dotes) approaches with various \(k\) and \(\lambda\) when \(\omega=0.1\ m\). \(d=x_{B}-x_{A}=2\Delta x=\pi/k\), where \(x_{B}\) and \(x_{A}\) are the positions of \(A\) and \(B\) points in Fig. 4(b). Thus, the maximal energy gap could be found as \(\Delta E=E_{B}-E_{A}\approx 12.1014\ m\), where \(E_{B}\) and \(E_{A}\) denotes the energy for \(A\) and \(B\) points in Fig. 4(b), respectively. Then we can obtain \(p\approx\pm 5.96749\ m\) appropriately for \(x=0\) point by using Eq. (1). Surprisingly this value is just appropriately the momentum corresponding to the maximum of the momentum distribution for LDA formalism, i.e., \(p_{peak}\approx\pm 5.97215\ m\) in Fig. 4(a). We stress that although the momentum distributions of the LDA and AA methods are exactly the same shape, but the LDA and AA methods do not include the charge density as comparable to the DHW formalism where the charge density is present. Another interesting feature is for the particle position distribution, shown in Fig. 4(c), could be also understood via our perspective picture. From it one can find that the strong oscillation occurs in the position distributions. To see the oscillatory phenomenon more clearly, we come back to the Fig. 4(b) again and have an intuitive looking at the maxima and minima transition Figure 4: Plots of momentum distribution (a), Dirac vacuum (b) and position distribution (c) when \(W<2mc^{2}\). The magenta, blue and green dashed lines represent the DHW, LDA and AA approaches, respectively, when \(k=0.1\ m\), \(\lambda=300\ m^{-1}\) and \(\omega=0.1\ m\). probability at the center-of-mass coordinate \(x\) via our perspective picture. 
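The peak-momentum estimate quoted above (a maximal energy gap \(\Delta E\approx 12.1014\ m\) giving \(p\approx\pm 5.97\ m\)) follows from Eq. (1) in one line; the snippet below merely reproduces that arithmetic, taking the effective mass to be the bare electron mass, which is our simplifying assumption for this check.

```python
# Sketch: peak momentum from Eq. (1), q = sqrt((dE/2)^2 - m_*^2), in units of the electron mass.
import math

delta_E = 12.1014     # maximal energy gap between points A and B (from the text above)
m_star = 1.0          # effective mass approximated by the bare mass here (assumption)
q = math.sqrt((delta_E / 2.0) ** 2 - m_star ** 2)
print(q)              # ~5.9675, close to the LDA peak at |p| ~ 5.97 m quoted in the text
```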
The essentials of these oscillations is the connection between the distance for transition probability from the maxima to minima around of \(x=M\pi/k\), where \(M=0,\pm 1,\pm 2,\pm 3,...\), which corresponds to jumping from peak to trough in Fig. 4(b), and the tunneling distance \(d=2\Delta x=\pi/k\). Obviously these two distances are the same. Similarly, around of \(x=(1/2+M)\pi/k\), which corresponds to jumping from trough to peak in Fig. 4(b), the jump-transition has the same tunneling distance \(d=2\Delta x=\pi/k\). This is why the oscillating effect appears in the position distributions. Of course, the results are completely the same with the results achieved by adopting \(\bar{n}(x,t)\propto|\varepsilon E_{cr}g(x)f(t)|^{2}\) according to Refs. [92; 93]. This illustrates again that our perspective picture and its interpretation are reliable. Particularly note that it can not only offer the exact location (position) of the created particle, but also explain the characteristics of the position distribution. We can also interpret the oscillating effect in Fig. 4(c) by using tunneling time. An example of the tunneling time is shown in Fig. 5, we can observe that an obvious oscillating effect of the tunneling time and its minima and maxima corresponds to \(x=M\pi/k\) and \(x=(1/2+M)\pi/k\), respectively, with an interval of \(d=2\Delta x=\pi/k\). Since the particle number is inversely proportional the tunneling time [32], we can find the selfconsistent oscillating effect in the position distribution in the Fig. 4(c). This can be regard as the another physical interpretation of the particle transition probability for every center-of-mass coordinate \(x\). It should be pointed out that our new perspective picture provide us tunneling distance and the corresponding the tunneling time. On the contrary, the position of the created pair can be determined theoretically by the tunneling time. For instance, the position \(x\) in Fig. 5 is corresponding to the position \(x\) in the position distribution plotting of Fig. 4(c). Summary Effect of spacetime-dependent spatially oscillating fields on the electron-positron pair production is studied numerically and analytically while the work is smaller than twice the electron mass. We further propose a new perspective picture for spatially oscillating fields when \(W<2mc^{2}\). Under large spatial scale, we explain the characteristics of the position and momentum distribution through tunneling time, tunneling distance and energy gap between the positive and negative energy bands in the Dirac vacuum. Our results show that the maximum reduced particle number is about five times by comparing to maximum number for non-spatial oscillation. We find that the pair production results could be obtained by using LDA and AA when spatial oscillating cycle number is larger than two. Finally, we show relationship between the position distribution and tunneling time by employing the WI approach, and explain the position distribution. ## VII Acknowledgments Some helpful discussions with A. Ilderton and C. Kohlfuerst is acknowledged. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11935008 and No. 12265024. The computation was carried out at the HSCC of the Beijing Normal University.
2304.06229
Improving Segmentation of Objects with Varying Sizes in Biomedical Images using Instance-wise and Center-of-Instance Segmentation Loss Function
In this paper, we propose a novel two-component loss for biomedical image segmentation tasks called the Instance-wise and Center-of-Instance (ICI) loss, a loss function that addresses the instance imbalance problem commonly encountered when using pixel-wise loss functions such as the Dice loss. The Instance-wise component improves the detection of small instances or ``blobs" in image datasets with both large and small instances. The Center-of-Instance component improves the overall detection accuracy. We compared the ICI loss with two existing losses, the Dice loss and the blob loss, in the task of stroke lesion segmentation using the ATLAS R2.0 challenge dataset from MICCAI 2022. Compared to the other losses, the ICI loss provided a better balanced segmentation, and significantly outperformed the Dice loss with an improvement of $1.7-3.7\%$ and the blob loss by $0.6-5.0\%$ in terms of the Dice similarity coefficient on both validation and test set, suggesting that the ICI loss is a potential solution to the instance imbalance problem.
Muhammad Febrian Rachmadi, Charissa Poon, Henrik Skibbe
2023-04-13T02:53:50Z
http://arxiv.org/abs/2304.06229v1
# Improving Segmentation of Objects with Varying Sizes in Biomedical Images using Instance-wise and Center-of-Instance Segmentation Loss Function

###### Abstract

In this paper, we propose a novel two-component loss for biomedical image segmentation tasks called the Instance-wise and Center-of-Instance (ICI) loss, a loss function that addresses the instance imbalance problem commonly encountered when using pixel-wise loss functions such as the Dice loss. The Instance-wise component improves the detection of small instances or "blobs" in image datasets with both large and small instances. The Center-of-Instance component improves the overall detection accuracy. We compared the ICI loss with two existing losses, the Dice loss and the blob loss, in the task of stroke lesion segmentation using the ATLAS R2.0 challenge dataset from MICCAI 2022. Compared to the other losses, the ICI loss provided a better balanced segmentation, and significantly outperformed the Dice loss with an improvement of \(1.7-3.7\%\) and the blob loss by \(0.6-5.0\%\) in terms of the Dice similarity coefficient on both validation and test set, suggesting that the ICI loss is a potential solution to the instance imbalance problem.

Full Paper - MIDL 2023 submission. Keywords: Instance-wise and Center-of-Instance segmentation loss, segmentation loss.

## 1 Introduction

Object segmentation in biomedical images is a common task, yet presents challenges, namely class imbalance and instance imbalance problems, due to the diversity of object sizes. The class imbalance problem happens when the number of pixels of one class is much higher than that of the other classes, whereas the instance imbalance problem happens when larger instances dominate over smaller instances of the same class. These are two frequent problems that arise when objects appear as multiple instances of diverse sizes in an image, such as stroke lesions in brain magnetic resonance imaging (MRI) (Guerrero et al., 2018). Addressing these challenges is crucial for accurate segmentation and improved diagnostic results. Deep learning semantic segmentation models often utilize a pixel-wise loss function to evaluate the quality of the segmentations produced by the model across the whole image. Previous studies have demonstrated that pixel-wise loss functions such as cross-entropy (CE) and Dice losses are effective at segmenting large objects and instances (Ronneberger et al., 2015; Milletari et al., 2016).
However, pixel-wise loss functions have difficulty in identifying instances of varying sizes, as they tend to miss small features within large instances, and fail to detect individual small instances (Jeong et al., 2019; Maulana et al., 2021). This is mainly due to their focus on individual pixels, rather than considering the context of objects in an image (Reinke et al., 2021). Recent advancements in biomedical image analysis have led to the development of new loss functions (Ma et al., 2021), e.g. Tversky loss (Salehi et al., 2017), Focal loss (Lin et al., 2017), Generalized Dice loss (Sudre et al., 2017), and a combination of Dice loss with CE loss (Isensee et al., 2021), all designed to address the class imbalance problem. However, most of these methods are pixel-wise and fail to tackle the instance imbalance problem. The instance imbalance problem, where larger instances dominate smaller instances, remains a significant challenge. A normalized instance-wise loss function, where a loss value is computed for each instance individually, is required to address this problem. In this study, **our main contribution is the proposal of a novel instance-wise compound loss function, named the Instance-wise and Center-of-Instance (ICI) loss function**, which can be utilized in conjunction with any pixel-wise loss function for regularization. Through experimental evaluation, we demonstrate the superior performance of the ICI loss function compared to other related loss functions. ## 2 Related Approaches Several loss functions have been proposed to solve the instance imbalance problem in biomedical image segmentation, including inverse weighting (Shirokikh et al., 2020), the blob loss (Kofler et al., 2022), and the lesion-wise loss (Zhang et al., 2021). Figure 1: Comparison of instance-wise segmentation losses performed by our proposed Instance-wise loss (\(\mathcal{L}_{instance}\)) (C) and the blob loss (Kofler et al., 2022) (D). Artificial data are used for clearer visualization, where (A) shows the _label_ image, consisting of 5 instances (cyan), and (B) shows the _output_ image with many more instances (magenta). Each column in (C) and (D) represents an individual instance from the _label_ image. In (C), our proposed Instance-wise loss only includes false segmentations from _output_ instances that have intersections with the instance-of-interest (magenta). In contrast, blob loss (D) includes false segmentations from other _output_ instances (yellow). White are correct segmentations. **Inverse weighting (IW)** was proposed as an instance-weighted loss function, where each instance is given a weight that is inversely proportional to its size by using a global weight map (Shirokikh et al., 2020). However, IW is implemented by assigning weights to each individual pixel, and is not computed on each instance separately. In contrast, the **blob loss** is an instance-wise loss calculated for each instance in the label and averaged over all instances (Kofler et al., 2022). But the blob loss is overly sensitive to false segmentations as it includes false segmentations from other instances, even if they do not intersect with the instance-of-interest (see Figure 1 for visualization). This is because connected component analysis (CCA) is precomputed outside of the blob loss and only for the label image, so the blob loss cannot distinguish which instances of the output image (predicted segmentation) intersect with each instance of the label image (ground truth). 
Lastly, the **Lesion-wise loss (LesLoss)** was proposed to assign each blob the same size by transforming all instances into spheres with a fixed size based on instances' centers of mass (Zhang et al., 2021). Similar to the blob loss, the CCA is precomputed outside of the LesLoss and only for the label image. Also, an additional segmentation network is needed to perform segmentation of instances' spheres which limits its applicability. ## 3 Proposed Approach Our proposed **Instance-wise and Center-of-Instance (ICI) loss function** is inspired by both the blob loss (Kofler et al., 2022) and LesLoss (Zhang et al., 2021), where we combine instance-wise loss calculation with the normalization of all instances into a square/cube (2D/3D images) of a fixed size. The ICI loss function consists of two loss calculations: the **Instance-wise loss** and the **Center-of-Instance loss** calculations. In general, the Instance-wise loss, distinct from the existing blob loss (Kofler et al., 2022), is used to minimize missed segmentations of instances in the label image (reduce false negatives), especially the small instances. Whereas, the Center-of-Instance loss is used to improve the segmentation of small instances in the label image (reduce false negatives) and suppress the detection of small and spurious instances in the output image (reduce false positives). In this study, we used a compound loss which combines the global Dice loss (\(\mathcal{L}_{global}\)), Instance-wise loss (\(\mathcal{L}_{instance}\)), and Center-of-Instance loss (\(\mathcal{L}_{center}\)) with weights \(a\), \(b\), and \(c\), respectively. The compound loss is shown in Equation (1). The Dice loss is also used to calculate the individual components of the ICI loss; the Dice loss equation is shown in Equation (2). The general flow chart and visualization of the ICI loss is shown in Figure 2 and the formalism of our proposed ICI loss can be seen in Appendix A. \[\mathcal{L}=a\times\mathcal{L}_{global}+b\times\mathcal{L}_{instance}+c\times \mathcal{L}_{center} \tag{1}\] \[diceLoss=1-\frac{2|y\cap y_{\text{pred}}|}{|y|+|y_{\text{pred}}|}=1-\frac{2 \textit{TP}}{2\textit{TP}+\textit{FP}+\textit{FN}} \tag{2}\] ### Instance-wise loss Our proposed **Instance-wise loss (\(\mathcal{L}_{instance}\))** calculation is similar to the blob loss, where an instance-wise segmentation loss is performed by computing the Dice loss for each instance in the manual label. The difference between our Instance-wise loss calculation and the blob loss is that CCA is computed on the fly in our proposed loss function (on GPUs) for both the manual labels and the predicted segmentations. By performing CCA on the fly for both labels and predictions, all instances in the predicted segmentation that overlap by at least 1 pixel/voxel with manually labelled instances can be identified. This approach is not performed in the blob loss, which leads to the inclusion of false segmentations that do not have any overlap with the instance (in the manual label) being calculated (see Figure 1). Instance-wise loss calculation consists of two steps: 1) performing CCA for both the manual label (_label_) and predicted segmentation produced by a deep learning model (_output_), and 2) calculating the Dice loss for each instance in the _label_ (instance-of-interest) against any instances in the _output_ that intersect with the instance-of-interest. 
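To make the two-step procedure above and the weighting in Eq. (1) more tangible, here is a small, simplified PyTorch re-implementation of the idea. It is our own sketch, not the authors' code: the paper performs CCA on the GPU with a modified kornia `connected_components` so gradients are tracked end-to-end, whereas this sketch falls back to CPU connected components from SciPy, keeps gradients only through the masked predictions, treats the box masks in the Center-of-Instance term as constants, and assumes single-volume tensors without batch or channel dimensions.

```python
# Simplified sketch of the ICI ideas (our own re-implementation, NOT the authors' code):
# soft Dice (Eq. 2), an Instance-wise term restricted to intersecting predicted components,
# a Center-of-Instance term with fixed-size boxes, and the compound loss of Eq. (1).
import numpy as np
import torch
from scipy import ndimage

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def instance_wise_loss(pred, target, threshold=0.5):
    lab_cc, n_lab = ndimage.label(target.detach().cpu().numpy() > 0.5)
    out_cc, _ = ndimage.label(pred.detach().cpu().numpy() > threshold)
    if n_lab == 0:
        return pred.new_zeros(())
    losses = []
    for i in range(1, n_lab + 1):
        inst = torch.from_numpy(lab_cc == i).to(pred.device, pred.dtype)
        hit_ids = np.unique(out_cc[(lab_cc == i) & (out_cc > 0)])   # predicted blobs touching instance i
        keep = torch.from_numpy(np.isin(out_cc, hit_ids)).to(pred.device, pred.dtype)
        losses.append(soft_dice_loss(pred * keep, inst))            # other false blobs are ignored
    return torch.stack(losses).mean()

def centers_to_boxes(mask_np, box_size=7):
    """Replace each connected component by a box_size^d box centered at its center of mass."""
    cc, n = ndimage.label(mask_np > 0.5)
    boxes = np.zeros(mask_np.shape, dtype=np.float32)
    half = box_size // 2
    for com in ndimage.center_of_mass(mask_np, cc, range(1, n + 1)):
        sl = tuple(slice(max(int(round(c)) - half, 0), min(int(round(c)) - half + box_size, d))
                   for c, d in zip(com, mask_np.shape))
        boxes[sl] = 1.0
    return boxes

def center_of_instance_loss(pred, target, threshold=0.5, box_size=7):
    pred_boxes = torch.from_numpy(
        centers_to_boxes(pred.detach().cpu().numpy() > threshold, box_size)).to(pred.device, pred.dtype)
    targ_boxes = torch.from_numpy(
        centers_to_boxes(target.detach().cpu().numpy(), box_size)).to(pred.device, pred.dtype)
    return soft_dice_loss(pred_boxes, targ_boxes)   # not differentiable in this CPU sketch

def ici_style_loss(pred, target, a=0.25, b=0.5, c=0.25):
    # Eq. (1); a = 1/4, b = 1/2, c = 1/4 is one of the weight settings tested in the paper.
    return (a * soft_dice_loss(pred, target)
            + b * instance_wise_loss(pred, target)
            + c * center_of_instance_loss(pred, target))
```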
We implemented CCA by modifying the \(connected\_components\) function from the kornia library (Riba et al., 2020) so that all gradients in the predicted segmentation are tracked for back-propagation. In this study, the threshold value of 0.5 was heuristically determined and used to threshold the predicted segmentations before performing CCA. After all instances in \(label\) (i.e. \(cc\_label\)) and \(output\) (i.e. \(cc\_output\)) are identified, the Instance-wise loss can be computed by following Algorithm 1 in Appendix E.

### Center-of-Instance loss (\(\mathcal{L}_{center}\))

Our proposed **Center-of-Instance loss (\(\mathcal{L}_{center}\))** calculation transforms all blobs into squares (2D) or cubes (3D) with a fixed size before the Dice loss is calculated. Squares/cubes are used instead of circles/spheres (used in the LesLoss) to simplify the transformation's calculation. Simple calculation is important because all gradients need to be tracked for backpropagation on GPUs. All instances in \(label\) and \(output\) images can be easily transformed into squares/cubes as the center-of-mass for each instance is calculated during CCA. Using the center-of-mass, each instance can then be transformed into a square/cube by assigning 1 to the pixels/voxels that make up the square/cube area. Visualization of the transformation of 2D blobs into 2D squares (\(7\times 7\) pixels) is shown in Figure 2. Visualization in 3D space can be found in Appendix H.

Figure 2: Flow chart and visualization of all losses used in this study. The total loss (\(\mathcal{L}\)) used in this study is a compound loss, as formulated in Equation (1), consisting of the global Dice loss (\(\mathcal{L}_{global}\)) and our proposed ICI loss which are Instance-wise loss (\(\mathcal{L}_{instance}\)) and Center-of-Instance loss (\(\mathcal{L}_{center}\)).

Based on our preliminary experiments, the best size of the fixed-size cubes for an image of original size \(192\times 192\times 192\) voxels was found to be \(7\times 7\times 7\) voxels (see Table 5 in Appendix I). Pseudo-code of the Center-of-Instance loss calculation is shown in Algorithm 2 in Appendix F.

## 4 Experimental Settings

All experimental settings used in this study are described below. **Deep learning model:** For all experiments, we used a 3D Residual U-Net (Kerfoot et al., 2018) loaded from the MONAI library. Parameters that were used to create the 3D Residual U-Net are shown in Appendix D. We used a sigmoid function with 0.5 as the threshold value in the segmentation layer for binary segmentation. **Tested loss functions:** We compared our proposed ICI loss directly to the blob loss (Kofler et al., 2022) and the global Dice loss. We did not perform any comparisons with LesLoss because an additional network is needed for predicting each instance's sphere. For the blob loss, we used the recommended weights, which are \(\alpha=2\) for the (main) global Dice segmentation loss and \(\beta=1\) for the instance-wise segmentation, which also uses the Dice loss. For our proposed compound ICI loss, we tested different weights for the global Dice loss (\(a\)), the Instance-wise loss (\(b\)), and the Center-of-Instance loss (\(c\)). **Dataset:** We used the publicly available ATLAS v2.0 challenge dataset from the MICCAI 2022 challenge (Liew et al., 2022) available at [https://atlas.grand-challenge.org/](https://atlas.grand-challenge.org/).
ATLAS v2.0 is a large public dataset of T1w stroke brain MRI and manually segmented lesion masks (\(N=1,271\)) that is divided into a public training set (\(n=655\)), a test set where the lesion masks are hidden (\(n=300\)), and a generalizability set, where both T1w and lesions masks are hidden (\(n=316\)). Specific to this study, we only used the public training set for training and validation, and the test set for testing the trained models. We did not use the generalizability set because only one model can be submitted to the system per month. In contrast, one submission can be submitted per day for evaluating the test set. **Training:** Out of 655 subjects in the public training set from the ATLAS v2.0 challenge dataset, we manually divided it into a training set (\(n=600\)) and validation set (\(n=55\)). We performed two different experiments: whole image experiments (with image size of \(192\times 192\times 192\)), and patch-based experiments (with patch size of \(96\times 96\times 96\)) to assess the effectiveness of our proposed ICI loss in segmenting 3D images of different sizes. In the whole image experiment, we trained 3D Residual U-Net models for 200 epochs by using a mini-batch of 4 random subjects in every step. In the patch-based experiment, we trained 3D Residual U-Net models for 600 epochs (with mini-batches of 2 subjects), where 8 patches were randomly extracted from each subject, with a 1:1 ratio for positive (stroke lesions) and negative (non-stroke lesions) labels. Randomized data augmentations were applied, including left/right flipping, rotation, zooming, and intensity scaling and shifting. Subject-wise intensity normalization was performed by using zero mean unit variance. The 3D Residual U-Net was optimized using the Adam optimizer (Kingma and Ba, 2015). The model that produced the best Dice metric in the validation set was used in testing. **Training environments:** We conducted our experiments using various NVIDIA GPUs, including rtxa6000, rtxa5000, v100, a100, and rtx8000, with CUDA version 11.7, Pytorch version 1.13.0, and MONAI version 0.9.0. **Testing/inference:** We first performed inference on T1w brain MRI in the test set on our computing nodes, and then we submitted the predicted segmentation results to the ATLAS v2.0 challenge's system. Note that only one submission was permitted per day. **Performance measurements:** The ATLAS v2.0 challenge produced 4 performance measurements: Dice similarity coefficient (DSC), volume difference, lesion-wise F1 score, and simple lesion count. We also measured the performance of all models in the validation set by using our own performance measurements, which are DSC, total and numbers of subjects with missed instances (MI), total and numbers of subjects with false instances (FI), and subjects without MI & FI. To decide which model performed best, a numeric rank (written inside square brackets [ ]) is given to each performance measurement such that a mean rank for each model can be calculated. ## 5 Results In this section, the \(\uparrow\) symbol means that higher values are better, while the \(\downarrow\) symbol means that lower values are better. The best value for each column is shown in bold and the second best is underlined. 
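Before turning to the results, here is a rough sketch of how the kind of 3D Residual U-Net and training step described in Section 4 might be instantiated with MONAI and PyTorch. The exact architecture parameters live in the paper's Appendix D and are not reproduced here; the channel widths, learning rate, and function names below are placeholders of our own choosing, not the authors' configuration.

```python
# Sketch of a MONAI 3D Residual U-Net and a single training step (assumed hyperparameters).
import torch
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    channels=(16, 32, 64, 128, 256),   # assumed widths, not the paper's Appendix D values
    strides=(2, 2, 2, 2),
    num_res_units=2,                   # residual units, hence "3D Residual U-Net"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam per the paper; lr is assumed

def train_step(images, labels, loss_fn):
    # images/labels: (B, 1, 192, 192, 192) whole volumes or (B, 1, 96, 96, 96) patches.
    optimizer.zero_grad()
    logits = model(images)
    probs = torch.sigmoid(logits)      # sigmoid output; 0.5 threshold is applied at inference
    loss = loss_fn(probs, labels)      # e.g., Dice, blob loss, or the compound ICI loss
    loss.backward()
    optimizer.step()
    return loss.item()
```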
### Whole image segmentation Figure 3 shows that compounding a pixel-wise segmentation loss (i.e., Dice loss) with both terms of our proposed ICI loss (i.e., in \(a=1,b=1,c=1\) (yellow lines) and \(a=1/4,b=1/2,c=1/4\) (purple lines)) have several advantages in training and validation compared to the other losses. First, yellow and purple lines achieved lower Dice losses in fewer training epochs, as shown in Figure 3A. Second, yellow and purple lines produced lower and more stable numbers of missed and false instances than the other losses during validation, as shown in Figure 3B and C. Lastly, yellow and purple lines achieved higher DSC values more quickly than the other losses during validation as shown in Figure 3D. All these observations suggest that ICI loss successfully regularized Dice loss by keeping the number of missed and false instances low, in addition to lowering the Dice loss itself. Quantitative results for all losses in the validation set produced by using the best models (i.e., with the highest DSC values) can be seen in Table 1. Note that \(\mathcal{L}_{center}\) itself (\(a=1,b=0,c=1\)) produced the lowest total FI, but failed to produced lower total MI and higher DSC. Figure 3: Training curves for (A) Dice loss and validation curves for (B) number of missed blobs, (C) number of false blobs, and (D) DSC values. On the other hand, compounding Dice loss with all of ICI loss's terms with optimum weights (\(a=1/4,b=1/2,c=1/4\)) ranked the best by producing the best DSC with mean rank of 2.83. In contrast, blob loss with recommended weights (\(\alpha=2,\beta=1\)) failed to produce better results except for the number of subjects with MI in the validation set, showing a mean rank of 4.00, which is worse than the baseline Dice loss without regularization (\(a=1,b=0,c=0\)) which showed a mean rank of 3.67. Table 2 shows that compounding Dice loss with all ICI loss's terms with optimum weights (\(a=1/4,b=1/2,c=1/4\)) is quite robust to the unseen test set by producing the best values for DSC, Volume Difference, and Lesion-wise F1 Score, and the second best value for Simple Lesion Count with the highest rank (mean rank of 1.25). In contrast, blob loss with recommended weights (\(\alpha=2,\beta=1\)) failed again to achieve a better mean rank than the baseline Dice loss without regularization (\(a=1,b=0,c=0\)). ### Patch-based segmentation Our proposed ICI loss also showed superior performance in the patch-based segmentation task when compared to both the baseline Dice loss without regularization (\(a=1,b=0,c=0\)) and the blob loss with recommended weights (\(\alpha=2,\beta=1\)) in both the validation and test sets, as seen in Tables 3 and 4. However, note that the optimum weights of the ICI loss used in the whole image segmentation experiments (i.e., \(a=1/4,b=1/2,c=1/4\)) were not the best in terms of mean rank in the test set, suggesting that the optimal weights for the ICI loss may be dependent on the input image size and task. Nevertheless, Table 3 and Table 4 show that the ICI loss is robust to the unseen test set, as different weights of the ICI loss consistently achieved higher mean ranks compared to the baseline Dice loss without regularization in both validation and test sets. study, we compared our ICI loss with the Dice loss, a popular pixel-wise segmentation loss, and the blob loss, which was proposed as an instance-wise segmentation loss, in the task of stroke lesion segmentation on the ATLAS R2.0 challenge dataset from MICCAI 2022. 
Our experiments show that using the ICI loss led to an average increase of 2.7% in segmentation accuracy compared to the Dice loss and 2.4% compared to the blob loss in both whole image segmentation and patch-based segmentation. The codes (implementation) of ICI loss in Pytorch is available at ([https://github.com/BrainImageAnalysis/ICI-loss](https://github.com/BrainImageAnalysis/ICI-loss)). There are many applications in biomedical image analysis in which the ICI loss may be useful, because many objects that are common targets of segmentation tasks consist of multiple instances of various sizes. The ICI loss has similar limitations to the blob loss: specifically, additional computational resources are required for performing CCA, and performance of the loss function may be sensitive to the weights and hyperparameters used. In our experiments with batch of \(4\times 1\times 192\times 192\times 192\), Dice loss, blob loss, and our proposed ICI loss took 0.26, 1.24, and 2.19 seconds to finish all computations per batch, respectively (see Appendix G for further analysis). Furthermore, our experiments have shown that a simple set of weights \(a=1,b=1,c=1\), without extensive hyperparameter tuning, is sufficient to improve segmentation results in all of the cases based on the DSC metric. The next step is to evaluate whether the ICI loss performs well in multi-class segmentation problems where some classes present as multiple instances with various sizes while others do not. Furthermore, combination with other pixel-wise losses such as CE and Boundary losses might be explored in future studies (some preliminary results can be observed in Appendix J). This work was supported by the program for Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) from the Japan Agency for Medical Research and Development AMED (JP15dm0207001). Library access provided by the Faculty of Computer Science, Universitas Indonesia is also gratefully acknowledged. CP was also supported by the Grant-in-Aid for Scientific Research for Young Scientists (KAKENHI 22K15658). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Weights (a=global,** & **Mean** & **DSC** (\(\uparrow\)) & **Volume** & **Lesion-wise** & **Simple Lesion** \\ b=blob, **encenter** & **Rank** (i) & & & Difference (i) & **F1 Score** (\(\uparrow\)) & **Count** (\(\downarrow\)) \\ \hline blob loss (\(a=2,b=1\)) & 5.25 & 0.5754 (0.2743) [4] & 12,0193 (1.853) [5] & 0.033 (0.2413) [5] & 5.993 (7.5349) [6] \\ \(a=1,b,c=0\) & 5.00 & 0.5598 (0.2742) [6] & 13,857.66 (28,911.29) [6] & 0.092 (0.287) [5] & 4.990 (6.2128) [3] \\ \(a=1,b=0,c=1\) & 4.00 & 0.5713 (0.2764) [5] & 12,031.63 (24,436.64) [4] & 0.4263 (0.2633) [3] & 5.000 (6.607) [4] \\ \(a=1,b=1,c=0\) & 2.50 & 0.5605 (0.2822) [2] & **10,978.488 (2.5486.96) [4]** & 0.421 (0.2921) [2] & 5.3800 (6.7766) [5] \\ \(a=1,b=1,c=1\) & **1.75** & 0.5750 (0.2717) [3] & 11,564.12 (13,260.82) [2] & **0.4480 (0.2557) [1]** & **4.7800 (6.5448) [1]** \\ \(a=1/4,b=1/2,c=1/4\) & 2.50 & **0.5817 (0.2705) [1]** & 11,588.88 (23,728.80) [3] & 0.4256 (0.2932) [4] & 4.9867 (6.1807) [2] \\ \hline \hline \end{tabular} \end{table} Table 4: Performance values on the test set from the patch-based experiments. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Weights (a=global,** & **Mean** & **DSC** (\(\uparrow\)) & **Total** & **Subjects w/** & **Total** & **Subjects** & **Subjects wo/** & **Best** \\ b=blob, **c=center)** & **Rank** (\(\downarrow\)) & **DSC** (\(\uparrow\)) & **MI** (\(\downarrow\)) & **MI** (\(\downarrow\)) & **all MI** (\(\downarrow\)) & **FI** (\(\downarrow\)) & **w/ FI** (\(\downarrow\)) & **MI \& **FI** (\(\uparrow\)) & **Epoch** \\ \hline blob loss (\(a=2,b=1\)) & 3.67 & 0.5237 [3] & 34 [4] & 24 [3] & 6 [22] & 471 [3] & 53 [4] & 1 [3] & 403 \\ a = 1, b = 0, c = & 4.33 & 0.5124 [5] & 30 [2] & 24 [3] & 7 [3] & 503 [5] & 54 [5] & 1 [3] & 292 \\ a = 1, b = 0, c = & 1 & 3.83 & 0.5082 [6] & 32 [3] & 24 [3] & 6 [22] & 478 [4] & 51 [2] & 1 [3] & 387 \\ a = 1, b = 1, c = & 3.17 & **0.5310 [1]** & 36 [5] & 27 [4] & 6 [22] & 462 [2] & 52 [3] & 2 [2] & 440 \\ a = 1, b = 1, c = & 2.17 & 0.5204 [4] & 30 [2] & 23 [2] & 22 [6] & **441 [1]** & **50 [1]** & **3 [1]** & 422 \\ a = 1/4, b = 1/2, c = 1/4 & **2.33** & 0.5295 [2] & **27 [1]** & **22 [1]** & **5 [1]** & 513 [6] & 51 [2] & **3 [1]** & 422 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance values on the validation set from the patch-based experiments.
2307.11205
Mean Flow and Turbulence in Unsteady Canopy Layers
Non-stationarity is the rule in the atmospheric boundary layer (ABL). Under such conditions, the flow may experience departures from equilibrium with the underlying surface stress, misalignment of shear stresses and strain rates, and three-dimensionality in turbulence statistics. Existing ABL flow theories are primarily established for statistically stationary flow conditions and cannot predict such behaviors. Motivated by this knowledge gap, this study analyzes the impact of time-varying pressure gradients on mean flow and turbulence over urban-like surfaces. A series of large-eddy simulations of pulsatile flow over cuboid arrays is performed, programmatically varying the oscillation amplitude $\alpha$ and forcing frequency $\omega$. The analysis focuses on both longtime-averaged and phase-dependent flow dynamics. Inspection of longtime-averaged velocity profiles reveals that the aerodynamic roughness length $z_0$ increases with $\alpha$ and $\omega$, whereas the displacement height $d$ appears to be insensitive to these parameters. In terms of phase-averaged flow statistics, it is found that $\alpha$ primarily controls the oscillation amplitude of the streamwise velocity and Reynolds stresses, but has a negligible impact on their wall-normal structure. On the other hand, $\omega$ determines the size of the region affected by the unsteady forcing, which identifies the so-called Stokes layer thickness $\delta_s$. Within the Stokes layer, phase-averaged resolved Reynolds stress profiles feature substantial variations during the pulsatile cycle, and the turbulence is out of equilibrium with the mean flow. Two phenomenological models have been proposed that capture the influence of flow unsteadiness on $z_0$ and $\delta_s$, respectively.
Weiyi Li, Marco G. Giometto
2023-07-20T19:47:06Z
http://arxiv.org/abs/2307.11205v2
# Mean Flow and Turbulence in Unsteady Canopy Layers ###### Abstract Non-stationarity is the rule in the atmospheric boundary layer (ABL). Under such conditions, the flow may experience departures from equilibrium with the underlying surface stress, misalignment of shear stresses and strain rates, and three-dimensionality in turbulence statistics. Existing ABL flow theories are primarily established for statistically stationary flow conditions and cannot predict such behaviors. Motivated by this knowledge gap, this study analyzes the impact of time-varying pressure gradients on mean flow and turbulence over urban-like surfaces. A series of large-eddy simulations of pulsatile flow over cuboid arrays is performed, programmatically varying the oscillation amplitude \(\alpha\) and forcing frequency \(\omega\). The analysis focuses on both longtime-averaged and phase-dependent flow dynamics. Inspection of longtime-averaged velocity profiles reveals that the aerodynamic roughness length \(z_{0}\) increases with \(\alpha\) and \(\omega\), whereas the displacement height \(d\) appears to be insensitive to these parameters. In terms of phase-averaged flow statistics, it is found that \(\alpha\) primarily controls the oscillation amplitude of the streamwise velocity and Reynolds stresses, but has a negligible impact on their wall-normal structure. On the other hand, \(\omega\) determines the size of the region affected by the unsteady forcing, which identifies the so-called Stokes layer thickness \(\delta_{s}\). Within the Stokes layer, phase-averaged resolved Reynolds stress profiles feature substantial variations during the pulsatile cycle, and the turbulence is out of equilibrium with the mean flow. Two phenomenological models have been proposed that capture the influence of flow unsteadiness on \(z_{0}\) and \(\delta_{s}\), respectively. ## 1 Introduction Advancing our conceptual understanding and ability to predictively model exchange processes between urban areas and the atmosphere is of critical importance to a wide range of applications, including urban air quality control (Britter & Hanna, 2003; Barlow _et al._, 2004; Pascheke _et al._, 2008), urban microclimate studies (Roth, 2012; Li & Bou-Zeid, 2014; Ramamurthy _et al._, 2017), and weather and climate forecasting (Holtslag _et al._, 2013), to name but a few. It hence comes as no surprise that substantial efforts have been devoted towards this goal over the past decades, via, e.g., numerical simulations (Bou-Zeid _et al._, 2004; Xie _et al._, 2008; Cheng & Porte-Agel, 2015; Giometto _et al._, 2016; Sadique _et al._, 2017; Auvinen _et al._, 2017; Zhu _et al._, 2017; Li & Bou-Zeid, 2019), wind tunnel experiments (Raupach _et al._, 1980; Bohm _et al._, 2013; Marucci & Carpentieri, 2020), and observational studies (Rotach, 1993; Kastner-Klein & Rotach, 2004; Rotach _et al._, 2005; Christen _et al._, 2009). These studies have explored the functional dependence of flow statistics on urban canopy geometry (Lettau, 1969; Raupach, 1992; Macdonald _et al._, 1998; Coceal & Belcher, 2004; Yang _et al._, 2016; Li & Katul, 2022), characterized the topology of coherent structures (Kanda _et al._, 2004; Coceal _et al._, 2007; Christen _et al._, 2007; Li & Bou-Zeid, 2011; Inagaki _et al._, 2012), and derived scaling laws for scalar transfer between the urban canopy and the atmosphere (Pascheke _et al._, 2008; Cheng & Porte-Agel, 2016; Li & Bou-Zeid, 2019), amongst others. 
Most of the previous works have focused on atmospheric boundary layer (ABL) flow under (quasi-)stationary conditions. However, stationarity is of rare occurrence in the ABL (Mahrt & Bou-Zeid, 2020), and theories based on equilibrium turbulence are therefore often unable to grasp the full range of physics characterizing ABL flow environments. Major drivers of non-stationarity in the ABL include time-varying horizontal pressure gradients, associated with non-turbulent motions ranging from submeso to synoptic scales, and time-dependent thermal forcings, induced by the diurnal cycle or by cloud-induced time variations of the incoming solar radiation (Mahrt & Bou-Zeid, 2020). These conditions often result in departures from equilibrium turbulence, with important implications on time- and area-averaged exchange processes between the land surface and the atmosphere. The first kind of non-stationarity was examined in Mahrt (2007, 2008) and Mahrt _et al._ (2013), which showed that time-variations of the driving pressure gradient can enhance momentum transport under strong stable atmospheric stratifications. The second kind of non-stationarity was instead analyzed by Hicks _et al._ (2018), who made use of data from different field campaigns and showed that the surface heat flux can change so rapidly during the morning and late afternoon transition that the relations for equilibrium turbulence no longer hold. Numerical studies have also been recently conducted to study how exchange processes between the land surface and the atmosphere are modulated by non-stationarity in the ABL. In their study, Edwards _et al._ (2006) conducted a comparison of a prevailing single-column model based on equilibrium turbulence theories with the observations of an evening transition ABL, as well as results from large-eddy simulations (LES). Their findings emphasized the inadequacy of equilibrium turbulence theories in capturing the complex behavior of ABL flows during rapid changes in thermal surface forcing. This breakdown of equilibrium turbulence was particularly notable during the evening transition period, which is known for its rapid changes in thermal forcing. Momen & Bou-Zeid (2017) investigated the response of the Ekman boundary layer to oscillating pressure gradients, and found that quasi-equilibrium turbulence is maintained only when the oscillation period is much larger than the characteristic time scale of the turbulence. The majority of the efforts have focused on atmospheric boundary layer flow over modeled roughness, where the flow dynamics in the roughness sublayer--that layer of the atmosphere that extends from the ground surface up to about 2-5 times the mean height of roughness elements (Fernando, 2010)--are bypassed, and surface drag is usually evaluated via an equilibrium wall-layer model (see, e.g., Momen & Bou-Zeid (2017)), irrespective of the equilibrium-theory limitations outlined above. It hence remains unclear how unsteadiness impacts flow statistics and the structure of atmospheric turbulence in the roughness sublayer. Roughness sublayer flow directly controls exchanges of mass, energy, and momentum between the land surface and the atmosphere, and understanding the dependence of flow statistics and structural changes in the turbulence topology on flow unsteadiness is hence important in order to advance our ability to understand and predictively model these flow processes.
This study contributes to addressing this knowledge gap by focusing on non-stationary roughness sublayer flow induced by time-varying pressure gradients. Unsteady pressure gradients in the real-world ABL can be characterized by periodic and aperiodic variations in both magnitude and direction. In this study, we limit our attention to a pulsatile streamwise pressure-gradient forcing, consisting of a constant mean and a sinusoidal oscillating component. This approach has two major merits. First, the temporal evolution of flow dynamics and associated statistics, as well as structural changes in turbulence, can be easily characterized thanks to the time-periodic nature of the flow unsteadiness. Second, the time scale of the pulsatile forcing is well defined, and can hence be varied programmatically to encompass a range of representative flow regimes. Pulsatile turbulent flows over aerodynamically smooth surfaces have been the subject of active research in the mechanical engineering community because of their relevance across a range of applications; these include industrial (e.g., a rotating or poppet valve) and biological (blood in arteries) flows. The corresponding laminar solution is an extension of Stokes's second problem (Stokes, 1901), where the modulation of the flow field by the unsteady pressure gradient is confined to a layer of finite thickness known as the "Stokes layer". The thickness of the Stokes layer \(\delta_{s}\) is a function of the pulsatile forcing frequency \(\omega\), i.e., \(\delta_{s}=2l_{s}\), where \(l_{s}=\sqrt{2\nu/\omega}\) is the so-called Stokes length scale and \(\nu\) is the kinematic viscosity of the fluid. In the turbulent flow regime, it has been found that the characteristics of the pulsatile flow are not only dependent on the forcing frequency, but also on the amplitude of the oscillation. Substantial efforts have been devoted to investigating this problem, from both an experimental (Ramaprian & Tu, 1980; Tu & Ramaprian, 1983; Ramaprian & Tu, 1983; Mao & Hanratty, 1986; Brereton _et al._, 1990; Tardu & Binder, 1993; Tardu, 2005) and a computational perspective (Scotti & Piomelli, 2001; Manna _et al._, 2012, 2015; Weng _et al._, 2016). Scotti & Piomelli (2001) drew an analogy to the Stokes length and proposed a turbulent Stokes length scale, which can be expressed in inner units as \[l_{t}^{+}=\frac{\overline{u}_{\tau}}{\nu}\left(\frac{2(\nu+\nu_{t})}{\omega}\right)^{1/2}\, \tag{1}\] where \(\overline{u}_{\tau}\) is the friction velocity based on the surface friction averaged over pulsatile cycles and \(\nu_{t}\) is the so-called eddy viscosity. Parameterizing the eddy viscosity in terms of the Stokes turbulent length scale, i.e., \(\nu_{t}=\kappa\overline{u}_{\tau}l_{t}\), where \(\kappa\) is the von Karman constant, and substituting into (1) one obtains \[l_{t}^{+}=l_{s}^{+}\left(\frac{\kappa l_{s}^{+}}{2}+\left(1+\left(\frac{\kappa l_{s}^{+}}{2}\right)^{2}\right)^{1/2}\right). \tag{2}\] When \(l_{t}^{+}\) is large (e.g., when \(\omega\to 0\)), the flow is in a quasi-steady state. Under such a condition, the flow at each pulsatile phase resembles a statistically stationary boundary layer flow, provided that the instantaneous friction velocity is used to normalize statistics.
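As a quick numerical illustration of (1)-(2), the quadratic relation for \(l_{t}\) can be evaluated for a few forcing frequencies; the parameter values below (air viscosity, \(\overline{u}_{\tau}=0.1\) m/s) are illustrative assumptions only.

```python
# Turbulent Stokes length from (1)-(2): l_t = sqrt(2(nu + nu_t)/omega) with
# nu_t = kappa*u_tau*l_t, i.e. a quadratic equation in l_t. Values are illustrative.
import numpy as np

kappa = 0.4
nu = 1.5e-5        # kinematic viscosity [m^2/s] (air, assumed)
u_tau = 0.1        # cycle-averaged friction velocity [m/s] (assumed)

def stokes_lengths(omega):
    l_s = np.sqrt(2.0 * nu / omega)                  # laminar Stokes length
    l_s_plus = l_s * u_tau / nu
    # l_t^+ = l_s^+ * ( kappa*l_s^+/2 + sqrt(1 + (kappa*l_s^+/2)^2) ), cf. (2)
    half = 0.5 * kappa * l_s_plus
    l_t_plus = l_s_plus * (half + np.sqrt(1.0 + half**2))
    return l_s_plus, l_t_plus

for omega in (0.01, 0.1, 1.0):                       # forcing frequencies [rad/s]
    ls_p, lt_p = stokes_lengths(omega)
    print(f"omega={omega:5.2f}  l_s+={ls_p:8.1f}  l_t+={lt_p:10.1f}")
```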
As \(\omega\) increases and \(l_{t}^{+}\) becomes of the order of the open channel height \(L_{3}^{+}\), the entire flow is affected by the pulsation, i.e., time lags occur between flow statistics at different elevations, and turbulence undergoes substantial structural changes from its equilibrium configuration. When \(l_{t}^{+}<L_{3}^{+}/2\), the flow modulation induced by the pulsation is confined within the Stokes layer \(\delta_{s}^{+}=2l_{t}^{+}\). Above the Stokes layer, one can observe a plug-flow region, with the turbulence being frozen to its equilibrium configuration and simply advected by the mean flow pulsation. A few years later, Bhaganagar (2008) conducted a series of direct numerical simulations (DNS) of low-Reynolds-number pulsatile flow over transitionally rough surfaces. She found that flow responses to pulsatile forcing are generally similar to those in smooth-wall cases, when the roughness size is of the same order of magnitude as the viscous sublayer thickness. The only exception is that, as the pulsation frequency approaches the frequency of vortex shedding from the roughness elements, the longtime averaged velocity profile deviates significantly from that of the steady flow case due to the resonance between the pulsation and the vortex shedding. In the context of a similar flow system, Patil & Fringer (2022) also reported a comparable observation in their DNS study. In addition to the work of Bhaganagar (2008); Patil & Fringer (2022), pulsatile flow over small-scale roughness, e.g., sand grain roughness, has also been studied extensively in the oceanic context, i.e., combined current-wave boundary layers, which play a crucial role in controlling sediment transport and associated erosion in coastal environments (Grant & Madsen, 1979; Kemp & Simons, 1982; Myrhaug & Slaattelid, 1989; Sleath, 1991; Soulsby _et al._, 1993; Mathisen & Madsen, 1996; Fredsoe _et al._, 1999; Yang _et al._, 2006; Yuan & Madsen, 2015). The thickness of the wave boundary layer, which is the equivalent of the Stokes layer in the engineering community, is defined as \[\delta_{w}=\frac{2\kappa u_{\tau,max}}{\omega}\, \tag{3}\] where \(u_{\tau,max}=\sqrt{\tau_{max}}\), and \(\tau_{max}\) denotes the maximum of kinematic shear stress at the surface during the pulsatile cycle. Within the wave boundary layer, mean flow and turbulence are controlled by the nonlinear interaction between currents and waves. Above this region, the modulation of turbulence by waves vanishes. The wall-normal distribution of the averaged velocity over the pulsatile cycle deviates from the classic logarithmic profile, and is characterized by a "two-log" profile, i.e., the velocity exhibits a logarithmic profile with the actual roughness length within the wave boundary layer and a different one characterized by a larger roughness length further aloft (Grant & Madsen, 1986; Fredsoe _et al._, 1999; Yang _et al._, 2006; Yuan & Madsen, 2015). Such behavior was first predicted by a two-layer time-invariant eddy viscosity model by Grant & Madsen (1979), followed by many variants and improvements (Myrahao & Slaattelid, 1989; Sleath, 1991; Yuan & Madsen, 2015). On the contrary, pulsatile flow at high Reynolds numbers over large roughness elements, such as buildings, has received far less attention. Yu _et al._ (2022) conducted a series of LES of combined wave-current flows over arrays of hemispheres, which can be seen as a surrogate of reefs near the coastal ocean. 
They focused only on low Keulegan-Carpenter numbers \(KC\sim\mathcal{O}(1-10)\), where \(KC\) is defined as the ratio between the wave excursion \(U_{w}T\) and the diameter of the hemispheres, and \(U_{w}\) and \(T\) are the wave orbital velocity and the wave period, respectively (Keulegan _et al._, 1958). However, conclusions from Yu _et al._ (2022) cannot be readily applied to pulsatile flow over urban-like roughness (i.e., large obstacles with sharp edges), mainly because different surface morphologies yield distinct air-canopy interaction regimes under pulsatile forcings (Carr & Plesniak, 2017). Motivated by this knowledge gap, this study proposes a detailed analysis of the dynamics of the mean flow and turbulence in high-Reynolds-number pulsatile flow over idealized urban canopies. The analysis is carried out based on a series of LES of pulsatile flow past an array of surface-mounted cuboids, where the frequency and amplitude of the pressure gradient are programmatically varied. The LES technique has been shown capable of capturing the major flow features of pulsatile flow over various surface conditions (Scotti & Piomelli, 2001; Chang & Scotti, 2004). The objective of this study is to answer fundamental questions pertaining to the impacts of the considered flow unsteadiness on the mean flow and turbulence in the urban boundary layer: (i) Does the presence of flow unsteadiness alter the mean flow profile in a longtime-averaged sense? If so, how do such modifications reflect in the aerodynamic surface parameters? (ii) To what extent does the unsteady pressure gradient impact the overall momentum transport and turbulence generation in a longtime-averaged sense within and above the canopy? (iii) How do the phase-averaged mean flow and turbulence behave in response to the periodically varying pressure gradient? How are such phase-dependent behaviors controlled by the oscillation amplitude and the forcing frequency? This paper is organized as follows. Section 2 introduces the numerical algorithm and the setup of simulations, along with the flow decomposition and averaging procedure. Results are presented and discussed in §3. Concluding remarks are given in §4. ## 2 Methodology ### Numerical procedure A suite of LES is performed using an extensively validated in-house code (Albertson & Parlange, 1999\(a\), \(b\); Bou-Zeid _et al._, 2005; Chamecki _et al._, 2009; Anderson _et al._, 2015; Fang & Porte-Agel, 2015; Li _et al._, 2016; Giometto _et al._, 2016). The code solves the filtered continuity and momentum transport equations in a Cartesian reference system, which read \[\frac{\partial u_{i}}{\partial x_{i}}=0\, \tag{1}\] \[\frac{\partial u_{i}}{\partial t}+u_{j}(\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}})=-\frac{1}{\rho}\frac{\partial p^{*}}{\partial x_{i}}-\frac{\partial\tau_{ij}}{\partial x_{j}}-\frac{1}{\rho}\frac{\partial p_{\infty}}{\partial x_{1}}\delta_{i1}+F_{i}\, \tag{2}\] where \(u_{1}\), \(u_{2}\), and \(u_{3}\) are the filtered velocities along the streamwise (\(x_{1}\)), lateral (\(x_{2}\)), and wall-normal (\(x_{3}\)) direction, respectively. The advection term is written in the rotational form to ensure kinetic energy conservation in the discrete sense (Orszag & Pao, 1975). \(\rho\) represents the constant fluid density, \(\tau_{ij}\) is the deviatoric component of the subgrid-scale (SGS) stress tensor, which is evaluated via the Lagrangian scale-dependent dynamic (LASD) Smagorinsky model (Bou-Zeid _et al._, 2005).
The LASD model has been extensively validated in wall-modeled simulations of unsteady atmospheric boundary layer flow (Momen & Bou-Zeid, 2017; Salesky _et al._, 2017) and in the simulation of flow over surface-resolved urban-like canopies (Anderson _et al._, 2015; Li _et al._, 2016; Giometto _et al._, 2016; Yang, 2016). Note that viscous stresses are neglected in the current study; this assumption is valid as the typical Reynolds number of the ABL flows is \(Re\sim\mathcal{O}(10^{9})\), and the flow is in the fully rough regime. \(p^{*}=p+\frac{1}{3}\rho\tau_{ii}+\frac{1}{2}\rho u_{i}u_{i}\) is the modified pressure, which accounts for the trace of SGS stress and resolved turbulent kinetic energy. The flow is driven by a spatially uniform but temporally periodic pressure gradient, i.e., \[-\partial p_{\infty}/\partial x_{1}=\rho f_{m}\left[1+\alpha_{p}\sin(\omega t )\right]\, \tag{3}\] where \(f_{m}\) denotes the mean pressure gradient. \(\alpha_{p}\) is a constant controlling the amplitude of the forcing, and \(\omega\) represents the forcing frequency. \(\delta_{ij}\) is the Kronecker delta tensor. Periodic boundary conditions apply in the wall-parallel directions, and free-slip boundary conditions are employed at the upper boundary. The lower surface is representative of an array of uniformly distributed cuboids, which serves as a surrogate of urban landscapes. This approach has been commonly adopted in studies of ABL, as this type of surface morphology is characterized by a limited number of length scales, making it amenable to analytical treatment (Reynolds & Castro, 2008; Cheng & Porte-Agel, 2015; Tomas _et al._, 2016; Basley _et al._, 2019; Omidvar _et al._, 2020). Such an approach is justified on the basis that one should first study a problem in its simplest setup before introducing additional complexities. Nonetheless, it is crucial to acknowledge that the introduction of randomness in roughness alters the flow characteristics and the generation of turbulence in rough-wall boundary layer flows. Xie _et al._ (2008) studied flows over random urban-like obstacles and found that turbulence features in the roughness sublayer are controlled by the randomness in the roughness. Giometto _et al._ (2016) conducted an LES study and highlighted that roughness randomness enhances the dispersive stress in the roughness sublayer. Chau & Bhaganagar (2012) carried out a series of DNS of flow over transitionally rough surfaces, demonstrating that different levels of roughness randomness lead to distinct turbulence structures in the near-wall region and subsequently affect turbulence intensities. Spatial derivatives in the wall-parallel directions are computed via a pseudo-spectral collocation method based on truncated Fourier expansions (Orszag, 1970), whereas a second-order staggered finite difference scheme is employed in the wall-normal direction. A second-order Adams-Bashforth scheme is adopted for time integration. Nonlinear advection terms are de-aliased via the 3/2 rule (Canuto _et al._, 2007; Margairaz _et al._, 2018). Roughness elements are explicitly resolved via a discrete-forcing immersed boundary method (IBM), which is also referred to as the direct forcing IBM in the engineering fluid mechanics community (Mohd-Yusof, 1996; Mittal & Iaccarino, 2005; Fang _et al._, 2011). The IBM was originally developed in Mohd-Yusof (1996) and first introduced to ABL studies by Chester _et al._ (2007). Since then, the IBM has been extensively validated in subsequent studies (e.g. 
Graham & Meneveau, 2012; Cheng & Porte-Agel, 2015; Giometto _et al._, 2016; Anderson, 2016; Yang & Anderson, 2018; Li & Bou-Zeid, 2019). Specifically, an artificial force \(F_{i}\) drives the velocity to zero within the cuboids, and an inviscid equilibrium logarithmic wall-layer model (Moeng, 1984; Giometto _et al._, 2016) is applied over a narrow band centered at the fluid-solid interface to evaluate the wall stresses. As shown in Appendix A, for flow over cuboids in the fully rough regime, the use of an equilibrium wall model does not impact the flow field significantly. Xie & Castro (2006) reached a similar conclusion for a comparable flow system. The incompressibility condition is then enforced via a pressure-projection approach (Kim & Moin, 1985). Figure 1 shows a schematic of the computational domain. The size of the domain is \([0,L_{1}]\times[0,L_{2}]\times[0,L_{3}]\) with \(L_{1}=72h\), \(L_{2}=24h\), and \(L_{3}=8h\), where \(h\) is the height of the cuboids. The planar and frontal areas of the cube array are set to \(\lambda_{p}=\lambda_{f}=0.\overline{1}\). Figure 1: Side (_a_) and planar view (_b_) of the computational domain. An aerodynamic roughness length of \(z_{0}=10^{-4}h\) is prescribed at the cube surfaces and the lower surface via the wall-layer model. With the chosen value of \(z_{0}\), the SGS pressure drag is a negligible contributor to the overall momentum balance (Yang & Meneveau, 2016). The domain is discretized using a uniform Cartesian grid \((N_{1},N_{2},N_{3})=(576,192,128)\), where each cube is resolved by \((n_{1},n_{2},n_{3})=(8,8,16)\) grid points. As shown in Appendix B, this resolution yields flow statistics--up to second-order moments--that are poorly sensitive to grid resolution. ### Averaging operations Given the time-periodic nature of the flow system, phase averaging is the natural approach to evaluate flow statistics in pulsatile flows (Scotti & Piomelli, 2001; Bhaganagar, 2008; Weng _et al._, 2016; Onder & Yuan, 2019). Phase averaging can be best understood as a surrogate of Reynolds ensemble averaging for time-periodic flows. The phase and intrinsic volume average (Schmid _et al._, 2019) (hereafter referred to as _phase-average_ for brevity) of a quantity of interest \(\theta\) can be defined as \[\langle\theta\rangle(x_{3},t)=\frac{1}{N_{p}}\sum_{n=1}^{N_{p}}\left(\frac{1}{V_{f}}\int_{x_{3}-\delta_{3}/2}^{x_{3}+\delta_{3}/2}\int_{0}^{L_{2}}\int_{0}^{L_{1}}\theta(x_{1},x_{2},x_{3},t+nT)dx_{1}dx_{2}dx_{3}\right)\,,\quad 0\leqslant t\leqslant T\;, \tag{4}\] where \(\langle\cdot\rangle\) denotes the phase averaging operation, \(V_{f}\) is a thin fluid slab of thickness \(\delta_{3}\) in the \(x_{3}\) direction, \(N_{p}\) denotes the number of the pulsatile cycles over which the averaging operation is performed, and \(T=2\pi/\omega\) is the time period of the pulsatile forcing. A given instantaneous quantity \(\theta\) can be decomposed as \[\theta(x_{1},x_{2},x_{3},t)=\langle\theta\rangle(x_{3},t)+\theta^{\prime}(x_{1},x_{2},x_{3},t)\;, \tag{5}\] where \((\cdot)^{\prime}\) denotes a departure of the instantaneous value from the corresponding phase-averaged quantity. A phase-averaged quantity can be further decomposed into a _longtime average_ and an _oscillatory_ component with zero mean, i.e., \[\langle\theta\rangle(x_{3},t)=\overline{\theta}(x_{3})+\widetilde{\theta}(x_{3},t)\;.
\tag{6}\] This work relies on the Scotti & Piomelli (2001) approach to analyze the flow system; in this approach, an oscillatory quantity \(\widetilde{\theta}\) is split into two components: one corresponding to the flow oscillation at the forcing frequency (fundamental mode), and one which includes contributions from all of the remaining harmonics, i.e., \[\widetilde{\theta}(x_{3},t)=A_{\theta}(x_{3})\sin\left[\omega t+\phi_{\theta}(x_{3})\right]+e_{\theta}(x_{3})\, \tag{7}\] where \(A_{\theta}\) and \(\phi_{\theta}\) are the oscillatory amplitude of the fundamental mode and the phase lag with respect to the pulsatile forcing, respectively. These components are evaluated via minimization of \(\|e_{\theta}\|_{2}\) at each \(x_{3}\). ### Dimensional analysis and suite of simulations Having fixed the domain size, the aerodynamic surface roughness length, and the spatial discretization, the remaining physical parameters governing the problem are: (i) the oscillation amplitude \(\alpha\), defined as the ratio between the oscillation amplitude of \(\langle u_{1}\rangle\) at the top of the domain and the corresponding mean value; (ii) the forcing frequency \(\omega\); (iii) the friction velocity based on the mean pressure gradient \(\overline{u}_{\tau}=\sqrt{f_{m}L_{3}}\); and (iv) the height of roughness elements \(h\). The latter is a characteristic length scale of the flow in the urban canopy layer (UCL). System response is studied in time and along the \(x_{3}\) coordinate direction, so (v) the wall-normal elevation \(x_{3}\) and (vi) time \(t\) should also be included in the parameter set. Note that the viscosity is not taken into account since the flow is in the fully rough regime. Choosing \(\overline{u}_{\tau}\) and \(h\) as repeating parameters, a given normalized longtime (\(\overline{Y}\)) and phase-average (\(\langle Y\rangle\)) quantity of interest can hence be written as \[\overline{Y}=f(\frac{x_{3}}{h},\frac{\omega h}{\overline{u}_{\tau}},\alpha)\,\quad\mbox{and}\quad\langle Y\rangle=g(\frac{x_{3}}{h},\omega t,\frac{\omega h}{\overline{u}_{\tau}},\alpha)\, \tag{8}\] respectively, where \(f\) and \(g\) are universal functions. (8) show that \(\overline{Y}\) and \(\langle Y\rangle\) only depend on two dimensionless parameters, namely \(\alpha\) and \(\omega T_{\rm h}\). \(T_{\rm h}=h/\overline{u}_{\tau}\) is the turnover time of the largest eddies in the UCL and can be best understood as a characteristic time scale of the flow in the UCL. \(\omega T_{\rm h}\sim T_{\rm h}/T\) is hence essentially the ratio between the turnover time of the largest eddies in the UCL (\(T_{\rm h}\)) and the pulsation time period (\(T\)). Also, note that \(\omega T_{\rm h}\) can be seen as an equivalent Strouhal number, which has been used as a non-dimensional parameter to characterize pulsatile flows in earlier studies of this flow system, such as Tardu _et al._ (1994), given their identical mathematical formulation. Four different values of \(\omega T_{\rm h}\) are considered in this study, namely \(\omega T_{\rm h}=\{0.05\pi,0.125\pi,0.25\pi,1.25\pi\}\).
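A minimal numerical sketch of the averaging and decomposition operations (4)-(7), using a synthetic signal rather than LES output, reads as follows.

```python
# Phase-average a signal over pulsatile cycles, split it into longtime-mean and
# oscillatory parts, and fit the fundamental mode A*sin(omega*t + phi) by linear
# least squares. The synthetic signal below is for illustration only.
import numpy as np

T = 2 * np.pi            # pulsation period (omega = 1 here, illustrative)
n_phase, n_cycles = 64, 50
t = np.arange(n_phase) * T / n_phase

rng = np.random.default_rng(0)
# synthetic "instantaneous" records: mean + fundamental + harmonic + noise
signal = (5.0 + 1.2 * np.sin(t + 0.3) + 0.2 * np.sin(2 * t)
          + 0.5 * rng.standard_normal((n_cycles, n_phase)))

phase_avg = signal.mean(axis=0)          # <theta>(t), cf. (4)
longtime_mean = phase_avg.mean()         # overline(theta), cf. (6)
oscillatory = phase_avg - longtime_mean  # tilde(theta)(t)

# Fit tilde(theta) ~ a*sin(omega t) + b*cos(omega t) = A*sin(omega t + phi), cf. (7)
G = np.column_stack([np.sin(t), np.cos(t)])
(a, b), *_ = np.linalg.lstsq(G, oscillatory, rcond=None)
A, phi = np.hypot(a, b), np.arctan2(b, a)
print(f"mean={longtime_mean:.2f}  A={A:.2f}  phi={phi:.2f} rad")
```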
For \(\overline{u}_{\tau}\approx 0.1\) ms\({}^{-1}\) and \(h\approx 10\) m--common ABL values for these quantities (Stull, 1988)--the considered \(\omega T_{\rm h}\) set encompasses time scales variability from a few seconds to several hours, which are representative of submeso-scale phenomena (Mahrt, 2009; Hoover et al., 2015). In this study, we aim to narrow our focus to the _current-dominated_ regime, i.e., \(0<\alpha<1\). Larger values of \(\alpha\) would lead to a _wave-dominated_ regime, which behaves differently. The specific values (\(\alpha=\{0.2,0.4\}\)) have been chosen because they are sufficiently large to lead to an interesting flow response, yet sufficiently far from the wave-dominated regime; this enables us to focus on departures from stationarity induced by flow pulsation in the current-dominated regime. Due to the lack of a straightforward relation between the oscillatory pressure gradient (\(\alpha_{p}\)) and \(\alpha\), \(\alpha_{p}\) has been tuned iteratively to achieve the desired \(\alpha\). As shown in table 1, despite the optimization process, there are still discernible discrepancies between the target \(\alpha\) and its actual value. As also shown in previous works (Scotti & Piomelli, 2001; Bhaganagar, 2008), \(\alpha\) is highly sensitive to variations in \(\alpha_{p}\), which makes it challenging to obtain the desired \(\alpha\) values. The suite of simulations and corresponding acronyms used in this study are listed in table 1. A statistically stationary flow case (\(\alpha=0\)) is also carried out to highlight departures of pulsatile flow cases from the steady state condition. Simulations with pulsatile forcing are initialized with velocity fields from the stationary flow case; the \(\omega T_{\rm h}=\{0.25\pi,1.25\pi\}\) and \(\omega T_{\rm h}=\{0.05\pi,0.125\pi\}\) cases are then integrated in time over \(200T_{L_{3}}\) and \(400T_{L_{3}}\), respectively, where \(T_{L_{3}}=L_{3}/\overline{u}_{\tau}\) is the turnover time of the largest eddies in the domain. This approach yields converged phase-averaged flow statistics. The size of the time step \(\delta t\) is chosen to satisfy the Courant-Friedrichs-Lewy stability condition \((u\delta t)/\delta_{x}\leq 0.05\), where \(u\) is the maximum velocity magnitude at any given spatial location and time during a run, and \(\delta_{x}\) is the grid stencil in the computational domain. Instantaneous three-dimensional snapshots of the velocity and pressure fields are collected every \(T/16\) for the \(\omega T_{\rm h}=\{0.25\pi,1.25\pi\}\) cases and every \(T/80\) for the \(\omega T_{\rm h}=\{0.05\pi,0.125\pi\}\) cases, after an initial \(20T_{L_{3}}\) transient period. ## 3 Results and discussion ### Instantaneous velocity field To gain insights into the instantaneous flow field, figure 2 displays the streamwise fluctuating velocity within and above the UCL from the HM case at two different \(x_{3}\) planes and phases. The chosen two phases \(t=0\) and \(t=\pi/2\) correspond to the end of the deceleration and acceleration period, respectively. It is apparent from figure 2(_a_, _b_) that meandering low-momentum (\(u^{\prime}<0\)) streaks are flanked by adjacent high-momentum (\(u^{\prime}>0\)) ones. Comparing these two, turbulence structures at \(t=\pi/2\) appear smaller in size in both streamwise and spanwise directions. Additionally, within the UCL, apparent vortex shedding occurs on the lee side of the cubes at \(t=\pi/2\), while it is less pronounced at \(t=0\). 
This suggests that flow unsteadiness substantially modifies the flow field during the pulsatile cycle and is expected to have a marked impact on the flow statistics. \begin{table} \begin{tabular}{l c c c c} Acronym & Target \(\alpha\) & Actual \(\alpha\) & \(\alpha_{p}\) & \(\omega T_{\rm h}\) \\ LL & 0.2 & 0.17 & 2.4 & 0.05\(\pi\) \\ LM & 0.2 & 0.16 & 6.0 & 0.125\(\pi\) \\ LH & 0.2 & 0.16 & 12.0 & 0.25\(\pi\) \\ LVH & 0.2 & 0.16 & 60.0 & 1.25\(\pi\) \\ HL & 0.4 & 0.38 & 4.8 & 0.05\(\pi\) \\ HM & 0.4 & 0.36 & 12.0 & 0.125\(\pi\) \\ HH & 0.4 & 0.36 & 24.0 & 0.25\(\pi\) \\ HVH & 0.4 & 0.37 & 120.0 & 1.25\(\pi\) \\ SS & 0.0 & 0.0 & 0.0 & - \\ \end{tabular} \end{table} Table 1: List of LES runs. The naming convention for pulsatile flow cases is as follows. The first letter represents the oscillation amplitude: L for \(\alpha=0.2\) and H for \(\alpha=0.4\). The second and third letters denote the forcing frequencies: L for \(\omega T_{\rm h}=0.05\pi\), M for \(\omega T_{\rm h}=0.125\pi\), H for \(\omega T_{\rm h}=0.25\pi\), and VH for \(\omega T_{\rm h}=1.25\pi\). SS denotes the statistically stationary flow case. Figure 2: Instantaneous streamwise fluctuating velocity field at streamwise/cross-stream plane \(x_{3}=2h\) (_a_, _b_) and \(x_{3}=0.75h\) (_c_, _d_) from the HM case. Panels \(a\) and \(c\) correspond to \(t=0\), whereas panels \(b\) and \(d\) to \(t=\pi/2\). ### Longtime-averaged statistics #### 3.2.1 Longtime-averaged velocity profile Profiles of the longtime-averaged streamwise velocity are shown in figure 3. Flow unsteadiness leads to a horizontal shift of profiles in the proposed semi-logarithmic plot. This behavior is distinct from the "two-log" profile in flow over sand grain roughness (Fredsoe _et al._, 1999; Yang _et al._, 2006; Yuan & Madsen, 2015), and also in stark contrast to the one previously observed in current-dominated pulsatile flow over aerodynamically smooth surfaces, where the longtime-averaged field is essentially unaffected by flow unsteadiness (Tardu & Binder, 1993; Tardu, 2005; Scotti & Piomelli, 2001; Manna _et al._, 2012; Weng _et al._, 2016). As also apparent from figure 3, departures from the statistically stationary flow profile become more significant for larger values of \(\alpha\) and \(\omega\). Variations in the aerodynamic roughness length \(z_{0}\) and displacement height \(d\) parameters with \(\omega T_{\rm h}\) are shown in figure 4 for the considered canopy. These parameters are evaluated via the Macdonald _et al._ (1998) approach, where \(d\) is the barycenter height of the longtime-averaged pressure drag from the urban canopy, and \(z_{0}\) is determined via curve fitting. More specifically, \[d=\frac{\int_{0}^{h}\overline{D}(x_{3})x_{3}dx_{3}}{\int_{0}^{h}\overline{D}(x_{3})dx_{3}}\, \tag{1}\] where the wall-normal distribution of the instantaneous canopy pressure drag \(D\) is obtained by taking an intrinsic volume average of the pressure gradient, i.e., \[D(x_{3},t)=\frac{1}{V_{f}}\int_{x_{3}-\delta_{3}/2}^{x_{3}+\delta_{3}/2}\int_{0}^{L_{2}}\int_{0}^{L_{1}}\frac{1}{\rho}\frac{\partial p}{\partial x_{1}}dx_{1}dx_{2}dx_{3}. \tag{2}\] Note that, in principle, one should also account for the SGS drag contribution in (2); in this work, we omit SGS contributions because they are negligible when compared to the total drag--a direct result of the relatively small aerodynamic roughness length that is prescribed in the wall-layer model (see §2.1). Figure 3: Wall-normal profiles of longtime-averaged streamwise velocity \(\overline{u}_{1}\) of high-amplitude cases (_a_) and low-amplitude cases (_b_).
Line color specifies the forcing frequency: navy blue, \(\omega T_{\rm h}=0.05\pi\); dark green, \(\omega T_{\rm h}=0.125\pi\); light green, \(\omega T_{\rm h}=0.25\pi\); yellow-green, \(\omega T_{\rm h}=1.25\pi\). \(\overline{u}_{1}\) from the SS case, represented by the red dashed line, is included for comparison. Black solid line indicates the slope of \(1/\kappa\), with \(\kappa=0.4\). The shaded area highlights the fitting region for the estimation of the roughness length scale \(z_{0}\). \(z_{0}\) is determined by minimizing the mean square error (MSE) between the longtime-averaged velocity and the law of the wall with \(\kappa=0.4\) in the \(x_{3}\in[2h,6h]\) interval, i.e., \[E=\|\overline{u}_{1}-\frac{\overline{u}_{\tau}}{\kappa}\log(\frac{x_{3}-d}{z_{0}})\|_{2}. \tag{3}\] The fitting interval is highlighted in figure 3. The estimated \(z_{0}\) was found to be poorly sensitive to variations in the fitting interval within the considered range of values. Cheng _et al._ (2007) argued that Macdonald's method is accurate when surfaces are characterized by a low packing density--a requirement that is indeed satisfied in the considered cases. As apparent from figure 4, \(d\) is poorly sensitive to variations in both \(\alpha\) and \(\omega\) (variations across cases are within the \(\pm 3\%\) range). This behavior can be explained by considering that for flows over sharp-edged obstacles, such as the ones considered herein, flow separation patterns are poorly sensitive to variations in \(\alpha\) and \(\omega\), resulting in a rather constant total volume of wake regions and momentum deficits across cases, and constant longtime-averaged pressure on the surfaces of the cuboids. The \(d\) parameter is here evaluated as an integral of the pressure gradient field over the surface area, so the above considerations provide a physical justification for the observed behavior. Note that this finding might not be generalizable across all possible roughness morphologies. For instance, Yu _et al._ (2022) showed that separation patterns in pulsatile flows over hemispheres feature a rather strong dependence on \(\alpha\) and \(\omega\), yielding corresponding strong variations in \(d\). Contrary to \(d\), the \(z_{0}\) parameter is strongly impacted by flow unsteadiness, and its value increases with \(\alpha\) and \(\omega\). Bhaganagar (2008) reported a similar upward shift of velocity profile in her simulations of pulsatile flow over transitionally rough surfaces at a low Reynolds number. She attributed the increase in \(z_{0}\) to the resonance between the unsteady forcing and the vortices shed by roughness elements, which is induced when the forcing frequency approaches that of the vortex shedding. However, such an argument does not apply to the cases under investigation, since we observed no spurious peaks in the temporal streamwise velocity spectrum. Rather, the increase in \(z_{0}\) stems from the quadratic relation between the phase-averaged canopy drag and velocity, as elaborated below. Figure 4: Normalized aerodynamic roughness length \(z_{0}\) (_a_) and displacement height \(d\) (_b_). Different colors correspond to different oscillation amplitudes: blue, \(\alpha=0.2\); red, \(\alpha=0.4\). The black square symbol denotes the reference SS case.
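As a concrete illustration of this estimation procedure, the short sketch below computes \(d\) as the drag barycenter and fits \(z_{0}\) by minimizing the log-law mean square error over \(x_{3}\in[2h,6h]\); the profiles are synthetic placeholders rather than LES output.

```python
# Estimate d from the drag barycenter and z_0 from a least-squares log-law fit.
# Synthetic profiles only; not LES data.
import numpy as np

kappa, h, u_tau = 0.4, 1.0, 1.0
x3 = np.linspace(0.05 * h, 8.0 * h, 400)          # uniform wall-normal grid

# placeholder longtime-averaged drag profile (nonzero only inside the canopy)
D = np.where(x3 <= h, x3 / h, 0.0)
d = np.sum(D * x3) / np.sum(D)                    # drag barycenter = displacement height

# placeholder velocity profile consistent with a known "true" z_0
z0_true = 0.08 * h
u1 = np.where(x3 > d, (u_tau / kappa) * np.log(np.maximum(x3 - d, 1e-9) / z0_true), 0.0)

# least-squares fit of z_0 in [2h, 6h]: the log law is linear in log(x_3 - d),
# so log(z_0) follows from the mean residual
mask = (x3 >= 2 * h) & (x3 <= 6 * h)
log_z0 = np.mean(np.log(x3[mask] - d) - kappa * u1[mask] / u_tau)
print(f"d = {d:.3f} h,  fitted z0 = {np.exp(log_z0):.4f} h  (true {z0_true:.4f} h)")
```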
#### 3.2.2 Phase-averaged drag-velocity relation The phase-averaged canopy drag \(\langle D\rangle\) and the local phase-averaged velocity \(\langle u_{1}\rangle\) at \(x_{3}/h\approx 0.8\) are shown in figure 5, and are representative of corresponding quantities at different heights within the UCL. Results from the cases with the three lower frequencies and those from the SS case cluster along a single curve, highlighting the presence of a frequency-independent one-to-one mapping between \(\langle D\rangle\) and \(\langle u_{1}\rangle\). As apparent from figure 6(_a_), at these three forcing frequencies, the interaction between the wind and the canopy layer is in a state of quasi-equilibrium, i.e., \(\langle D\rangle\) is in phase with \(\langle u_{1}\rangle\). Moreover, the shape of the aforementioned curve generally resembles the well-known quadratic drag law, which is routinely used to parameterize the surface drag in reduced order models for stationary flow over plant and urban canopies (Lettau, 1969; Raupach, 1992; Katul _et al._, 2004; Poggi _et al._, 2004; Macdonald _et al._, 1998; Coceal & Belcher, 2004). This finding comes as no surprise, given the quasi-equilibrium state of the \(\langle D\rangle-\langle u_{1}\rangle\) relation for the three lower forcing frequencies. On the other hand, as shown in figure 6(_b_), results from the highest-frequency cases (LVH and HVH) exhibit an orbital pattern, which stems from the time lag between \(\langle D\rangle\) and \(\langle u_{1}\rangle\). Artificially removing this time lag indeed yields a better clustering of data points from the highest-frequency cases along the quadratic drag law (see figure 5_b_). These findings suggest the following parameterization for \(\langle D\rangle\): \[\langle D\rangle(x_{3},t)=C_{d}(x_{3})\lambda_{f}\langle u_{1}\rangle\left|\langle u_{1}\rangle\right|(x_{3},t+\Delta t)\, \tag{10}\] where \(C_{d}\) is a sectional drag coefficient that is constant in time and does not depend on \(\alpha\) nor \(\omega\), and \(\Delta t\) accounts for the time lag between \(\langle D\rangle\) and \(\langle u_{1}\rangle\), which instead depends on \(\alpha\) and \(\omega\). Note that, throughout the considered cases, the wall-normal-averaged drag coefficient \(\int_{0}^{h}C_{d}dx_{3}/h\approx 0.9\)--a value that is similar to values previously reported for stationary flow over cube arrays (Coceal & Belcher, 2004) (note that the exact value depends on the formula used to define \(C_{d}\)). Morison _et al._ (1950) developed a semi-empirical model relating the phase-averaged drag generated by obstacles in an oscillatory boundary layer to a given phase-averaged velocity--a model that has been extensively used in the ocean engineering community to evaluate drag from surface-mounted obstacles (Lowe _et al._, 2005; Yu _et al._, 2018, 2022). The Morison model assumes that the total force applied to the fluid by obstacles consists of a quadratic drag term and an inertial term; the latter accounts for the added mass effect and the Froude-Krylov force arising as a direct consequence of the unsteady pressure field. One might argue that the Morison model could also be used to evaluate surface drag as a function of the phase-averaged velocity at a given \(x_{3}\) for the cases under consideration, but unfortunately, this is not the case. As shown in Patel & Witz (2013), the Morison model provides relatively accurate evaluations of obstacle drag when the phase-averaged accelerations at different \(x_{3}\) are in phase.
This is not the case in this study, where important phase lags between phase-averaged accelerations at different wall-normal locations substantially degrade the accuracy of such a model. This behavior can be easily inferred from figure 16. In the following, we will make use of (10) to derive an alternative phenomenological surface-drag model for the considered flow system. #### 3.2.3 Mapping roughness length variability to longtime-averaged flow statistics \(z_{0}\) and \(d\) are input parameters of surface flux parameterizations that are routinely used in numerical weather prediction, climate projection, and pollutant dispersion models (see, e.g., Skamarock _et al._, 2008; Benjamin _et al._, 2016). These models are typically based on Reynolds-averaged Navier-Stokes closures, and feature time steps that can go from one hour up to several days. When departures from stationarity occur at a time scale that is much smaller than the time step of the model, model predictions are essentially longtime-averaged quantities, and the validity of surface flux parameterizations based on flow homogeneity and stationarity assumptions may break down. As a first step towards addressing this problem, this section proposes a phenomenological model relating \(z_{0}\) to longtime-averaged pulsatile-flow statistics. The longtime-averaged friction velocity can be written as \[\overline{u}_{\tau}^{2}=\int_{0}^{h}\overline{D}dx_{3}=\int_{0}^{h}C_{d}\lambda_{f}\overline{\left\langle u_{1}\right\rangle|\left\langle u_{1}\right\rangle|}dx_{3}\, \tag{11}\] where \(\overline{D}\) is the longtime-averaged surface drag. Figure 6: Time evolution of \(\left\langle D\right\rangle\) (black solid lines) and \(\left\langle u_{1}\right\rangle\) (red dashed lines) at \(x_{3}/h\approx 0.8\) from the HL (_a_) and LVH (_b_) cases. The acronyms of LES runs are defined in table 1. Note that the wall-normal structure of \(C_{d}\) is approximately constant in the UCL (not shown here), except in the vicinity of the surface, where local contributions to the overall drag are however minimal due to the small value of \(\langle u_{1}\rangle\). Thus, it is reasonable to assume \(C_{d}\) is constant along the wall-normal direction. Also, depending on \(\alpha\) and \(\omega\), the flow within the canopy might undergo a local reversal in the phase-averaged sense, meaning that \(\langle u_{1}\rangle<0\) at selected \(x_{3}\) locations. Assuming that there is no flow reversal within the UCL, i.e., \(\langle u_{1}\rangle\geq 0\) in \(z\leq h\), (10) can be written as \[\overline{u}_{\tau}^{2}=C_{d}\lambda_{f}\left(\int_{0}^{h}\overline{u}_{1}^{2}dx_{3}+\int_{0}^{h}\overline{\widetilde{u}_{1}^{2}}dx_{3}\right)\, \tag{11}\] where the second term on the right-hand side of (11) is identically zero for the SS case. (11) essentially states that an unsteady canopy layer requires a lower longtime-averaged wind speed to generate the same drag as a steady canopy layer, since quadratic drag contributions are generated by flow unsteadiness (the second term on the right-hand side of (11)). Note that \(\sqrt{\int_{0}^{h}(\cdot)^{2}dx_{3}/h}\) is an averaging operation over the UCL based on the \(L_{2}\) norm.
Rearranging terms in (11) leads to \[\overline{u}_{1,\mathrm{avg}}=\sqrt{\frac{\overline{u}_{\tau}^{2}}{C_{d}\lambda_{f}h}-\frac{1}{h}\int_{0}^{h}\overline{\widetilde{u}_{1}^{2}}dx_{3}}\, \tag{12}\] and \[\overline{u}_{1,\mathrm{avg}}^{\mathrm{SS}}=\sqrt{\frac{\overline{u}_{\tau}^{2}}{C_{d}\lambda_{f}h}}\, \tag{13}\] for the pulsatile cases and the SS case, respectively. Here \((\cdot)_{\mathrm{avg}}\) denotes the canopy-averaged quantity, and \((\cdot)^{\mathrm{SS}}\) represents a quantity pertaining to the SS case. As discussed in §3.2.1, flow unsteadiness yields a shift of the \(\overline{u}_{1}\) profile with negligible variations in the \(d\) parameter when compared to the stationary flow with the same \(\overline{u}_{\tau}\). In terms of the law-of-the-wall, this behavior can be described as a variation in \(z_{0}\), i.e., \[\overline{u}_{1}^{\mathrm{SS}}-\overline{u}_{1}=\frac{\overline{u}_{\tau}}{\kappa}\log\left(\frac{x_{3}-d}{z_{0}^{\mathrm{SS}}}\right)-\frac{\overline{u}_{\tau}}{\kappa}\log\left(\frac{x_{3}-d}{z_{0}}\right)\, \tag{14}\] where (14) is valid for any \(x_{3}\) in the logarithmic region. The shift in the velocity profile is approximately constant for \(x_{3}\in[0,L_{3}]\), so one can write \[\overline{u}_{1}^{\mathrm{SS}}-\overline{u}_{1}\approx\overline{u}_{1,\mathrm{avg}}^{\mathrm{SS}}-\overline{u}_{1,\mathrm{avg}}\, \tag{15}\] and substituting (12)-(14) into (15) finally yields \[z_{0}=z_{0}^{\mathrm{SS}}\exp\left[\kappa\left(\sqrt{\frac{1}{C_{d}\lambda_{f}h}}-\sqrt{\frac{1}{C_{d}\lambda_{f}h}-\frac{\int_{0}^{h}\overline{\widetilde{u}_{1}^{2}}dx_{3}/h}{\overline{u}_{\tau}^{2}}}\right)\right]. \tag{16}\] (16) is a diagnostic model relating variations in the \(z_{0}\) parameter to the UCL phase-averaged velocity variance--a longtime-averaged quantity. \(z_{0}\) estimates from (16) are compared against LES results in figure 7, using \(C_{d}=0.9\). It is apparent that the proposed model is able to accurately evaluate \(z_{0}\) for most of the considered cases. For the LVH, HH, and HVH runs, \(z_{0}\) is overestimated by the model; these departures are attributed to the presence of flow reversal in the UCL, which contradicts the model assumptions. (16) highlights that, in the absence of flow reversal, \(z_{0}\) can be described as a monotonically increasing function of the \(\widetilde{u}_{1}\) variance in the UCL. As explained at the beginning of this section, this finding is important from a flow modeling perspective, because it relates a longtime-averaged flow statistic to the \(z_{0}\) parameter. Note that \(z_{0}^{\rm SS}\) can be accurately evaluated using any existing parameterization for stationary ABL flow over aerodynamically rough surfaces, including the Lettau (1969), Raupach (1992), and Macdonald _et al._ (1998) models. Further, \(C_{d}=0.9\) (see discussion in §3.2.2), and \(\lambda_{f}\) and \(h\) are morphological parameters that are in general available a priori. In weather forecasting and climate models, \(\widetilde{u}_{1}\) is an SGS quantity and would hence have to be parameterized as a function of longtime-averaged statistics that the model computes or from available in-situ or remote sensing measurements. #### 3.2.4 Longtime-averaged resolved Reynolds stress This section shifts the attention to longtime-averaged resolved Reynolds stresses. For all of the considered cases, contributions from SGS stresses account for \(<1\%\) of the total phase-averaged Reynolds stresses and are hence not discussed.
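As a brief aside to the preceding subsection, the monotonic dependence of \(z_{0}\) on the oscillatory velocity variance encoded in (16) can be probed with a few lines of code; the parameter values below are illustrative assumptions, not the LES results.

```python
# Evaluate the diagnostic relation (16) for a few values of the UCL-averaged
# oscillatory velocity variance. Illustrative parameter values only.
import numpy as np

kappa, C_d, lam_f, h = 0.4, 0.9, 1.0 / 9.0, 1.0   # lam_f = lambda_f = 0.111...
z0_ss = 1e-1 * h          # roughness length of the stationary reference (assumed)
u_tau = 1.0               # longtime-averaged friction velocity (assumed)

def z0_model(osc_var_ucl):
    """osc_var_ucl: (1/h) * integral over the UCL of the longtime-averaged
    oscillatory velocity variance, in units of u_tau^2."""
    base = 1.0 / (C_d * lam_f * h)
    return z0_ss * np.exp(kappa * (np.sqrt(base)
                                   - np.sqrt(base - osc_var_ucl / u_tau**2)))

for var in (0.0, 0.5, 1.0, 2.0):
    print(f"oscillatory variance = {var:3.1f} u_tau^2  ->  "
          f"z0/z0_SS = {z0_model(var) / z0_ss:.3f}")
```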
Throughout the boundary layer, \(\overline{u^{\prime}_{1}u^{\prime}_{3}}\) profiles are indistinguishable from the SS one (not shown), indicating a weak dependence of such a quantity on \(\alpha\) and \(\omega\). Figure 7: Comparison between \(z_{0}\) estimated via (3.11) and \(z_{0}\) from LES. Symbols and colors correspond to those used in figure 5. Figure 8: Longtime-averaged resolved turbulent kinetic energy \(\overline{k}=(\overline{u^{\prime}_{1}u^{\prime}_{1}}+\overline{u^{\prime}_{2}u^{\prime}_{2}}+\overline{u^{\prime}_{3}u^{\prime}_{3}})/2\) from high-amplitude cases (_a_) and low-amplitude cases (_b_). Line colors correspond to those used in figure 3. Above the UCL, the divergence of \(\overline{u^{\prime}_{1}u^{\prime}_{3}}\) balances the longtime-averaged driving pressure gradient \(f_{m}\), i.e., \[-\frac{\partial\overline{u^{\prime}_{1}u^{\prime}_{3}}}{\partial x_{3}}+f_{m}=0. \tag{12}\] Since \(f_{m}\) does not vary across the considered cases and \(\overline{u^{\prime}_{1}u^{\prime}_{3}}(L_{3})=0\), a collapse of \(\overline{u^{\prime}_{1}u^{\prime}_{3}}\) profiles in this region was to be expected from the mathematical structure of the governing equations. Within the UCL, the longtime-averaged momentum budget reads \[-\frac{\partial\overline{u^{\prime}_{1}u^{\prime}_{3}}}{\partial x_{3}}+f_{m}-\overline{D}=0. \tag{13}\] Given that \(\overline{u^{\prime}_{1}u^{\prime}_{3}}\) profiles collapse in this region, \(\overline{D}\) is also expected to feature a weak dependence on \(\alpha\) and \(\omega\), which in turn explains the weak variations in the \(d\) parameter that were observed in §3.2.1. Figure 8 depicts wall-normal profiles of the longtime-averaged resolved turbulent kinetic energy, which is defined as \[\overline{k}=(\overline{u^{\prime}_{1}u^{\prime}_{1}}+\overline{u^{\prime}_{2}u^{\prime}_{2}}+\overline{u^{\prime}_{3}u^{\prime}_{3}})/2. \tag{14}\] Such a quantity features a relatively rapid increase in the UCL, which, as discussed in Schmid _et al._ (2019), is due to dispersive contributions caused by the canopy geometry. Also note that the rapid decrease in the near-surface region signals the presence of variations over scales of variability smaller than the vertical grid stencil (Coceal _et al._, 2007). Flow unsteadiness has a relatively more important impact on such a quantity, especially for flow in the UCL and for cases with a high oscillation amplitude. An increase in \(\alpha\) generally results in more pronounced departures of \(\overline{k}\) profiles from the SS one. This behavior can be best explained by considering the shear production terms in the budget equation for \(\overline{k}\), i.e., \[\overline{\mathcal{P}}=-\overline{\langle u^{\prime}_{1}u^{\prime}_{3}\rangle\frac{\partial\langle u_{1}\rangle}{\partial x_{3}}}=\underbrace{-\overline{u^{\prime}_{1}u^{\prime}_{3}}\frac{\partial\overline{u}_{1}}{\partial x_{3}}}_{\overline{\mathcal{P}}_{1}}\underbrace{-\overline{\widetilde{u^{\prime}_{1}u^{\prime}_{3}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}}}_{\overline{\mathcal{P}}_{2}}\, \tag{15}\] where \(\overline{\mathcal{P}}_{1}\) represents the work done by \(\overline{u^{\prime}_{1}u^{\prime}_{3}}\) onto the longtime-averaged flow field, and \(\overline{\mathcal{P}}_{2}\) is the longtime average of the work done by the oscillatory shear stress \(\widetilde{u^{\prime}_{1}u^{\prime}_{3}}\) onto the oscillatory flow field.
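For concreteness, the decomposition in (15) can be evaluated directly from phase-averaged profiles; the short sketch below uses synthetic placeholder fields rather than the LES data.

```python
# Split phase-averaged shear stress and velocity into longtime-mean and oscillatory
# parts, then compute P1 (mean-shear production) and P2 (longtime average of the
# oscillatory work), cf. (15). Synthetic fields for illustration only.
import numpy as np

n_x3, n_phase = 64, 32
x3 = np.linspace(0.1, 8.0, n_x3)
phase = np.linspace(0.0, 2 * np.pi, n_phase, endpoint=False)

# synthetic phase-averaged fields, shape (n_x3, n_phase)
u1 = 2.5 * np.log(x3)[:, None] * (1.0 + 0.3 * np.sin(phase)[None, :])
uw = -1.0 * (1.0 - x3 / 8.0)[:, None] * (1.0 + 0.3 * np.sin(phase + 0.5)[None, :])

u1_mean, uw_mean = u1.mean(axis=1), uw.mean(axis=1)            # longtime averages
u1_osc, uw_osc = u1 - u1_mean[:, None], uw - uw_mean[:, None]  # oscillatory parts

dudz_mean = np.gradient(u1_mean, x3)
dudz_osc = np.gradient(u1_osc, x3, axis=0)

P1 = -uw_mean * dudz_mean                      # first term in (15)
P2 = -(uw_osc * dudz_osc).mean(axis=1)         # second term in (15)
print("P1 at mid-height:", P1[n_x3 // 2], " P2 at mid-height:", P2[n_x3 // 2])
```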
Figure 9(_a_) shows that \(\overline{\mathcal{P}}_{1}\) is poorly sensitive to variations in \(\alpha\) and \(\omega\). This behavior stems from the constancy of \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\partial\overline{u}_{1}/\partial x_{3}\) across cases (the latter can be inferred from the systematic shift of \(\overline{u}_{1}\) profiles in figure 3). Conversely, \(\overline{\mathcal{P}}_{2}\) from high-amplitude cases are generally larger than those from low-amplitude ones, mainly due to the higher \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\widetilde{u}_{1}\) values. Discrepancies in \(\overline{\mathcal{P}}_{2}\) among high-amplitude cases are larger than those among low-amplitude ones, which ultimately yields the observed variability in \(\overline{k}\). Further insight into the problem can be gained by looking at the normal components of the longtime-averaged resolved Reynolds stress tensor, which are shown in figure 10. In this case, it is apparent that increases in the oscillation frequency lead to a decrease in \(\overline{u_{1}^{\prime}u_{1}^{\prime}}\) and an increase in \(\overline{u_{2}^{\prime}u_{2}^{\prime}}\) and \(\overline{u_{3}^{\prime}u_{3}^{\prime}}\) within the UCL--a behavior that is especially apparent for the high-amplitude cases (figure 10(e,d,f)). These trends can be best understood by examining the pressure-strain terms from the budget equations of the longtime-averaged resolved Reynolds stresses, i.e., \[R_{ij}=\frac{\overline{p^{\prime}}}{\rho}(\frac{\partial u_{j}^{\prime}}{ \partial x_{i}}+\frac{\partial u_{i}^{\prime}}{\partial x_{j}}) \tag{16}\] \(R_{ii}\) are responsible for redistributing kinetic energy among the longtime-averaged normal Reynolds stresses (Pope 2000), and are shown in figure 11. With the exception of the very near surface region (\(x_{3}\lessapprox 0.2\)) where no clear trend can be observed, increases in \(\omega\) and \(\alpha\) yield a decrease in \(R_{11}\) and increase in \(R_{22}\) and \(R_{33}\), which justify the observed isotropization of turbulence in the UCL. ### Oscillatory fields This section shifts the focus to the time evolution of velocity and resolved Reynolds stresses during the pulsatile cycle. #### 3.3.1 Oscillation amplitude impacts on the oscillatory fields Flow statistics from the LL and HL simulations are here examined to study the impact of the oscillation amplitude (\(\alpha\)) on the oscillatory velocity and resolved Reynolds stresses. Only the LL and HL runs are discussed, as they were found to be representative of the observed behaviors for the other low and high-amplitude cases, respectively. Figure 12 contrasts the profiles of oscillatory velocity (\(\widetilde{u}_{1}\)) from the LL and HL runs Figure 11: Pressure redistribution terms \(R_{ii}\). (_a_, \(b\), _c_): low-amplitude cases; (_d_, \(e\), _f_): high-amplitude cases. Line colors correspond to those used in figure 3. Figure 12: Profiles of the oscillatory velocity \(\widetilde{u}_{1}\) from the LL (blue) and HL (red) cases at different phases of the pulsatile cycle. Profiles are \(T/4\) apart. at four equispaced phases during the pulsatile cycle. \(0<t<T/2\) and \(T/2<t<T\) are flow acceleration and deceleration periods, respectively. \(\widetilde{u}_{1}\) at the top of the domain is controlled by the \(\alpha\) parameter, so it is natural to use \(\overline{u}_{\tau}\alpha\) as a normalization constant to study the problem. 
Manna _et al._ (2012) investigated pulsatile open channel flow over an aerodynamically smooth surface, and showed that using such a normalization constant is indeed convenient as it leads to a collapse of \(\widetilde{u}_{1}\) profiles across cases with different amplitudes, even in the presence of strong flow reversal. This indicates that, at a given forcing frequency, the amplitude of the oscillatory velocity within the domain is proportional to that at the top of the domain. In this work, we show that such a scaling works well also in the presence of aerodynamically rough surfaces, as evidenced by the excellent collapse of \(\widetilde{u}_{1}/(\overline{u}_{\tau}\alpha)\) profiles in figure 12. This is not trivial, especially when considering the different scaling of surface drag between aerodynamically smooth and rough walls in the presence of flow unsteadiness. Figure 13: Profiles of the oscillatory resolved Reynolds shear stress \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) from the LL (blue) and HL (red) cases at different phases of the pulsatile cycle. Profiles are \(T/4\) apart. The oscillatory resolved turbulent kinetic energy can be defined as \[\widetilde{k}=\frac{1}{2}(\widetilde{u_{1}^{\prime}u_{1}^{\prime}}+\widetilde{u_{2}^{\prime}u_{2}^{\prime}}+\widetilde{u_{3}^{\prime}u_{3}^{\prime}}). \tag{3.17}\] As shown in figures 13 and 14, both the oscillatory resolved Reynolds shear stress \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\widetilde{k}\) scale with \(\overline{u}_{\tau}^{2}\alpha\). Although not shown, the three oscillatory normal Reynolds stresses also obey such a scaling, suggesting that any change in \(\widetilde{k}\) is proportionally distributed to the three normal Reynolds stresses during the pulsatile cycle. As discussed in the following paragraphs, the scaling of \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\widetilde{k}\) is a direct consequence of the mild non-linearity in the production of \(\widetilde{k}\) and \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\). Subtracting the budget equation of \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) from that of \(\langle u_{1}^{\prime}u_{3}^{\prime}\rangle\), one obtains \[\frac{\partial\widetilde{u_{1}^{\prime}u_{3}^{\prime}}}{\partial t}=\underbrace{-2\widetilde{u_{3}^{\prime}u_{3}^{\prime}}\frac{\partial\overline{u}_{1}}{\partial x_{3}}}_{\widetilde{\mathcal{P}}_{13,l1}}\underbrace{-2\overline{u_{3}^{\prime}u_{3}^{\prime}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}}_{\widetilde{\mathcal{P}}_{13,l2}}\underbrace{-2\widetilde{u_{3}^{\prime}u_{3}^{\prime}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}+2\overline{\widetilde{u_{3}^{\prime}u_{3}^{\prime}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}}}_{\widetilde{\mathcal{P}}_{13,nl}}+\ldots\,, \tag{3.18}\] where \(\widetilde{\mathcal{P}}_{13,l1}\) and \(\widetilde{\mathcal{P}}_{13,l2}\) are the linear production terms of \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\), while \(\widetilde{\mathcal{P}}_{13,nl}\) is the nonlinear production.
Similarly, the budget equation of \(\widetilde{k}\) can be written as \[\frac{\partial\widetilde{k}}{\partial t}=\underbrace{-\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\frac{\partial\overline{u}_{1}}{\partial x_{3}}}_{\widetilde{\mathcal{P}}_{k,l1}}\underbrace{-\overline{u_{1}^{\prime}u_{3}^{\prime}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}}_{\widetilde{\mathcal{P}}_{k,l2}}\underbrace{-\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}+\overline{\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\frac{\partial\widetilde{u}_{1}}{\partial x_{3}}}}_{\widetilde{\mathcal{P}}_{k,nl}}+\ldots\,, \tag{3.19}\] where \(\widetilde{\mathcal{P}}_{k,l1}\) and \(\widetilde{\mathcal{P}}_{k,l2}\) are the linear production terms of \(\widetilde{k}\), while \(\widetilde{\mathcal{P}}_{k,nl}\) is the nonlinear production. As shown in figure 15, the nonlinear production term is substantially smaller than the sum of the corresponding linear productions for both \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\widetilde{k}\). Figure 15: Normalized production terms for \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) (_a_) and for \(\widetilde{k}\) (_b_) at \(x_{3}/h=1.5\) from the LL (blue) and HL (red) cases. Solid lines denote the linear production terms, and dashed lines represent the nonlinear production terms. Given that \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\), \(\overline{u_{3}^{\prime}u_{3}^{\prime}}\), and \(\partial\overline{u}_{1}/\partial x_{3}\) from the LL and HL cases are similar, when \(\widetilde{u}_{1}\sim\overline{u}_{\tau}\alpha\) and \(\widetilde{u_{3}^{\prime}u_{3}^{\prime}}\sim\overline{u}_{\tau}^{2}\alpha\), the total production of \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) is \[\widetilde{\mathcal{P}}_{13}\approx\widetilde{\mathcal{P}}_{13,l1}+\widetilde{\mathcal{P}}_{13,l2}\sim\frac{\overline{u}_{\tau}^{3}\alpha}{h}\,, \tag{3.20}\] whereas that of \(\widetilde{k}\) is \[\widetilde{\mathcal{P}}_{k}\approx\widetilde{\mathcal{P}}_{k,l1}+\widetilde{\mathcal{P}}_{k,l2}\sim\frac{\overline{u}_{\tau}^{3}\alpha}{h}. \tag{3.21}\] This in turn leads to the observed \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\sim\overline{u}_{\tau}^{2}\alpha\) and \(\widetilde{k}\sim\overline{u}_{\tau}^{2}\alpha\). This scaling is expected to fail under two conditions. First, when \(\alpha\) is sufficiently large, the contributions of \(\widetilde{\mathcal{P}}_{13,nl}\) and \(\widetilde{\mathcal{P}}_{k,nl}\) can no longer be neglected. Second, when the variations in \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) or \(\overline{u_{3}^{\prime}u_{3}^{\prime}}\) among cases with different \(\alpha\) are so large that the linear production terms no longer scale with \(\overline{u}_{\tau}^{2}\alpha\) (which is what occurs within the UCL for the LVH and HVH cases, see figure 10), (3.20) and (3.21) do not hold. The above analysis has shown that the \(\alpha\) parameter primarily controls the amplitude of the oscillatory flow quantities, but has little impact on the wall-normal profiles of those quantities, which are instead controlled by \(\omega\), as will be shown in the following section. This behavior is also expected to hold in the smooth-wall setup (Manna _et al._, 2012), although the mechanism responsible for generating drag over aerodynamically-smooth surfaces is quite distinct. #### 3.3.2 Forcing frequency impacts on the oscillatory fields This section discusses how the oscillatory velocity and resolved Reynolds stresses respond to variations in the forcing frequency.
Only low-amplitude cases will be considered since conclusions can be generalized across the considered runs. Figure 16 and 17 present the time evolution of \(\widetilde{u}\) and \(\partial\widetilde{u}/\partial x_{3}\). Three distinct frequency regimes can be identified. The first regime corresponds to the highest amongst the considered forcing frequencies, i.e., the LVH case. For this flow regime, the oscillation in \(\partial\widetilde{u}/\partial x_{3}\) is typically confined within the UCL. This behavior can be best explained when considering that the time period of the oscillation is comparable to the eddy turnover time of turbulence in the UCL, i.e., \(T\approx T_{h}\), which is the characteristic time scale for "information transport" within the UCL. At the three lower forcing frequencies, i.e., the LL, LM, and LH cases, on the contrary, the interaction between the roughness elements and the unsteady flow induces an oscillation in the shear rate, which has a phase lag of Figure 16: Space–time diagrams of \(\widetilde{u}_{1}/\overline{u}_{\tau}\) from the LL (_a_), LM (_b_), LH (_c_), and LVH (_d_) cases. Horizontal dashed lines identify the top of the UCL. roughly \(\pi/2\) with respect to the pulsatile forcing at the top of the UCL. This oscillating shear rate then propagates in the positive wall-normal direction while being progressively attenuated. The propagation speed of the oscillating shear rate appears to be constant for a given forcing frequency, which can be readily inferred by the constant tilting angle in the \(\partial\widetilde{u}_{1}/\partial x_{3}\) contours. The flow region affected by the oscillating shear rate defines the so-called "Stokes layer". For cases with two moderate frequencies, i.e., the LM and LH cases, the Stokes layer thickness (\(\delta_{s}\)) is smaller than the domain height \(L_{3}\). Above the Stokes layer, the slope of \(\widetilde{u}_{1}\) is nominally zero over the pulsatile cycle, and the flow in such a region resembles a plug-flow. On the contrary, in the LL case, the entire domain is affected by the oscillating shear rate, thus \(\delta_{s}>L_{3}\). Figure 18 and 19 depict the time evolution of \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\widetilde{u_{1}^{\prime}u_{1}^{\prime}}\), respectively. Although the contours of \(\widetilde{u_{2}^{\prime}u_{2}^{\prime}}\) and \(\widetilde{u_{3}^{\prime}u_{3}^{\prime}}\) are not shown, these quantites vary in a similar fashion as \(\widetilde{u_{1}^{\prime}u_{1}^{\prime}}\) during the pulsatile cycle. These space-time diagrams confirm that the considered frequencies encompass three distinct flow regimes. For the LVH case, time variations of the oscillatory resolved Reynolds stresses are essentially zero above the UCL. In cases with three lower frequencies, oscillatory resolved Reynolds stresses exhibit a similar behavior to \(\partial\widetilde{u}_{1}/\partial x_{3}\). Specifically, there appear oscillating waves propagating away from the UCL at a constant speed and meanwhile getting weakened. In the LM and LH cases, such oscillating waves are fully dissipated at the upper limit of the Stokes layer, above which the turbulence is "frozen" and passively advected. A visual comparison of the tilting angles in the contours of oscillatory resolved Reynolds stresses and \(\partial\widetilde{u}_{1}/\partial x_{3}\) reveals that the oscillating waves in these quantities feature similar propagation speeds. 
This behavior closely resembles the one observed in smooth-wall cases (Scotti & Piomelli, 2001; Manna _et al._, 2015). The physical interpretation is that when the oscillating shear rate is diffused upwards by the background turbulent flow, it interacts with the local turbulence via the mechanisms described in (3.18) and (3.19), thus inducing the observed oscillations in the resolved Reynolds stresses. To further quantify the propagation speeds of the oscillating waves in \(\partial\widetilde{u}_{1}/\partial x_{3}\) and oscillatory resolved Reynolds stresses, figure 20 present the phase lag of those quantities with respect to the pulsatile forcing. For a quantity \(\theta\), the propagation speed \(c_{\theta}\) is defined based on the slope of the phase lag, i.e., \[c_{\theta}=-\omega\frac{\partial x_{3}}{\partial\phi_{\theta}}. \tag{3.22}\] Table 2 summarizes the wave propagation speeds for \(\partial\widetilde{u}_{1}/\partial x_{3}\), \(-\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\), \(\widetilde{u_{1}^{\prime}u_{1}^{\prime}}\), \(\widetilde{u_{2}^{\prime}u_{2}^{\prime}}\), and \(\widetilde{u_{3}^{\prime}u_{3}^{\prime}}\) of each case. This again confirms that the oscillating waves propagate at a similar speed for the considered quantities. It is also noteworthy to point out that the speed of the propagating wave increases with \(\omega\). Three other observations can be made from figure 20. First, throughout the considered cases, there appears a marked phase lag of roughly \(\pi/6\) between \(\partial\widetilde{u}_{1}/\partial x_{3}\) and (negative) \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\), indicating a deviation from the Boussinesq eddy viscosity assumption. Weng _et al._ (2016) reported a similar finding, and they attributed such behavior to non-equilibrium effects arising when the time period of the pulsatile forcing is short compared to the local turbulence relaxation time so that the turbulence is not able to relax to its equilibrium state during the pulsation cycle. Second, the lack of phase lag among the oscillatory normal resolved Reynolds stresses implies that the oscillatory pressure-redistribution terms respond immediately to the change in \(\widetilde{u_{1}^{\prime}u_{1}^{\prime}}\). Third, the lifetimes of oscillating waves in the resolved Reynolds stresses and shear rate, which is inferred by the difference between the phase lags at the top of the UCL and at the upper limit of the Stokes layer, are no more than half of the oscillation time period, although they decrease with \(\omega\). They are considerably shorter than those in smooth-wall cases, which are typically larger than one oscillation period (Scotti & Piomelli, 2001; Manna _et al._, 2015; Weng _et al._, 2016). #### 3.3.3 Scaling of the Stokes layer thickness \(\delta_{s}\) is a quantity of interest across many applications, since it defines the region where the turbulence and the mean flow are out of equilibrium. In such a region, established turbulence theories may fail to capture flow dynamics that are of relevance for, e.g., surface drag and scalar dispersion. 
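As a concrete illustration of how the propagation speeds reported in table 2 can be educed from (3.22), the sketch below builds a synthetic upward-propagating damped wave, extracts the phase lag of its first harmonic at each height, and estimates \(c_{\theta}\) from a linear fit of \(\phi_{\theta}(x_{3})\). The signal and all parameter values are placeholders standing in for the actual phase-averaged LES fields.

```python
import numpy as np

# Synthetic oscillatory signal standing in for, e.g., d(u1~)/dx3(x3, t):
# a damped wave propagating upward at speed c_true, sampled over one forcing period.
omega, c_true = 2 * np.pi, 0.6
nz, nt = 80, 256
x3 = np.linspace(1.0, 3.0, nz)                             # heights above the UCL
t = np.linspace(0.0, 2 * np.pi / omega, nt, endpoint=False)
theta = np.exp(-(x3[:, None] - 1.0)) * np.cos(omega * (t[None, :] - (x3[:, None] - 1.0) / c_true))

# Phase lag of the first harmonic at each height, relative to the forcing.
harmonic = (theta * np.exp(-1j * omega * t[None, :])).mean(axis=1)
phi = np.unwrap(np.angle(harmonic))

# (3.22): c_theta = -omega * dx3/dphi, estimated from a linear fit of phi(x3).
slope = np.polyfit(x3, phi, 1)[0]          # dphi/dx3
c_est = -omega / slope
print(f"estimated propagation speed: {c_est:.3f} (true value {c_true})")
```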
For the wave-current boundary flow, where the surface is typically transitionally rough, the wave boundary layer thickness--an equivalent of Stokes layer thickness--scales as \[\delta_{w}\sim\frac{\kappa u_{\tau,max}}{\omega}\, \tag{3.23}\] \begin{table} \begin{tabular}{l c c c c c} Case & \(c_{du1dx3}/\overline{u}_{\tau}\) & \(c_{-u1u3}/\overline{u}_{\tau}\) & \(c_{u1u1}/\overline{u}_{\tau}\) & \(c_{u2u2}/\overline{u}_{\tau}\) & \(c_{u3u3}/\overline{u}_{\tau}\) \\ LL & 0.50 & 0.51 & 0.51 & 0.49 & 0.48 \\ LM & 0.55 & 0.54 & 0.60 & 0.57 & 0.58 \\ LH & 0.72 & 0.71 & 0.77 & 0.76 & 0.74 \\ \end{tabular} \end{table} Table 2: Propagation speeds of oscillating waves in \(\partial\widetilde{u}_{1}/\partial x_{3}\) and oscillatory resolved Reynolds stresses. where \(u_{\tau,max}\) is the friction velocity based on the maximum phase-averaged wall stress during the pulsatile cycle (Grant & Madsen, 1979). Such a scaling argument is not valid for the current cases, even though the considered surface is also rough. As shown in SS3.3.1, normalized oscillatory velocity and resolved Reynolds stresses profiles collapse between cases with the same frequencies, implying that the Stokes layer thickness is only dependent on \(\omega\), whereas \(\tau_{max}\) is determined by both \(\alpha\) and \(\omega\). Rather, the scaling of \(\delta_{s}\) in the current cases is a trivial extension of the model first introduced by Scotti & Piomelli (2001), as discussed next. Let us recall from SS1 that the Stokes layer thickness for turbulent pulsatile flow over aerodynamically-smooth surface (Scotti & Piomelli, 2001) is defined as \[\delta_{s}=2\frac{\kappa\overline{u}_{\tau}}{\omega}(1+\sqrt{1+\frac{2\nu \omega}{\kappa^{2}\overline{u}_{\tau}^{2}}}). \tag{3.24}\] Here we apply two modifications to this model in order to make it applicable to the current rough-wall cases. First, given that the viscous stress is omitted, the molecular viscosity \(\nu\) can be neglected. Also, in the current cases, the oscillating shear rate is generated within the UCL rather than at the bottom surface (as in the smooth-wall cases), and the extent of the oscillating shear rate propagation defines the thickness of the Stokes layer. This behavior can be easily captured by augmenting \(\delta_{s}\) by the displacement height (\(d\)). Specifically, we draw an analogy to smooth-wall cases by taking \(d\) as the offset, since it is the virtual origin of the longtime-averaged velocity profile. \(d\) is a plausible choice of the offset since it captures the limiting behavior of the flow system as the canopy packing density varies. For instance, in the limit of \(\lambda_{p}\to 0\) (very sparse canopies), \(d=0\), i.e., the oscillating shear rate grows from the bottom of the domain. On the contrary, in the limit of \(\lambda_{p}\to 1\) (very dense canopies), \(d=h\), i.e., the oscillating shear layer starts at the top of the UCL. Based on these considerations, a phenomenological model for the Stokes layer thickness is \[\delta_{s}=\frac{4\kappa u_{\tau}}{\omega}+d. \tag{3.25}\] Note that, in the limit of \(\omega\to 0\), the Stokes layer no longer exists, rendering (3.25) invalid. At this limit, as previously stated in Scotti & Piomelli (2001), \(T\) is much larger than the turbulence relaxation time. As a result, the turbulence maintains a quasi-equilibrium state, and the flow statistics are indistinguishable from those of the corresponding equilibrium canopy layer flows, if scaled with the instantaneous inner/outer units. 
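The two estimates discussed above can be compared directly. The snippet below evaluates the rough-wall model (3.25) alongside the smooth-wall expression (3.24) of Scotti & Piomelli (2001) for a few forcing frequencies; the numerical values of \(\kappa\), \(\overline{u}_{\tau}\), \(d\), \(\nu\), and \(T_{h}\) are placeholders rather than the values used in the simulations.

```python
import numpy as np

# Rough-wall Stokes layer thickness, (3.25): delta_s = 4*kappa*u_tau/omega + d,
# compared with the smooth-wall estimate (3.24). Placeholder parameter values.
kappa, u_tau, d, nu = 0.4, 1.0, 0.7, 1e-4      # d = displacement height
T_h = 1.0                                      # eddy turnover time of UCL turbulence
omegas = np.array([0.125, 0.5, 2.0, 8.0]) * np.pi / T_h

delta_rough = 4 * kappa * u_tau / omegas + d                           # (3.25)
delta_smooth = 2 * kappa * u_tau / omegas * (
    1 + np.sqrt(1 + 2 * nu * omegas / (kappa * u_tau) ** 2))           # (3.24)

for w, dr, ds in zip(omegas, delta_rough, delta_smooth):
    print(f"omega*T_h = {w * T_h:5.2f}: rough-wall delta_s = {dr:5.2f}, smooth-wall = {ds:5.2f}")
```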
Figure 21 compares the predictions of (3.25) against LES results. Note that only low-amplitude cases are shown, since, as mentioned earlier, \(\delta_{s}\) only depends on \(\omega\). The upper limit of the Stokes layer is identified as the location where \(\sigma_{\bar{k}}^{2}\) is 1% of its maximum, where \[\sigma_{\bar{k}}^{2}=\frac{1}{T}\int_{0}^{T}(\frac{1}{2}(\widetilde{u_{1}^{ \prime}u_{1}^{\prime}}+\widetilde{u_{2}^{\prime}u_{2}^{\prime}}+\widetilde{u_{ 3}^{\prime}u_{3}^{\prime}}))^{2}dt. \tag{3.26}\] is the time variance of \(\widetilde{k}\). From figure 21, it is apparent that the estimated \(\delta_{s}\) compare very well with LES results. The estimation of \(\delta_{s}\) for the LL case is not shown in figure 21 because it exceeds the height of the computational domain. (3.25) can hence be used in future studies to identify the Stokes layer thickness for pulsatile flows over aerodynamically rough surfaces. ## 4 Conclusions This paper has examinined the impact of flow pulsation on longtime- and phase-averaged flow statistics in an open-channel flow over urban-like roughness. A series of LES of pulsatile flow past an array of cuboid elements has been carried out, programmatically varying the oscillation amplitude (\(\alpha\)) and frequency (\(\omega\)). The forcing frequencies have been chosen as a multiple of the characteristic frequency of turbulence in the UCL and encompass a range of values representative of submesoscale motions (Mahrt & Bou-Zeid 2020). The main findings and contributions of this study are outlined below. (i) Flow pulsation leads to an increase of the \(z_{0}\) parameter educed from longtime-averaged \(\overline{u}_{1}\) profiles, with larger \(\alpha\) and \(\omega\) values yielding a larger \(z_{0}\). On the contrary, \(d\) was found to be insensitive to variations in \(\alpha\) and \(\omega\). The increase of \(z_{0}\) was shown to be a direct consequence of the quadratic relation between the phase-averaged canopy drag \(\langle D\rangle\) and the phase-averaged velocity \(\langle u_{1}\rangle\), and this relation was leveraged to construct a phenomenological model for \(z_{0}\). The proposed model takes surface information and the variance of the phase-averaged velocity in the UCL as input parameters and captures the impact of flow unsteadiness on the \(z_{0}\) parameter in the absence of flow reversal. (ii) The wall-normal distributions of the longtime-averaged shear stress and canopy drag are unaltered by the flow unsteadiness. In contrast, the same cannot be said for longtime-averaged resolved normal Reynolds stresses, especially in the UCL. In particular, \(\overline{k}\) profiles were found to be relatively more sensitive to variations in \(\alpha\) via the longtime-averaged shear production of \(\overline{k}\). The highest frequency cases were also characterized by a relatively more isotropic turbulence field in the UCL, owing to a more efficient kinetic energy redistribution by the pressure-strain terms. (iii) The oscillation amplitudes of phase-averaged streamwise velocity and resolved Reynolds stresses scale with \(\alpha\). This behavior is due to the fact that the nonlinear production terms of \(\widetilde{u_{1}^{\prime}u_{3}^{\prime}}\) and \(\widetilde{k}\) are of relatively modest magnitude when compared to the linear ones. Increasing the pulsation amplitude might lead to more substantial contributions from nonlinear production terms and break down this scaling. 
(iv) For each case, profiles of oscillatory shear rate and resolved Reynolds stresses are characterized by oscillating waves which are advected away from the UCL at a constant speed while also being dissipated. \(\omega\) is found to determine both the speeds of the oscillating waves and the extent of these waves, which identifies the Stokes layer thickness (\(\delta_{s}\)). More specifically, \(\delta_{s}\) was found to increase with decreasing \(\omega\), whereas the wave speed increased with \(\omega\). The scaling of \(\delta_{s}\) has also been discussed, and findings have been used to propose a model for \(\delta_{s}\). All in all, flow pulsation is found to have a significant impact on both longtime-averaged and phase-averaged flow statistics, with nuanced dependencies on oscillation amplitude and frequency. The observed enhancement of the longtime-averaged surface drag, the isotropization of turbulence in the UCL, and the presence of a Stokes layer, amongst others, are expected to have important implications on the exchange of mass, energy, and momentum between the land surface and the atmosphere, as well as affect our ability to model these processes in weather forecasting and climate models. These models typically rely on surface flux parameterizations and theories that are based on flow stationarity assumptions and are not able to capture these behaviors correctly (see, e.g., Stensrud, 2007). The proposed phenomenological models for \(z_{0}\) and \(\delta_{s}\), as well as the identified scaling of phase-averaged flow statistics, contribute to advancing our understanding of flow unsteadiness in the ABL and offer a pathway for the development of improved surface flux parameterizations. Given the massive parameter space of unsteady ABL flow processes, it is also essential to acknowledge that several questions remain unanswered and deserve further investigation. For example, what is the impact of different types of periodic and aperiodic flow unsteadiness on turbulence statistics and topology? How are these variations in the structure of turbulence impacting land-atmosphere exchange rates of momentum, energy, and mass? Can prevailing surface flux parameterizations be modified to account for these impacts? Addressing these questions will be the subject of future studies. **Declaration of Interests.** The authors report no conflict of interest. **Acknowledgements.** The authors acknowledge support from the Department of Civil Engineering and Engineering Mechanics at Columbia University. This material is based upon work supported by, or in part by, the Army Research Laboratory and the Army Research Office under contract/grant number W911NF-22-1-0178. This work used the Stampede2 cluster at the Texas Advanced Computing Center through allocation ATM180022 from the Extreme Science and Engineering Discovery Environment (XSEDE), which was supported by National Science Foundation grant number #1548562. ## Appendix A Wall-layer modeling considerations As discussed in SS2, simulations have been conducted using an algebraic wall-layer model at the solid-fluid interface to evaluate tangential surface stresses. In this section, we show that the use of an equilibrium wall-layer model can be justified on the basis that the flow is in the fully rough aerodynamic regime. An LES of flow over a single cube is carried out at roughness Reynolds number \(Re_{\tau}=\overline{u}_{\tau}h/\nu=400\) (hereafter referred as to LES400), and results are compared with those from a DNS run (DNS400). 
At such a Reynolds number, the flow field is in fully rough regime, as also shown in Xie _et al._ (2008). The size of the computational domain is \([0,3h]\times[0,3h]\times[0,4h]\), and the planar and frontal area densities are the same as those in the main simulations of the study. The forcing frequency and the oscillation amplitude are \(\omega T_{h}=0.125\pi\) and \(\alpha=0.8\), respectively, which are comparable to the ones considered in the study. The grid resolution of LES400 follows the main simulations, which is \((n_{1},n_{2},n_{3})=(8,8,16)\) for each cube, and the identical wall-layer model as that in the main simulations is applied in the vicinity of the cube facets and the lower surface with the same roughness length scale. The grid resolution of DNS400 is \((n_{1},n_{2},n_{3})=(64,64,128)\) per cube. Such a grid resolution ensures that the ratio between the grid size \(\Delta=\sqrt[3]{\Delta_{1}\Delta_{2}\Delta_{3}}\) and the Kolmogorov scale \(\eta\) does not exceed 2, which has been proven sufficient for DNS of flow over fully rough surfaces (Zhang _et al._, 2022). In both simulations, the contribution from the tangential stresses at the cube facets and lower surface to the total surface drag remains below 1%, confirming that the flow is in the fully rough regime. Figure 22 and 23 display the phase-averaged velocity (\(\langle u_{1}\rangle\)) and turbulent kinetic energy (\(\langle k\rangle/\overline{u}_{\tau}^{2}\)), respectively. Profiles from the LES400 case are in good agreement with corresponding DNS quantities, with the maximum error in the LES400 profiles relative to those from the DNS400 case being approximately 1% and 6% for (\(\langle u_{1}\rangle\)) and \(\langle k\rangle\), respectively. The minor mismatches in \(\langle k\rangle\) can be partly explained by the fact that the SGS contribution to \(\langle k\rangle\) is zero for the LES400 case. It is suggested that, although the equilibrium assumption does not hold in a strict sense, the use of an equilibrium wall-layer model does not result in a noticeable impact on model results for the considered unsteady flow cases. Figure 23: Phase-averaged turbulent kinetic energy \(\langle k\rangle=\langle u_{i}^{\prime}u_{i}^{\prime}\rangle/2\) of LES400 (blue) and DNS400 (red). Figure 22: Phase-averaged velocity \(\langle u_{1}\rangle\) of LES400 (blue) and DNS400 (red). ## Appendix B Resolution sensitivity analysis To identify grid resolution requirements for simulations in this study, a grid-resolution sensitivity analysis has been conducted for the stationary flow case, i.e., \(\alpha=0\). The domain size for this analysis is \((36h,12h,4h)\), and we have studied the convergence of \(\overline{u}_{1}\), \(\overline{u^{\prime}_{1}u^{\prime}_{1}}\), and \(\overline{u^{\prime}_{1}u^{\prime}_{3}}\) profiles as the grid stencil is progressively reduced. Three grid resolutions have been considered, namely \((4,4,8)\), \((8,8,16)\), and \((12,12,24)\) on a per-cube basis. Note that the reduced domain size may have an impact on the evaluated flow statistics, but this serves the purpose of this analysis, since we are here only interested in quantifying relative variations of selected profiles across grid resolutions. Other numerical and physical parameters of the grid-sensitivity analysis simulations are set equal to the ones used in the main simulations. 
As is apparent from figure 24, profiles from the \((8,8,16)\) case essentially match the corresponding ones from the \((12,12,24)\) case, indicating that the chosen grid resolution for the pulsatile channel flow analysis is sufficient to yield grid-independent flow statistics (up to second order).
2306.06450
Movement bias in asymmetric landscapes and its impact on population distribution and critical habitat size
Ecologists have long investigated how demographic and movement parameters determine the spatial distribution and critical habitat size of a population. However, most models oversimplify movement behavior, neglecting how landscape heterogeneity influences individual movement. We relax this assumption and introduce a reaction-advection-diffusion equation that describes population dynamics when individuals exhibit space-dependent movement bias toward preferred regions. Our model incorporates two types of these preferred regions: a high-quality habitat patch, termed `habitat', which is included to model avoidance of degraded habitats like deforested regions; and a preferred location, such as a chemoattractant source or a watering hole, that we allow to be asymmetrically located with respect to habitat edges. In this scenario, the critical habitat size depends on both the relative position of the preferred location and the movement bias intensities. When preferred locations are near habitat edges, the critical habitat size can decrease when diffusion increases, a phenomenon called the drift paradox. Also, ecological traps arise when the habitat overcrowds due to excessive attractiveness or the preferred location is near a low-quality region. Our results highlight the importance of species-specific movement behavior and habitat preference as drivers of population dynamics in fragmented landscapes and, therefore, in the design of protected areas.
Vivian Dornelas, Pablo de Castro, Justin M. Calabrese, William F. Fagan, Ricardo Martinez-Garcia
2023-06-10T14:14:10Z
http://arxiv.org/abs/2306.06450v2
# How movement bias to attractive regions determines population spread and critical habitat size ###### Abstract Ecologists have long investigated how the demographic and movement parameters of a population determine its spatial spread and the critical habitat size that can sustain it. Yet, most existing models make oversimplifying assumptions about individual movement behavior, neglecting how landscape heterogeneity influences dispersal. We relax this assumption and introduce a reaction-advection-diffusion model that describes the spatial density distribution of a population with space-dependent movement bias toward preferred regions, including avoidance of degraded habitats. In this scenario, the critical habitat size depends on the spatial location of the habitat edges with respect to the preferred regions and on the intensity of the movement bias components. In particular, we identify parameter regions where the critical habitat size decreases when diffusion increases, a phenomenon known as the "drift paradox". We also find that biased movement toward low-quality or highly populated regions can reduce the population size, therefore creating ecological traps. Our results emphasize the importance of species-specific movement behavior and habitat selection as drivers of population dynamics in fragmented landscapes and, therefore, the need to account for them in the design of protected areas. Introduction Habitat destruction and fragmentation result in smaller and more isolated suitable habitat patches where extinctions are more likely to occur (Franklin et al., 2002; Lord and Norton, 1990; Nauta et al., 2022). The viability of a population in each of these patches depends on the balance between growth inside the patch and population losses, mainly stemming from dispersal through habitat edges. The interplay between these two processes sets the minimum area required to sustain the population and defines a patch-size threshold (Kierstead and Slobodkin, 1953). Thus, understanding the interaction between demographic and dispersal processes is key to determining critical patch sizes across species, which has important implications for conservation, such as in the design of protected areas or ecological corridors (Cantrell and Cosner, 1999; Ibagon et al., 2022). Additionally, determining the expected spatial pattern of population density in patches larger than the critical size can improve understanding of population responses to further habitat destruction. The importance of critical patch sizes and population spread in finite habitat patches has led researchers to test model predictions in microcosm experiments using microbial populations (Lin et al., 2004; Perry, 2005) as well as in large-scale experiments and observations (Ferraz et al., 2003; Pereira and Daily, 2006; Turner, 1996). Despite the biological understanding gained from these and other empirical studies, key aspects of complexity remain absent from theoretical modeling of critical patch size phenomena. The most common models to determine critical habitat sizes consist of a reaction-diffusion equation describing the spatio-temporal dynamics of a population density in a bounded region. Within this family of models, the simplest ones assume purely diffusive dispersal coupled with exponential growth and are commonly called KISS models (Kierstead and Slobodkin, 1953; Skellam, 1951). Due to these highly simplified assumptions, KISS models lead to analytical expressions for the critical patch size. 
More recently, researchers have refined movement descriptions by including space-dependent diffusion within the patch (Colombo and Anteneodo, 2018; Dos Santos et al., 2020), responses to habitat edges (Cronin et al., 2019; Fagan et al., 1999; Maciel and Lutscher, 2013), and various sources of non-random movement, such as a constant external flow (Pachepsky et al., 2005; Ryabov and Blasius, 2008; Speirs and Gurney, 2001; Vergni et al., 2012) or a chemoattractant secreted by the population (Kenkre and Kumar, 2008). Other studies have explored more complex growth dynamics, such as Allee effects (Alharbi and Petrovskii, 2016; Cronin et al., 2020), time-varying environments (Zhou and Fagan, 2017), or heterogeneity in population growth, either through time-dependent demographic rates (Ballard et al., 2004; Colombo and Anteneodo, 2016) or by introducing a finer spatial structure of habitat quality within the patch (Cantrell and Cosner, 2001; Fagan et al., 2009; Maciel and Lutscher, 2013). Finally, a few studies have combined space-dependent demographic rates with migration toward higher quality regions within the patch, in the presence of an environmental gradient, to determine critical patch sizes (Cantrell and Cosner, 1991, 2001; Cantrell et al., 2006) or population spatial distributions depending on the type of boundary conditions (Belgacem and Cosner, 1995). Despite this significant endeavor to refine classical KISS models, some movement features which are present in most species and which directly impact population spread remain underexplored. For example, individuals often show a tendency to move toward certain habitat regions where they concentrate, which makes population ranges smaller than the total amount of habitat available (Kapfer et al., 2010; Van Moorter et al., 2016). While a considerable effort has focused on understanding how and why individuals show these patterns of space use (Jeltsch et al., 2013; Nathan et al., 2008), their population-level consequences, especially in fragmented landscapes, have been less explored. Motivated by colonial central-place foragers such as ants, beavers, and colonial seabirds, one particular study obtained, numerically, the critical patch size when the home-range centers for all the individuals in the population are at the center of the habitat patch (Fagan et al., 2007). Overall, however, the lack of a more general theoretical framework limits the current understanding of how habitat selection within a fragmented landscape determines the spatial distribution and critical patch size for a given population. Here we take a first step to fill this theoretical gap by extending classical KISS models to account for space-dependent deterministic movement. We study how this additional movement component influences population spread in a heterogeneous one-dimensional landscape and the critical habitat patch size that ensures population survival. We consider the simple one-dimensional scenario with a finite high-quality habitat patch embedded in a low-quality "matrix" with high mortality. Using both numerical and analytical methods, we measure critical patch size and spatial patterns of population density for different matrix mortality levels. We also vary the intensity of two deterministic space-dependent movement components relative to random dispersal: a bias to preferred landscape locations and avoidance of degraded habitats. 
We find that the total population lost due to habitat degradation and the critical patch size depend nonlinearly on the parameters that control the different movement components as well as on the spatial distribution of habitat relative to the landscape regions where individuals move more slowly. Overall, our results emphasize the importance of incorporating covariates between movement behavior and landscape features when investigating population dynamics in heterogeneous landscapes. ## 2 Material and Methods ### Model formulation We consider a one-dimensional heterogeneous landscape with a habitat patch embedded in an infinite matrix (see Fig. 1a). The left and right habitat patch edges are located at \(x=x_{\text{L}}\) and \(x=x_{\text{R}}\), respectively, and the habitat patch size is \(L=|x_{\text{R}}-x_{\text{L}}|\). The landscape is occupied by a single-species population, which we describe via a continuous density field \(u(x,t)\). This population density changes in space and time due to demographic processes and dispersal. For the birth/death dynamics, we assume that the population follows logistic growth with net reproduction rate \(r\) and intraspecific competition intensity \(\gamma\). The net growth rate is constant within each type of region but different between regions: \(r(x)=r_{\text{\tiny H}}>0\) inside the habitat patch (high-quality, low-mortality region) and \(r(x)=r_{\text{\tiny M}}<0\) in the matrix (low-quality, high-mortality region). The matrix mortality rate \(r_{\text{\tiny M}}\) defines the degree of habitat degradation, with the limit \(r_{\text{\tiny M}}\rightarrow-\infty\) representing complete habitat destruction. For finite mortality rates, whether an individual dies in the matrix or not is determined by the mortality rate itself and the time the individual spends in the matrix. Therefore, when the matrix is not immediately lethal, the population density outside the habitat patch is not zero. For dispersal, we consider two different movement components: random dispersal with constant diffusion coefficient \(D\), and a deterministic tendency of individuals to move toward attractive regions with space-dependent velocity \(v(x)\), therefore accounting for the effect of landscape heterogeneity in movement behavior. Importantly, this attractive term in our model generates movement bias toward regions that are not necessarily of higher habitat quality. The actual velocity of an individual is thus equal to \(v(x)\) plus a stochastic contribution that comes from diffusion. Combining these demographic and movement processes, the dynamics of the population density is given by \[\frac{\partial u(x,t)}{\partial t}=r(x)u(x,t)-\gamma u(x,t)^{2}+D\frac{ \partial^{2}u(x,t)}{\partial x^{2}}-\frac{\partial}{\partial x}\bigg{(}v(x)u( x,t)\bigg{)}. \tag{1}\] The functional form of the advection velocity \(v(x)\) depends on landscape features, with attractive locations corresponding to \(x\) coordinates with slower velocity. We consider two different types of attractive regions. First, we incorporate a tendency to move toward an attractive _location_ with velocity \(v_{\text{\tiny P}}(x)\). This term could represent, for example, attraction toward a chemoattractant source or toward a special resource, such as a watering hole. 
We choose \[v_{\text{\tiny P}}(x)=-\tau_{\text{\tiny P}}^{-1}(x-x_{\text{\tiny P}}), \tag{2}\] where we assumed that the velocity at which individuals tend to move toward attractive landscape regions increases linearly with the distance to the focus of attraction. This is similar to how simple data-driven models for range-resident movement implement attraction to home-range center at the individual level (Dunn and Gipson, 1977; Noonan et al., 2019). The prefactor \(\tau_{\text{\tiny P}}^{-1}\) is the attraction rate toward the attractive location and defines the typical time that individuals take to re-visit \(x_{\text{\tiny P}}\). In the following, we use \(x_{\text{\tiny P}}=0\) in all our calculations, such that the locations of the habitat edges are measured relative to the focus of attraction. Second, we consider that individuals in the matrix tend to return to the habitat patch with velocity \(v_{\text{\tiny M}}(x)\), and therefore, we incorporate an additional attraction term biasing movement from the matrix toward its closest habitat edge. Again, we consider a linear spatial dependence, but now only for individuals in the matrix: \[v_{\text{\tiny M}}(x)=\begin{cases}-\tau_{\text{\tiny M}}^{-1}(x-x_{\text{\tiny L }}),&x<x_{\text{\tiny L}}\\ 0,&x_{\text{\tiny L}}\leq x\leq x_{\text{\tiny R}}\\ -\tau_{\text{\tiny M}}^{-1}(x-x_{\text{\tiny R}}),&x>x_{\text{\tiny R}}\end{cases} \tag{3}\] The prefactor \(\tau_{\text{\tiny M}}^{-1}\) is the edge attraction rate that modulates the strength of the matrix-to-habitat attraction \(v_{\text{\tiny M}}(x)\). In the habitat, \(v_{\text{\tiny M}}(x)=0\), whereas in the matrix, it is equal to \(\tau_{\text{\tiny M}}^{-1}\) multiplied by the distance to the closest edge. Moreover, the velocity \(v_{\text{\tiny M}}(x)\) always points toward the habitat patch, therefore biasing the movement of the individuals in the matrix toward the habitat-matrix edges. This matrix avoidance drift assumes that individuals remain aware of the direction in which the favorable habitat is located, which extends previous models for movement response to habitat edges that act only at the habitat-matrix boundary (Cronin et al., 2019; Fagan et al., 1999; Maciel and Lutscher, 2013). Putting together the movement toward the attractive location and the matrix avoidance bias, we obtain a velocity of the form \(v(x)=v_{\text{\tiny p}}(x)+v_{\text{\tiny M}}(x)\) (see Fig. 1b for \(v(x)\) and Fig. 1c for the population spread emerging from it). We provide a summary of the model parameters in Table 1. Figure 1: Model schematics. (a) Landscape features, showing a high-quality habitat (\(r>0\)) and an attractive location (represented as a watering hole at position \(x=x_{\text{\tiny P}}=0\)) surrounded by low-quality matrix regions (\(r<0\)). Habitat edges are located at \(x=x_{\text{\tiny L}}\) and \(x=x_{\text{\tiny R}}\). In this example, \(x_{\text{\tiny R}}\) is positive. The total habitat size is \(L=|x_{\text{\tiny R}}-x_{\text{\tiny L}}|\). (b) Spatial dependence of the drift velocity (deterministic movement component). Velocity \(v_{\text{\tiny P}}\) points to the attractive location [Eq. 2]. For individuals in the matrix, an additional term (\(v_{\text{\tiny M}}\)) points to the edges [Eq. 3]. (c) Emerging stationary population density distribution \(u_{*}(x)\), peaking to the left of \(x=0\). ### Model analysis We analyze the stationary solutions of Eq. 
(1) using a combination of semi-analytical linear stability analysis and numerical simulations of the full nonlinear equation. We use both approaches in the \(r_{\mbox{\tiny M}}\to-\infty\) limit and perform only numerical simulations in the more general case with finite \(r_{\mbox{\tiny M}}\). #### 2.2.1 Linear stability analysis of the extinction state when \(r_{\mbox{\tiny M}}\to-\infty\) In the \(r_{\mbox{\tiny M}}{\to}-\infty\) limit, individuals die instantaneously upon reaching the matrix, and we can replace the dynamics of the population density in the matrix by absorbing boundary conditions at the habitat edges, \(u(x_{\mbox{\tiny L}},t)=u(x_{\mbox{\tiny R}},t)=0\). In this regime, the movement component that attracts individuals to the habitat edge, \(v_{\mbox{\tiny M}}\), has no effect on the dynamics, and we can perform a linear stability analysis of the extinction solution \(u(x,t\to\infty)\equiv u_{s}(x)=0\) to determine the habitat configurations \((x_{\mbox{\tiny L}},x_{\mbox{\tiny R}})\) that lead to population extinction for a given set of movement parameters. To perform this linear stability analysis, we neglect the quadratic term in the logistic growth and take the limit \(t\to\infty\) in Eq. 1, which is equivalent to setting \(\partial_{t}u(x,t)=0\). In this limit, Eq. 1 becomes an ordinary differential equation with solution \[u_{s}(x)=\exp\left(-\frac{x^{2}}{2\tau_{\mbox{\tiny P}}D}\right)\left[a\,H_{r_{\mbox{\tiny H}}\tau_{\mbox{\tiny P}}}\left(\frac{x}{\sqrt{2D\tau_{\mbox{\tiny P}}}}\right)+\,b\,\,_{1}F_{1}\left(-\frac{r_{\mbox{\tiny H}}\tau_{\mbox{\tiny P}}}{2};\frac{1}{2};\frac{x^{2}}{2D\tau_{\mbox{\tiny P}}}\right)\right], \tag{4}\] where \(a\) and \(b\) are constants that we can obtain from the boundary conditions. \({}_{1}F_{1}\) is the confluent hypergeometric function of the first kind, and \(H_{n}(x)\) is the Hermite polynomial, with \(n\) being a real, not necessarily integer, number (Arfken and Weber, 1999). Imposing absorbing boundary conditions at the habitat edges on Eq. 4, we obtain a system of two equations for \(a\) and \(b\) that we can use to determine the stability of the solution \(u_{s}(x)=0\). For this system of equations to have non-trivial solutions (that is, different from \(a=b=0\)), its determinant has to be zero. With this condition for the determinant and assuming that \(x_{\mbox{\tiny L}}\) is fixed, we obtain a transcendental equation in \(x_{\mbox{\tiny R}}\) that we can solve numerically to obtain the critical location of the right habitat edge, \(x_{\mbox{\tiny R,C}}\). \begin{table} \begin{tabular}{|c|c|c|} \hline Sym. & Parameter & Dimensions \\ \hline \(r_{\mbox{\tiny H}}\) & Habitat net growth rate & time\({}^{-1}\) \\ \(r_{\mbox{\tiny M}}\) & Matrix net growth rate & time\({}^{-1}\) \\ \(\gamma\) & Intensity of intraspecific competition & (time \(\times\) density)\({}^{-1}\) \\ \(D\) & Diffusion coefficient & space\({}^{2}\)/time \\ \(x_{\mbox{\tiny P}}\) & Position of attractive location & space \\ \(x_{\mbox{\tiny L}}\) & Position of left habitat edge & space \\ \(x_{\mbox{\tiny R}}\) & Position of right habitat edge & space \\ \(\tau_{\mbox{\tiny P}}^{-1}\) & Attraction rate to attractive location & time\({}^{-1}\) \\ \(\tau_{\mbox{\tiny M}}^{-1}\) & Matrix-to-habitat attraction rate & time\({}^{-1}\) \\ \hline \end{tabular} \end{table} Table 1: Summary of model parameters with symbols and dimensions. Units are arbitrary. Specific values are provided in figure captions.
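A sketch of how the determinant condition can be solved in practice is given below, assuming the solution basis of Eq. 4 and using mpmath's generalized (non-integer order) Hermite function; all parameter values and the scanning range are illustrative choices, so this is a numerical illustration rather than a reproduction of the results reported here.

```python
import mpmath as mp

# Illustrative parameter values (not those of the paper).
D, r_H, tau_P = 1.0, 1.0, 1.5
n = r_H * tau_P              # order of the Hermite function appearing in Eq. 4
x_L = -5.0                   # fixed left edge; the attractive location sits at x_P = 0

def basis(x):
    """Two independent solutions of the linearized stationary equation (Gaussian prefactor dropped)."""
    xi = x / mp.sqrt(2 * D * tau_P)
    return mp.hermite(n, xi), mp.hyp1f1(-n / 2, mp.mpf(1) / 2, xi ** 2)

def det(x_R):
    """Determinant of the absorbing-boundary system; it vanishes at the critical edge location."""
    hL, mL = basis(x_L)
    hR, mR = basis(x_R)
    return hL * mR - hR * mL

# Scan x_R to the right of x_L, bracket the first sign change of the determinant, refine it.
xs = [x_L + 0.5 + 0.1 * k for k in range(150)]
root = None
for a, b in zip(xs[:-1], xs[1:]):
    if det(a) * det(b) < 0:
        root = mp.findroot(det, (a, b), solver="bisect")
        break
print("critical right-edge location x_R,C:", root)  # None if no bracket found in the scanned range
```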
#### 2.2.2 Numerical solution of the nonlinear model equation We perform all numerical simulations using a central Euler method starting from a random positive initial condition for \(u\). In the \(r_{\text{M}}\to-\infty\) limit, we further ensure that the initial condition obeys the absorbing boundary conditions at the habitat edges. We integrate Eq. 1 for a variety of habitat patch sizes keeping \(x_{\text{L}}\) constant and decreasing \(x_{\text{R}}\) systematically until the population reaches a stationary extinction state. Using this procedure, we can calculate \(x_{\text{R,C}}\) and hence the critical patch size defined as the habitat patch size at which the steady-state total population transitions from non-zero to zero (see Fig. 2 for spatial patterns of population density as \(x_{\text{R}}\) decreases, with \(x_{\text{R}}>x_{\text{R,C}}\)). Finally, we also use these numerical solutions of Eq. 1 to measure population loss due to habitat degradation. For this purpose, we introduce a dimensionless quantity, \(\eta\), defined as the total population size sustained by a finite habitat patch of size \(L\) divided by the total population size sustained by an infinite habitat patch. Such _remaining population fraction_ is thus \[\eta\equiv\frac{N_{\text{T}}}{N_{\text{T}}^{\infty}}, \tag{5}\] where \(N_{\text{T}}\) and \(N_{\text{T}}^{\infty}\) are the total population sizes for finite and infinite habitat patches, respectively. We obtain these population sizes by integrating the population density over the entire landscape, including the matrix. ## 3 Results ### Perfectly absorbing matrix: the \(r_{\text{M}}\to-\infty\) limit We first consider the simplest scenario in which individuals die instantaneously after they reach the habitat edges. In this limit, the population density is always zero in the matrix and, therefore, the movement component that biases individuals in the matrix toward the habitat edges is irrelevant. Movement is thus solely driven by random diffusion and the bias toward the attractive location \(x=x_{\text{P}}=0\). In large habitat patches, space-dependent movement leads to the accumulation of population density very close to regions with slower movement. However, as the habitat patch decreases in size and regions with slower movement get closer to one of the habitat edges, the spatial pattern of population density changes due to mortality at the habitat edge and the maximum of population density shifts further away from the attractive location and towards the patch center (Fig. 2). This asymmetric pattern of space occupation due to space-dependent movement contrasts with well-known results for purely diffusive movement, for which population density reaches its maximum in the center of the habitat patch (Holmes et al., 1994), and significantly alters population loss owing to habitat degradation and the critical patch size. First, to understand population loss due to habitat degradation, we use the remaining population fraction, \(\eta\), defined in Eq. (5). This remaining population fraction is maximal when the attractive location is at the same distance from the two habitat edges, and it decays symmetrically about the line \(x_{\text{R}}=-x_{\text{L}}\). Moreover, this decay is sharper for stronger bias toward the attractive location (Fig. 3 and S1). 
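To make the central Euler scheme described above concrete, the following sketch advances Eq. (1) with a central-difference, forward-Euler update on a truncated landscape and reports the steady-state total population; sweeping \(x_{\text{\tiny R}}\) downward until this number vanishes locates the critical patch size. All parameter values, the grid, and the truncation of the matrix are illustrative choices, not those used in the paper.

```python
import numpy as np

# Explicit finite-difference sketch of Eq. (1) with the piecewise growth rate r(x)
# and the drift v(x) = v_P(x) + v_M(x) of Eqs. (2)-(3). Illustrative parameters only.
D, r_H, r_M, gamma = 1.0, 1.0, -2.0, 1.0
tau_P, tau_M = 5.0, 10.0            # attraction times to x_P = 0 and to the habitat edges
x_L, x_R = -4.0, 3.0                # habitat edges

x = np.linspace(-20.0, 20.0, 801)   # landscape grid (matrix truncated at +/-20)
dx = x[1] - x[0]
in_habitat = (x >= x_L) & (x <= x_R)

r = np.where(in_habitat, r_H, r_M)                               # piecewise growth rate
v = -(x - 0.0) / tau_P                                           # bias toward x_P = 0
v += np.where(x < x_L, -(x - x_L) / tau_M, 0.0)                  # matrix-to-habitat bias
v += np.where(x > x_R, -(x - x_R) / tau_M, 0.0)

u = 0.1 * np.ones_like(x)           # positive initial condition
dt = 0.2 * dx**2 / D                # explicit-scheme stability margin
for _ in range(100_000):            # run long enough for the profile to settle (sketch only)
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2       # diffusion term
    adv = (np.roll(v * u, -1) - np.roll(v * u, 1)) / (2 * dx)    # d(vu)/dx, central difference
    u += dt * (r * u - gamma * u**2 + D * lap - adv)
    u[0] = u[-1] = 0.0              # far-field boundaries, deep in the lethal matrix

N_total = u.sum() * dx              # total population, including the matrix tails
print(f"steady-state total population: {N_total:.4f}")
```

Repeating the integration for successively smaller \(x_{\text{\tiny R}}\) (or larger habitat degradation) until \(N_{\text{T}}\) drops to zero yields the critical-edge and remaining-population measurements discussed in this section.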
Finally, when the distance between the attractive location and one of the habitat edges is sufficiently large, further increasing the habitat size does not change the remaining population fraction because population loss through habitat edges is negligible, except for \(\tau_{\rm p}^{-1}=0\). Regarding the critical patch size, when the bias to the attractive location is strong, represented by higher values of \(\tau_{\rm p}^{-1}\), the population is localized around the attractive location \(x_{\rm p}=0\), and it goes extinct when the attractive location is within the habitat patch but close to one of its edges (Fig. 3a, b). When the bias to the attractive location decreases, however, the population can survive even if the attractive location is outside the habitat and the mortality in the matrix is infinite (Fig. 2c and 3c). This scenario would correspond to a situation where habitat destruction places the attractive location outside the habitat and individuals have not adapted their movement behavior to this landscape modification. As a result, individuals preferentially move toward regions with low habitat quality, which can be understood as an example of an ecological trap (Lamb et al., 2017; Robertson and Hutto, 2006; Weldon and Haddad, 2005). To further investigate how the distance between the attractive location and the habitat edges determines the critical patch size for different movement parameters, we calculate the critical location of the right habitat edge \(x_{\rm R,C}\) assuming that the left habitat edge is fixed and far from the attractive location. In these conditions, if \(x_{\rm R}\) is also large, mortality through the left edge is negligible, but it becomes significant for smaller habitat patches. This setup mimics a situation where an initially large patch shrinks due to continued habitat destruction, slowly enough that the population distribution is at equilibrium for each particular habitat configuration, until it reaches a critical size and the population collapses. We find that the critical patch size is a nontrivial function of the intensity of the movement bias toward the attractive location and the distance between habitat edges and the attractive location. When \(\tau_{\rm P}^{-1}=0.1\), \(x_{\rm R,C}=0\) regardless of the value of \(D\). For \(\tau_{\rm P}^{-1}>0.1\), movement bias is so strong that the attractive location must be within the habitat patch to avoid individuals entering the matrix and dying at a rate that cannot be outbalanced by population growth within the habitat (red region in Fig. 4 and Fig. S2). Moreover, due to strong bias toward the attractive location, the population is concentrated around that location. Increasing the diffusion coefficient \(D\) makes \(x_{\rm R,C}\) increase because the population spreads out, and individuals become more likely to reach the matrix and die. Increasing \(\tau_{\rm P}^{-1}\) from \(\tau_{\rm P}^{-1}=0.1\) for a fixed diffusion coefficient \(D\), we find a non-monotonic relationship between \(x_{\rm R,C}\) and \(\tau_{\rm P}^{-1}\). First, the critical patch size increases with \(\tau_{\rm P}^{-1}\) because the population concentrates around the attractive location and are more likely to reach the matrix and die. For even higher attraction rate, the population concentrates very narrowly around the attractive location and individuals do not reach the habitat edge. As a result, the critical patch size decreases with \(\tau_{\rm P}^{-1}\). 
For \(\tau_{\rm P}^{-1}<0.1\), \(x_{\rm R,C}\) is negative, which means that the population can persist even when the attractive location is in the matrix (blue region in Fig. 4). In this low-\(\tau_{\rm P}^{-1}\) regime, \(x_{\rm R,C}\) increases with \(\tau_{\rm P}^{-1}\) because less random movement increases the relative contribution of the movement bias to the population flux through the edge. Similarly, the critical patch size decreases with increasing \(D\) for values of \(\tau_{\rm P}^{-1}\) not too far from \(\tau_{\rm P}^{-1}=0.1\). This negative correlation between critical patch size and diffusion appears because a more random movement reduces the tendency of individuals to cross the habitat edge (see Fig. S2b for curves of \(x_{\rm R,C}\) versus \(D\) with fixed \(\tau_{\rm P}^{-1}\)). This phenomenon, known as the "drift paradox," has been previously observed in organisms inhabiting streams, rivers, and estuaries where downstream drift is continuously present and extinction is inevitable in the absence of diffusion (Pachepsky et al., 2005; Speirs and Gurney, 2001). However, as \(D\) continues to increase and random diffusion dominates dispersal, the critical patch size increases due to population loss via diffusion through both habitat edges. Finally, for very low values of \(\tau_{\rm P}^{-1}\), diffusion controls the population flux through habitat edges and the behavior of the critical patch size converges to the theoretical prediction of the purely diffusive case, \(L_{\text{\tiny C}}^{\text{\tiny D}}=\pi\sqrt{D/r_{\text{\tiny H}}}\)(Kierstead and Slobodkin, 1953). ### Partially absorbing matrix and the effect of matrix-to-habitat bias Considering finite \(r_{\text{\tiny M}}\) allows us to investigate how changes in movement behavior, once individuals reach the matrix, can alter the spatial pattern of population density and the critical patch size. If individuals in the matrix do not tend to return to the habitat (\(\tau_{\text{\tiny M}}^{-1}\approx 0\)), the population density decays into the matrix exponentially, and the critical patch size increases with matrix mortality rate (Fig. 5; Ludwig et al. 1979, Ryabov and Blasius 2008). For low values of \(\tau_{\text{\tiny M}}^{-1}\), the tendency to return from the matrix to the habitat edges reduces how much the population penetrates the matrix and increases the population density inside the habitat, especially close to the edges (Fig. 5a). The spatial distribution of the population has a skewness that reaches its maximum when the attractive location is in the matrix (Fig. 5b). For large enough \(\tau_{\text{\tiny M}}^{-1}\), the edges act as almost hard walls, and the term representing the tendency to return from the matrix to the nearest habitat edge behaves effectively as reflecting boundary conditions in Eq. (1). In this limit, the population survives for any habitat size (Maciel and Lutscher, 2013). The accumulation of individuals around habitat edges suggests a potential tradeoff between a decrease in mortality in the matrix due to the attraction to habitat edges and an increase in intraspecific competition due to higher population densities in the habitat. To investigate the impact of this tradeoff on population loss due to habitat degradation, we measure the fraction of the population that remains for a given patch size relative to the value for an infinite habitat patch, \(\eta\). 
We perform this measurement for several values of the matrix mortality rate \(r_{\text{\tiny M}}\) and the returning rate to habitat edges \(\tau_{\text{\tiny M}}^{-1}\), which are the two main parameters controlling the accumulation of population density at habitat edges. We consider a scenario with the attractive location at the center of the habitat patch, which is the limit where we have a weaker accumulation of individuals at habitat edges and, therefore, the regime in which the tradeoff between matrix mortality and intraspecific competition around habitat edges has a weaker effect on population dynamics. At high matrix mortality rates, the population does not survive (\(\eta=0\)), except for very high returning rates \(\tau_{\text{\tiny M}}^{-1}\) (Fig. 6). When the matrix mortality rate decreases, \(\eta\) increases and remains a monotonically increasing function of \(\tau_{\text{\tiny M}}^{-1}\). For \(r_{\text{\tiny M}}\) closer to zero, however, \(\eta\) becomes a non-monotonic function of \(\tau_{\text{\tiny M}}^{-1}\). For these values of the matrix mortality rate, increasing the returning rate to habitat edges is initially detrimental to the total population size because it leads to higher intraspecific competition at the habitat edges, which outweighs the decrease in mortality in the matrix. In other words, the density distribution does not penetrate the matrix as far (Fig. 5a) while, inside the habitat, competition does not allow for a large enough increase in population, and so the total population decreases. Consequently, the habitat edge itself behaves as an ecological trap in this regime, and our model recovers a behavior similar to previous observations for insects (Ries and Fagan, 2003; Ries et al., 2004). Above a critical value of \(\tau_{\text{\tiny M}}^{-1}\) at which \(\eta\) is minimal, further increasing the returning rate to habitat edges becomes beneficial for population persistence because now very few individuals enter the matrix and reduced matrix mortality outweighs the increased intraspecific competition at habitat edges. For infinite return rate \(\tau_{\text{\tiny M}}^{-1}\), all the curves for different values of the matrix mortality rate \(r_{\text{\tiny M}}\) converge to the same value because individuals do not penetrate the matrix. For \(\tau_{\text{\tiny M}}^{-1}\to\infty\) and \(\tau_{\text{\tiny P}}^{-1}=0\), one has \(u_{\text{\tiny s}}=r/\gamma\) inside the habitat and \(u_{\text{\tiny s}}=0\) in the matrix (dashed line in Fig. 5). The existence of a non-monotonic dependence of population size on advection strength is reminiscent of a behavior reported in a different scenario for a model with advection towards a continuous environmental gradient (Belgacem and Cosner, 1995). ## 4 Discussion We studied the spatial dynamics of a population in a finite habitat surrounded by an infinite matrix, considering different ratios between matrix mortality and habitat reproduction rates. We additionally incorporated space-dependent deterministic movement through an advection term that attracts individuals toward specific landscape locations, including habitat edges. This advection term can create spatial distributions of population density that are asymmetric with respect to the center of the patch, especially when the patch size is small and attractive regions lie near habitat edges. 
This result could explain why, in certain species, populations tend to accumulate in the periphery of the species' historical range following geographical range contraction (Channell and Lomolino, 2000a,b). Moreover, our results show that both the habitat carrying capacity and critical size depend nonlinearly, sometimes non-monotonically, on movement and demographic parameters and the location of the habitat edges relative to regions of slower movement. Recent work has also found nonlinear and non-monotonic relationships between movement and landscape parameters underlying the stability of prey-predator systems in fragmented landscapes (Dannemann et al., 2018; Nauta et al., 2022). These findings emphasize the importance of untangling the various contributions determining individual movement, including environmental covariates, when designing conservation strategies such as refuges in fragmented landscapes or marine protected areas (Gaston et al., 2002; Gerber et al., 2003). Specifically, for very low yet non-zero bias intensities, we find a range of values for the diffusion coefficient for which the critical habitat size decreases with increasing diffusion. This counterintuitive phenomenon is known as the "drift paradox" (Pachepsky et al., 2005). In the opposite limit, if movement bias toward the attractive location is very strong, the population becomes ultra-localized and its survival depends on whether the attractive site is in the habitat patch or the matrix; if it is in the patch, the population will persist, but if it is in the matrix, the population will go extinct. In between these two limits, for weak bias toward the attractive location, further increasing bias intensity increases the critical habitat size when the attractive site is inside the habitat but not too far from both edges. Moreover, populations are still viable for these weak bias intensities even if habitat destruction places the attractive location inside the matrix, creating an ecological trap. Ecological traps are often related to human landscape interventions (Robertson and Hutto, 2006; Schlaepfer et al., 2002) such as the construction of bird nest cavities in regions with generally worse conditions than those where the birds would naturally build their nests (Krams et al., 2021). Roads can also act as ecological traps. For example, female bears with their cubs are often attracted to roads due to higher forage availability and to avoid potential male infanticide, increasing their risk of being killed in vehicle collisions (Northrup et al., 2012; Penteriani et al., 2018). Our model also suggests that movement responses to changes in habitat quality, such as the tendency of individuals to return from the matrix to habitat edges, can result in the accumulation of population density around habitat edges, even when attractive locations are centered in the habitat patch. This accumulation of population density reduces the quality of regions near habitat edges relative to the surrounding matrix and turns the neighborhoods of habitat edges into ecological traps. This population crowding near habitat edges could, however, be eliminated by density-dependent dispersal, which was not included in our model. Animal responses to changes in habitat fragmentation, such as the matrix avoidance term included in our model, might be relevant in regulating demographic responses to habitat destruction. 
Quantifying correlations between movement behavior, habitat quality, and population density in animal tracking data could help to understand the impact of further habitat destruction on population viability. More generally, the existence of ecological traps suggests that movement patterns exhibited by individuals upon habitat destruction do not correspond to an evolutionarily stable strategy (Hastings, 1983). However, because ecological traps do not necessarily lead to population extinctions in our model, individuals could potentially adapt their movement behavior to avoid newly degraded regions. Different non-uniform space utilization patterns and preference for specific habitat locations are ubiquitous in nature. We consider that all individuals in the population have the same movement behavior and thus share habitat preferences. This assumption is an accurate modeling choice for certain species, such as central-place foragers (Fagan et al., 2007). Very often, however, habitat preferences vary across individuals in a population, which might impact how individuals interact with one another (Martinez-Garcia et al., 2020; Noonan et al., 2021). Incorporating individual-level variability in space utilization would inform how populations of range-resident and territorial species would respond to habitat destruction, and is one of the future directions that could be explored based on this work. However, while attractiveness can sometimes be quantified in terms of environmental covariates (Mueller et al., 2008) or by knowing the locations of landscape features like watering holes, other times it will be difficult or impossible to quantify, for example when "attractiveness" depends on the unknown distribution of a particular prey species. Future theoretical research should aim to increasingly fill this gap between existing models describing empirically observed patterns of animal movement and higher level ecological processes. ## Acknowledgments We thank Silas Poloni, Eduardo H. Colombo, and Chris Cosner for their critical reading of the manuscript. This work was partially funded by the Center of Advanced Systems Understanding (CASUS), which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament; the Sao Paulo Research Foundation (FAPESP, Brazil) through postdoctoral fellowships No. 2021/10139-2 and No. 2022/13872-5 (P.d.C) and No. 2020/04751-4 (V.D.), BIOTA Young Investigator Research Grant No. 2019/05523-8 (R.M-G); ICTP-SAIFR grant no. 2021/14335-0 (P.d.C) and No. 2016/01343-7 (V.D., P.d.C., R.M-G); the Simons Foundation through grant no. 284558FY19 (R.M-G); the Estonian Research Council through grant PRG1059 (V.D.). The National Science Foundation (NSF, USA) grant DBI1915347 supported the involvement of J.M.C. and W.F.F. This research was supported by resources supplied by the Center for Scientific Computing (NCC/GridUNESP) of the Sao Paulo State University (UNESP).
2308.10946
Effective interactions for the SM fermion mass hierarchy and their possible UV realization
We built an extended 2HDM theory with a spontaneously broken $U(1) _{X}$ global symmetry, where the tree level Universal Seesaw Mechanism generates the mass hierarchy of the Standard Model charged fermions and the Zee-Babu mechanism produces tiny active neutrino masses. The third family of SM charged fermions gets tree level masses from Yukawa interactions involving the Higgs doublets $H_1$ (for the top quark) and $H_2$ (for the bottom quark and tau lepton). The model under consideration is consistent with SM fermion masses and mixings, with the muon and electron $g-2$ anomalies and successfully accommodates the constraints arising from charged lepton flavor violation and meson oscillations. The proposed model predicts rates for charged lepton flavor violating decays within the reach of forthcoming experiments.
A. E. Cárcamo Hernández, Diego Restrepo, Ivan Schmidt, Óscar Zapata
2023-08-21T18:00:06Z
http://arxiv.org/abs/2308.10946v2
# Effective interactions for the SM fermion mass hierarchy and their possible UV realization. ###### Abstract We built an extended 2HDM theory with a spontaneously broken \(U(1)_{X}\) global symmetry, where the tree level Universal Seesaw Mechanism generates the mass hierarchy of the Standard Model charged fermions and the Zee-Babu mechanism produces tiny active neutrino masses. The third family of SM charged fermions gets tree level masses from Yukawa interactions involving the Higgs doublets \(H_{1}\) (for the top quark) and \(H_{2}\) (for the bottom quark and tau lepton). The model under consideration is consistent with SM fermion masses and mixings, with the muon and electron \(g-2\) anomalies and successfully accommodates the constraints arising from charged lepton flavor violation and meson oscillations. The proposed model predicts rates for charged lepton flavor violating decays within the reach of forthcoming experiments. ## I Introduction The neutrino masses have been well-established to be exceedingly small [1; 2]. A natural explanation for this phenomenon is the tree-level realization of the Weinberg operator [3] via the type I-III seesaw mechanisms [4; 5; 6; 7; 8; 9] containing five dimensional operators. In order to extend this strategy to all charged matter particles within the Standard Model (SM), the universal seesaw mechanism (USM) [10; 11; 12; 13] has been developed. One of the significant advantages of the USM is that it predicts the existence of much lighter mediators compared to those involved in the usual seesaws for neutrinos; the smallness of the neutrino masses, in turn, can be addressed by using either alternative higher-dimensional effective operators or radiative realizations of the Weinberg operator. Additionally, the quark sector poses further challenges in explaining the observed mixing and mass pattern, necessitating a more elaborate approach that combines higher-dimensional effective operators or incorporates radiative contributions within an extended Higgs sector. In this work we invoke the USM within the framework of a two Higgs doublet model (2HDM) [14], which can be considered a minimal extension of the SM, with a global horizontal symmetry \(U(1)_{X}\) to generate the masses of the first and second generation of SM quarks and SM charged leptons. Relying on this symmetry, the third generation charged fermions obtain their masses thanks to the Yukawa interactions with the Higgs doublets as follows \[-\mathcal{L}_{d=4}=y_{3}^{(u)}\overline{q}_{3L}\widetilde{H}_{1}\,u_{3R}+y_{3}^{(d)}\overline{q}_{3L}H_{2}\,d_{3R}+\sum_{i=1}^{3}y_{i}^{(e)}\overline{l}_{iL}H_{2}\,e_{3R}+\text{h.c.}\,, \tag{1}\] where \(H_{1}\) and \(H_{2}\) are the \(SU\left(2\right)_{L}\) scalar doublets. Hence, the heaviness of the top quark with respect to the bottom quark and tau lepton arises from a mild hierarchy between the vacuum expectation values of \(H_{1}\) and \(H_{2}\). To correctly reproduce the mixing and mass pattern in the quark sector [15; 16] through the USM we further extend the scalar sector with two gauge singlet fields \((\sigma,\eta)\) and a hyperchargeless weak \(SU(2)_{L}\) triplet (\(\Delta\)), all of them with nonzero charges under the \(U(1)_{X}\) symmetry. 
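As a rough numerical illustration of this point (our own back-of-the-envelope estimate, not a fit from the paper): if the third-family Yukawa couplings in Eq. (1) are taken to be of order one, the observed top and bottom quark masses already fix the required mild hierarchy between the two doublet VEVs, subject to \(v_{1}^{2}+v_{2}^{2}\simeq(246\,\mathrm{GeV})^{2}\).

```python
import numpy as np

v_ew = 246.0                      # electroweak VEV in GeV
m_top, m_bottom = 172.7, 4.18     # approximate quark masses in GeV
y_top, y_bottom = 1.0, 1.0        # assumed O(1) Yukawa couplings (illustrative only)

# m_f = y_f * v_i / sqrt(2)  =>  v_i = sqrt(2) * m_f / y_f
v1 = np.sqrt(2.0) * m_top / y_top
v2 = np.sqrt(2.0) * m_bottom / y_bottom

print(f"v1 ~ {v1:.1f} GeV, v2 ~ {v2:.1f} GeV, v1/v2 ~ {v1 / v2:.1f}")
print(f"v1^2 + v2^2 = {v1**2 + v2**2:.0f} GeV^2 vs (246 GeV)^2 = {v_ew**2:.0f} GeV^2")
```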
With this setup, one possibility for the source of the rest of the quark masses is given by the effective operators of dimension \(d=6\) \[-\mathcal{L}_{d=6}^{(q)}= \sum_{n=1}^{2}\left[\gamma_{n}^{(d)}\overline{q}_{nL}H_{2}d_{2R}+\gamma_{n}^{(u)}\overline{q}_{nL}\widetilde{H}_{2}u_{2R}\right]\frac{\sigma^{*}\eta}{\Lambda^{2}}+\sum_{n=1}^{2}\left[\alpha_{n}^{(d)}\overline{q}_{nL}\Delta^{\dagger}H_{2}d_{1R}+\alpha_{n}^{(u)}\overline{q}_{nL}\Delta\widetilde{H}_{2}u_{1R}\right.\] \[\left.+\,\alpha_{3n}^{(d)}\overline{q}_{nL}\Delta^{\dagger}H_{1}d_{3R}+\alpha_{3n}^{(u)}\overline{q}_{nL}\Delta^{\dagger}\widetilde{H}_{2}u_{3R}\right]\frac{\sigma^{*}}{\Lambda^{2}}+\text{h.c.}. \tag{2}\] Here \(\gamma_{n}^{(d)},\gamma_{n}^{(u)},\alpha_{n}^{(d)},\alpha_{n}^{(u)},\alpha_{3n}^{(d)},\alpha_{3n}^{(u)}\) are Yukawa couplings, and \(\Lambda\) signals the energy scale at which new potential fermionic degrees of freedom enter. The terms in the second row are required to generate the mixings with the third family. On the other hand, the masses of the electron and muon also arise from the \(d=6\) operators \[-\mathcal{L}_{d=6}^{(l)}=\sum_{i=1}^{3}\gamma_{i}^{(l)}\overline{l}_{iL}H_{2}e_{2R}\frac{\sigma^{*}\eta}{\Lambda^{2}}+\sum_{i=1}^{3}\alpha_{i}^{(l)}\overline{l}_{iL}\Delta^{\dagger}H_{2}e_{1R}\frac{\sigma^{*}}{\Lambda^{2}}+\text{h.c.}\,, \tag{3}\] with \(\gamma_{i}^{(l)}\) and \(\alpha_{i}^{(l)}\) Yukawa couplings. In our framework, the gauge singlet scalar fields \(\sigma\) and \(\eta\) acquire vacuum expectation values (VEVs) at an energy scale slightly below the \(\Lambda\) scale of the fermionic seesaw mediators. Since these VEVs, \(v_{\sigma}\) and \(v_{\eta}\), closely approach the \(\Lambda\) scale, we employ six-dimensional Yukawa operators as opposed to the conventional five-dimensional ones, which are present in the usual type I-III seesaws. This choice aims to provide a more intrinsic rationale for the relatively small masses observed in the first and second generations of SM charged fermions. The hierarchical contrast in VEVs between the triplet (\(v_{\Delta}\)) and singlet scalars offers a potential explanation for the observed mass disparity between the first and second generation of SM charged fermions. By choosing the set of free \(U(1)_{X}\) charges to be \((H_{1},q_{3},l,q_{n},\Delta,\sigma,\eta)\), the complete set of charges allowing all the above \(d=4,6\) interaction terms is1 Footnote 1: In our notation we will denote a field and its \(U(1)_{X}\)–charge with the same symbol, i.e. \(X(\varphi)=\varphi\). \[u_{1}= -2\Delta+H_{1}+q_{3}\,, u_{2}= -\Delta-\eta+H_{1}+q_{3}\,, u_{3}= H_{1}+q_{3}\,,\] \[d_{1}= 2\Delta-H_{1}+2q_{n}-q_{3}+2\sigma\,, d_{2}= \Delta-\eta-H_{1}+2q_{n}-q_{3}+2\sigma\,, d_{3}= \Delta-H_{1}+q_{n}+\sigma\,,\] \[e_{1}= 2\Delta-H_{1}+l+q_{n}-q_{3}+2\sigma\,, e_{2}= \Delta-\eta-H_{1}+l+q_{n}-q_{3}+2\sigma\,, e_{3}= \Delta-H_{1}+l+q_{n}-q_{3}+\sigma\,,\] \[H_{2}= -\Delta+H_{1}-q_{n}+q_{3}-\sigma\,. \tag{4}\] Here, the three lepton doublets have the same \(X\)-charge \(l\), and \(q_{1}=q_{2}\equiv q_{n}\). We have checked with Sym2Int [17; 18] that the charge assignment only allows the effective operators claimed here. The neutrino sector deserves a different treatment due to the smallness of the neutrino masses. On one hand, since the lepton doublets have the same \(X\)-charge, effective Yukawa operators with right-handed neutrinos and powers of scalar singlets (or trace of the square of the scalar triplets) are automatically allowed after fixing their common \(X\)-charge. 
In the absence of effective operators with lepton number violation, only Dirac neutrino masses could be realized. On the other hand, to get further suppression for Majorana neutrino masses one may look for lepton number violation via effective operators of at least dimension \(d=7\), such as \[\mathcal{L}_{\nu}=\sum_{i,j=1}^{3}\,\alpha_{ij}l_{iL}^{a}l_{jL}^{b}H_{2}^{c}H_{2}^{d}\epsilon_{ab}\epsilon_{cd}\frac{\sigma^{*2}}{\Lambda^{2}}+\text{h.c.}, \tag{5}\] which in turn would deliver an additional charge condition, \[\sigma=\frac{1}{2}\left(-\Delta+H_{1}+l-q_{n}+q_{3}\right). \tag{6}\] In the present study, we contemplate the interplay of two suppression mechanisms for neutrino mass, namely the radiative realization of effective operators of high dimensionality [19]. Concretely, we consider the \(d=11\) lepton number violating effective operators [20; 21] \[-\mathcal{L}_{d=11}^{(\nu)}=\frac{1}{\Lambda^{5}}\sum_{i,j,k,s=1}^{3}\overline{l}_{iL}l_{jL}^{c}\overline{l}_{kL}l_{sL}^{c}\left[\alpha_{ijks}^{11}\,\overline{e_{1R}^{c}}e_{1R}^{c}\frac{\Delta^{*2}}{\Lambda^{2}}+\alpha_{ijks}^{22}\overline{e_{2R}^{c}}e_{2R}\frac{\eta^{2}}{\Lambda^{2}}+\alpha_{ijks}^{33}\overline{e_{3R}^{c}}e_{3R}\frac{\sigma^{2}}{\Lambda^{2}}\right]+\text{h.c.}, \tag{7}\] in such a way that when \(\sigma\), \(\eta\) and \(\Delta\) develop a nonzero VEV, Majorana neutrino masses may be generated at two-loop level through the Zee-Babu mechanism [22; 23]. Here \(l_{iL}\) and \(e_{iR}\) (\(i=1,2,3\)) are the SM \(SU(2)_{L}\) leptonic doublets and right handed SM charged leptonic fields, respectively. In this work we propose a low scale renormalizable model where the set of \(d=6\) effective operators responsible for the implementation of the Universal Seesaw mechanism that produces the SM charged fermion mass hierarchy is generated at tree level. This is done through the introduction of several sets of chiral fields transforming under the SM and \(U(1)_{X}\) symmetries. Furthermore, the above given \(d=11\) effective operators giving rise to the tiny active neutrino masses are generated at two-loop level through the Zee-Babu mechanism. To illustrate the viability of the model we perform a phenomenological analysis taking into account the current constraints coming from low energy physics, anomalous magnetic moments of leptons, charged lepton flavor violation and meson oscillations. The main novelty of our model is thus the implementation of both the Universal Seesaw mechanism to generate the SM charged fermion mass hierarchy and the Zee-Babu mechanism to produce the tiny active neutrino masses within the framework of an extended 2HDM theory with moderate particle content and just one extra \(U(1)_{X}\) global symmetry. The rest of the paper is organized as follows. In section II, we outline the particle content and the interactions of the model and discuss its implications for SM fermion masses and mixings. The phenomenological consequences of the model concerning the muon and electron \(g-2\) anomalies as well as charged lepton flavor violation are analyzed in section III. In section IV, we discuss the implications of the model for meson mixings. Finally, we conclude in section V. 
## II Ultraviolet realization Our aim is to implement the USM to generate the masses of the first and second generations of SM charged fermions, with the third generation of SM charged fermions getting their masses via Yukawa interactions with \(SU(2)_{L}\) scalar doublets \(H_{1}\) (for the top quark) and \(H_{2}\) (for the bottom quark and tau lepton). Hence we start from a 2HDM extended by an extra \(U\left(1\right)_{X}\) symmetry, assumed to be global for simplicity. The \(U\left(1\right)_{X}\) is assumed to be spontaneously broken as well as softly broken. The ultraviolet completion of the USM is achieved by introducing the following set of chiral fermionic fields: \[F_{iL},F_{iR},\Psi_{jL},\Psi_{jR},\hskip 28.452756pti=1,2,3,4;\ j=1,2. \tag{8}\] The fields \(F_{i(L,R)}\) are color triplets, with \(F_{1,3}\) weak doublets and \(F_{2,4}\) weak singlets, whereas the fields \(\Psi_{j(L,R)}\) are color singlets, with \(\Psi_{1}\) being a weak doublet and \(\Psi_{2}\) a weak singlet. The scalar spectrum is extended by one hyperchargeless \(SU(2)_{L}\) scalar triplet \(\Delta\), two electrically neutral \((\sigma,\eta)\) and five electrically charged scalars \((\xi,\,\rho,\,\zeta_{1,2,3})\). The extra fermion and scalar content with the corresponding electroweak and \(U(1)_{X}\) quantum numbers are displayed in Table 1. The electrically neutral gauge singlet scalars and the scalar triplet together with the charged exotic vector-like fermions are required for the implementation of the USM that generates the first and second generation of SM charged fermion masses. The gauge singlet scalar \(\sigma\) provides tree level masses to the charged exotic vector-like fermions, whereas the scalar singlet \(\eta\) generates mixings between left-handed charged exotic fermions and the second family of right handed SM fermions. Moreover, the scalar doublet \(H_{2}\), besides generating the bottom quark and tau lepton masses, also generates mixings between left-handed charged exotic fermions and the first family of right handed SM fermions. Furthermore, the mixings between the left-handed SM fermionic fields and right-handed heavy fermionic seesaw mediators inducing the first generation of SM charged fermion masses arise from Yukawa interactions involving the scalar triplet \(\Delta\). Besides these scalars, the inclusion of the electrically charged scalars is necessary for the implementation of the two-loop level Zee-Babu mechanism that generates the tiny active neutrino masses. It is worth stressing that the VEV hierarchy between \(v_{\Delta}\) and \(v_{\eta}\) and \(v_{\sigma}\) is crucial for explaining the mass hierarchy between the first and second family of SM charged fermions. Let us note that, whereas the VEV of the singlets can be free parameters, \(v_{\Delta}\) should be at most a few GeV in order to successfully comply with the constraints arising from the oblique \(T\) parameter [16]. By using Eqs. (4) and (6), the ultraviolet completion of the effective model demands that the set of new \(U(1)_{X}\)-charges is \[\xi= -2l, \rho= -4l,\] \[\zeta_{1}= -2e_{1}=-2(\Delta+2l), \zeta_{2}= -2e_{2}=-2(2l-\eta), \zeta_{3}= -2e_{3}=-\Delta+H_{1}-3l-q_{n}+q_{3}, \tag{9}\] where the set of free charges remains \((\Delta,H_{1},q_{3},l,q_{12})\). 
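These charge relations can be cross-checked symbolically. The short script below is ours (not part of the paper): it verifies the \(U(1)_{X}\) invariance of two representative Yukawa operators under the assignment of Eq. (4) and reproduces the simplified forms of Eq. (9) once the condition of Eq. (6) is imposed. A field appearing as \(\overline{\psi}_{L}\), \(\widetilde{H}\) or \(\varphi^{*}\) contributes minus its \(U(1)_{X}\) charge.

```python
import sympy as sp

# Free U(1)_X charges; every other charge is expressed through Eq. (4).
Delta, H1, q3, l, qn, sigma, eta = sp.symbols('Delta H1 q3 l qn sigma eta')

u3 = H1 + q3
d2 = Delta - eta - H1 + 2*qn - q3 + 2*sigma
e1 = 2*Delta - H1 + l + qn - q3 + 2*sigma
e2 = Delta - eta - H1 + l + qn - q3 + 2*sigma
e3 = Delta - H1 + l + qn - q3 + sigma
H2 = -Delta + H1 - qn + q3 - sigma

# Invariance of q3bar Htilde_1 u3R (d=4) and of qnbar H2 d2R sigma* eta (d=6):
print(sp.simplify(-q3 - H1 + u3))                 # expected: 0
print(sp.simplify(-qn + H2 + d2 - sigma + eta))   # expected: 0

# Impose the lepton-number condition of Eq. (6) and check Eq. (9):
cond = {sigma: sp.Rational(1, 2) * (-Delta + H1 + l - qn + q3)}
print(sp.simplify((-2*e1).subs(cond) - (-2*(Delta + 2*l))))             # zeta_1: 0
print(sp.simplify((-2*e2).subs(cond) - (-2*(2*l - eta))))               # zeta_2: 0
print(sp.simplify((-2*e3).subs(cond) - (-Delta + H1 - 3*l - qn + q3)))  # zeta_3: 0
```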
For the extra fermions the \(H\)-charges are fixed to \[F_{1L}= \frac{1}{2}\left(-3\Delta+H_{1}+l+q_{n}+q_{3}\right),\ F_{1R}=q_ {n}-\Delta,\ F_{2L}=-\Delta+H_{1}+q_{3},\ F_{2R}=\frac{1}{2}\left(-\Delta+H_{ 1}-l+q_{n}+q_{3}\right),\] \[F_{3L}= \frac{1}{2}\left(\Delta+H_{1}+l+q_{n}+q_{3}\right),\ F_{3R}= \Delta+q_{n},\ F_{4L}=l+q_{n},\ F_{4R}=\frac{1}{2}\left(\Delta-H_{1}+l+3q_{n}- q_{3}\right),\] \[\Psi_{1L}= \frac{1}{2}\left(\Delta+H_{1}+3l-q_{n}+q_{3}\right),\ \Psi_{1R}=\Delta+l,\ \Psi_{2L}=2l,\ \Psi_{2R}=\frac{1}{2}\left(\Delta-H_{1}+3l+q_{n}-q_{3}\right). \tag{10}\] With the charge assignment the quark Yukawa interactions are \[-\mathcal{L}_{Y}^{(q)} =\sum_{n=1}^{2}x_{n}^{(u)}\overline{q}_{nL}\widetilde{H}_{2}F_{2R }+z_{u}\overline{F}_{2L}\eta u_{2R}+\sum_{n=1}^{2}x_{n}^{(d)}\overline{q}_{nL} H_{2}F_{4R}+z_{d}\overline{F}_{4L}\eta d_{2R}+\sum_{n=1}^{2}w_{n}^{(d)} \overline{q}_{nL}\Delta^{\dagger}F_{3R}\] \[+r_{d}\overline{F}_{3L}H_{2}d_{1R}+\sum_{n=1}^{2}w_{n}^{(u)} \overline{q}_{nL}\Delta F_{1R}+r_{u}\overline{F}_{1L}\widetilde{H}_{2}u_{1R}+ y_{1}^{(F)}\overline{F}_{1L}\sigma F_{1R}+y_{2}^{(F)}\overline{F}_{2L}\sigma F _{2R}\] \[+y_{3}^{(F)}\overline{F}_{3L}\sigma F_{3R}+y_{4}^{(F)}\overline{F} _{4L}\sigma F_{4R}+k_{u}\overline{F}_{3L}\widetilde{H}_{2}u_{3R}+k_{d} \overline{F}_{3L}H_{1}d_{3R}+\text{h.c.}, \tag{11}\] whereas the charged lepton Lagrangian is given by \[-\mathcal{L}_{Y}^{(l)}=\sum_{i=1}^{3}x_{i}^{(l)}\overline{l}_{iL}H_{2}\Psi_{2 R}+z_{l}\overline{\Psi}_{2L}\eta e_{2R}+\sum_{i=1}^{3}w_{i}^{(l)}\overline{l}_{iL} \Delta^{\dagger}\Psi_{1R}+r_{l}\overline{\Psi}_{1L}H_{2}e_{1R}+\sum_{i=1,2}y_{ i}^{(\Psi)}\overline{\Psi}_{iL}\sigma\Psi_{iR}+\text{h.c.}. \tag{12}\] These interactions allow us to construct the seesaw diagrams displayed in figures 1 and 2, which lead to the following \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline & \(SU(3)_{C}\) & \(SU(2)_{L}\) & \(U(1)_{Y}\) & \(U(1)_{X}\) \\ \hline \(F_{1(L,R)}\) & 3 & 2 & \(\frac{1}{6}\) & \(F_{1(L,R)}\) \\ \(F_{2(L,R)}\) & 3 & 1 & \(\frac{2}{3}\) & \(F_{2(L,R)}\) \\ \(F_{3(L,R)}\) & 3 & 2 & \(\frac{1}{6}\) & \(F_{3(L,R)}\) \\ \(F_{4(L,R)}\) & 3 & 1 & \(-\frac{1}{3}\) & \(F_{4(L,R)}\) \\ \hline \(\Psi_{1(L,R)}\) & 1 & 2 & \(-\frac{1}{2}\) & \(\Psi_{1(L,R)}\) \\ \(\Psi_{2(L,R)}\) & 1 & 1 & -1 & \(\Psi_{2(L,R)}\) \\ \hline \end{tabular} \begin{tabular}{|l|c|c|c|c|} \hline & \(SU(3)_{C}\) & \(SU(2)_{L}\) & \(U(1)_{Y}\) & \(U(1)_{X}\) \\ \hline \(H_{1,2}\) & 1 & 2 & \(\frac{1}{2}\) & \(H_{1,2}\) \\ \(\sigma\) & 1 & 1 & 0 & \(\sigma\) \\ \(\eta\) & 1 & 1 & 0 & \(\eta\) \\ \(\Delta\) & 1 & 3 & 0 & \(\Delta\) \\ \(\xi^{\pm}\) & 1 & 1 & \(\pm 1\) & \(\xi\) \\ \(\rho^{\pm\pm}\) & 1 & 1 & \(\pm 2\) & \(\rho\) \\ \(\zeta^{\pm\pm}_{1,2,3}\) & 1 & 1 & \(\pm 2\) & \(\zeta_{1,2,3}\) \\ \hline \end{tabular} \end{table} Table 1: Extra fermion and scalar content with the electroweak and \(U(1)_{X}\) quantum numbers. 
mass matrices for up and down type quarks, \[M_{U} = \tag{13}\] \[M_{D} = \tag{14}\] and for the charged leptons, \[M_{l} = \left(\begin{array}{cc}C_{E}&A_{E}\\ B_{E}&M_{E}\end{array}\right),\hskip 14.226378ptA_{E}=\left(\begin{array}{cc} \frac{w_{1}v_{\Delta}}{\sqrt{2}}&\frac{x_{1}v_{2}}{\sqrt{2}}\\ \frac{w_{2}v_{\Delta}}{\sqrt{2}}&\frac{x_{2}v_{2}}{\sqrt{2}}\\ \frac{w_{2}v_{\Delta}}{\sqrt{2}}&\frac{x_{2}v_{2}}{\sqrt{2}}\end{array}\right), \hskip 14.226378ptB_{E}=\left(\begin{array}{cc}\frac{r_{1}v_{2}}{\sqrt{2}}&0&0 \\ 0&\frac{z_{1}v_{\eta}}{\sqrt{2}}&0\end{array}\right),\hskip 14.226378ptC_{E}= \left(\begin{array}{cc}0&0&y_{1}^{(e)}\frac{v_{2}}{\sqrt{2}}\\ 0&0&y_{2}^{(e)}\frac{v_{2}}{\sqrt{2}}\\ 0&0&y_{3}^{(e)}\frac{v_{2}}{\sqrt{2}}\end{array}\right). \tag{15}\] The remaining entries of \(M_{U},M_{D}\) and \(M_{L}\) are diagonal matrices with \[B_{U} =\] \[C_{D} = \tag{16}\] As follows from the charged fermion Yukawa terms, the heavy vector-like quarks mix with the SM charged fermions of the first and second generation, thus triggering a tree-level seesaw mechanism. This in turn generates the first and second generation charged fermion masses. Futhermore, the masses of the bottom quark and tau lepton are generated from Yukawa interactions involving a second \(SU(2)_{L}\) scalar doublet which acquires a VEV \(v_{2}\) at the GeV Figure 1: Feynman diagrams contributing to the entries of the SM quark mass matrices. Here, \(n=1,2\). scale, whereas the first scalar doublet, which gets VEV \(v_{1}\) at the electroweak scale, generates the top quark mass. Consequently, the SM charged fermion mass matrices are given by: \[\widetilde{M}_{U} = C_{U}-A_{U}M_{T}^{-1}B_{U}=\left(\begin{array}{ccc}-\frac{r_{u}w _{1}^{(u)}v_{\Delta}v_{2}}{2m_{\mu}v_{1}}&-\frac{z_{u}x_{1}^{(u)}v_{2}v_{\eta}} {2m_{\mu}v_{2}}&-\frac{k_{u}w_{1}^{(d)}v_{\Delta}v_{2}}{2m_{\mu}v_{3}}\\ -\frac{r_{u}w_{2}^{(u)}v_{\Delta}v_{2}}{2m_{\mu}v_{1}}&-\frac{z_{u}x_{1}^{(u)} v_{2}v_{\eta}}{2m_{\mu}v_{2}}&-\frac{k_{u}w_{1}^{(d)}v_{\Delta}v_{2}}{2m_{\mu}v_{3}} \\ 0&0&\frac{y_{1}^{(u)}v_{2}}{\sqrt{2}}\end{array}\right)=\left(\begin{array}{ cccc}C_{1}^{(1)}&C_{0}^{(2)}&C_{0}^{(3)}\\ C_{u}^{(4)}&C_{u}^{(5)}&C_{u}^{(6)}\\ 0&0&C_{u}^{(7)}\end{array}\right),\] \[\widetilde{M}_{D} = C_{D}-A_{D}M_{B}^{-1}B_{D}=\left(\begin{array}{ccc}-\frac{r_{ d}w_{1}^{(d)}v_{\Delta}v_{2}}{2m_{\mu}v_{2}}&-\frac{z_{d}x_{1}^{(d)}v_{2}v_{\eta}} {2m_{\mu}v_{4}}&-\frac{k_{d}w_{1}^{(d)}v_{\Delta}v_{1}}{2m_{\mu}v_{3}}\\ -\frac{r_{u}w_{2}^{(d)}v_{\Delta}v_{2}}{2m_{\mu}v_{3}}&-\frac{z_{d}x_{1}^{(d)} v_{2}v_{\eta}}{2m_{\mu}v_{4}}&-\frac{k_{d}w_{2}^{(d)}v_{\Delta}v_{1}}{2m_{\mu}v_{3}} \\ 0&0&\frac{y_{2}^{(u)}v_{2}}{\sqrt{2}}\end{array}\right)=\left(\begin{array}{ cccc}C_{d}^{(1)}&C_{d}^{(2)}&C_{d}^{(3)}\\ C_{d}^{(4)}&C_{d}^{(5)}&C_{d}^{(6)}\\ 0&0&C_{d}^{(7)}\end{array}\right),\] \[\widetilde{M}_{l} = C_{E}-A_{E}M_{E}^{-1}B_{E}=\left(\begin{array}{ccc}-\frac{r_{ l}w_{1}^{(l)}v_{\Delta}v_{2}}{2m_{\mu}v_{1}}&-\frac{z_{l}x_{1}^{(l)}v_{2}v_{\eta}} {2m_{\mu}v_{2}}&\frac{y_{1}^{(e)}v_{2}}{\sqrt{2}}\\ -\frac{r_{u}w_{1}^{(l)}v_{\Delta}v_{2}}{2m_{\mu}v_{1}}&-\frac{z_{l}x_{1}^{(l)} v_{2}v_{\eta}}{2m_{\mu}v_{2}}&\frac{y_{1}^{(e)}v_{2}}{\sqrt{2}}\\ -\frac{r_{l}w_{1}^{(u)}v_{\Delta}v_{2}}{2m_{\mu}v_{1}}&-\frac{z_{l}x_{1}^{(l)} v_{2}v_{\eta}}{2m_{\mu}v_{2}}&\frac{y_{1}^{(e)}v_{2}}{\sqrt{2}}\end{array}\right). 
\tag{17}\] On the other hand, tiny neutrino masses arise from the interplay of the Yukawa interactions \[-\mathcal{L}_{Y}^{(\nu)}=\sum_{i,j=1}^{3}\kappa_{ij}\overline{l_{iL}^{c}}l_{jL }\xi^{+}+\sum_{i=1}^{3}\gamma_{i}\overline{e_{iR}^{c}}e_{iR}\zeta_{i}^{++}+ \text{h.c.}\,, \tag{18}\] and the interactions from the scalar potential \[\mathcal{V}\supset\left[\lambda_{7}\zeta_{1}^{--}\Delta^{\dagger 2}+\lambda_{8} \zeta_{2}^{--}\eta^{2}+\lambda_{9}\zeta_{3}^{--}\sigma^{2}+\mu_{\xi\rho}\xi^{-} \xi^{-}\right]\rho^{++}+\text{h.c.}\,. \tag{19}\] The resulting mass matrix at two loop level for the light active neutrinos (see Fig. 3) takes the form \[M_{\nu}=\sum_{r=1}^{4}\sum_{k=1}^{4}\frac{\mu_{\xi\rho}\left(R_{CC}\right)_{4r} }{48\pi^{2}m_{k}^{2}}\kappa\widetilde{M}_{l}G_{k}^{\dagger}\widetilde{M}_{l}^ {T}\kappa^{T}J\left(\frac{m_{\chi_{i}^{--}}^{2}}{m_{\xi^{+}}^{2}}\right),\ ## III Charged Leptons Phenomenology In this section we will analyze the implications of our model in charged lepton flavor violation as well as in the muon and electron anomalous magnetic moments. Despite considerable experimental endeavors aimed at detecting signals of lepton rare decays, evidence of such processes remains elusive [24; 25]. It is worth noting that these experimental pursuits have not only achieved great sensitivity [26; 27] but are also on the brink of achieving substantial improvements in the near future, in some cases even by several orders of magnitude [28; 29]. In contrast, the most recent experimental result for \((g-2)_{\mu}\)[30] reported by the The Muon \(g-2\) Collaboration improves its previous result by more than a factor of two [31], potentially exhibiting deviations from the prediction of the Standard Model [32]. In the present model, the branching ratio for the \(l_{i}\to l_{j}\gamma\) decay receives one loop level contributions from electrically charged and doubly charged scalars, taking the form [24] \[Br\left(l_{i}\to l_{j}\gamma\right)=\frac{\alpha_{\rm em}}{48\pi G_{F}^{2}} \left(\sum_{s=1}^{3}\left|\frac{\left(\left(R_{C}\right)_{3s}\right)^{2}\left( \kappa^{\dagger}\kappa\right)_{ji}}{m_{\varphi_{k}^{\pm}}^{2}}\right|^{2}+16 \sum_{k=1}^{3}\left|\frac{\left(G_{k}^{\dagger}G_{k}\right)_{ji}}{m_{\chi_{k} ^{\pm\pm}}^{2}}\right|^{2}\right). \tag{22}\] In order to simplify our analysis, we consider a simplified benchmark scenario corresponding to the alignment limit, where the \(H_{2R}^{0}\), \(\eta_{R}\), \(\Delta_{R}^{0}\) and \(\sigma_{R}\) do not mix with the neutral CP even part of \(H_{1}\), i.e., \(H_{1R}^{0}\). Furthermore, we assume that \(\sigma\) does not mix with the remaining scalar fields. In that scenario \(H_{1R}^{0}\) is identified with the 126 GeV SM like Higgs boson. 
In that scenario, the heavy neutral CP even \(H_{2R}^{0}\), \(\eta_{R}\), \(\Delta_{R}^{0}\), neutral CP odd \(H_{2I}^{0}\), \(\eta_{I}\), \(\Delta_{I}^{0}\), electrically charged scalars \(H_{2}^{\pm}\), \(\xi^{\pm}\) and \(\Delta_{R}^{\pm}\) and doubly charged scalars (in the interaction basis) relevant for the \(g-2\) anomalies and charged lepton flavor violating processes are related with the corresponding scalars in the mass basis by the following relations: \[\left(\begin{array}{c}H_{2R}^{0}\\ \eta_{R}\\ \Delta_{R}^{0}\end{array}\right)=R_{H}\left(\begin{array}{c}S_{1}\\ S_{2}\\ S_{3}\end{array}\right),\qquad\left(\begin{array}{c}H_{2I}^{0}\\ \eta_{I}\\ \Delta_{I}^{0}\end{array}\right)=R_{A}\left(\begin{array}{c}A_{1}\\ A_{2}\\ A_{3}\end{array}\right),\qquad\left(\begin{array}{c}H_{2}^{\pm}\\ \xi^{\pm}\\ \Delta_{R}^{\pm}\end{array}\right)=R_{C}\left(\begin{array}{c}\varphi_{1}^{ \pm}\\ \varphi_{2}^{\pm}\\ \varphi_{3}^{\pm}\end{array}\right),\qquad\left(\begin{array}{c}\zeta_{1}^{ \pm\pm}\\ \zeta_{2}^{\pm\pm}\\ \varphi_{3}^{\pm\pm}\end{array}\right)=R_{C}\left(\begin{array}{c}\chi_{1}^{ \pm\pm}\\ \chi_{2}^{\pm\pm}\\ \chi_{3}^{\pm\pm}\end{array}\right) \tag{23}\] where \(R_{H}\), \(R_{A}\), \(R_{C}\) and \(R_{CC}\) are real orthogonal \(3\times 3\) rotation matrices, respectively. On the other hand, the muon and electron anomalous magnetic moments receive contributions due the virtual exchange of heavy neutral (charged) scalars and charged (neutral) leptons running in the internal lines of the one loop Figure 3: Feynman diagrams contributing to the entries of the neutrino mass matrix. Here, \(i,j,k,r=1,2,3\) and \(n=1,3\). vertex diagram. Then, the contributions to the muon and electron anomalous magnetic moments read \[\Delta a_{\mu} =\sum_{j=1}^{3}\frac{\operatorname{Re}\left(\beta_{2}\vartheta_{2}^{ \ast}\right)m_{\mu}^{2}}{8\pi^{2}}\left[\left(R_{H}\right)_{1j}\left(R_{H} \right)_{2j}I_{S}^{(\mu)}\left(m_{\Psi_{2}},m_{S_{j}}\right)+\left(R_{A} \right)_{1j}\left(R_{A}\right)_{2j}I_{A}^{(\mu)}\left(m_{\Psi_{2}},m_{A_{j}} \right)\right]\] \[+\sum_{j=1}^{3}\frac{\operatorname{Re}\left(\varkappa_{2}\varrho_{2 }^{\ast}\right)m_{\mu}^{2}}{8\pi^{2}}\left[\left(R_{H}\right)_{1j}\left(R_{H} \right)_{3j}I_{S}^{(\mu)}\left(m_{\Psi_{1}},m_{S_{j}}\right)+\left(R_{A} \right)_{1j}\left(R_{A}\right)_{3j}I_{A}^{(\mu)}\left(m_{\Psi_{1}},m_{A_{j}} \right)\right]\] \[+\sum_{j=1}^{3}\frac{\operatorname{Re}\left(\varkappa_{2}\varrho_{2 }^{\ast}\right)m_{\mu}m_{\Psi_{1}}}{8\pi^{2}m_{\varphi_{j}^{\pm}}^{2}}\left(R_ {C}\right)_{1j}\left(R_{C}\right)_{3j}J\left(\frac{m_{\Psi_{1}}^{2}}{m_{\varphi _{j}^{\pm}}^{2}}\right)-\frac{m_{\mu}^{2}}{24\pi^{2}}\left(\sum_{j=1}^{3}\frac {\left(\left(R_{C}\right)_{3j}\right)^{2}\left(\kappa^{\dagger}\kappa\right)_{2 2}}{m_{\varphi_{j}^{\pm}}^{2}}+4\sum_{k=1}^{3}\frac{\left(G^{\dagger}G\right) _{22}}{m_{\chi_{k}^{\pm\pm}}^{2}}\right), \tag{24}\] and \[\Delta a_{e} =\sum_{j=1}^{3}\frac{\operatorname{Re}\left(\beta_{1}\vartheta_{ 1}^{\ast}\right)m_{e}^{2}}{8\pi^{2}}\left[\left(R_{H}\right)_{1j}\left(R_{H} \right)_{2j}I_{S}^{(e)}\left(m_{\Psi_{2}},m_{S_{j}}\right)+\left(R_{A}\right) _{1j}\left(R_{A}\right)_{2j}I_{A}^{(e)}\left(m_{\Psi_{2}},m_{A_{j}}\right)\right]\] \[+\sum_{j=1}^{3}\frac{\operatorname{Re}\left(\varkappa_{1}\varrho_ {1}^{\ast}\right)m_{e}^{2}}{8\pi^{2}}\left[\left(R_{H}\right)_{1j}\left(R_{H} \right)_{3j}I_{S}^{(e)}\left(m_{\Psi_{1}},m_{S_{j}}\right)+\left(R_{A}\right) _{1j}\left(R_{A}\right)_{3j}I_{A}^{(e)}\left(m_{\Psi_{1}},m_{A_{j}}\right)\right]\] 
\[+\sum_{j=1}^{3}\frac{\operatorname{Re}\left(\varkappa_{1}\varrho_ {1}^{\ast}\right)m_{e}m_{\Psi_{1}}}{8\pi^{2}m_{\varphi_{j}^{\pm}}^{2}}\left(R_ {C}\right)_{1j}\left(R_{C}\right)_{3j}J\left(\frac{m_{\Psi_{1}}^{2}}{m_{\varphi _{j}^{\pm}}^{2}}\right)-\frac{m_{e}^{2}}{24\pi^{2}}\left(\sum_{j=1}^{3}\frac{ \left(\left(R_{C}\right)_{3j}\right)^{2}\left(\kappa^{\dagger}\kappa\right)_{1 1}}{m_{\varphi_{j}^{\pm}}^{2}}+4\sum_{k=1}^{3}\frac{\left(G_{k}^{\dagger}G_{k} \right)_{11}}{m_{\chi_{k}^{\pm\pm}}^{2}}\right), \tag{25}\] where \[\beta_{1} =\sum_{i=1}^{3}x_{i}^{(l)}\left(V_{lL}^{\dagger}\right)_{1i}, \vartheta_{1} =z_{l}\left(V_{lR}\right)_{21}, \beta_{2} =\sum_{i=1}^{3}x_{i}^{(l)}\left(V_{lL}^{\dagger}\right)_{2i}, \vartheta_{2} =z_{l}\left(V_{lR}\right)_{22}, \tag{26}\] \[\varkappa_{1} =\sum_{i=1}^{3}w_{i}^{(l)}\left(V_{lL}^{\dagger}\right)_{1i}, \varrho_{1} =r_{l}\left(V_{lR}\right)_{11}, \varkappa_{2} =\sum_{i=1}^{3}w_{i}^{(l)}\left(V_{lL}^{\dagger}\right)_{2i}, \varrho_{2} =r_{l}\left(V_{lR}\right)_{12}, \tag{27}\] being \(m_{S_{j}}\), \(m_{A_{j}}\), \(m_{\varphi_{j}^{\pm}}^{2}\) (\(j=1,2,3\)), \(m_{\chi_{k}^{\pm\pm}}^{2}\) and \(m_{\Psi_{k}}\), (\(k=1,2\)) the masses of the CP even neutral \(S_{j}\), CP odd neutral \(A_{j}\), electrically charged \(\varphi_{j}^{\pm}\), doubly charged scalars \(\chi_{k}^{\pm\pm}\) and charged exotic vector like fermions \(\Psi_{k}\), respectively. Furthermore, the \(I_{S,A}\left(m_{\Psi},m\right)\) and \(J\left(r\right)\) loop functions have the form [33; 34; 35; 24; 36] \[I_{S,A}^{(\mu,e)}\left(m_{\Psi},m\right)=\int_{0}^{1}dx\frac{x^{2}\left(1-x\pm \frac{m_{\Psi}}{m_{\mu,e}}\right)}{m_{\mu,e}^{2}x^{2}+\left(m_{\Psi}^{2}-m_{\mu, e}^{2}\right)x+m^{2}\left(1-x\right)}, J\left(r\right)=\frac{-1+r^{2}-2r\ln r}{\left(r-1\right)^{3}}, \tag{28}\] where \(V_{lL}\) and \(V_{lR}\) are the rotation matrices that diagonalize \(\widetilde{M}_{E}\) according to the relation \[V_{lL}^{\dagger}\widetilde{M}_{E}V_{lR}=\operatorname{diag}\left(m_{e},m_{\mu},m_ {\tau}\right). \tag{29}\] Considering that the muon and electron anomalous magnetic moments are constrained to be in the ranges [37; 38] \[\left(\Delta a_{\mu}\right)_{\rm exp} = \left(2.51\pm 0.59\right)\times 10^{-9},\] \[\left(\Delta a_{e}\right)_{\rm exp} = \left(4.8\pm 3.0\right)\times 10^{-13}. \tag{30}\] We plot in Figure 4 (top left panel) the correlation between the electron and muon anomalous magnetic moments. Furthermore, in this figure are also displayed the correlations of the muon anomalous magnetic moment with the mass \(m_{\varphi^{\pm}}\) of the electrically charged scalar \(\varphi_{1}^{\pm}\) heavy (top right panel) as well as with the exotic charged lepton masses \(m_{\psi_{1}}\) and \(m_{\psi_{2}}\) (bottom panels). Besides that, Figure 5 displays the correlation between Branching ratio for the \(\mu\to e\gamma\) decay and the muon anomalous magnetic moment. To generate these plots, for the sake of simplicity we have considered in our analysis, the following benchmark scenario: \[m_{S_{1}} = m_{S},\qquad m_{S_{2}}=m_{S}+\Delta,\qquad m_{S_{3}}=m_{S}+2\Delta, \tag{31}\] \[m_{A_{1}} = m_{A},\qquad m_{A_{2}}=m_{A}+\Delta,\qquad m_{A_{3}}=m_{A}+2\Delta,\] (32) \[m_{\varphi_{1}^{\pm}} = m_{\varphi^{\pm}},\qquad m_{\varphi_{2}^{\pm}}=m_{\varphi^{\pm}} +\Delta,\qquad m_{\varphi_{3}^{\pm}}=m_{\varphi^{\pm}}+2\Delta,\] (33) \[\Delta = 100\;{\rm GeV},\qquad m_{\chi_{\pm}^{\pm\pm}}=10\;{\rm TeV}, \qquad k=1,2. 
\tag{34}\] and we have varied the masses \(m_{S}\), \(m_{A}\), \(m_{\varphi^{\pm}}\), \(m_{\psi_{1}}\) and \(m_{\psi_{2}}\) in the ranges \[1\;{\rm TeV}\leq m_{S},m_{A}\leq 10\;{\rm TeV},\qquad 3\;{\rm TeV}\leq m_{\varphi^{\pm}}\leq 5\;{\rm TeV},\qquad 1\;{\rm TeV}\leq m_{\psi_{1}},m_{\psi_{2}}\leq 5\;{\rm TeV}. \tag{35}\] As indicated by Figures 4 and 5, our model can successfully accommodate the experimental values of the muon and electron anomalous magnetic moments and is consistent with the constraints arising from charged lepton flavor violation. Furthermore, the branching ratio for the \(\mu\to e\gamma\) decay can reach values of the order of \(10^{-13}\), which is within the reach of future experimental sensitivity, thus making our model testable by the forthcoming experiments. In what follows we briefly comment on the implications of the model for the charged lepton flavor violating (CLFV) decays \(\mu^{-}\to e^{-}e^{+}e^{-}\), \(\tau^{-}\to\mu^{-}\mu^{+}\mu^{-}\), \(\tau\to\mu^{+}\mu^{-}e^{-}\). It is worth mentioning that these CLFV decays take place at tree level thanks to the virtual exchange of the doubly charged scalars of the model. Thus, the experimental upper bounds of these decays [27] can be used to set constraints on the \(\frac{|\gamma_{i}\gamma_{j}^{*}|^{2}}{m_{\chi_{k}^{\pm\pm}}^{4}}\) (\(i,j,k=1,2,3\)) ratios. For instance, for the \(\mu^{-}\to e^{-}e^{+}e^{-}\) decay, whose branching ratio has the experimental upper bound of \(10^{-12}\), one gets the constraint \(|\gamma_{i}\gamma_{j}^{*}|<2.3\times 10^{-5}\left(\frac{m_{\chi_{j}^{\pm\pm}}}{TeV}\right)^{2}\)[24], which is successfully fulfilled in our model for appropriate values of the parameters. Figure 4: Top: correlations between the muon and electron anomalous magnetic moments (left panel) and of the muon anomalous magnetic moment with the mass \(m_{\varphi^{\pm}}\) of the electrically charged scalar \(\varphi_{1}^{\pm}\) (right panel). Bottom: correlations of the muon anomalous magnetic moment with the heavy exotic charged lepton masses \(m_{\psi_{1}}\) (left panel) and \(m_{\psi_{2}}\) (right panel). Finally, to close this section we provide a brief qualitative discussion about the implications of our model for the electric dipole moment of the neutron. As pointed out in Refs. [39; 40], the electric dipole moment of the neutron in multi-Higgs doublet models has several sources: i) tree level CP violating scalar exchange, which gives rise to four-fermion operators involving the up- and down-type quarks; ii) the CP-violating three-gluon operator, the so-called Weinberg operator; and iii) the Barr-Zee type two-loop diagrams contributing to the electric dipole moment and chromo-electric dipole moments of the up- and down-type quarks. It is worth mentioning that the first and third sources of the electric dipole moment of the neutron are suppressed by the small values of the light quark masses [39; 40]. Thus, we expect that the leading contribution to the electric dipole moment of the neutron in our extended 2HDM theory will arise from the CP violating two loop level self gluon trilinear interaction involving the virtual exchange of electrically charged scalars as well as top and bottom quarks as in Refs. [39; 40]. Therefore, the bound on the electric dipole moment of the neutron \(|d_{e}|\leqslant 1.1\times 10^{-29}\,e\,\mathrm{cm}\) [41] can be used to set constraints on the ratio between CP violating parameter combinations and squared charged scalar masses, as discussed in detail in Ref. [39]. 
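As an aside, the loop functions entering Eqs. (24), (25) and (28) are straightforward to evaluate numerically. The sketch below is ours, not code from the paper; the assignment of the '+' sign to the scalar and the '-' sign to the pseudoscalar piece, as well as the test masses, are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import quad

def I_loop(m_psi, m_scalar, m_lepton, sign=+1):
    """Loop function I_{S,A} of Eq. (28); here sign=+1 is used for the
    CP-even (S) and sign=-1 for the CP-odd (A) contribution (our labeling)."""
    def integrand(x):
        num = x**2 * (1.0 - x + sign * m_psi / m_lepton)
        den = (m_lepton**2 * x**2 + (m_psi**2 - m_lepton**2) * x
               + m_scalar**2 * (1.0 - x))
        return num / den
    value, _ = quad(integrand, 0.0, 1.0)
    return value

def J_loop(r):
    """Loop function J(r) of Eq. (28)."""
    return (-1.0 + r**2 - 2.0 * r * np.log(r)) / (r - 1.0)**3

# Illustrative masses in GeV (not the benchmark of Eqs. (31)-(35)).
m_mu, m_psi, m_S = 0.10566, 2000.0, 3000.0
print(I_loop(m_psi, m_S, m_mu, +1), I_loop(m_psi, m_S, m_mu, -1))
print(J_loop((m_psi / 4000.0)**2))
```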
Recent comprehensive studies of the implications of multiHiggs doublet models in the electric dipole moment of the neutron are performed in Refs. [40; 42; 43]. A detailed numerical analysis of the electric dipole moment of the neutron in this model is beyond the scope of the present work and will be done elsewhere. ## IV Meson mixings In this section we discuss the implications of our model in the Flavour Changing Neutral Current (FCNC) interactions in the down type quark sector. Given that quark Yukawa interactions include two scalar doublets, there will be tree level flavor changing neutral currents (FCNC) mediated by neutral scalars and pseudoscalars exchange that will give rise to \(K^{0}-\bar{K}^{0}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson oscillations, which can be described by the following effective Hamiltonians: \[\mathcal{H}_{\rm eff}^{(K)}\!\!=\!\sum_{j=1}^{3}\kappa_{j}^{(K)}\left(\mu\right) \mathcal{O}_{j}^{(K)}\left(\mu\right),\qquad\quad\mathcal{H}_{\rm eff}^{(B_{d })}\!\!=\!\sum_{j=1}^{3}\kappa_{j}^{(B_{d})}\left(\mu\right)\mathcal{O}_{j}^{( B_{d})}\left(\mu\right),\qquad\quad\mathcal{H}_{\rm eff}^{(B_{s})}\!\!=\!\sum_{j=1}^{3} \kappa_{j}^{(B_{s})}\left(\mu\right)\mathcal{O}_{j}^{(B_{s})}\left(\mu\right), \tag{36}\] Here \(\mathcal{O}_{j}^{(K)}\), \(\mathcal{O}_{j}^{(B_{d})}\) and \(\mathcal{O}_{1}^{(B_{s})}\) corresponds to four fermion operators generated after integrating out the scalars and pseudoscalars that mediate the tree level FCNC interactions producing the \(K^{0}-\bar{K}^{0}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson Figure 5: Correlation between \(Br\left(\mu\to e\gamma\right)\) and the muon anomalous magnetic moment. oscillations. These four fermion operators are given by: \[{\cal O}_{1}^{(K)} = (\overline{s}_{R}d_{L})\left(\overline{s}_{R}d_{L}\right),\qquad \qquad{\cal O}_{2}^{(K)}=(\overline{s}_{L}d_{R})\left(\overline{s}_{L}d_{R} \right),\qquad\qquad{\cal O}_{3}^{(K)}=(\overline{s}_{R}d_{L})\left(\overline{s }_{L}d_{R}\right), \tag{37}\] \[{\cal O}_{1}^{(B_{d})} = \left(\overline{d}_{R}b_{L}\right)\left(\overline{d}_{R}b_{L} \right),\qquad\qquad{\cal O}_{2}^{(B_{d})}=\left(\overline{d}_{L}b_{R}\right) \left(\overline{d}_{L}b_{R}\right),\qquad\qquad{\cal O}_{3}^{(B_{d})}=\left( \overline{d}_{R}b_{L}\right)\left(\overline{d}_{L}b_{R}\right),\] (38) \[{\cal O}_{1}^{(B_{s})} = (\overline{s}_{R}b_{L})\left(\overline{s}_{R}b_{L}\right),\qquad \qquad{\cal O}_{2}^{(B_{s})}=(\overline{s}_{L}b_{R})\left(\overline{s}_{L}b_{ R}\right),\qquad\qquad{\cal O}_{3}^{(B_{s})}=(\overline{s}_{R}b_{L})\left( \overline{s}_{L}b_{R}\right), \tag{39}\] Besides that \(\kappa_{j}^{(K)}\), \(\kappa_{j}^{(B_{d})}\) and \(\kappa_{j}^{(B_{s})}\) (\(j=1,2,3\)) are the corresponding Wilson coefficients which are given by: \[\kappa_{1}^{(K)} = \frac{x_{h\overline{s}_{R}d_{L}}^{2}}{m_{h}^{2}}+\sum_{n=1}^{3} \left(\frac{x_{S_{n}\overline{s}_{R}d_{L}}^{2}}{m_{S_{n}}^{2}}-\frac{x_{A_{n} \overline{s}_{R}d_{L}}^{2}}{m_{A_{n}}^{2}},\right) \tag{40}\] \[\kappa_{2}^{(K)} = \frac{x_{h\overline{s}_{L}d_{R}}^{2}}{m_{h}^{2}}+\sum_{n=1}^{3} \left(\frac{x_{S_{n}\overline{s}_{L}d_{R}}^{2}}{m_{S_{n}}^{2}}-\frac{x_{A_{n} \overline{s}_{L}d_{R}}^{2}}{m_{A_{n}}^{2}}\right),\] (41) \[\kappa_{3}^{(K)} = \frac{x_{h\overline{s}_{R}d_{L}}x_{h\overline{s}_{L}d_{R}}}{m_{h }^{2}}+\sum_{n=1}^{3}\left(\frac{x_{S_{n}\overline{s}_{R}d_{L}}x_{S_{n} \overline{s}_{L}d_{R}}}{m_{S_{n}}^{2}}-\frac{x_{A_{n}\overline{s}_{R}d_{L}}x_ 
{A_{n}\overline{s}_{L}d_{R}}}{m_{A_{n}}^{2}}\right), \tag{42}\] \[\kappa_{1}^{(B_{d})} = \frac{x_{h\overline{d}_{R}b_{L}}^{2}}{m_{h}^{2}}+\sum_{n=1}^{3} \left(\frac{x_{H_{n}\overline{d}_{R}b_{L}}^{2}}{m_{S_{n}}^{2}}-\frac{x_{A_{n} \overline{d}_{R}b_{L}}^{2}}{m_{A_{n}}^{2}}\right), \tag{43}\] \[\kappa_{2}^{(B_{d})} = \frac{x_{h\overline{d}_{L}b_{R}}^{2}}{m_{h}^{2}}+\sum_{n=1}^{3} \left(\frac{x_{S_{n}\overline{d}_{L}b_{R}}^{2}}{m_{S_{n}}^{2}}-\frac{x_{A_{n} \overline{d}_{L}b_{R}}^{2}}{m_{A_{n}}^{2}}\right),\] (44) \[\kappa_{3}^{(B_{d})} = \frac{x_{h\overline{d}_{R}b_{L}}^{2}x_{h\overline{d}_{L}b_{R}}}{m _{h}^{2}}+\sum_{n=1}^{3}\left(\frac{x_{S_{n}\overline{s}_{L}\overline{d}_{R}b_ {L}}^{2}x_{S_{n}\overline{d}_{L}b_{R}}}{m_{S_{n}}^{2}}-\frac{x_{A_{n}\overline{ d}_{R}b_{L}}^{2}x_{A_{n}\overline{d}_{L}b_{R}}}{m_{A_{n}}^{2}}\right), \tag{45}\] \[\kappa_{1}^{(B_{s})} = \frac{x_{h\overline{s}_{R}b_{L}}^{2}}{m_{h}^{2}}+\sum_{n=1}^{3} \left(\frac{x_{S_{n}\overline{s}_{R}b_{L}}^{2}}{m_{S_{n}}^{2}}-\frac{x_{A_{n} \overline{s}_{R}b_{L}}^{2}}{m_{A_{n}}^{2}}\right), \tag{46}\] \[\kappa_{2}^{(B_{s})} = \frac{x_{h\overline{s}_{L}b_{R}}^{2}}{m_{h}^{2}}+\sum_{n=1}^{3} \left(\frac{x_{S_{n}\overline{s}_{L}b_{R}}^{2}}{m_{S_{n}}^{2}}-\frac{x_{A_{n} \overline{s}_{L}b_{R}}^{2}}{m_{A_{n}}^{2}}\right),\] (47) \[\kappa_{3}^{(B_{s})} = \frac{x_{h\overline{s}_{R}b_{L}}x_{h\overline{s}_{L}b_{R}}}{m_{h }^{2}}+\sum_{n=1}^{3}\left(\frac{x_{S_{n}\overline{s}_{R}b_{L}}^{2}x_{S_{n} \overline{s}_{L}b_{R}}}{m_{S_{n}}^{2}}-\frac{x_{A_{n}\overline{s}_{R}b_{L}}x_ {A_{n}\overline{s}_{L}b_{R}}}{m_{A_{n}}^{2}}\right), \tag{48}\] The \(K-\bar{K}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson mass splittings receive contributions due to Standard Model (SM) interactions as well as contributions arising from new physics (NP). These meson mass splittings are given by \[\Delta m_{K}=\Delta m_{K}^{(\rm SM)}+\Delta m_{K}^{(\rm NP)},\qquad\quad \Delta m_{B_{d}}=\Delta m_{B_{d}}^{(\rm SM)}+\Delta m_{B_{d}}^{(\rm NP)},\qquad \quad\Delta m_{B_{s}}=\Delta m_{B_{s}}^{(\rm SM)}+\Delta m_{B_{s}}^{(\rm NP)}, \tag{49}\] where \(\Delta m_{K}^{(\rm SM)}\), \(\Delta m_{B_{d}}^{(SM)}\) and \(\Delta m_{B_{s}}^{(\rm SM)}\) are the SM contributions, while \(\Delta m_{K}^{(\rm NP)}\), \(\Delta m_{B_{d}}^{(\rm NP)}\) and \(\Delta m_{B_{s}}^{(\rm NP)}\) are the contributions arising from tree level flavor changing neutral scalar interactions. The new physics contributions for the \(K-\bar{K}\), \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) meson oscillations obtained in our model take the form \[\Delta m_{K}^{(\rm NP)}=\frac{8}{3}f_{K}^{2}\eta_{K}B_{K}m_{K} \left[r_{2}^{(K)}\kappa_{3}^{(K)}+r_{1}^{(K)}\left(\kappa_{1}^{(K)}+\kappa_{2}^{(K) }\right)\right], \tag{50}\] \[\Delta m_{B_{d}}^{(\rm NP)}=\frac{8}{3}f_{B_{d}}^{2}\eta_{B_{d}}B _{B_{d}}m_{B_{d}}\left[r_{2}^{(B_{d})}\kappa_{3}^{(B_{d})}+r_{1}^{(B_{d})}\left( \kappa_{1}^{(B_{d})}+\kappa_{2}^{(B_{d})}\right)\right], \tag{51}\] \[\Delta m_{B_{s}}^{(\rm NP)}=\frac{8}{3}f_{B_{s}}^{2}\eta_{B_{s} }B_{B_{s}}m_{B_{s}}\left[r_{2}^{(B_{s})}\kappa_{3}^{(B_{s})}+r_{1}^{(B_{s})} \left(\kappa_{1}^{(B_{s})}+\kappa_{2}^{(B_{s})}\right)\right]. 
\tag{52}\] Using the following numerical values of the meson parameters [44; 45; 46; 47; 48; 49; 50]: \[\left(\Delta m_{K}\right)_{\rm exp} = \left(3.484\pm 0.006\right)\times 10^{-12}\,{\rm MeV},\qquad \qquad\left(\Delta m_{K}\right)_{\rm SM}=3.483\times 10^{-12}\,{\rm MeV}\,,\] \[f_{K} = 155.7\,{\rm MeV}\,,\qquad\qquad B_{K}=0.85,\qquad\qquad\eta_{K} =0.57\,,\] \[r_{1}^{(K)} = -9.3,\qquad\qquad r_{2}^{(K)}=30.6,\qquad\qquad m_{K}=(497.611\pm 0.013)\,\,{\rm MeV}, \tag{53}\] \[\left(\Delta m_{B_{d}}\right)_{\rm exp} = \left(3.334\pm 0.013\right)\times 10^{-10}\,{\rm MeV},\qquad \qquad\left(\Delta m_{B_{d}}\right)_{\rm SM}=\left(3.653\pm 0.037\pm 0.019\right) \times 10^{-10}\,{\rm MeV}\,,\] \[f_{B_{d}} = 188\,{\rm MeV},\qquad\qquad B_{B_{d}}=1.26,\qquad\qquad\eta_{B_ {d}}=0.55\,,\] \[r_{1}^{(B_{d})} = -0.52,\qquad\qquad r_{2}^{(B_{d})}=0.88,\qquad\qquad m_{B_{d}}=(5 279.65\pm 0.12)\,\,{\rm MeV}, \tag{54}\] \[\left(\Delta m_{B_{s}}\right)_{\rm exp} = \left(1.1683\pm 0.0013\right)\times 10^{-8}\,{\rm MeV},\qquad \qquad\left(\Delta m_{B_{s}}\right)_{\rm SM}=\left(1.1577\pm 0.022\pm 0.051 \right)\times 10^{-8}\,{\rm MeV}\,,\] \[f_{B_{s}} = 225\,{\rm MeV},\qquad\qquad B_{B_{s}}=1.33,\qquad\qquad\eta_{B_ {s}}=0.55\,,\] \[r_{1}^{(B_{s})} = -0.52,\qquad\qquad r_{2}^{(B_{s})}=0.88,\qquad\qquad m_{B_{s}}=(5 366.9\pm 0.12)\,\,{\rm MeV}. \tag{55}\] Figure 6 displays the correlation between the \(\Delta m_{K}\) mass splitting and the masses \(m_{S}\) and \(m_{A}\) of the lightest non SM CP-even and CP-odd scalars. In our numerical analysis, we have considered the neutral CP even and CP odd scalar masses in the ranges described in the benchmark scenario chosen in section III. Furthermore, for the sake of simplicity, we have set the couplings of the flavor changing neutral Yukawa interactions that produce the \(K^{0}-\bar{K}^{0}\) oscillations to be equal to \(10^{-6}\). As indicated in Figure 6, our model can successfully accommodate the experimental constraints arising from \(K^{0}-\bar{K}^{0}\) meson oscillations. We have numerically checked that the obtained values for the \(\Delta m_{B_{d}}\) and \(\Delta m_{B_{s}}\) mass splittings are also consistent with the experimental data on meson oscillations for flavor violating Yukawa couplings equal to \(10^{-4}\) and \(5\times 10^{-4}\) for the \(B_{d}^{0}-\bar{B}_{d}^{0}\) and \(B_{s}^{0}-\bar{B}_{s}^{0}\) mixings, respectively. ## V Conclusions We have constructed an extended 2HDM theory with a spontaneously broken \(U(1)_{X}\) global symmetry, where the scalar content has been enlarged by the inclusion of a \(SU(2)_{L}\) scalar triplet and several electrically neutral, charged and doubly charged scalar singlets, whereas the fermion sector is augmented by adding \(SU(2)_{L}\) doublet and singlet charged vector like fermions. The extended particle content allows the implementation of an extended seesaw mechanism that generates the first and second generation of the SM charged fermion masses. In our proposed theory, one \(SU(2)\) scalar doublet does acquire a vacuum expectation value (VEV) at the electroweak symmetry breaking scale thus generating the top quark mass, whereas the other scalar doublet gets a VEV of few GeVs thus providing the bottom quark Figure 6: Correlation between the \(\Delta m_{K}\) mass splitting and the masses \(m_{S}\) ((left panel)) and \(m_{A}\) (right panel) of the lightest non SM CP-even and CP-odd scalars. and tau lepton masses. 
In our setup, the tiny masses of the light active neutrinos are produced by a two loop level Zee-Babu mechanism mediated by electrically charged and doubly charged scalars as well as by SM charged leptons. The model under consideration is consistent with the current pattern of SM fermion masses and mixings, with the muon and electron anomalous magnetic moments, and successfully accommodates the constraints arising from charged lepton flavor violation and meson oscillations. We have also shown that the rate for the charged lepton flavor violating \(\mu\to e\gamma\) decay can reach values within the future experimental sensitivity, thus making the model under consideration testable by the forthcoming experiments. ###### Acknowledgements. AECH and IS are supported by ANID-Chile FONDECYT 1210378, 1190845, ANID PIA/APOYO AFB220004, and ANID Programa Milenio code ICN2019_044. The work of DR and DZ is supported by Sostenibilidad UdeA, UdeA/CODI Grant 2020-33177, and Micinciacias Grants CD 82315 CT ICETEX 2021-1080 and 80740-492-2021. AECH thanks Universidad de Antioquia for hospitality where this work was started.
2305.10762
The evolution of k-shell in syndication networks reveals financial performance of venture capital institutions
Venture capital (VC) is a relatively newly emergent industry that is still subject to large uncertainties in China. Therefore, building a robust social network with other VC institutions is a good way to share information, various resources, and benefit from skill and knowledge complementarity to against risks. Strong evidences indicate that better networked VC institutions are of a better financial performance, however, most of previous works overlook the evolution of VC institutions and only focus on some simple topology indicators of the static syndication network, which also neglects higher-order network structure and cannot give a comprehensive evaluation. In this paper, based on VC investment records in the Chinese market, we construct temporal syndication networks between VC institutions year by year. As k-shell decomposition considers higher-order connection patterns, we employ k-shell as an evaluation of the influence of VC institutions in syndication networks. By clustering time series of k-shell values, the VC institutions in China fall into five groups that are quite different from each other on financial performances and investment behaviors. This, in turn, proves the power of our method that only based on proper sequential network properties, we can reveal their financial investment performance. Compared to other network centrality measurements, k-shell is a better indicator that is indicated by a smaller intra-group distance and a larger inter-group distance.
Ruiqi Li, Jing Liang, Cheng Cheng, Xiaoyan Zhang, Longfeng Zhao, Chen Zhao, H. Eugene Stanley
2023-05-18T07:06:49Z
http://arxiv.org/abs/2305.10762v1
The evolution of k-shell in syndication networks reveals financial performance of venture capital institutions ###### Abstract Venture capital (VC) is a relatively newly emergent industry that is still subject to large uncertainties in China. Therefore, building a robust social network with other VC institutions is a good way to share information and various resources, and to benefit from skill and knowledge complementarity to guard against risks. Strong evidence indicates that better networked VC institutions have better financial performance; however, most previous works overlook the evolution of VC institutions and only focus on some simple topology indicators of the static syndication network, which also neglects higher-order network structure and cannot give a comprehensive evaluation. In this paper, based on VC investment records in the Chinese market, we construct temporal syndication networks between VC institutions year by year. As k-shell decomposition considers higher-order connection patterns, we employ k-shell as an evaluation of the influence of VC institutions in syndication networks. By clustering time series of k-shell values, the VC institutions in China fall into five groups that are quite different from each other in financial performance and investment behavior. This, in turn, proves the power of our method: based only on proper sequential network properties, we can reveal their financial investment performance. Compared to other network centrality measurements, k-shell is a better indicator, as indicated by a smaller intra-group distance and a larger inter-group distance. ## Introduction Since the origin of the venture capital (VC) industry after World War II, there have already been several boom-and-bust cycles in the Western world [1], and the institutionalization of the VC industry is commonly dated back to three events in 1978 and 1980 [2]. By contrast, the history of Chinese VC institutions, as well as state-owned ones, only dates back to 1985 [3, 4]. Venture capital is still a relatively newly emergent industry in China, in which government policies keep changing, the governance structure is immature, and information asymmetry always bothers investors [5, 6]. A few years ago, the Chinese State Council publicly called for more government financing, especially from commercial banks, in venture capital to get the state to take part in the nation's technology boom [6]. All the factors mentioned above make the Chinese VC investment environment highly uncertain and often render investors' short-term rational calculations futile [7]. Therefore, building a robust social network with other VC institutions is a good way to access information [8, 9] and other VC institutions' deal flows on a reciprocal basis [10], to share resources [11, 12, 13], and to benefit from skill and knowledge complementarity [14, 15, 16] and diversity [17, 10, 18] to guard against uncertainties [8], free riding, and opportunistic behaviors [19, 20, 21, 22], improve screening [23], and gain reputation [24, 25, 26]. In China, influenced by the phenomenon of "_guanxi_" [27, 26], which values long-term social relationships and long-term financial returns, the networking behavior of Chinese VC institutions is quite ubiquitous and profound (see Fig. 1b,c). Chinese VC institutions tend to form "clan-like groups" [28, 7, 29], which are different from western "club-like groups" [28, 30], and apply different rules of social exchange for different types of _guanxi_ [29]. 
With the formation of these kinds of communities [31], an order emerges: there will be "big brothers" [32, 7] that are usually top-tier VC institutions in the core of the network with high reputation and great influence, and some "rookies" at the fringe of the network [33, 7]. Big brothers also tend to have better deal flows, better access to information [2], and stronger bargaining power over the portfolio companies they invest in [34]. For example, some evidence showed that offers made by VC institutions with a high reputation were three times more likely to be accepted and at a 10-14% discount [34]. If a young VC institution can hop onto an eventually successful deal led by a marquee firm (i.e., a big brother), it can gain some of the marquee firm's luster and become visible to the public [23, 25]. Sometimes, for new entrants, the existing densely connected syndication network (better termed cliques) formed by incumbent VC institutions can be a potential barrier to entering the market [24]. Generally, a more densely networked market is harder to enter for a new entrant [24]. Establishing connections with incumbents to access their local information, expertise, or contacts is generally a good way to get over such a barrier, yet the result still depends on the strategic reactions of incumbents, which balance potential gains (e.g., access to the home market of the new entrant [24] or pooling capital for risky tryouts [7]) against pressure from their peers, who fear gradually losing bargaining power to competition if outsiders are introduced into their local market [24]. VC institutions tend to routinely cooperate with a small set of VC institutions, which also indicates that syndication relationships are relatively exclusive and stable [2, 7, 24]. So the position, and the evolution of the position (i.e., another type of growth trajectory), of a VC institution in the syndication network is critical for its investment activities and consequently its financial performance [2, 24]. Thus it is crucial to identify the position and evaluate the influence of VC institutions in the syndication network with proper network indicators. The syndication network of VC institutions has been studied for decades; however, most previous works tend to use some simple centrality indicators of the topology of a static network (such as degree, in-degree, out-degree, betweenness, etc.) as a proxy for the influence of a VC institution [2, 10, 21, 24, 35, 36]. These indicators cannot give a very proper evaluation of the influence of VC institutions, because not only how many people you know but also whom you know [2] and how your friends connect with others, which is better described by higher-order structure [37, 38], matter a lot. More importantly, as the VC syndication network is growing and evolving, static descriptions based on snapshots lose important information about how a VC institution evolves and how this evolution affects its investment performance over a longer period. Moreover, previous works usually construct a directed network between VC institutions, where, for each joint-investment, the lead investor (usually identified as the one with the largest investment amount) points to the other co-investors (there are no links between those non-dominant co-investors) [2, 24, 35].
Constructing such directed networks neglects the reciprocal nature of the mutual cooperation relationship and loses information about the other non-lead co-investors in the joint-investment. Besides, especially in the Chinese VC industry, sometimes the lead investor cannot be inferred from the investment amount [7, 33]. When a case is too risky, the average amount invested is generally less [8], and the lead investor may ask other followers to take a larger share; though aware of the potential risk, followers may still take the deal in exchange for establishing a better _guanxi_ with big brothers and potential long-term financial returns [7]. In addition, such financial data may not always be available to researchers, especially for some fast-growing markets in developing countries, which is also the case in other fields [39]. By contrast, the syndication information (i.e., whether two VC institutions have a joint-investment or not) is easier to obtain and less noisy. In this work, with the SiMuTong dataset that records venture capital investment events in the Chinese VC market [4], we construct syndication networks between VC institutions year by year. We employ k-shell decomposition [37, 40, 41] as an evaluation of the influence of VC institutions in syndication networks, and validate its effectiveness by comparing it with other popular centrality measurements. From the evolution of their k-shell values, VC institutions in the Chinese market are classified into five groups that are quite different from each other in financial investment performance. This in turn proves the power of our method: based only on proper sequential network properties, we can reveal their financial performance. The results of this work can provide suggestions for limited partners on their investments in VC institutions, and for startup companies on approaching VC institutions. ## Results ### A glance at the VC industry in China. Since the origins of the modern private equity industry (VC is a type of private equity) in 1946, there have been several boom-and-bust cycles worldwide [1], while, in China, the VC industry is still a newly emergent industry with a much shorter history. In 1985, the first VC institution in China (the China New-tech Venture Capital Corporation, CNVCC) was established, which was fully owned by the Ministry of Science and Technology [3]. In 1992, IDG, the first foreign VC institution in China, entered the Chinese VC market by developing a good relationship with local governments [3]. At the very beginning, with strong information asymmetry and a lack of relevant laws and regulations [3], there were only a few VC institutions (see Fig. 1a). The number of VC institutions with investment activities in each year grew substantially after 2013 (see the red line in Fig. 1a). The number of VC investments and the number of start-up companies that got investments from VC institutions increased with a similar trend and roughly manifest five-year boom-and-bust cycles (see green and blue lines in Fig. 1a; for clarity, in this work, we only use the term "company" to refer to the start-up enterprise that got invested in by a VC institution). Since 2016, the number of VC institutions that made investments has remained at a high level, while the number of startup companies is decreasing, which may resemble the situation in the United States in the years after 1985, when too much money was chasing too few deals [8].
With this rapid growth, it is urgent to gain a better understanding of the Chinese VC market. Networking between VC institutions via joint-investments is ubiquitous and profound. In the Chinese VC market, the majority of VC institutions are ones that have made joint-investments with others (see Fig. 1b); only around one-fourth of VC institutions had made investments solely, which are referred to as "lone wolves", until 2013 (see inset in Fig. 1b), and around one-fifth of them until 2020 (see Fig. 1b), and the average number of investments made by them is quite small (see the blue dashed vertical lines in Fig. 1b). The number of startup companies invested in by "lone wolves" only takes up a relatively small fraction of the whole market (roughly 15%); by comparison, the vast majority of investments are made by the "co-investor" VC institutions. Furthermore, for "co-investor" VC institutions, we find that joint-investments take up a significant fraction (the average is around 67%) of their investment activities, especially for big VC institutions (see Fig. 1c). It is almost impossible to observe a big VC institution with a small number of joint-investments. Such a trend is also stable over time: the situation in 2013 (see the inset in Fig. 1c) is qualitatively the same as the case in 2020 (see Fig. 1c). Whether growing big requires extensive cooperation with others, or extensive cooperation makes a VC institution grow big, is worth closer future study. In this work, we only focus on VC institutions that have ever made joint-investments with others. As VC investments involve long-term engagement and possible further capital investment, we also analyze the distribution of durations between consecutive investment stages [8, 16] (see Fig. 1d). After making investments in the seed stage of a startup, on average, it takes roughly 18.6 months for the startup to get to the initial stage. From the initial stage to the expansion stage, the average duration is 33.4 months, and 65.1 months to get to the mature stage. The tail part of the distribution is wider for the latter two cases. However, if a startup receives no further investment, the distribution of the duration to exit is quite similar and manifests a long tail, which can be well approximated by an exponential distribution \(P\propto e^{-3.2\Delta month}\). This indicates that with a longer time, the probability of exit drops rapidly (see Fig. 1e). More intriguingly, for both successful (IPO or M&A) and unsuccessful (liquidation, buyback, etc.) types of exit, the distributions can all be well approximated by an exponential distribution (see Fig. 8 in Appendix), whose coefficients are quite close to the ensemble one with all types of exit (see Fig. 1e). The fraction of startups going to the next stage or exiting in different ways is shown in Fig. 1f. Roughly one-third of startups that received seed-stage or initial-stage investment can go to the next stage, and this probability drops to roughly one-tenth for startups in the expansion stage. The fractions of IPO and M&A both increase for startups at the expansion or mature stages; meanwhile, the fraction of unsuccessful exits also increases. Yet, a cruel fact is that the majority of startups are ones with no further updates on their status, which may receive no further investments or may no longer exist: roughly 60% for the seed and initial stages, increasing to more than 70% for the expansion and mature stages (see Fig. 1f). This also indicates that the risk of failure does not necessarily decrease at the expansion and mature stages. Figure 1: (a) The number of VC investments, the number of startup companies that got funded, and the number of VC institutions in China.
(b) The complementary cumulative distribution function (CCDF) of the number of invested startup companies by VC institutions that have ("co-investor") and have not ("lone wolf") ever made a joint-investment with others, respectively. (c) The number of joint-investments versus the number of all investments of every co-investor. Each dot represents a VC institution. (d) The distribution of waiting durations between consecutive VC investments at different stages. In the legend, \(n\) denotes the number of corresponding cases. (e) The distribution of the duration between the final stage of VC investment and the exit, which includes all types, both successful (IPO or M&A) and unsuccessful ones (e.g., liquidation, buybacks). (f) The fraction of startup companies that receive the next round of financing (denoted as "Next"), exit through IPO or M&A or other less successful ways ("Others"), and also the ones that have no further updates on their status ("Waiting"). ### Constructing temporal syndication networks. Until the end of 2013, there were more than 33,000 VC investment activities that span most industries and regions in China. Each investment record details the name of the investor (usually a VC institution, sometimes an individual investor or angel), the start-up company that got invested in, and basic information about the start-up, including its industry category and headquarters location, the date of investment, the investment amount (we converted the foreign currency into RMB), and the investment stage (seed, initial, expansion, or mature). Since we only focus on VC institutions, we eliminate those investments made by individuals and undisclosed investors. After filtering such records, there are more than 30,700 investment records left, more than half of which are co-investments. Based on the investment activity records, we identify joint-investments between VC institutions as the ones made in the same start-up company on the same date. In the syndication network, nodes represent VC institutions, and an edge between two VC institutions indicates that the two had a joint-investment. We do not regard investments at different rounds as joint-investments, since investments across different rounds are not necessarily direct collaboration, and it is more uncertain to infer whether the two institutions have social connections. For example, some VC institutions that make investments in earlier stages may exit in later rounds when new investors buy the shares of existing investors. Although all the VC institutions that made investments in the same company might have a position on the startup company's board of directors, the nature of the relations between them is also harder to identify. In each year, we construct an accumulative syndication network, i.e., if two VC institutions ever had a joint-investment in or before that year, then the two will have a link in the network. The reason for constructing an accumulative network is that a previous joint-investment and collaboration relationship might last for a much longer time [42], and a successful collaboration might further strengthen the relationship [33, 43]. Constructing a dynamic network that only comprises the joint-investment relations of a certain year would lose such long-term information, and setting a time window can also be arbitrary. To avoid further bias, we do not make such a setting.
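The following is a minimal sketch (not the authors' released code) of this accumulative, year-by-year construction using networkx; the DataFrame and its column names (`vc_name`, `company_id`, `invest_date`) are illustrative assumptions rather than the actual field names of the SiMuTong records.

```python
import itertools
import networkx as nx
import pandas as pd

def build_yearly_networks(records: pd.DataFrame, first_year=1990, last_year=2013):
    """Return {year: accumulative syndication network up to and including that year}."""
    records = records.copy()
    records["year"] = pd.to_datetime(records["invest_date"]).dt.year
    networks, G = {}, nx.Graph()
    for year in range(first_year, last_year + 1):
        # A joint-investment: two or more VC institutions investing in the same
        # startup company on the same date.
        events = records[records["year"] == year].groupby(["company_id", "invest_date"])
        for _, event in events:
            for u, v in itertools.combinations(sorted(set(event["vc_name"])), 2):
                weight = G.get_edge_data(u, v, {"weight": 0})["weight"]
                G.add_edge(u, v, weight=weight + 1)  # accumulative: links persist across years
        networks[year] = G.copy()
    return networks
```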
To avoid further bias, we do not make such a setting. The interplay between the memory effect and the forgetting effect regarding collaboration relation is an important topic that is worth future investigation. In such a way, we get twenty-four networks from the year 1990 to 2013 (see Fig. 2). Before 2014, there were around 5,543 VC institutions in China that have at least one investment record, and 4,080 of them have made at least one joint-investment with one another, with only 644 of which were not in the giant component of the network (see Fig.2). The reason why we focus on syndication networks until the end of 2013 is due to the fact that for better evaluating the investment performance, we need several more years of data on exit events (see Fig. 1e). In this study, we use exit events data before the year 2021, with seven more years, most of the startups that got invested before 2014 would exist. ### Evolution of k-shell of VC institutions. Previous work [2] indicates that the centrality of nodes, i.e., the influence (or importance) of a node in a network, is related to their investment performances, yet, the growth trajectory of a VC institution is largely unknown. Most previous works use degree, betweenness, or eigenvector centrality as proxies of the prominence of VC institutions, and only focus on static networks at a snapshot. With recent development in network science, k-shell (also named k-core) has been proven to be a better indicator on measuring the influence of a node [37, 40], and it naturally reveals a core-periphery structure (see Appendix for more details of k-shell decomposition). Generally, the nodes in the central core will form a complete graph, in which every node is connected with others [37, 44]. We discover that along the increase of the whole network, the number of shells in the whole network, which represents a sort of hierarchical structure, becomes larger over past years (see the blue dashed line in Fig. 3a). The number of nodes in the most central core (termed the nuclei) increases over time as well but has some fluctuations (see the green line in Fig. 3a). By applying k-shell decomposition on all twenty-four networks from the year 1990 to 2013, for each VC institution, we can get the evolution of its k-shell value from the first year when it entered the Chinese market to the year 2013. Such a time series encodes great information regarding the growth trajectory of VC institutions in the syndication network, and it can be related to the investment performance of VC institutions [2]. ### Classification of VC institutions. Due to the fact that the time-series of k-shell values are length variant (e.g., older VC institutions have a longer time series since their entry year, while younger firms have a shorter sequence), we extract four features of each sequence to embed time series in a lower dimensional vector space - the length of the sequence (_years_), the k-shell value in the year 2013 (\(ks_{2013}\)), the difference between \(ks_{2013}\) and the initial k-shell value in the entry year (defined as \(\delta_{\text{ks}}=ks_{2013}-ks_{entry}\)), and also the area below the curve of the time-series (i.e., \(area=\sum_{j=entry}^{2013}ks_{j}\)). In order to make the data of different measurements comparable in magnitude, we scale the data first (see Methods). Then through the hierarchical clustering algorithm, we reveal that there are five groups of VC institutions in the Chinese VC industry (see Fig. 4). 
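The following is a compact sketch, under stated assumptions, of the four-feature embedding, the z-score scaling of Eq. (2), and the clustering step; `ks_series` is a hypothetical mapping from each VC institution to its yearly k-shell values, and Ward linkage is one common choice rather than necessarily the exact linkage used in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical input: {institution: {year: k-shell value}}, assumed to contain one
# value per year since the entry year; the toy entries below are made up.
ks_series = {
    "VC_A": {2006: 1, 2009: 4, 2013: 9},
    "VC_B": {2011: 1, 2013: 2},
    "VC_C": {1999: 1, 2005: 2, 2010: 2, 2013: 3},
}

def ks_features(series):
    rows = []
    for ks_by_year in series.values():
        years = sorted(ks_by_year)
        ks_entry, ks_last = ks_by_year[years[0]], ks_by_year[years[-1]]
        rows.append([
            years[-1] - years[0] + 1,   # years since entry
            ks_last,                    # k-shell value in the final year (2013)
            ks_last - ks_entry,         # delta_ks
            sum(ks_by_year.values()),   # area under the k-shell time series
        ])
    return np.asarray(rows, dtype=float)

X = ks_features(ks_series)
X = (X - X.mean(axis=0)) / X.std(axis=0)          # z-score scaling (Eq. 2 in Methods)
Z = linkage(X, method="ward")                     # agglomerative (hierarchical) clustering
labels = fcluster(Z, t=2, criterion="maxclust")   # the paper cuts the tree into five groups
```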
And surprisingly, although we only use proper network indicators to make the classification and not using financial-related data at all, these groups have quite different investment performances (see Table 1). Notes: \(N\) is the size of the group; \(\langle\cdot\rangle\) indicates the average value of entities in each group, for example, \(\langle years\rangle\) is, on average, the number of years of VC institutions in the group since their entry of Chinese VC market; \(\langle ks_{2013}\rangle\) is the final average k-shell value of VC institutions in 2013; \(\langle\delta_{ks}\rangle\) is the difference between \(ks_{2013}\) and k-shell value \(ks_{entry}\) in the entry year, which is related to their growth trajectories; \(\langle Amt_{tot}\rangle\) is the average total investment amount; \(\langle Amt_{IPO}\rangle\) is the average investment amount for investments on startups that got listed; \(\langle\#R_{tot}\rangle\) is the average number of rounds of investments (for a certain company, VC institutions may invest for several rounds); \(\langle\#R_{IPO}\rangle\) is the average number of rounds of investments in the company which got listed; \(\langle\#C_{tot}\rangle\) is the average number of companies they had invested (i.e., the size of their portfolio); \(\langle HC_{IPO}\rangle\) is the number of companies they had invested that got listed; \(\langle IPO\rangle\) is the average IPO rate of VC institutions, which equals the number of companies got listed \(\#C_{IPO}\) divided by the number of all startups they ever invested \(\#C_{tot}\); \(HEI\) is the hawk-eye index proposed by us, which is defined in Eq. 1, and \(\langle HEI\rangle\) the average of \(HEI\) of VC institutions in each group. \begin{table} \begin{tabular}{l|c c c|c c c c c c c|c c} \hline \hline Description & \(N\) & \(\langle Years\rangle\) & \(\langle ks_{2013}\rangle\) & \(\langle\delta_{ks}\rangle\) & \(\langle Amt_{tot}\rangle\) & \(\langle Amt_{IPO}\rangle\) & \(\langle R_{tot}\rangle\) & \(\langle R_{IPO}\rangle\) & \(\langle HC_{IPO}\rangle\) & \(\langle HC_{tot}\rangle\) & \(\langle IPO\rangle\) & \(\langle HEI\rangle\) \\ \hline big brothers & 151 & 14.311 & 11.033 & 6.702 & 4442.503 & 1365.455 & 41.417 & 9.093 & 5.483 & 23.974 & 0.260 & 0.945 \\ u.f.m. & 372 & 11.828 & 3.446 & 1.543 & 875.065 & 256.443 & 9.266 & 1.755 & 1.234 & 6.694 & 0.185 & 0.596 \\ rookies & 2633 & 2.730 & 3.513 & 0.659 & 426.627 & 57.761 & 3.933 & 0.732 & 0.607 & 2.090 & 0.239 & 0.466 \\ outsiders & 532 & 6.318 & 2.395 & 0.682 & 391.468 & 43.112 & 3.671 & 0.774 & 0.615 & 2.667 & 0.228 & 0.469 \\ rising stars & 392 & 6.952 & 9.390 & 3.684 & 1714.302 & 429.966 & 15.656 & 3.332 & 2.393 & 9.954 & 0.258 & 0.706 \\ \hline \hline \end{tabular} \end{table} Table 1: Properties of VC institutions in different groups. Figure 2: The syndication network in the year 2013. The color of nodes corresponds to the classifications obtained from clustering time series of their k-shell values over past years. Blue, orange, green, purple, and red corresponds to big brothers, rising stars, unsuccessful first movers, outsiders, and rookies, respectively. The size of nodes is also in line with classification labels. Ideally, to measure financial investment performance, we should use return on investments (ROI) for each deal, or at least the internal rate of return (IRR) of each fund. However, such data are usually not accessible to researchers as VC funds generally only disclose it to their limited partners who invested in the fund. 
Some evidence shows that the average VC fund writes off 75.3% of its investments [45], which implies that VC funds earn their capital gains from quite a small subset of their portfolio companies, especially from a few successful exit events via an IPO [2]. So we employ the most commonly used measurements of investment performance and proxies of a VC institution's expertise: the total number of startups in its portfolio (i.e., how many companies it has ever invested in, denoted as \(\langle\#C_{tot}\rangle\) in Table 1), the IPO rate (the fraction of companies in a VC institution's portfolio that got listed, denoted as \(\langle ipo\rangle\)), the total amount of capital invested by the VC institution (\(\langle Amt_{tot}\rangle\)), and the total number of investment rounds (\(\langle\#R_{tot}\rangle\)) [25, 46, 47, 48, 49, 35]. We also added three more similar IPO-related indicators as measurements of investment performance: the total amount of capital invested by the VC institution in the companies that got listed (\(\langle Amt_{IPO}\rangle\)), the total number of companies in its portfolio that got listed (\(\langle\#C_{IPO}\rangle\)), and the total number of investment rounds in the companies that got listed (\(\langle\#R_{IPO}\rangle\)). Generally, with more investments in a startup that eventually got listed, the VC institution would gain a more favorable return. So, we also introduce a new measurement named the "Hawk-Eye Index" (HEI) based on these variables to depict their foresight in investment, which is defined as \[HEI=\frac{Amt_{IPO}/Amt_{tot}}{\#R_{IPO}/\#R_{tot}}. \tag{1}\] The HEI is proportional to the ratio of the amount invested in the companies that got listed to the total investment amount, and inversely proportional to the ratio of the number of rounds of investment in the companies that got listed to the total number of rounds, which equals the total number of investments. It is worth noting that a VC institution might invest in several rounds in the same startup company, which is also a reflection of the confidence and judgment of a VC institution about the future development of the startup. If the HEI is high, it indicates that the VC institution invested more money in companies selected from a larger set of choices. A higher HEI indicates that the VC institution invests its capital more wisely and has better judgment in foreseeing the prospects of startup companies. As shown in Table 1, one group of VC institutions is the "Big Brothers", which invest in a lot of startups and invest large amounts of capital. They entered the Chinese market relatively early, but they are not necessarily the oldest VC institutions in China (the "unsuccessful first-movers", denoted as u.f.m. in Table 1, have a comparable entry time; also see Fig. 4a). Apart from having the highest IPO rate, these big brothers also have the highest HEI, which indicates that they invest a significant fraction of their capital in a few startups that got listed, out of a large set of choices. They also have the highest investment diversity across both industries (see Fig. 4c) and regions (see Fig. 10 in Appendix). Another group is the "unsuccessful first-movers" (u.f.m.), which are also pioneers in the Chinese VC industry with an early entry year among all five groups (see Fig. 4a); yet their centrality is quite low in the syndication network, and their investment performance is not good (see Table 1). Their average IPO rate is the lowest among all five groups, and their average HEI is not very high.
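As a small numerical illustration of the Hawk-Eye Index in Eq. (1), the snippet below uses made-up amounts and round counts; the variable names are ours and are not fields of the dataset.

```python
def hawk_eye_index(amt_ipo, amt_tot, rounds_ipo, rounds_tot):
    """HEI = (Amt_IPO / Amt_tot) / (#R_IPO / #R_tot), as defined in Eq. (1)."""
    if amt_tot == 0 or rounds_ipo == 0:
        return 0.0  # no investments, or no portfolio company got listed
    return (amt_ipo / amt_tot) / (rounds_ipo / rounds_tot)

# Illustrative example: 30% of the capital went into listed companies
# while those companies took 40% of the investment rounds.
print(hawk_eye_index(amt_ipo=300, amt_tot=1000, rounds_ipo=4, rounds_tot=10))  # 0.75
```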
We plot the average number of investments of each group in every year since 1995 (see Fig. 4b); it is clear that the u.f.m. remain at a low level of investment activity. We can clearly see that before 1998, the u.f.m. were roughly at a similar level with the big brothers; yet after the year 1998, these two groups diverged: big brothers go up and u.f.m. keep low, which is similar even to new entrants in recent years (see the red line in Fig. 4b). Our results indicate that older VC institutions are not necessarily successful, although they were indeed able to raise successive funds, which is also the case, in much the same manner, in the Western VC industry [25]. It is worth noting that we only use data before 2014 to make the classifications; the dashed lines in Fig. 4b are a further test of our results with recent data. Another group is the "rookies", which are new entrants at the fringe of the network and invest in a few startups with the smallest amount and the lowest HEI among all five groups. Rookies take up the majority among all VC institutions; they are also the youngest (see Fig. 4a) and may face fiercer competition with peers. They tend to form more cliques (i.e., complete subgraphs, in which nodes are connected to all other nodes) between themselves (see Fig. 4c). Another group is the "rising stars"; most of them are not pioneers like those big brothers or u.f.m., but they have the second-highest IPO rate, HEI, investment amount, and size of portfolio. Compared to the big brothers, which had a setback during the financial crisis around the year 2008 (see the blue line in Fig. 4b), rising stars seemingly spotted some opportunities and gained better development momentum since then. They tend to form many quite large cliques with each other (see Fig. 4c), which may indicate large-scale close collaboration and better information flow between them. Even when normalized by the size of the group (see inset in Fig. 4c), the size of their federation is still quite high, only slightly lower than that of the big brothers. Rising stars also have a pretty high diversity in invested industries (see Fig. 4c) and regions (see Fig. 10 in the Appendix). Apart from being the most influential entities in the network [37], recent advances also show that nodes in the most central core (i.e., with the maximum k-shell value) are critical for keeping a cooperative system [50] from collapse. The VC institutions that ever entered the nuclei (i.e., the most central core of the syndication network) are generally big brothers or rising stars. It is worth noting that since 2008, the VC institutions in the nuclei include some rising stars, which also reflects the rising trend of new forces and profound changes in the Chinese VC market. In addition, a few rookies also enter the nuclei by making broad connections with important players, and these rookies would very probably become key players in the future as well.

Figure 3: (a) The evolution of the number of shells (blue dashed line, which corresponds to the left y-axis) and the size of the nuclei (i.e., the most central core, green line, which corresponds to the right y-axis) of syndication networks. (b) The VC institutions that ever entered the nuclei from 1990 to 2013. Each column corresponds to a VC institution, and each row corresponds to a certain year. The dark blue square indicates that the corresponding VC institution was in the nuclei, and the light blue square indicates entering the Chinese VC market but not being in the nuclei that year.

Figure 4: (a) The distribution of the duration of practicing since their entry year. Big brothers and unsuccessful first-movers (u.f.m.) are of similar age; rising stars and outsiders are also of similar age. (b) The average number of investments, which includes both solo and joint-investments, made by VC institutions in each group. Data after 2013 are shown as dashed lines, which still have a clear separation between groups and further confirm the predictive power of the partition. In addition, there are two interesting bifurcation phenomena: big brothers and u.f.m. diverged after the year 1998; rising stars and outsiders further diverged after the year 2007. (c) The distribution of industry entropy of VC institutions in each group. Big brothers and rising stars invest in more types of industry and have a larger diversity. A similar trend holds for the region entropy (see Fig. 10 in the Appendix). (d) The distribution of cliques formed by VC institutions within each group, where these cliques do not overlap with each other; (inset) the corresponding distribution normalized by the size of each group. (e) The cumulative probability distribution of the number of connections to big brothers of VC institutions in other groups. The average number is indicated by vertical dashed lines. For outsiders and rookies, the average value is smaller than 1 and is not shown in the figure. (f) The moving average of the syndication fraction with big brothers drops as the number of investments increases. The time window for the moving average is five.

The last group is the "outsiders"; they have a similar entry year to the rising stars, yet from the year 2007, the differences between these two groups become larger and larger in the number of investments (see Fig. 4b). Almost half of them do not have any connections with big brothers (see Fig. 4d), and their average number of connections with big brothers is only 0.61, which is similar to rookies, whose average number is 0.39. These VC institutions have the second-lowest average IPO rate and the second-smallest portfolio size, and after the year 2013, they continue to have a quite low level of investment (see the dashed purple line in Fig. 4a). Table 1 and Fig. 4e also indicate that the connections to big brothers might be a critical factor in their success. Rising stars have the highest number of connections with big brothers, and then the u.f.m., which might be due to the fact that these two groups are of a similar age, while outsiders and rookies generally have few connections with big brothers. Moreover, with the increase in the number of investments, the syndication probability with big brothers decreases, which indicates that with more experience accumulated, they may rely less on big brothers. Although previous works show that better-networked VC institutions experience significantly better investment performance [2, 35], our work may serve as a preliminary step toward a possible theory of the success or growth dynamics of small VC institutions. In addition, from Fig. 4b, we can clearly see two peaks, in 2007 and 2011, which coincide with the hot secondary stock market and rapid appreciation of the RMB in 2007 and the economic bubbles of 2011 in China. As shown in Fig. 4b, we can observe that after the year 2011, both big brothers and rising stars declined in the number of investments.
Almost all of these groups hopped on the opportunity in 2015, which is attributed to government support (e.g., the "Internet Plus" and "Made in China 2025" initiatives), tech-sector growth, and rapid economic growth. Apart from making joint-investments, which are more public, having common shareholders is another important type of connection between VC institutions [8]. We construct a common-shareholder network by querying the board members of VC institutions ([https://www.qcc.com/](https://www.qcc.com/)), and connect two VC institutions if they have at least one common shareholder (see Fig. 5). Then we calculate the edge density between VC institutions from different groups (see Fig. 6a), which equals the number of edges between VC institutions from two different groups divided by the product of the sizes of the two groups. For example, the edge density between big brothers and rising stars equals the number of edges that connect a big brother on one end and a rising star on the other end, divided by the product of the number of big brothers and the number of rising stars. It is intuitive to assume that shareholders of big VC institutions have a higher probability of being on the boards of other VC institutions; however, the empirical results indicate that shareholders of big brothers only have some connections with u.f.m. and then big brothers, and quite few connections with other types of VC institutions. Since big brothers and u.f.m. have a similar entry year, it is not that surprising that they have more common shareholders. The most dominant connection is the one between u.f.m. and rising stars, which indicates that the board members of u.f.m. might just abandon previously unsuccessful institutions and focus on new VC institutions. Shareholders of rising stars also have a relatively high probability of being on the board of a rookie institution, which might also reflect their future strategies and bets on some new opportunities. Given that the size of the rookie group is quite large, having a relatively high edge density between rising stars and rookies is nontrivial. Outsiders generally have a lower edge density with other institutions. By contrast, the edge densities of the joint-investment relation between groups (see Fig. 6b) are quite different from those of the common-shareholder relationship (see Fig. 6a). For syndication, connections between big brothers have the highest density, and then the connections between rising stars, and between rising stars and big brothers, are also relatively high. Such a discovery also indicates a rich-club or elite-club phenomenon [33]. In addition, with all the investments of a VC institution, we can get its industry profile, i.e., a vector detailing the fraction of investments of the VC institution in each industry. Then we calculate the industry similarity between any two VC institutions via the cosine similarity \(S_{AB}=\sum_{i}^{m}A_{i}B_{i}/(\sqrt{\sum_{i}^{m}A_{i}^{2}}\sqrt{\sum_{i}^{m}B_{i}^{2}})\), where \(m\) is the total number of industry classifications, \(A_{i}\) is the fraction of investments of VC \(A\) in industry \(i\), and \(B_{i}\) is the counterpart for VC \(B\). We find that the similarity patterns are highly correlated with the syndication patterns but not with the common-shareholder patterns (see Fig. 6a-c). The region similarity is similar to the industry similarity but with smaller values (see Fig. 6d). The results in Fig. 6c-d show that successful VC institutions are more alike.
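The industry-profile cosine similarity above reduces to a few lines of numpy; the snippet below is a minimal sketch with made-up industry categories and counts, not the dataset's actual classification.

```python
import numpy as np

def industry_profile(counts: dict, industries: list) -> np.ndarray:
    """Fraction of a VC institution's investments falling into each industry."""
    v = np.array([counts.get(ind, 0) for ind in industries], dtype=float)
    return v / v.sum()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

industries = ["IT", "biotech", "consumer", "manufacturing"]   # illustrative categories
vc_a = industry_profile({"IT": 12, "biotech": 3, "consumer": 5}, industries)
vc_b = industry_profile({"IT": 8, "consumer": 9, "manufacturing": 2}, industries)
print(cosine_similarity(vc_a, vc_b))
```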
At last, after comparing the clustering results based on the evolution of other centrality measurements, including degree, nodal strength (i.e., the sum of the weights of all links attached to a node), eigenvector centrality, betweenness, \(h\)-index [51], weighted k-shell [52], the number of investments, and the number of joint-investments, we discover that k-shell is the most suitable centrality indicator, as it corresponds to a larger inter-group distance and a smaller intra-group distance than the other centrality indicators (see Fig. 7). ## Conclusions and Discussions In this paper, based on the VC investment records from 1990 to 2013, we construct twenty-four syndication networks between VC institutions year by year in the Chinese market. We show that, based on the evolution of the k-shell values of VC institutions, we can classify VC institutions into five groups. These five groups are quite different from each other in investment performance, which in turn proves the power of our method: based only on proper time series of topological features of VC institutions in syndication networks, we can reveal their financial investment performance. The results can provide references for limited partners when they decide which fund they should invest in. We also show that the classification performance based on the evolution of k-shell is better than that of other centrality measurements, including degree, node strength, betweenness, \(h\)-index, and eigenvector centrality.

Figure 5: The common-shareholder networks between VC institutions. The color code and size of nodes are consistent with the ones in Fig. 2, which correspond to different groups. The VC institutions that have no common shareholder with others are not shown in this figure.

Figure 6: The edge density of (a) the common-shareholder relation and (b) the joint-investment relation between VC institutions from different groups. (c) The industry similarity matrix between different groups. The industry similarity between VC \(A\) and \(B\) is calculated by the cosine similarity \(S_{AB}=\sum_{i}^{m}A_{i}B_{i}/(\sqrt{\sum_{i}^{m}A_{i}^{2}}\sqrt{\sum_{i}^{m}B_{i}^{2}})\), where \(m\) is the total number of industry types, \(A_{i}\) is the fraction of investments of VC \(A\) in industry \(i\), and \(B_{i}\) is the counterpart for VC \(B\). Similarly, (d) the region similarity can be defined, where \(m\) is the total number of cities. Each entry in (c) and (d) represents the group average, i.e., averaging over all VC pairs between two groups.

Our results may also serve as a preliminary step toward a possible theory about the success or growth dynamics of small VC institutions, such as what the impacts of connecting with a big brother are and how a small company evolves through its interactions with big brothers. We also find that there are some VC institutions that only make investments at specific stages (e.g., the mature or seed stage), which is another type of specialization and is affected by the syndication patterns [16] of VC institutions. In comparison, previous literature only focuses on industry specialization [53]. Syndication with a new partner would bring new information, expertise, and deal flow; with the classification of VC institutions presented in this work, more detailed dynamics can be analyzed. For example, it is worth investigating whether a rookie VC institution has a stronger tendency to syndicate with larger institutions when entering new stages or a new industry, and whether such a choice will affect its future development.
A successful collaboration might further strengthen the relationship and lead to new joint-investments [33, 43], but a relationship generally will not last forever, which is hinted at by the discovery in Fig. 4f; the interplay between the memory effect and the forgetting effect is an important topic that is worth future investigation. In addition, we only focus on the cooperation relationship between VC institutions, yet there can also be competitive relations between VC institutions [8, 53], which can be modeled by a multi-relational network [54], as the nature of the links strongly affects the interaction dynamics on networks [55] and thus might strongly affect their investment performance. There is also evidence showing that VC investments have impacts on urbanization [56, 57, 8]; the inflow of capital and the concentration of high-tech companies are important factors driving the urbanization process (e.g., Silicon Valley and Route 128 [58] in the United States) and should be incorporated into urban evolutionary models [59]. ## Methods ### Data We get access to detailed investment and exit records from the SiMuTong dataset [4]. The dataset we purchased is one of the most authoritative VC industry datasets in China. Each record in the investment dataset details the name of the investor (usually a VC institution, sometimes an individual investor or angel), the startup company that got invested in, and the company's basic information, which includes industry and location, the date of investment, the amount (we converted foreign currency into RMB), the share, and the stage. If two or more VC institutions jointly invest in the same startup company, this investment event will be shown as two or more records with the same date and the same basic company information. Until 2013, there were more than 33,000 investment records, 862 of which were made by individual investors, and around 1,400 of which have undisclosed investors. Since we only focus on VC institutions, we eliminate the investment records made by individuals and undisclosed investors. In the dataset, there are some records whose company name is unrevealed; we checked online manually to see whether they have since been revealed and, if so, updated the dataset. For all remaining records with the company name unrevealed, we use (industry, location, investment stage, and investment date) to identify a company, as it is quite unlikely that two different startup companies would have the same data on these four features. Thus, even though we do not know the company name, we can still construct the syndication network correctly. The exit dataset contains records of IPO (Initial Public Offering), M&A (Mergers and Acquisitions), equity transfer, buyback, liquidation, and discharge of claims, among which IPO is the best exit route with the highest return, attention, and reputation [23], followed by M&A; all other ways of exit are less successful.

Figure 7: The inter-group distance \(d_{inter}\) and intra-group distance \(d_{intra}\) for clustering results based on the evolution of different centrality measurements. The distance is measured in the metric space spanned by \(\langle IPO\rangle\) and \(\langle HEI\rangle\), which are ensemble indicators derived from the commonly used indicators in Table 1. A good classification should satisfy greater differences between groups (measured as the inter-distance \(d_{inter}\) between the centers of each group) and smaller distances within each group (measured as the intra-distance \(d_{intra}\) between the nodes in each group).
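For reference, the following is a minimal sketch of how the inter-group distance \(d_{inter}\) and intra-group distance \(d_{intra}\) of Fig. 7 can be computed in the plane spanned by \(\langle IPO\rangle\) and \(\langle HEI\rangle\); the input arrays are assumed to hold each institution's coordinates and group label, and the averaging choices are ours.

```python
import numpy as np

def inter_intra_distance(points: np.ndarray, labels: np.ndarray):
    """points: (n, 2) array of (<IPO>, <HEI>) coordinates; labels: (n,) group ids."""
    groups = np.unique(labels)
    centers = np.array([points[labels == g].mean(axis=0) for g in groups])
    # Inter-group distance: mean pairwise distance between group centers.
    pair = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    d_inter = pair[np.triu_indices(len(groups), k=1)].mean()
    # Intra-group distance: mean distance of members to their own group center.
    d_intra = np.mean([
        np.linalg.norm(points[labels == g] - c, axis=1).mean()
        for g, c in zip(groups, centers)
    ])
    return d_inter, d_intra
```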
### Scaling of the data Scaling of the data (also termed standardization or z-score normalization) is a commonly used technique in machine learning when dealing with multiple variables whose magnitudes differ from each other. In our case, regarding the different centrality variables, the magnitudes can vary from tens to thousands. For the data of each variable, we first calculate the mean and standard deviation of the entire vector, then subtract the mean and divide by the standard deviation, which can be expressed as \[\mathbf{v_{scaled}}=\frac{\mathbf{v}-\langle\mathbf{v}\rangle}{\sigma_{\mathbf{v}}}, \tag{2}\] where \(\langle\mathbf{v}\rangle\) is the average value of the vector, and \(\sigma_{\mathbf{v}}\) is the standard deviation. Intuitively, this process does not change the shape of the data but only its scale. ### Centrality measurements _degree_ The degree of node \(i\)[60] can be defined as \(k_{i}=\sum_{j}A_{ij}\), where \(A\) is the adjacency matrix of the network. The degree represents the number of neighbors a node has, which reflects the direct influence of this node on others. _eigenvector centrality_ Eigenvector centrality is a classical measure of the influence of a node in a network and can be regarded as an extension of the simple degree centrality, where the neighbors of a node are not treated equally [61]. It assumes that the influence of a node is proportional to the sum of the influence of its neighbors and is formulated as \[v_{i}=\kappa_{1}^{-1}\sum_{j}A_{ij}v_{j}, \tag{3}\] where \(\kappa_{1}\) is the largest eigenvalue of the adjacency matrix \(A\), with \(A_{ij}=1\) if nodes \(i\) and \(j\) are connected and \(A_{ij}=0\) otherwise. This equation can be rewritten in matrix form as the eigenvector equation \(\mathbf{Av}=\kappa_{1}\mathbf{v}\); according to the Perron-Frobenius theorem, only the eigenvector associated with the largest eigenvalue has all non-negative entries, which meets the requirement for defining the influence of nodes. Thus \(v_{i}\) can be large either because node \(i\) has many neighbors or because it has important (highly influential) neighbors, or both. _node strength_ It is also referred to as node weight and is defined as the sum of the weights attached to the ties belonging to a node, i.e., \(s_{i}=\sum_{j}W_{ij}\), where \(W_{ij}\) is the weight of the link between \(i\) and \(j\). _betweenness_ Betweenness [62] measures the extent to which a node lies on paths between other nodes, which is important in social and spatial networks. It is defined as \[b_{i}=\sum_{s,t}n_{st}^{i}/n_{st}, \tag{4}\] where \(n_{st}^{i}\) is the number of paths from node \(s\) to \(t\) that pass through \(i\), and \(n_{st}\) is the total number of paths from node \(s\) to \(t\) [61].
\(h\)_-index_ The \(h\)-index was first proposed for evaluating the academic influence of scholars based on the citation patterns of their publications [63]; it equals the maximum value of \(h\) such that the given researcher has published at least \(h\) papers that have each been cited at least \(h\) times (i.e., there are no \(h+1\) papers that each have no fewer than \(h+1\) citations). Quite recently, the \(h\)-index was cleverly adapted and applied to measure the influence of nodes in complex networks by replacing the number of publications with the number of neighbors and the citations of each publication with the degree of each neighbor [51]. The calculation of the \(h\)-index can be defined as an operator, and iterating such an operator on the series of \(h\)-indices of neighbors converges to the k-shell value [51]. \(k\)_-shell decomposition_ k-shell (also called k-core) decomposition starts by iteratively removing all nodes with the minimum degree \(k_{min}\) until there is no node left with \(k\leq k_{min}\) in the network. The removed nodes are assigned \(ks=k_{min}\) and are considered the first layer/shell; in most cases, \(k_{min}=1\). In a similar way, nodes with the current minimum degree \(k_{min}+1\) are iteratively removed and assigned \(ks=k_{min}+1\). This decomposition process continues removing higher shells until it ends up with a core in which all nodes are above a certain remaining degree (i.e., the central core, which is removed last). Isolated nodes are usually assigned \(ks=0\), and we omitted such nodes in this work. The k-shell method decomposes the network into ordered shells from the core to the periphery; researchers found that nodes in the central core of the network are more influential than peripheral nodes [37]. Figure 8: The distribution of the duration between a previous stage and (a) successful types of exit, i.e., IPO or M&A, and (b) unsuccessful types of exit, including liquidation and buyback. The distributions can all be well approximated by exponential functions, \(P\propto e^{-3.2\Delta month}\) and \(P\propto e^{-3.3\Delta month}\) for (a) and (b), respectively, which are quite close to the one with all types of exit shown in Fig. 1e. Figure 9: (a) The hierarchical tree obtained by applying the hierarchical clustering algorithm based on the evolution of the k-shell values of all VC institutions. The digits on the x-axis represent the sizes of certain branches. (b) The distance plot for determining the number of clusters, from which we can clearly see that 5 is the proper number of groups via the "elbow" point.
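As a compact companion to the centrality definitions above, the sketch below computes them with networkx on a toy graph and implements the k-shell peeling procedure directly; it reflects the definitions in this section, not code released with the paper.

```python
import networkx as nx

def k_shell_values(G: nx.Graph) -> dict:
    """Iteratively peel nodes of minimum degree; return {node: k-shell value}."""
    H, ks = G.copy(), {}
    H.remove_nodes_from(list(nx.isolates(H)))        # isolated nodes get ks = 0
    ks.update({n: 0 for n in G if n not in H})
    k = 1
    while H.number_of_nodes() > 0:
        # Remove all nodes with current degree <= k until none remain below the threshold.
        low = [n for n, d in H.degree() if d <= k]
        while low:
            ks.update({n: k for n in low})
            H.remove_nodes_from(low)
            low = [n for n, d in H.degree() if d <= k]
        k += 1
    return ks

G = nx.karate_club_graph()                           # a small illustrative graph
degree = dict(G.degree())
strength = dict(G.degree(weight="weight"))           # node strength on weighted graphs
eigen = nx.eigenvector_centrality(G, max_iter=1000)
betweenness = nx.betweenness_centrality(G)
assert k_shell_values(G) == nx.core_number(G)        # the peeling loop matches networkx
```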
2310.05867
Domain-wise Invariant Learning for Panoptic Scene Graph Generation
Panoptic Scene Graph Generation (PSG) involves the detection of objects and the prediction of their corresponding relationships (predicates). However, the presence of biased predicate annotations poses a significant challenge for PSG models, as it hinders their ability to establish a clear decision boundary among different predicates. This issue substantially impedes the practical utility and real-world applicability of PSG models. To address the intrinsic bias above, we propose a novel framework to infer potentially biased annotations by measuring the predicate prediction risks within each subject-object pair (domain), and adaptively transfer the biased annotations to consistent ones by learning invariant predicate representation embeddings. Experiments show that our method significantly improves the performance of benchmark models, achieving a new state-of-the-art performance, and shows great generalization and effectiveness on PSG dataset.
Li Li, You Qin, Wei Ji, Yuxiao Zhou, Roger Zimmermann
2023-10-09T17:03:39Z
http://arxiv.org/abs/2310.05867v2
# Domain-wise Invariant Learning for Panoptic Scene Graph Generation ###### Abstract Panoptic Scene Graph Generation (PSG) involves the detection of objects and the prediction of their corresponding relationships (predicates). However, the presence of biased predicate annotations poses a significant challenge for PSG models, as it hinders their ability to establish a clear decision boundary among different predicates. This issue substantially impedes the practical utility and real-world applicability of PSG models. To address the intrinsic bias above, we propose a novel framework to infer potentially biased annotations by measuring the predicate prediction risks within each subject-object pair (domain), and adaptively transfer the biased annotations to consistent ones by learning invariant predicate representation embeddings. Experiments show that our method significantly improves the performance of benchmark models, achieving a new state-of-the-art performance, and shows great generalization and effectiveness on PSG dataset. Li Li\({}^{\dagger}\) You Qin\({}^{\dagger}\) Wei Ji\({}^{\dagger}\) Yuxiao Zhou\({}^{\dagger}\) Roger Zimmermann\({}^{\dagger}\)\({}^{\dagger}\)National University of Singapore Panoptic Scene Graph Generation, Debiasing, Invariant Learning. ## 1 Introduction Panoptic Scene Graph Generation (PSG) [1] aims to simultaneously detect instances and their relationships within visual scenes [2]. Instead of the coarse bounding boxes used in Scene Graph Generation (SGG) [3, 4, 5, 6, 7, 8, 9], PSG proposes to construct more comprehensive scene graphs with panoptic segmentation [10]. However, the current performance of PSG methods is sub-optimal due to the biased prediction problem. This problem stems from the following two aspects: (1) **Contradictory Mapping:** PSG models map visual instances to subjects/objects, and their relationships to predicates. However, annotators assign different predicate labels to identical subject-object pairs with similar image features due to their personal language preferences and the semantic ambiguity between predicates, leading to a contradictory mapping from visual content to linguistic labels. (2) **Long-tail Distribution:** Existing models seriously entangle predicate prediction with the long-tail data distribution in the training dataset. Specifically, as long as the labels of the subject and object are known, the model can make effective predicate predictions even without resorting to any visual content of an image [11]. Previous works [12, 13, 8, 4, 14, 15, 5, 16] exploit numerous model architectures to alleviate the bias problem, but these models achieve relatively limited performance and cannot fundamentally solve the problem. [17] proposed to enhance the training dataset with a data transfer framework. However, their framework inaccurately transfers a significant number of samples, leading to imbalanced performance among predicates. To alleviate the biased annotation problem, we propose constructing an unbiased dataset by transferring the biased annotations to high-quality, consistent predicate annotations. Inspired by [18, 19], we propose a framework that learns unbiased predicate representations, excluding the influence of long-tailed subject-object pairs, for identifying and transferring biased predicate annotations. In the target inference process, we denote different subject-object pairs as different domains, and we measure the risk within each domain.
Targets are then derived by partitioning the mapping of the reference model and locating where it maximally violates the ground truth labels. In the invariant learning process, our aim is to exclude the spurious correlation between predicates and subject-object pairs. Specifically, we propose a predicate-wise invariant risk minimization method to learn invariant predicate representations without the influence of subject-object pairs. Meanwhile, we screen out potentially biased data by measuring their invariance within the dataset, to ensure unbiased and invariant predicate representations. Finally, with the unbiased predicate representation embedding space, biased annotations are easily transferred. In summary, the following contributions are made: (1) A novel, plug-and-play framework is proposed, which aims at adaptively and accurately performing biased-data transfer to ensure a reasonable dataset with informative and standardized labels. (2) We propose a new domain-based invariant learning method, aiming at accurately identifying biased annotations and ensuring consistency during the data transfer process. (3) Comprehensive experiments demonstrate that the proposed method significantly enhances the performance of benchmark models on the PSG dataset and achieves a new state-of-the-art performance. ## 2 Method ### Target Inference We first measure the risks of the ERM-based model's assigned predicates within each domain, and then locate the assigned predicates that maximally violate the ground truth labels as the potentially biased predicate annotations. **Risk within Domains.** Unbiased PSG methods aim to learn visual features that are predicate-invariant. However, due to the biased annotation and long-tail distribution problems, their predictions are seriously entangled with subject-object pairs (different domains). Thus, this step measures the risks of predicate predictions within each domain to help locate biased annotations. We take advantage of a reference classifier \(\tilde{\Phi}\), which maps visual scenes and subject-object pairs (input \(X\)) to predicates (output \(Y\)) and is optimized with ERM on \(p^{obs}(X,Y)\). We begin by noting that the per-domain risk \(R^{d}\) depends implicitly on the manual subject-object labels from the training dataset. For a given domain \(\hat{d^{\prime}}\), we denote \(I(d^{p}=\hat{d^{\prime}})\) as an indicator that predicate \(p\) is assigned to that domain, and express the risk of predicate \(p\) in domain \(d^{\prime}\) as: \[R^{d^{\prime}}_{p}(\tilde{\Phi},X_{i})=\sum_{i}^{N}I(d^{p}=\hat{d^{\prime}})M(\tilde{\Phi}(X_{i})), \tag{1}\] where \(M(\cdot)\) denotes the mapping from visual scenes to the classification distribution of predicate \(p\) under the ERM-based model, and \(N\) denotes the number of samples in the training dataset with domain \(\hat{d^{\prime}}\). In practice, we further normalize the risks of different predicates within a certain domain with a softmax.
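The following numpy sketch is one possible reading of Eq. (1) together with the softmax normalization; the array shapes, the 56 predicate classes, and the random inputs are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def per_domain_risks(probs: np.ndarray, domains: np.ndarray) -> dict:
    """Return {domain_id: softmax-normalized risk vector over predicates}."""
    risks = {}
    for d in np.unique(domains):
        r = probs[domains == d].sum(axis=0)   # accumulate M(Phi(X_i)) within domain d
        e = np.exp(r - r.max())               # softmax normalization over predicates
        risks[d] = e / e.sum()
    return risks

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(56), size=200)  # 200 toy samples, 56 predicate classes
domains = rng.integers(0, 10, size=200)       # 10 illustrative subject-object pairs
risk_table = per_domain_risks(probs, domains)
```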
**Target Identification.** With the risks of all predicates in each of the domains, we locate potentially biased predicate annotations by checking the predicate classification distribution mapped by the ERM-based model and the conflicts between this distribution and the ground truth. The mapping \(M(\cdot)\) to the predicate classification distribution provides a good reflection of the model's confusion in predicate prediction [17]. Specifically, given a sample with subject-object pair (domain) \(d_{s}\) and predicted predicate \(p_{s}\), we measure its Direct Conflict (DC) with the ground truth predicate by comparing the risks between \(p_{s}\) and the \(GT\) in the domain \(d_{s}\): \[DC_{s}=R^{d^{\prime}}_{p_{s}}(\tilde{\Phi},X_{i})-R^{d^{\prime}}_{GT}(\tilde{\Phi},X_{i}). \tag{2}\] The set of potentially biased annotations (targets) \(S_{b}\) is located with the help of the Direct Conflict as follows: \[S_{b}=\left\{s_{i}|(DC_{s}>0)\wedge(A_{s}<A_{GT})\right\}, \tag{3}\] where the function \(A\) denotes the attraction factor [17] representing the scarcity of the predicate and domain. ### Invariant Learning In this process, we aim to learn invariant predicate representations excluding the influence of subject-object pairs (domains). **Predicate-wise Invariant Risk Minimization.** We first collect all of the annotations that appear in the training set. Each annotation is converted to a sentence for language model processing. For example, \(<\) person, standing on, road \(>\) will be converted to _The person is standing on the road_. Formally, given an anchor sentence \(s_{i}\) in domain \(d_{i}\) of predicate class \(p_{i}\) in the batch \(S=\left\{s_{k}\right\}_{k=1}^{T}\), we can construct its positive set \(S_{i}^{+}=\left\{s_{k}|(p_{i}=p_{k})\wedge(d_{i}=d_{k})\right\}_{k\neq i}\) and negative set \(S_{i}^{-}=\left\{s_{k}|(p_{i}\neq p_{k})\wedge(d_{i}=d_{k})\right\}_{k\neq i}\). With the training data, our loss is computed within each domain \(d\in\varepsilon_{d}\), and we have: \[\ell(d\in\varepsilon_{d})=\sum_{s\in d}\frac{1}{N^{+}}\sum_{z^{+}\in d}-\log\frac{f^{+}(z^{T}z^{+})}{f^{+}(z^{T}z^{+})+f^{-}(z^{T}z^{-})}, \tag{4}\] where \(N^{+}\) denotes the number of positive samples in the current batch. Figure 1: **Illustration of the overall pipeline of our method.** It measures the ERM-based risks within domains to identify potentially biased data, and learns invariant predicate representations to make consistent data transfer. Therefore, the proposed predicate-wise IRM loss is: \[L_{s}=\sum_{d\in\varepsilon_{d}}\ell(d)+\lambda Var(\ell(d)), \tag{5}\] where \(Var(\cdot)\) denotes the variance of the contrastive loss within each predicate class. To further boost the sensitivity to predicate similarity, we introduce an angular margin \(m\) for positive pairs. Formally, we formulate \(f^{+}\) as: \[f^{+}=\sum_{s_{j}\in S_{i}^{+}}e^{cos(\theta_{h_{i},h_{j}}+m)/\mathrm{T}}, \tag{6}\] where \(\theta_{i,j}\) is the arc-cosine similarity between features \(i\) and \(j\), \(\mathrm{T}\) is a temperature hyper-parameter, \(N\) is the batch size, \(h_{i,j}\) are the sentence representations generated by the language model for \(s_{i,j}\), and \(m\) is an angular margin introduced for robust learning. For the samples in the negative set \(S^{-}\), we expect them to be quite different from the samples in \(S^{+}\) in the embedding space. Thus, we propose a representation learning penalty for samples in \(S^{-}\). Specifically, we calculate the predicate-wise risks by taking advantage of Eq. 1: \[\phi_{p}=\sum_{i}^{N}\sum_{d^{\prime}\in\varepsilon_{d}}R_{p}^{d^{\prime}}(\tilde{\Phi},X_{i}). \tag{7}\] Then we measure the risk of a negative sample with predicate \(p_{k}\) relative to the anchor sample \(s_{i}\) with predicate \(p_{i}\): \[\varphi_{p_{i,k}}=Sigmoid(\phi_{p_{k}}-\phi_{p_{i}}). \tag{8}\]
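To make the objective concrete, here is a simplified PyTorch sketch of the predicate-wise IRM contrastive loss (Eqs. 4-6); it reflects our reading of the formulas, not the authors' released code, and the `neg_weight` tensor stands in for the \((1-\varphi)\) factors of Eq. (9) defined just below.

```python
import torch
import torch.nn.functional as F

def irm_contrastive_loss(emb, preds, doms, neg_weight, margin=0.17, temp=0.05, lam=0.3):
    """emb: (N, D) sentence embeddings; preds/doms: (N,) predicate and domain ids;
    neg_weight: (N, N) tensor of (1 - phi) factors. margin ~ 0.17 rad (10 degrees)."""
    emb = F.normalize(emb, dim=-1)
    cos = (emb @ emb.t()).clamp(-1 + 1e-6, 1 - 1e-6)
    theta = torch.acos(cos)                                    # arc-cosine similarity
    same_pred = preds[:, None] == preds[None, :]
    same_dom = doms[:, None] == doms[None, :]
    eye = torch.eye(len(preds), dtype=torch.bool)
    pos = (same_pred & same_dom & ~eye).float()                # S_i^+
    neg = (~same_pred & same_dom).float()                      # S_i^-
    f_pos = (torch.cos(theta + margin) / temp).exp() * pos     # Eq. (6): margin on positives
    f_neg = (cos / temp).exp() * neg_weight * neg              # weighted negatives (Eq. 9)
    ratio = f_pos.sum(1) / (f_pos.sum(1) + f_neg.sum(1) + 1e-12)
    anchor_loss = -torch.log(ratio + 1e-12)
    losses = []
    for d in doms.unique():                                    # per-domain losses, Eq. (4)
        mask = (doms == d) & (pos.sum(1) > 0)                  # anchors that have positives
        if mask.any():
            losses.append(anchor_loss[mask].mean())
    losses = torch.stack(losses)
    return losses.sum() + lam * losses.var(unbiased=False)     # Eq. (5): variance penalty

# Toy usage with two domains and three predicate classes.
emb = torch.randn(8, 16)
preds = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
doms = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = irm_contrastive_loss(emb, preds, doms, neg_weight=torch.ones(8, 8))
```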
8 as the reflection of the visual similarity of predicates: Higher risk denotes harder prediction from the model. Sourcing back to the input of the model, it is the highly similar visual scenes. As a result, we use the metric \((1-\varphi)\) to further differentiate similar predicate representations from the negative set. Formally, we introduce the \(f^{-}\) as follows: \[f^{-}=\sum_{s_{g}\in S_{i}^{-}}\left(1-\varphi_{p_{i,g}}\right)e^{\cos(\theta _{i,g})/\mathrm{T}}. \tag{9}\] **Continuous Data Filtration.** The presence of biased and noisy samples within the training dataset is poised to exert a discernible impact on the impartial process of predicate representation learning. Consequently, we have devised a continuous data filtration procedure to eliminate these biased samples. This approach leverages invariant representation regularization as a means of assessing the quality of samples. We collect and average the variances from Eq. 5 on predicate labels, getting \(V_{aver}\in\mathbf{R}^{Q}\), where \(Q\) denotes the predefined predicate classes in the dataset. For every sample \(s_{i}\) with predicate label \(p_{i}\) and variance \(V_{i}\) in the training dataset, we judge whether it is part of potentially biased and noisy samples, which can be formulated as: \[P_{bn}=\left\{S_{i}|V_{i}>\mu V_{aver}^{i}\right\}, \tag{10}\] where \(V_{aver}^{i}\) represents the averaged variance associated with predicate label \(p_{i}\), and \(\mu\) is a hyper-parameter. To further refine the dataset, we proceed to arrange the elements in \(P_{bn}\) in ascending order based on the loss value computed from Eq. 4, subsequently excluding the uppermost \(D\%\) of the training data. It is noteworthy that, in cases where a predicate class contains fewer than 100 samples, no further elimination of samples is performed. ### Data Transfer As a result, a similarity matrix \(S\in\mathbf{R}^{Q\times Q}\) can be generated by calculating the cosine similarities between all predicate representations. The transfer method is based on importance vector [17]. We transfer the biased predicate annotation to a target predicate that shares the highest similarity and importance vector, and we directly use the similarity score as an adaptive transfer ratio. ## 3 Experiment ### Experiment Details **Dataset**. We evaluate our method on the PSG dataset [1]. 
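The continuous data filtration (Eq. 10) and the adaptive transfer step described in Sections 2.2 and 2.3 can be sketched roughly as follows. This is a hypothetical illustration only: the variable names and thresholds are our own, and the importance-vector and minimum-class-size checks are omitted for brevity.

```python
import numpy as np

def filter_biased_samples(losses, variances, labels, avg_var, mu=1.2, drop_frac=0.5):
    """Eq. 10 sketch: flag samples whose variance exceeds mu * the averaged
    variance of their predicate class, then drop the uppermost drop_frac
    fraction of the flagged samples, ranked by their contrastive loss."""
    flagged = [i for i, (v, p) in enumerate(zip(variances, labels)) if v > mu * avg_var[p]]
    flagged.sort(key=lambda i: losses[i], reverse=True)      # highest loss first
    return set(flagged[: int(drop_frac * len(flagged))])     # indices to remove

def transfer_annotation(p_biased, predicate_embs):
    """Transfer a biased predicate label to the predicate whose learned
    representation is most similar; the cosine similarity itself serves
    as an adaptive transfer ratio."""
    embs = predicate_embs / np.linalg.norm(predicate_embs, axis=1, keepdims=True)
    sims = embs @ embs[p_biased]
    sims[p_biased] = -np.inf                                  # exclude the label itself
    target = int(np.argmax(sims))
    return target, float(sims[target])                        # (new label, transfer ratio)
```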
\begin{table} \begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{Scene Graph Generation} \\ \cline{2-7} & mR@20 & @50 & @100 & PR@20 & @50 & @100 \\ \hline IMP [4] & 6.52 & 7.05 & 7.23 & 12.9 & 13.7 & 13.9 \\ +IETrans & 10.2 & 11.0 & 11.3 & 14.5 & 15.4 & 15.7 \\ +Ours & **12.5** & **13.5** & **14.0** & **16.0** & **17.1** & **17.5** \\ \hline VCTree [14] & 9.70 & 10.2 & 10.2 & 16.0 & 16.8 & 16.9 \\ +IETrans & 17.1 & 18.0 & 18.1 & 19.6 & 20.5 & 20.7 \\ +Ours & **18.3** & **18.9** & **19.0** & **20.3** & **20.8** & **20.9** \\ \hline MOTIFS [12] & 9.10 & 9.57 & 9.69 & 15.5 & 16.3 & 16.5 \\ +IETrans & 15.3 & 16.5 & 16.7 & 18.2 & 19.4 & 19.7 \\ +Ours & **18.4** & **19.0** & **19.2** & **20.0** & **20.8** & **21.0** \\ \hline GPSNet [5] & 7.03 & 7.49 & 7.67 & 13.6 & 14.4 & 14.7 \\ +IETrans & 11.5 & 12.3 & 12.4 & 15.3 & 16.2 & 16.5 \\ +Ours & **17.5** & **18.1** & **18.4** & **19.6** & **20.3** & **20.6** \\ \hline PSGTR [1] & 16.6 & 20.8 & 22.1 & 21.9 & 26.3 & 27.6 \\ +IETrans & 23.1 & 27.2 & 27.5 & 24.9 & 28.4 & 28.7 \\ +Ours & **26.4** & **29.6** & **30.2** & **27.0** & **29.9** & **30.4** \\ \hline \end{tabular} \end{table} Table 1: The results (mR@K and PR@K) on SGDet task of our method and other baselines on PSG dataset. IETrans [17] and Ours denote models equipped with different dataset-enhancement methods. **Evaluation Metric.** Following previous works [15, 12], we take mean recall@K (mR@K)[14, 20] as evaluation metrics. Following PSG challenge [1], we also adopt a new evaluation metric named percentile recall (PR), which can be formulated as \(PR=30\%R+60\%mR+10\%PQ\), where PQ measures the quality of a predicted panoptic segmentation relative to the ground truth [10, 21]. **Tasks.** We evaluate our method on the Scene Graph Generation (SGDET) task. **Implementation Details**. We use a BERT-base [22] for representation learning. The decision margin \(m\) is set to 10 degrees, the temperature hyper-parameter T is set to 0.05, and we use an AdamW [23] optimizer with a learning rate 2e-5. The hyper-parameter \(\lambda\) is set to 0.3, \(\beta\) is set to 5e5, \(\mu\) is set to 1.2, and \(\gamma\) is set to 1.5. The \(D\) in data filtration is set to 50. ### Qualitative Analysis As depicted in Fig. 2, a comparative analysis is conducted between the outcomes produced by the Plain PSGTR model and PSGTR integrated with our proposed method. Evidently, PSGTR augmented by our method exhibits an enhanced capacity for predicting more precise associations among instances, concurrently demonstrating a heightened ability to forecast predicates that align more fittingly with the contextual scene. ### Comparison with State-of-the-Art Methods Based on the obtained results in Tab.1, our proposed methodology significantly enhances the performance of baseline models across a wide spectrum of evaluation metrics. A comprehensive comparative analysis against the IETrans framework [17] reveals notable advancements in terms of both mean recall and PR metrics across all baseline models. This observation underscores the superior efficacy of our methodology in optimizing the training dataset, effectively mitigating issues related to extraneous or duplicative transfer processes. Particularly noteworthy is the PR metric, which amalgamates considerations of recall and mean recall, showcasing our methodology's substantial superiority over the original models across a diverse range of predicate labels. 
This affirms that our approach not only ameliorates recall performance but also adeptly balances the overall performance across various predicate labels, resulting in a more comprehensive and robust assessment of model performance. ### Ablation Study We use PSGTR as the baseline model in ablation studies. In addition to the data transfer method, we also directly remove the potentially biased data, to demonstrate the effectiveness of our target inference process and the harm that these biased data cause during training. As shown in Tab. 2, the baseline model already improves markedly when only the identified biased data are removed, and its performance can be further enhanced with our data transfer method. ## 4 Conclusion We present a novel framework for PSG aimed at mitigating the issue of biased prediction. This framework identifies and transfers biased annotations, ensuring a more balanced and representative training dataset. Empirical findings substantiate the performance improvements achieved by our method, consequently establishing a new state-of-the-art benchmark. ## 5 Acknowledgement This research is supported by the Singapore Ministry of Education Academic Research Fund Tier 2 under MOE's official grant number T2EP20221-0023. \begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{Data Processing Method} & \multicolumn{3}{c}{SGDet} \\ \cline{2-4} & mR@20 & mR@50 & mR@100 \\ \hline Original & 16.6 & 20.8 & 22.1 \\ \hline Remove & 20.0 & 24.6 & 25.3 \\ \hline Transfer & 26.4 & 29.6 & 30.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on data processing methods. Transfer: data transfer. Remove: simply remove all identified biased data. Original: baseline method on the original dataset. Figure 2: Visualization of plain PSGTR and PSGTR equipped with our method. PSGTR with our method can predict relationships between instances with greater accuracy and also select predicates that better match the visual scene.
2307.06779
Data Behind the Walls - An Advanced Architecture for Data Privacy Management
In today's highly connected society, we are constantly asked to provide personal information to retailers, voter surveys, medical professionals, and other data collection efforts. The collected data is stored in large data warehouses. Organisations and statistical agencies share and use this data to facilitate research in public health, economics, sociology, etc. However, this data contains sensitive information about individuals, which can result in identity theft, financial loss, stress and depression, embarrassment, abuse, etc. Therefore, one must ensure rigorous management of individuals' privacy. We propose an advanced data privacy management architecture composed of three layers: the data management layer, which consists of de-identification and anonymisation; the access management layer, which enforces data access based on the concepts of Role-Based Access Control and the Chinese Wall Security Policy; and the roles layer, which regulates different users. The proposed system architecture is validated on healthcare datasets.
Amen Faridoon, M. Tahar Kechadi
2023-07-13T14:38:15Z
http://arxiv.org/abs/2307.06779v1
# Data Behind the Walls - An Advanced Architecture for Data Privacy Management ###### Abstract In today's highly connected society, we are constantly asked to provide personal information to retailers, voter surveys, medical professionals, and other data collection efforts. The collected data is stored in large data warehouses. Organisations and statistical agencies share and use this data to facilitate research in public health, economics, sociology, etc. However, this data contains sensitive information about individuals, which can result in identity theft, financial loss, stress and depression, embarrassment, abuse, etc. Therefore, one must ensure rigorous management of individuals' privacy. We propose an advanced data privacy management architecture composed of three layers: the data management layer, which consists of de-identification and anonymisation; the access management layer, which enforces data access based on the concepts of Role-Based Access Control and the Chinese Wall Security Policy; and the roles layer, which regulates different users. The proposed system architecture is validated on healthcare datasets. Data privacy management; Data security; Access control; RBAC Model; CWSP Model. ## I Introduction New technologies, such as artificial intelligence, cloud computing, sensors and wireless networks, are constantly integrated into our daily lives at a rapid pace. We live in a digital world, also called the fourth industrial revolution. This digital world produces a considerable amount of data, which is highly valuable and can be seen as the new gold or the new oil of this century1. This data enables businesses and organisations to enhance their production and profits [1]. Many data organisations, such as Facebook, Amazon, and Google, have emerged as dominant in this field. They have invested a lot of resources to create, acquire, maintain, and store data in order to make constructive and timely decisions and improve customer services. Therefore, with the rise of the data economy, companies find enormous value in collecting, sharing, and using data. Footnote 1: [https://www.cutimes.com/2020/07/17/big-data-the-new-oil-fields/s/letrums=20220716063653](https://www.cutimes.com/2020/07/17/big-data-the-new-oil-fields/s/letrums=20220716063653) Data breaches and data theft affect both individuals and organisations. Most businesses have elaborated security mechanisms to protect individuals' private information [2]. However, due to vulnerabilities in software, phishing, stolen credentials, and malicious attacks, cybercriminals can infiltrate and steal personal data, which results in identity theft, financial loss, stress and depression, discrimination, embarrassment, abuse, etc. [3][4]. Many businesses and government agencies have suffered massive data breaches leading to layoffs, lawsuits, and fraud. In 2016, IBM reported that around \(60\%\) of company data breaches were carried out by the companies' own employees, of which \(75\%\) were deliberate2. Footnote 2: [https://www.ciab.com/resources/ibm-60-percent-attacks-carried-insiders/](https://www.ciab.com/resources/ibm-60-percent-attacks-carried-insiders/) An insider threat is defined as any malicious activity executed by users with legitimate access to the organisation's network, applications, or data stores [5].
One can distinguish three types of insider threats: 1) a malicious insider who deliberately seeks to steal information, 2) a careless insider who does not follow proper IT procedures and accidentally puts sensitive corporate data at risk, and 3) a compromised insider whose credentials have been compromised by an external attacker. Unlike threats from outside the organisation, insider threats are much more difficult to detect and prevent. An insider's behaviour is difficult to identify as suspicious because insiders have legitimate credentials and access to the systems and data. Once they are inside the network, it is difficult to pick apart the malicious behaviour from a large volume of legitimate day-to-day activities. The data requires robust privacy management systems to prevent identity theft and unauthorised access by external attackers and malicious insiders. Therefore, the main challenge of this study is to design a data privacy management architecture that can prevent breaches of an individual's identity and keep the data secure from unauthorised access. The main objectives of our data privacy management architecture are twofold: 1) create security walls between subjects and objects for access control and formulate authorisation rules for legitimate users, and 2) secure the identity and sensitive information of an individual from both third parties and insider threats. ## II Related Work Numerous data privacy management models have been proposed and implemented to address the problem of unauthorised access control; the most common are Mandatory Access Control (MAC), Discretionary Access Control (DAC), and Attribute-Based Access Control (ABAC). However, these models have limitations. MAC is a very inflexible security model in which each user is assigned a clearance, and each object has a classification and compartment stored in its security label. This model is mostly suitable for structured environments with classified data. DAC offers less centralised system administration because control is not dictated by a company policy but rather by the owner or creator of the information. ABAC has drawbacks that limit its implementation: the initial setup is time-consuming in terms of identifying the right resources, object and subject attributes, and policies; it cannot factor in everything in environments such as large organisations; and it is not ideal for small businesses. Given typical organisational structures and needs, the most suitable model is Role-Based Access Control (RBAC). Most studies [6][7][8][9] took advantage of the RBAC security model and its extended versions in different environments (cloud, distributed data storage systems, etc.) to restrict unauthorised access. However, in these proposed systems, the administrators of permission groups have the main responsibility for managing the users, their roles, and their permissions; therefore, by exploiting their authority, the access permissions of the roles can easily be changed. Moreover, researchers have also adopted this model in the healthcare domain. The authors in [10] identified the need for roles and their acting procedures in an EHR system. They highlighted that the processes for defining the roles in an EHR system play a vital part in access privilege management. Others proposed structured roles and granted access only to entities located in the authorisation table [11]. The privacy and security maintenance committee is the only authority that can act on the authorisation table.
However, structured roles are not enough, and the dynamic and structural aspects of roles are necessary [12]. For instance, one should only provide data access to physicians if they are engaged in their treatment. Therefore, one can deduce roles have more contextual variables (e.g., place, time, association, etc.). Hence, the first model, known as contextual role-based access control authorisation, was proposed in [13]. The purpose of this model is to improve privacy and confidentiality of patients' records, as they were not robust enough. The model considers a hierarchy of roles with authorisation inheritance and a data model which covers the different data types (e.g., prescriptions, demographics). The model also includes a technique for authorisation conflict management. However, most of these models consider access to healthcare practitioners only, they do not have explicit policies on the complete dataset. More data professionals should be considered and be able to access the data with predefined usage and ethics. The applications of Chinese Wall Security Policy (CWSP) in different environments have attracted significant research interests in the last few years. Studies deployed the CWSP model on the cloud at the IaaS level to control the information flow between the conflicting parties [14][15]. Moreover, various scenarios have been modelled with CWSP, mainly when conflict-of-interests are involved. For instance, studies [16][17] focused on protecting subjects (companies) and objects (datasets) by creating subject walls and object walls. Recently, a security model for cryptographic key management in the cloud was proposed and offered security as a Service (CKMS\(-\)SecaaS's) [18]. These studies, while efficient, need further investigations because the spectrum of their applications is limited to distributed or cloud environments. Their main objective was to secure the datasets or assets of an organisation from third-party (conflicting organisation) attacks. Nevertheless, guaranteeing the privacy and security of personal and sensitive datasets within an organisation from insider attacks is still challenging. ## III Proposed Model Concepts The proposed model consists of three modules: the roles (or users) module, the data access management module, and the data management module. We combine the concept of role-based access control with the Chinese wall security policy to accomplish data security. The objective is to manage data storage and usage so that individuals' data cannot be exposed to malicious parties. ### _Roles, Subjects, and Objects_ The Role-Based Access Control model was proposed in 1996 in [19]. It is considered one of the most accepted access control models. The model exploits the "need-to-know" principle, widely adopted and utilised. The whole model assumes that privacy is preserved as long as the data is only accessed when necessary for the right motive and only minimum information details are disclosed. The core components shown in Figure 1 of the role-based access control model are Users or Subjects, Roles, Operations, and Objects. * **Users** are the skilled individuals and they are grouped into roles. * **Roles** represent a set of responsibilities defined by the administration of an organisation. * **Operations (OPS)** are functions performed by the subjects (users) on the objects. * **Objects (OBS)** are the resources that are managed and used by the subjects. A resource can be an application, database, file, etc. 
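As a rough, hypothetical sketch (not part of the original paper), these four RBAC components can be captured with a few simple data structures; the names and the permission check below are illustrative assumptions only:

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str                                        # e.g. "Data Analyst"
    operations: set = field(default_factory=set)     # e.g. {"read"}
    objects: set = field(default_factory=set)        # e.g. {"DDW"}

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)          # role names assigned to the user

def is_permitted(user: User, operation: str, obj: str, roles: dict) -> bool:
    """A user may perform an operation on an object only if one of their
    roles grants that operation on that object (the "need-to-know" principle)."""
    return any(operation in roles[r].operations and obj in roles[r].objects
               for r in user.roles)
```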
### _Access Management and Walls_ The proposed data access management, based on the Chinese Wall security policy (CWSP), is defined as a dynamic firewall mechanism, as shown in Figure 2. We implemented the concept of CWSP for data access control between competing/conflicting roles based on access control rules [16], [17], [20]. The walls defined in this model prevent access between the roles and data warehouses inside the same organisation. Fig. 1: Core Role-Based Access Control Model The adoption of the CWSP for the data access management layer is summarised below: * Users/subjects cannot access an object that is in conflict with an object they already possess. Therefore, each subject has a granted and a denied set of objects, where each object in the granted set has its conflicting objects in the denied set. The pairwise (Granted, Denied) of a subject is called the subject wall. It is not possible to find two competing objects inside the same wall. * The subjects can perform read and write operations via authorised access to the objects. In this case, the wall, called the "object wall", is created around the objects based on the same rules as for the subjects. ### _Wall Placement_ The Chinese Wall security policy is based on wall placements that prevent access to data platforms by unauthorised subjects. Our model acts on the same entities as previous ones: subjects, objects, and access rights. In this study, we have data warehouses (DWs) as objects and a set of users with different responsibilities on the DWs. We achieve the privacy of individuals' records in two steps: de-identification and anonymisation of the original data records. Typical data platform architectures deal with data collection, integration, and analytics. We divide the collected data stores into three main layers, depending on the level of privacy and security they need to handle. They are described in the following: Original Data Warehouse (ODW) The raw data collected via different devices and from various sources are stored in data marts. These data marts are carefully integrated into a data warehouse containing the original data. This data warehouse maintains the quality, accuracy, validity, timeliness, completeness, uniqueness, and consistency of the original data records. We consider this data warehouse as an object (ODW) that must be secured from unauthorised access, with only a limited number of users having access to these original records. De-identified Data Warehouse (DDW) All information about users' identities is removed to prevent identity disclosure. This de-identified data warehouse consists of individual, aggregated, and derived statistical data computed from the original data. We consider this de-identified data warehouse as a separate object (DDW) that must protect the identity of an entity during analysis. However, due to the presence of quasi-identifiers, this data needs further protection. Anonymised Data Warehouse (ADW) After the de-identification, we further anonymise the data so that the original data cannot be recovered from the knowledge extracted during data analysis. We consider this anonymised data warehouse as an object (ADW) that must be secured from malicious third parties and from linkage and other attacks. ## IV Model Formulation In the previous section, we identified the main concepts of the model: roles, objects, and walls. The roles are the subjects that own the set of objects. The objects represent datasets, data records, or data warehouses (ODW, DDW and ADW).
Finally, the walls implement the conflicting relation on which the CWSP is based. In the following, we give formal definitions of all these entities and relations. The overall system can be considered as \((R,O,CIR,CIC,W,A)\), where \(R=\{r_{1},\cdots,r_{n}\}\), \(O=\{o_{1}\cdots,o_{m}\}\), \(CIR=\{cr_{1}\cdots,cr_{k}\}\), \(CC=\{cc_{1}\cdots,cc_{p}\}\), \(W=\{w_{1},\cdots,w_{l}\}\), \(A=\{a_{1},\cdots,a_{h}\}\) sets of Roles (subjects), Objects, Conflict-relations, conflict-classes, Walls, and access rights, respectively. They are defined as follows: The proposed system has following characteristics: 1. Some roles are in conflict: Let \(\{R_{i}\}\) and \(\{R_{j}\}\) are in conflict: if u \(\in\{R_{i}\}\), then u \(\notin\{R_{j}\}\) and u cannot be in \(\{R_{j}\}\) at all. Therefore, we need a wall between \(\{R_{i}\}\) and \(\{R_{j}\}\). 2. Some roles are cooperative: Let \(\{R_{x}\}\) and \(\{R_{y}\}\) are cooperative: if u \(\in\{R_{x}\}\), then u \(\notin\{R_{y}\}\) but u can switch to \(\{R_{y}\}\). So, there is no need to have a wall between \(\{R_{x}\}\) and \(\{R_{y}\}\). Suppose, \(D\) is a set of domains, \(D=\{d_{1},\cdots,d_{n}\}\), \(\Phi\) denotes Execution Denied, \(\{O_{d_{i}}\}\) denotes Object of domain \(\{d_{n}\}\), and \(\{F_{d_{i}}()\}\) is a function or an operation of the environment \(\{d_{n}\}\) **Definition 1**: _A Role is a set of users group or subjects that can execute some given operation F() within a given domain. Each users group can be scaled down to a single user. The role \(\{R_{i}\}\) within a system is defined as follows:_ \[\{R_{i}\}=\langle U_{i},F_{i}(),d_{i}\rangle\] _Where, \(U_{i}=\{\{u_{j}\}_{j}\}\mid u_{j}\) can execute \(\{F_{d_{i}}(O_{d_{i}})\}\)_ **Definition 2**: _Objects are the entities that can be manipulated by subjects. An object can be a record, file, data warehouse, or any entity on which an operation can be executed by a user or subject. The object \(\{O_{i}\}\) within a system is defined as follows:_ \[\{O_{i}\}=\langle E_{i},OP_{i}()\rangle\] \[(e_{j}\in\{E_{i}\})\wedge(op_{j}\in OP_{i}())\Rightarrow op_{j}(e_{j})\neq\Phi\] Whereas, \(\{E_{i}\}\) is a set of entities (records, files, data warehouses) and \(\{Op_{i}\}\) is a set of operations (read, write,...) Fig. 2: Data Access Control Based on CWSP **Definition 3**: _A Conflict-relation is a relation that exist between the competing roles. Roles have conflict with each other on the bases of access granted or denied permissions on objects of different domains. Conflict of interest relation is a binary relation, and within a system is defined as follows:_ \[\text{CIR}\subseteq\{R^{2}\}\Rightarrow\{0,1\}\text{ -- if }r_{i}\in R=1,\text{ otherwise 0}\] Moreover, let \(\text{CIR}\subseteq\{RXR\}\) must satisfies the following properties: 1. CIR-1: CIR is symmetric: If role \(R_{i}\) is in conflict with the role \(R_{j}\), then \(R_{j}\) is certainly in conflict with \(R_{i}\). \[\forall(r_{i},r_{j})\in R^{2};RC_{i}(r_{i},r_{j})=RC_{i}(r_{j},r_{i})\] 2. CIR-2: CIR is anti-reflexive: The role cannot conflict to itself. \[\forall(r_{i},r_{i})\text{ in }R^{2};RC_{i}(r_{i},r_{i})=0\] 3. CIR-3: CIR is anti-transitive: If role \(R_{i}\) is in conflict with the role \(R_{j}\), and \(R_{j}\) is in conflict with the role \(R_{k}\) then role \(R_{i}\) may or may not in conflict with \(R_{k}\). 
\[\forall(r_{i},r_{j},r_{k})\in R;RC_{i}(r_{i},r_{j})=1,RC_{i}(r_{j},r_{k})=1\] \[\text{ Then }RC_{i}(r_{i},r_{k})=[0,1]\] **Definition 4**: _A conflict-class is defined on the notion of an equivalence class, i.e., the subset of roles that are equivalent to each other, called a roles class (RC). Each role within the class must satisfy the defined properties of the conflict-relation, so that no conflicting roles can be found within the same class. A conflict-class within a system is defined as follows:_ \[[RC_{i}]=\{r_{i}\mid(r_{j},r_{i})\in CIR\}\] \[(r_{i}\in\{RC_{i}\})\wedge(o_{i}\in\{RC_{i}\})\Rightarrow u_{i}[F_{d_{i}}(O_{d_{i}})]\neq\Phi\] \(\{\text{RC}\}\) = set of roles which are authorized to perform functions \(\{F_{d_{i}}()\}\) on object \(\{O_{d_{i}}\}\). However, \[(r_{i}\notin\{RC_{i}\})\wedge(o_{i}\notin\{RC_{i}\})\Rightarrow u_{i}[F_{d_{i}}(O_{d_{i}})]=\Phi\] \(\{\text{CRC}\}\) = set of competing roles classes which are not authorized to perform functions \(\{F_{d_{i}}()\}\) on the objects \(\{O_{d_{i}}\}\) of their competing class. **Definition 5**: _A Wall is built on top of the statement "Who can access what?" in a secure manner and is implemented as the object and subject walls. Based on the conflict-class, we create binary walls that strictly separate the conflicting subjects and objects from each other. The walls within a system are defined as follows:_ \[\{\text{OW}\}\text{ = }\{\forall(r_{i},r_{j})\in R^{2}\mid RC_{i}(r_{i},r_{j})=1\}\] _The set of object walls (OW) is the set of role pairs \((r_{i},r_{j})\in R^{2}\) such that \(RC_{i}(r_{i},r_{j})=1\), whereas \(\overline{RC_{i}(r_{i},r_{j})}=0\)_ \[\{\text{SW}\}\text{ = }\{\forall(s_{i}\in r_{i},s_{j}\in r_{j})\mid RC_{i}(s_{i},s_{j})=1\}\] _The set of subject walls (SW) is the set of subjects belonging to their respective roles/roles classes \((s_{i}\in r_{i},s_{j}\in r_{j})\) such that \(RC_{i}(s_{i},s_{j})=1\), whereas \(\overline{RC_{i}(s_{i},s_{j})}=0\)_ ### _Object Set Binary Wall (obj)_ The object set denotes the set of all objects, where each object \(obj_{i}\) belongs to \(RC_{i}\) and is associated with two subsets of RC: \[OWRC_{obj_{i}}\subseteq RC_{i}\] \[OWCRC_{obj_{i}}\subseteq CRC_{i}\] Simply put, \(OWRC_{i}\) is the set of roles classes that have an access right to the object \(obj_{i}\), and \(OWCRC_{i}\) is the set of conflicting roles classes that are denied access to the object \(obj_{i}\). Therefore, we can represent each pairwise \(\{OWRC_{obj},OWCRC_{obj}\}\) by a binary pairwise of arrays \(\{BinOWRC_{obj},BinOWCRC_{obj}\}\), named a binary object wall. Each array has a size of n bits (the number of roles classes). Bit j, between 1 and n, is initialized as follows: \[BinOWRC_{obj}\text{ is 1 if }RC_{i}\in OWRC_{obj}\text{, otherwise 0.}\] \[BinOWCRC_{obj}\text{ is 1 if }RC_{i}\in OWCRC_{obj}\text{, otherwise 0.}\] The only operation authorised on these bits is to change a value from 0 to 1. ### _Subject Set Binary Wall (sub)_ The subject set denotes the set of all subjects, where each subject \(S_{i}\) is associated with two subsets of roles classes (RC). \[SWG_{S_{i}}\subseteq RC_{i}\] \[SWD_{S_{i}}\subseteq RC_{i}\] Simply put, \(SWG_{i}\) is the access-granted set; it is the set of roles classes that have similar objects inside the subject wall of \(S_{i}\). \(SWD_{i}\) is the denied set; it is the set of roles classes that the subject \(S_{i}\) is denied access to.
For the binary wall creation around the subjects, we can represent each pairwise \(\{SWG_{s},SWD_{s}\}\) by a binary pairwise of arrays \(\{BinSWG_{s},BinSWD_{s}\}\), named a binary subject wall. Each array has a size of n bits (the number of roles classes). Bit j, between 1 and n, is initialized as follows: \[BinSWG_{s}\text{ is 1 if }RC_{i}\in SWG_{s}\text{, otherwise 0}\] \[BinSWD_{s}\text{ is 1 if }RC_{i}\in SWD_{s}\text{, otherwise 0}\] The only operation authorised on these bits is to change a value from 0 to 1. **Definition 6**: _An access right is a set of operations (read, write, delete, etc.) that a role may perform on domain objects, represented in the form of Table I. We consider two access rights in our system, read and write. Only an authorised user can perform their assigned operations on domain objects after access verification, and the walls will be updated according to the operations explained in Section V._ ## V Security Check Point The security check point is based on data access authorisation and on updates to the subject and object binary walls. ### _Access Authorization_ A subject is granted access to an object if and only if the following access authorisation condition is verified: \[SWG_{i}\cap OWCRC_{j}=\emptyset\text{ AND }SWD_{i}\cap OWRC_{j}=\emptyset\] According to the binary walls of the subject and object, the access condition is mapped as follows: \[BinSWG_{s}\wedge BinOWCRC_{o}=\{00\dots 00\}\text{ AND }\] \[BinSWD_{s}\wedge BinOWRC_{o}=\{00\dots 00\}\] Access is denied if the objects (data warehouses) of competing roles are found inside the same wall. ### _Wall Updating_ There is no need to update the walls if access is denied. However, if the subject is granted access to the object, the binary walls of the subject and object will be updated immediately according to the access type (reading or writing). #### V-B1 Read from an object If the subject is granted to read from an object, the binary subject wall will be updated as follows: \[BinSWG_{s}=BinOWRC_{o}\lor BinSWG_{s}\] \[BinSWD_{s}=BinOWCRC_{o}\lor BinSWD_{s}\] #### V-B2 Write to an object If the subject is granted to write to an object, the binary object wall will be updated as follows: \[BinOWRC_{o}=BinOWRC_{o}\lor BinSWG_{s}\] \[BinOWCRC_{o}=BinOWCRC_{o}\lor BinSWD_{s}\] ## VI Data Management Module In this section, we present the data storage module, taking into account the privacy mechanisms shown in Figure 3. ### _Sensitivity Level and Protection of Data_ Four common terms describe the nature of the attributes present in a dataset on the basis of sensitivity level and relation to an entity: identifiable, quasi-identifiable, sensitive, and insensitive attributes. Preserving the privacy of a piece of data that is sensitive, identifiable, or quasi-identifiable is a critical challenge. De-identification and anonymisation of data maintain a low confidence threshold for linking sensitive information to an individual [21]. The most widely adopted privacy models, such as k-anonymity, l-diversity, (X,Y)-anonymity, LKC-privacy, and t-closeness, can be implemented using various anonymisation operations, such as noise addition, generalisation, shuffling, and perturbation [22][23][24]. ### _Confidentiality Measurement and Data Transformation_ According to the common laws of confidentiality and data protection (Human Rights Act 1998 and Data Protection Act 1998), private data cannot be disclosed without explicit consent, a legal requirement, or public interest.
However, these laws are based on trust, which can be easily spoiled by malicious insiders with legitimate access to confidential data. Therefore, we required data confidentiality or transformation that can prevent an individual's personal and sensitive information from someone with legitimate access to data. Data confidentiality and transformation functions within the system are defined as follows: \[\text{Data Confidentiality = }\alpha\ \ \ \ \ \therefore\ 0<=\alpha<=1\] The range of \(\alpha\) from 0 to 1 measures the level of data confidentiality. This clearly indicates that \(\alpha\) = 1 is fully confidential data whereas, \(\alpha\) = 0 is less confidential and fully anonymised data. \[\text{Data Transformation }(F_{T}())=\{F_{D_{1}}\cdots F_{D_{n}},F_{A_{1}} \cdots F_{A_{m}}\}\] \[F_{T}(Obj_{i})_{\alpha_{x}}=(Obj_{j})_{\alpha_{y}}\ \ \ \ \therefore\ \alpha_{y}<\alpha_{x}\] Data transformation \((F_{T}())\) is the set of de-identification \((F_{D})\) and anonymisation \((F_{A})\) techniques that decreases the confidentiality level and transforms the original data into anonymised data. #### Vi-B1 Data De-identification Firstly, we consider the original data records as an object of the organization that contains the identifiable and sensitive information of an individual along with other attributes. This data is to the core confidential and must be accessible to the minimum number of roles. The Original Data within the system is defined as follows: \[Obj_{1}=OD_{\alpha_{x}}\ \ \ \ \therefore\ \alpha_{x}=1\text{ (Fully Confidential Data)}\] Fig. 3: Data Privacy Management Secondly, after applying the de-identification function \((F_{D})\), this original data is converted into a de-identified data object. De-identification decreases the confidentiality level of the OD. \[F_{D}(OD)_{\alpha_{x}}=(DD)_{\alpha_{y}}\ \ \ \ \ \therefore\alpha_{y}<\alpha_{x}\] \[Obj_{2}=(DD)_{\alpha_{y}}\ \ \ \ \ \text{(Less Confidential Data)}\] #### Vi-A2 Data Anonymisation Thirdly, we applied anonymisation techniques on the de-identified data so that the identity of an individual cannot be revealed through linkage attacks. This data warehouse is fully anonymised and their level of confidentiality is \(\approx 0\). \[F_{A}(DD)_{\alpha_{y}}=(AD)_{\alpha_{z}}\ \ \ \ \ \therefore\alpha_{z}\approx 0\] \[Obj_{3}=(AD)_{\alpha_{z}}\ \ \ \ \ \ \text{(Fully Anonymised Data)}\] ## VII Case Study: Model Illustration in Healthcare Adopting the advanced technology for Electronic Health Records (EHR) not only revolutionizes the way to treat diseases but also empowers many other sectors like; insurance companies, law enforcement agencies, pharmaceutical, and other product-selling companies etc. Based on the sensitivity, usability and multiple access of EHR, we select the healthcare sector for model illustration. ### _Data Objects_ The defined data warehouses (ODW, DDW and ADW) are considered as the objects of healthcare organisation and must be protected from unauthorised access. ### _Roles and Their Responsibilities_ According to the primary functions of the data science life-cycle we define certain roles and grant them access permissions regarding their responsibilities. The defined roles and their responsibilities are explained as follows (Role = R): **R1: Data Collector:** Is responsible for the acquisition of patients' data from different sources like; IoT devices, sensors, patients' health documents, physically attending patients etc. 
Data collection tasks can be performed using mobile apps, web services, etc.; there is no need to access any data warehouse. **R2: Data Integration Officer:** Key concerns are data quality and the design and implementation of data integration applications. This role is responsible for integrating data types and formats into a single location known as the ODW while maintaining data quality. **R3: Data Privacy Officer:** This specific role within the data management system is required for data protection. Their key responsibility is to apply data de-identification and anonymisation functions on the ODW. **R4: Data Controller:** Involves managing the data about data (metadata), where this "other data" generally refers to data models and structures, not the content. This role is ultimately accountable with regard to the definition, data quality and value of data in a given subject area. **R5: Data Analyst:** By accessing the de-identified data, the data analyst is responsible for structuring the raw data, formulating or recognising patterns in the data through mathematical and computational algorithms, and using standard statistical tools to analyse and interpret the data. **R6: Data Scientist:** Apart from the work done by the data analyst, the data scientist performs model creation and designs new algorithms that can solve the specific research problems of the organisation. This role can collaborate with similar roles in other organisations in order to provide insights about business performance and support decision-making via sharing anonymised data. **R7: End Users (Patients, Doctors, etc.):** End users are not directly involved in the functions of the data management system, but they can access their individual records through the interfaces defined by the healthcare organisation, such as mobile or web services, in-person services, etc. \[\{U_{i}\}\in[R1,R4,R7]\mid U_{i}[F_{d_{i}}(ODW,DDW,ADW)]=\Phi\] ### _Equivalent Roles Classes and Binary Walls Creation_ We assigned the roles to their particular class with respect to the access privileges they have on data objects. Table II represents the model elements. #### Vi-C1 Data Management Class (DMC) The Data Management Class contains the roles that can access the object (ODW) of this class. \([DMC]=\{R2,R3\}\) #### Vi-C2 Data Analyst Class (DAC) The Data Analyst Class contains the roles that can access the "De-identified Data Warehouse (DDW)" to perform their responsibilities. \([DAC]=\{R5\}\) #### Vi-C3 Data Scientist Class (DSC) The Data Scientist Class holds the roles that can fulfil their responsibilities by accessing the object ADW. \([DSC]=\{R6\}\) Given their access privileges to the data warehouses, DMC, DAC, and DSC are in conflict with each other. ### _Illustration Queries_ We use the following sequence of queries to validate our system. All the queries are performed on binary walls. **Q1**: Subject S1 needs to perform a writing operation on the object ODW (original data warehouse). S1 can access their required object after verifying the access authorisation condition. \[\{100\}\wedge\{011\}=\{000\}\text{ AND }\{011\}\wedge\{100\}=\{000\}\] After access is granted to S1 and the write operation is performed on ODW, the object walls are updated as: \[BinOWRC_{ODW}=\{100\}\vee\{100\}=\{100\}\] \[BinOWCRC_{ODW}=\{011\}\vee\{011\}=\{011\}\] **Q2:** Subject S3 wants to perform a reading operation on the object DDW.
Their access is denied, because conflicting subjects cannot access conflicting objects. However, there is no need to update the walls because access is denied. \[\{001\}\wedge\{101\}\neq\{000\}\text{ AND }\{110\}\wedge\{010\}\neq\{000\}\] ## VIII Conclusion Today's world is a knowledge economy, where how much you know is proportional to how much data you have. In a typical data-owning organisational structure, there are a number of roles/actors to whom individuals' information at each data architectural layer is accessible, and the risks associated with these insiders are far more difficult to manage because of their legitimate access to the system. Hence, we need to formulate a privacy-preserving system to protect the identity and sensitive content of the data not only from external but also from internal malicious intruders. This paper first regulated the actors/users of the organisation by adopting the strategy of RBAC. Secondly, the concept of CWSP is implemented as a firewall mechanism to restrict unauthorised access by users. Lastly, we proposed a comprehensive data management architecture composed of the necessary components of data de-identification and anonymisation. Therefore, our model is useful and significant in the sense that it provides security and privacy along each functional layer of the data architecture. Furthermore, this system is not subject to faulty human intentions, because it is an automatic system that imposes limits on administrators' roles with respect to regulating the accessibility of data, preventing them from exercising their authority maliciously. It also counters the risk of mistakes by administrators. ## IX Acknowledgment This work is supported by Science Foundation Ireland under grant number \(SFI/12/RC/2289_{P}2-\) Insight Centre for Data Analytics.
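As a supplementary sketch (hypothetical Python, not part of the paper), the two illustration queries from Section VII can be replayed with the binary walls of Section V; the bit order (DMC, DAC, DSC) and the assignment of S1 and S3 to particular classes are our own assumptions for the example:

```python
def access_allowed(sw_granted, sw_denied, ow_rc, ow_crc):
    """Section V access condition with walls encoded as bit masks:
    SWG AND OWCRC must be empty, and SWD AND OWRC must be empty."""
    return (sw_granted & ow_crc) == 0 and (sw_denied & ow_rc) == 0

# Bit order: DMC, DAC, DSC  ->  0b100 means {DMC}
ODW = {"rc": 0b100, "crc": 0b011}          # object wall of the original data warehouse
DDW = {"rc": 0b010, "crc": 0b101}          # object wall of the de-identified warehouse

S1 = {"granted": 0b100, "denied": 0b011}   # e.g. a subject from the DMC class
S3 = {"granted": 0b001, "denied": 0b110}   # e.g. a subject from the DSC class

# Q1: S1 writes to ODW -> allowed; the object wall then absorbs S1's wall (bitwise OR)
if access_allowed(S1["granted"], S1["denied"], ODW["rc"], ODW["crc"]):
    ODW["rc"] |= S1["granted"]             # {100} v {100} = {100}
    ODW["crc"] |= S1["denied"]             # {011} v {011} = {011}

# Q2: S3 reads DDW -> denied; no wall update is performed
assert not access_allowed(S3["granted"], S3["denied"], DDW["rc"], DDW["crc"])
```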
2304.07137
Canalisation and plasticity on the developmental manifold of Caenorhabditis elegans
How do the same mechanisms that faithfully regenerate complex developmental programs in spite of environmental and genetic perturbations also permit responsiveness to environmental signals, adaptation, and genetic evolution? Using the nematode Caenorhabditis elegans as a model, we explore the phenotypic space of growth and development in various genetic and environmental contexts. Our data are growth curves and developmental parameters obtained by automated microscopy. Using these, we show that among the traits that make up the developmental space, correlations within a particular context are predictive of correlations among different contexts. Further, we find that the developmental variability of this animal can be captured on a relatively low dimensional phenotypic manifold and that on this manifold, genetic and environmental contributions to plasticity can be deconvolved independently. Our perspective offers a new way of understanding the relationship between robustness and flexibility in complex systems, suggesting that projection and concentration of dimension can naturally align these forces as complementary rather than competing.
David J. Jordan, Eric A. Miska
2023-04-14T14:03:06Z
http://arxiv.org/abs/2304.07137v3
# **Canalisation and plasticity on the developmental manifold of _Caenorhabditis elegans_** ###### Abstract How do the same mechanisms that faithfully regenerate complex developmental programs in spite of environmental and genetic perturbations also permit responsiveness to environmental signals, adaptation, and genetic evolution? Using the nematode _Caenorhabditis elegans_ as a model, we explore the phenotypic space of growth and development in various genetic and environmental contexts. Our data are growth curves and developmental parameters obtained by automated microscopy. Using these, we show that among the traits that make up the developmental space, correlations within a particular context are predictive of correlations among different contexts. Further, we find that the developmental variability of this animal can be captured on a relatively low dimensional _phenotypic manifold_ and that on this manifold, genetic and environmental contributions to plasticity can be deconvolved independently. Our perspective offers a new way of understanding the relationship between robustness and flexibility in complex systems, suggesting that projection and concentration of dimension can naturally align these forces as complementary rather than competing. Correspondence: dj333 (at) cam.ac.uk ## Introduction Biological systems are remarkable for their ability to generate reproducible macroscopic dynamics from the complex interactions of large numbers of microscopic components. For example, in animals, the development of an entire organism from a single cell proceeds faithfully each generation even in the presence of environmental fluctuations and molecular noise. Such robustness arises at many spatial and temporal scales, e.g., gene expression patterns give rise to reproducible cell differentiation [1], neural and muscular activity generate locomotion [2], and interactions between individuals of different species give rise to surprisingly reproducible ecological dynamics [3]. This robustness is called _canalisation_ [4] and dynamics that are canalised are said to be _homeorhetic_ [5]. While robust, canalised processes nevertheless allow for important variability; stem cell populations generate diverse tissue types, behaviors respond to stimuli and environmental cues, and populations adapt to changing environments. The structure of this macroscopic variability, however, is much more constrained than the variability intrinsic to the microscopic processes that underlie it. Although gene expression determines the dynamics of the cell cycle, variations in these dynamics are largely insensitive to stochastic fluctuations in the levels of the thousands of proteins. Thus, the ways in which these macroscopic phenotypes can vary are relatively "low-dimensional" compared to the ways in which the individual components can vary. This projection of variations into a lower dimensional space provides a way for biological systems to be both robust and flexible. Robustness arises because most variations manifest as excitations onto relatively few phenotypic modes, while flexibility is permitted along these modes. As an example, consider how facial diversity can be generated by the combination of relatively few _eigenfaces_ [6]. Here the eigenfaces are the modes in this system, and varying the weights of each mode can capture many diverse faces. However, even large variations in these weights are unlikely to produce non-face images.
Low dimensional representations of phenotypic diversity are ubiquitous in living systems. Variations in phenotypes that tend to a stable state during development have been successfully represented in low-dimensional phenotypic spaces called "morpho-spaces", for example, morphological traits like the arrangement of flowers on a plant [7], the shapes of finch beaks [8], of coiled sea-shells [9], and even the shape of the influenza antigen [10]. Additionally, while the dimensionality of dynamic, time-varying, and responsive phenotypes is more difficult to define rigorously [11], it has been shown in some cases to be low-dimensional. Some recent examples include the crawling behavior of _C. elegans_[12] and the neuronal dynamics that underlie it [13], the swimming behavior of both eukaryotic (_Tetrahymena thermophila_) [14] and prokaryotic (_Escherichia coli_) single celled organisms [15], and the transcriptional trajectories of cells during fate determination [16]. Concentration of dimension may be an intrinsic property of systems where robustness and control of the macroscopic observables are required [17] and may arise from constraints imposed by steady state growth [18; 19], or trade-offs between phenotypic archetypes suited for different tasks [20]. Here we propose that the process of canalisation may be superseded by a more general process of phenotypic concentration of dimension. That is, genetic networks evolve such that most variations will be projected onto a low dimensional manifold, and that this in turn provides both canalisation and phenotypic plasticity. Concentration of dimensions manifests as the co-variation of traits in a high dimensional multi phenotype space. This structure of the co-variation is called _phenotypic integration_[21] and its geometric properties can be captured by a phenotypic manifold, i.e., a lower dimensional space that faithfully captures most of the variation observed in a higher dimensional space. The process of finding such a manifold is called dimensionality reduction. The determination of this space and how it is measured can be essential to finding the dominant modes of phenotypic variation, and thus for assessing the degree of canalization. A recent example comes from the developmental program of the wing of the fruit-fly _Drosophila melanogaster_[22], which used landmark free morphometrics to uncover a dominant mode of variation that was not apparent from traditional landmark based techniques. In this work, we use a custom built automated imaging system to map the phenotypic space of growth and development of _C. elegans_. Using this system, we recorded 673 individual growth curves during the \(\approx 70\) hours of their development from eggs to reproductive adults, and manually recording the timing of egg hatching and reproductive maturation in single animals. To construct as complete a manifold as possible, we sample from extant variations in different genetic and environmental contexts, including both natural genetic variation and single gene mutants. While the most unbiased approach to sampling genetic variation would be random mutagenesis, because of the immense sampling space, efficient sampling cannot be done. For this reason, we chose to sample "wild isolates" of _C. elegans_, i.e. strains collected from nature, and whose collections of genetic changes have been subjected to natural selection. For environmental diversity, we chose to alter the animal's diet; _C. 
elegans_ feed on bacteria, and we chose a collection of bacteria that included the standard laboratory bacteria _E. coli_ as well as bacteria isolated from natural sites where _C. elegans_ were also collected [23]. Measurements were collected for three _C. elegans_ wild-isolates, each fed on one of four different bacterial diets, and two mutants of the _C. elegans_ laboratory strain N2. Using these data, we demonstrate that the space of developmental trajectories can be captured by a low-dimensional manifold and show a correspondence between the directions of fluctuations in a fixed context the directions in which populations will shift due to genetic or environmental changes. We find that the manifold obtained using nonlinear dimensionality reduction techniques captures developmental variations in a way that allows one to neatly decompose the contributions of genetics and environment independently, with the major mode of variation corresponding to environmental shifts and the second mode corresponding to genetic changes. Figure 1: **(a)** A schematic of the imaging apparatus. Synchronized eggs are transferred into the mini wells at \(t_{0}\). Hatching \(t_{hatch}\) and egg-laying time \(t_{tail}\) are manually recorded and give development time \(t_{dev}\). Example images are shown for 10 time points (a, lower), with the computed outline of the worm shown (yellow). **(b)** An example image an adult _C. elegans_ and an egg (scale bar \(200\mu m\)). A phylogenetic tree pruned from CeNDR [24] database of some common wild isolates as well as the three strains used in this work, marked in red. In addition, micro-graphs of the bacteria used as food sources imaged at 100x magnification (scale bar \(10\mu m\)). **(c)** The full developmental time course of an animal from hatching through adulthood. The blue points show calculated length over time, with 10 specific points highlighted (red plus) corresponding to each of the images in (a, lower). **(d)** shows the length measurements and the computed logistic best fit curves from three example time courses. The parameters of each logistic fit are shown to the left, with each color corresponding to the relevant data and curve. ## Results Characteristics that change over time are more difficult to quantify and compare than those that are static or near a steady state. Often, dynamic phenotypes such as these are measured and compared at a single fixed time, but in this case, proper alignment or synchronization can be challenging. Ideally, one would like to compare the time series of time-varying phenotypes and compare these directly. The growth of a multi-cellular organism is a good example of such a time-dependent phenotype. Ideally, one would like to compare growth curves aligned to an unambiguous starting time. To this end, we designed and constructed a low-cost parallel imaging platform capable of measuring _C. elegans_ growth for 60 individual animals simultaneously over the course of their \(\approx 70\) hour development at a temporal resolution of \(~{}0.001\) Hz, resulting in a time series of \(\approx 200\) observations per animal. In addition to length and area measured automatically, egg hatching, and first egg-laying by mature adults are manually recorded. These not only provide estimates of the animals reproductive development, but also provide a fixed time to which the time series can be aligned. Reproductive development is in general correlated with growth but need not necessarily be. 
Because animals grow from approximately \(0.2\) mm to \(1\) mm in length during their lifetime, any wide-field system capable of imaging many isolated worms simultaneously would lack the necessary resolution. To solve this, we developed a system that uses a fixed lens and USB camera that is programmed to move between \(6\)mm diameter custom made wells using an XY plotting robot. Sample images from the time series are shown (Fig. 1a) (inset, 1-8), along with the associated time series and the best fit logistic function of the form: \[l(t)=\frac{l_{max}}{1+e^{-r(t-A)}} \tag{1}\] as determined by non-linear least squares fitting, giving three parameters for each curve (Fig. 1d). We used this system to record development in a collection of _C. elegans_ isolated from the wild and fed on various bacteria, some of which were collected from sites where _C. elegans_ were found. In total, we assayed five unique _C. elegans_ genotypes fed on four different bacterial diets (Fig. 1b) and collected a total of \(673\) growth curves, in addition to developmental data, across these conditions. The developmental data consist of the duration of _ex-utero_ embryonic development and the duration of reproductive maturation. These are measured as the time from egg-laying to egg-hatching \(t_{hatch}\), and the time from hatching until the animal grows to reproductive age and lays its first egg \(t_{dev}=(t_{laid}-t_{hatch})\) (Fig. 1a). While these durations were single scalar quantities, the growth curves consisted of hundreds of points. Dimensionality reduction could have been performed directly on these high-dimensional vectors after appropriate regularization, but instead these growth curves were fit with the logistic function (Eq. 1), which performed similarly well and whose parameters are easier to interpret (See Supplemental Information). The three parameters, which we call the maximum length \(l_{max}\), the growth rate \(r\), and the shift \(A\), determine the saturation value of the growth curve, the growth rate at the inflection point, and the temporal shift of the inflection point, respectively. For each growth curve we have an independent measurement of the reproductive development time \(t_{dev}\) and of the animals length at this time \(l(t_{laid})=l_{dev}\) (Fig. 2a, squares). We can use these additional parameters to rescale each growth curve. First, we divide the length as a function of time by \(l_{dev}\), yielding: \[\hat{l}(t)=\frac{l_{max}/l_{dev}}{1+e^{-r(t-A)}}\] Then, rescaling time \(\hat{t}\to t/t_{dev}\) \[\hat{l}(\hat{t})=\frac{l_{max}/l_{dev}}{1+e^{-rt_{dev}(\hat{t}-A/t_{dev})}} \tag{2}\] Thus, the normalization simply rescales the fit parameters from \([l_{max},r,A]\rightarrow[l_{max}/l_{dev},r\cdot t_{dev},A/t_{dev}]\) yielding unit-less quantities for the fit parameters. The fitting procedure on the normalized curves recovers the rescaled fitting parameters from the unnormalized data (See Supplementary Information). Interestingly, we find that rescaling the curves to an independently measured parameter, the reproductive developmental duration, seems to collapse the growth curves, consistent with temporal scaling observed previously [25], e.g. (Fig. 2b). 
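A minimal sketch of this fitting and rescaling step is given below (hypothetical Python using SciPy; the synthetic data, starting values and variable names are illustrative assumptions, not the authors' analysis pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, l_max, r, A):
    """Eq. 1: three-parameter logistic growth curve."""
    return l_max / (1.0 + np.exp(-r * (t - A)))

def fit_and_rescale(t, length, t_dev, l_dev):
    """Fit [l_max, r, A] by non-linear least squares, then rescale to the
    unit-less parameters [l_max/l_dev, r*t_dev, A/t_dev] used in Eq. 2."""
    p0 = [length.max(), 0.1, np.median(t)]               # rough starting values
    (l_max, r, A), _ = curve_fit(logistic, t, length, p0=p0, maxfev=10000)
    return l_max / l_dev, r * t_dev, A / t_dev

# Example with synthetic data (time in hours, length in millimetres)
t = np.linspace(0, 70, 200)
length = logistic(t, 1.0, 0.15, 30) + np.random.normal(0, 0.01, t.size)
print(fit_and_rescale(t, length, t_dev=50.0, l_dev=0.9))
```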
To compare the variance of all of the rescaled growth curves to the variance of the unrescaled growth curves, the raw data was normalized to the mean developmental duration and the mean and standard deviation of the resulting growth curves were compared, showing that rescaled growth curves "collapse" and that their resulting standard deviation is smaller than for the normalized raw data (Fig. 2c). The growth curves of _C. elegans_ of different genetic backgrounds and grown on different food sources differ in their maximum length, their growth rate, and in the shift, as well as in the durations of _ex-utero_ development and reproductive development. However, we find that these parameters do not vary independently. For example, recent work has shown a negative correlation between growth rate and developmental duration in _C. elegans_ [26], and we also observe this in our data; the authors suggest that this may be a mechanism for reducing variability in adult size. If correlations between traits arise from a mechanism for control such as this, we would expect variation in individuals to be correlated in the same way as variations between populations. Strikingly, we find that if parameters are correlated among individuals in one context, these correlations are more likely to appear also between populations in different contexts (to see how such correlations may arise, refer to Box 1). For example, there is a strong negative correlation between the maximum length and the growth rate among individuals in all combinations of conditions. Figure 2: (continued) for each. The data can be re-scaled to plot length \(l\) as a fraction of development length \(\hat{l}=l/l_{dev}\) and time \(t\) as a fraction of development time \(\hat{t}=t/t_{dev}\) as shown in **(b)**. Re-scaling the data in this way changes the fit parameters of each logistic function. \([l_{max},r,A]\rightarrow[l_{max}/l_{dev},r\cdot t_{dev},A/t_{dev}]\). The re-scaling of the curves in **(b)** appears to collapse the growth curves. This seems to be true in general, as the growth curves re-scaled by their individual \(t_{dev}\) and \(l_{dev}\) show a smaller standard deviation (**(c)** black) than the raw data (**(c)** red) (which is normalized to the average \(t_{dev}\) and \(l_{dev}\) to facilitate plotting on the same scale). Box 1: Illustration of relationships between genetic architecture, dimensionality, and the correlation structure of traits. **(a)** A simple genetic circuit where green \(\phi_{1}\) and red \(\phi_{2}\) phenotypes are controlled by a single transcription factor \(\chi_{1}\). **(b)** Different populations (P1-P3) have different mean expression of \(\chi_{1}\) due to either genetic or environmental changes, changing the brightness of the yellow color. Fluctuations in \(\chi_{1}\) around these mean values lead to correlated noise within populations **(c, left)**. The mean values of \(\phi_{1}\) and \(\phi_{2}\) are also correlated between populations **(c, right)**, in the same way. This leads to a one dimensional phenotypic manifold **(b, grey line)**, which is consistent with the single knob \(\chi_{1}\). In **(d)**, however, there are two independent transcription factors \((\chi_{1},\chi_{2})\) that control \(\phi_{1}\) and \(\phi_{2}\). In each population P1-P3 in **(e)**, \(\chi_{1}\) sets the mean value of both \(\phi_{1}\) and \(\phi_{2}\) in a correlated manner, changing the overall brightness.
However, fluctuations in \(\chi_{2}\) introduce anti-correlated noise within populations **(f, left)**, shifting the spectrum to either green or red. The between population correlation is dominated by the changes in \(\chi_{1}\), resulting in correlated changes in both \(\phi_{1}\) and \(\phi_{2}\) **(f, right)**. The within population anti-correlation expands the dimensionality of the manifold **(e, grey plane)**. Dimensionality can be expanded also by uncorrelated noise both within and between populations (not illustrated). Figure 3: **(a)** An example showing a pair of developmental parameters \(\hat{l}\) and \(\hat{r}\) that are correlated within each population measured. The scatter plot shows the z-scores of these parameters with respect to each condition's mean and variance, along with the best fit linear regression for that condition (colors indicate the food source). There is a significant within population negative correlation of these parameters in each condition. **(b)** The z-score is then calculated with respect to the mean and variance for all conditions taken together. These are then grouped by condition and the mean and standard deviation of the z-scores of both parameters are plotted (colors indicate the food source). The means are also correlated between conditions, as indicated by the confidence interval (b, green dashed lines) of the slope of the linear regression (b, red line). For other traits, there is no significant within population correlation between traits, e.g. between the development time \(t_{dev}\) and \(\hat{l}\) **(c)**. Similarly, in **(d)**, the between population means are also not significantly correlated. **(e)** Summarizes the within and between population correlation coefficients for all pairs of developmental parameters; the mean correlation coefficient for each of the within condition groups is plotted with error bars showing the standard deviation. For the correlation among the mean values, there is only a single value, thus there are no vertical error bars. The red dashed line indicates equivalence, showing that the within population correlations are largely, but not exclusively, predictive of the between population correlations. Likewise, the mean values of these parameters among populations grown in different conditions are also negatively correlated (Fig. 3 a,b). In contrast, the length and the duration of reproductive development are not strongly correlated among individuals, and neither are they correlated between populations (Fig. 3c,d). Computing the correlation coefficient (\(\rho\)) among individuals in a sample in fixed conditions, and plotting it against the value computed between population means among different conditions shows that this seems to be true in general for different pairs of parameters of _C. elegans_ development (Fig. 3e). The geometric structure of these correlations can be captured by a low dimensional manifold in the ambient phenotypic space. Non-linear principal components analysis [27] was used to determine the shape of this manifold from the ambient space of rescaled logistic parameters with the developmental durations included (Fig. 4a). The first two non-linear principal components capture 93% of the variance. While this dimensionality reduction may seem modest, in fact, the true dimensionality of the growth curve space is higher, with some of the dimensions having already been compressed by the logistic fit. Within this embedding of growth curves, animals tend to cluster in \(\phi_{1}\) according to their food source (Fig.
4b) and in \(\phi_{2}\) according to their genetic background (Fig. 4c) (note only natural genetic variants are shown). This decomposition can be quantified with a linear regression model predicting either the NLPCA parameters (\(\phi_{1},\phi_{2}\)) or the rescaled logistic fit parameters (\(\hat{l},\hat{r},\hat{A}\)) using the genetic or environmental conditions as independent variables. Linear regression models of the form: \[y(G,E)=\beta_{0}+\sum_{i}\beta_{1,i}G+\sum_{j}\beta_{2,j}E+\epsilon\] were fit, where \(y\in(\phi_{1},\phi_{2},\hat{l},\hat{r},\hat{A})\), \(G\) and \(E\) are indicator variables which take the value of 1 or 0 depending on the strain \(i\) and food \(j\) in that condition, and \(\epsilon\) are the residuals to be minimized. In Fig 4b and Fig 4c, the distributions grouped by environment and genotype respectively seem to have different means. We can quantify this using the F-statistic, which is the variance of the between group means divided by the mean of the within group variances. Using this, we can see that grouping the data by environment does in fact give distinct distributions in \(\phi_{1}\) and grouping by genotype gives distinct distributions in \(\phi_{2}\) (Fig. 4e). However, a linear regression of the parameters before dimensionality reduction does not give a clean separation between genotype and environment. The eigenfunctions of the non-linear principal components analysis cannot be derived analytically, but can be investigated by varying each component while keeping the others fixed. If \(\phi_{2}\) is fixed (\(\phi_{2}=0\)), the shapes of the resulting growth curves as \(\phi_{1}\) is varied reflect the strong anti-correlation between the parameters \(l_{max}\) and \(r\). For positive values of \(\phi_{1}\), animals grow quickly during their maximum growth phase but are ultimately shorter (Fig. 4f, blue curve). In contrast, for negative \(\phi_{1}\), animals grow more slowly at their peak, but are longer as adults (Fig. 4f green curve). This may indicate a potential trade-off between the speed of growth during development and the ultimate size achieved by adult animals. In contrast, variation along \(\phi_{2}\) results both in growth that is slower and in animals that are shorter as adults (Fig. 4g, green curve) or both faster and longer (Fig. 4g, blue curve). Interestingly, variation _along_ \(\phi_{1}\) does not seem to affect the duration of reproductive development as much as along \(\phi_{2}\), as shown by the color of the plotted points (Fig. 4d). Points away from the boundary in the positive \(\phi_{2}\) direction correspond to slower reproductive development. ## Discussion Biological systems are remarkable in part because they seem to be both incredibly robust and simultaneously flexible. Biological processes faithfully regenerate complex developmental programs every generation, yet these same processes have given rise to an overwhelming diversity of complex matter. Canalisation and developmental plasticity seem to be opposing forces. One can imagine that these forces ebb and flow, acting in sequence. Cryptic variation, which can be suppressed or revealed in certain circumstances, e.g. by the chaperone HSP-90 [28], is a good example of this, and it has been shown that disruption of the chaperone at the molecular level acts as the switch from robustness to plasticity, even in complex phenotypes such as the morphology of an animal [29].
However, this may be only part of the story. Figure 4: **(a)** The z-scores of the three re-scaled logistic fit parameters are shown in a 3-d scatterplot (blue dots). These points lie close to a curved 2-d manifold which was found by performing non-linear principal components analysis (NLPCA). The flattened manifold is shown in **(d)** as a scatterplot where the color of the points indicates \(t_{dev}\) for each individual. The mean \(\phi_{1}\) and \(\phi_{2}\) for each condition are shown as a combination of symbols (_C. elegans_ strain) and color (bacterial food source). On this manifold, \(\phi_{1}\) seems to separate the environmental conditions as shown in **(b)** by the marginal distribution over bacterial food sources. In contrast, the marginal distributions **(c)** over the _C. elegans_ strains show separation in \(\phi_{2}\). Marginal distributions were computed with a kernel density estimator. This separation in \(\phi_{1}\) and \(\phi_{2}\) can be quantified by computing the F-statistic for a linear regression model taking genotype and environment as regressors **(e)**. \(\phi_{1}\) regresses primarily on environment and \(\phi_{2}\) on genotype. Interestingly, regressing the three logistic fit parameters without first performing NLPCA results in mixed regression on both genotype and environment for each. In **(f)**, to determine the effect of varying \(\phi_{1}\) on the shape of the growth curve, \(\phi_{2}\) was fixed to 0 and \(\phi_{1}\) was varied through a range, as indicated by the colorbar, with coordinates being converted back from the unit-less quantities. Similarly, in **(g)**, \(\phi_{1}\) is fixed and \(\phi_{2}\) varied. Canalisation and flexibility may in fact be complementary properties of systems that are organized to generate low dimensional phenotypic manifolds. Concentration of dimensionality is an intrinsic property of complex high-dimensional systems, even if only trivially because of the Johnson-Lindenstrauss lemma [30]. However, the emergence of low dimensions in biological systems is often orders of magnitude greater than random projection would predict [17]. At the heart of concentration of dimensionality lies projection. We propose that it is natural to view robustness and plasticity as analogous to projection of variation onto, or orthogonal to, a low dimensional manifold, respectively. Given a distribution of variations that arise from environmental heterogeneity, stochastic fluctuations, and genetic mutations, whether a given phenotype or set of phenotypes is buffered or responsive will depend on the projection function from high dimensional chemical space to the low dimensional phenotype space. In cases where a phenotype is highly buffered to some set of variations, almost all those variations will be orthogonal to the phenotypic manifold, and variations for which the phenotype is plastic will have large projections onto the manifold. In this scenario one might envision the process of evolution as one of shaping the projection function, and thus the resulting phenotypic manifold, given the statistics of the distribution of variations. The process of adaptation can then be viewed as the process of learning the optimal projection over the prior distribution of fluctuations, i.e. an environment to phenotype rather than genotype to phenotype map [31], although the environment to phenotype map is certainly a product of the architecture of the gene regulatory networks of the organism.
It is tempting to look for the genetic basis of the emergence of low dimensional phenotypes, especially in _C. elegans_, which has many genetic tools available. However, we believe it is unlikely that a single gene or a genetic module will underlie this process. Rather, it will likely be a property of the entire network of molecular interactions that underlie complex phenotypes. Nevertheless, there may be important connections between the structure of phenotypic manifolds and evolutionary dynamics. It has been proposed that movement along, rather than away from, such manifolds constitutes a path of "least resistance" for genetic changes which might fix phenotypic variations. While some evidence supports this hypothesis, e.g. a study of the integration of phenotypic and life-history traits in the flowering plant _Arabidopsis thaliana_[32], other evidence from the morpho-space of the greenfinch _Carduelis chloris_ suggests that within population correlations may not predict between population correlations [33]. This discrepancy may be due in part to how traits are quantified, how the dimensionality reduction is performed, and to whether the populations included in the analysis sample natural genetic variation, genetic mutations, environmental variations, or combinations of all three. While the question of when and in what conditions the "directions" of phenotypic plasticity are predictive of the directions of subsequent genetic evolution remains open, phenotypic plasticity in canalized traits has been shown to be associated with rapid evolutionary diversification. An example comes from the polyphenism in the feeding structures of nematodes, in which the acquisition of mouth form plasticity is associated with an increase in evolutionary rates. Interestingly, even if only one of the two alternative forms is subsequently fixed, the underlying genetic architecture seems to maintain an expanded phenotypic manifold which facilitates future exploration [34]. While traditional forward and reverse genetics may not be the ideal approach, artificial evolution for expanded phenotypic manifolds seems promising, especially with an automated system that can map individuals to the phenotypic manifold in real time and use this as a selection criterion. In physics, statistical mechanics provides tools to connect microscopic and macroscopic dynamics and to describe the behavior of the macroscopic observables that arise from systems with large numbers of identical interacting components. Examples include the Navier-Stokes equation for fluid flow or the Fokker-Planck equation for diffusion. The coarse graining of many interacting degrees of freedom into a few dominant modes can be formulated precisely in terms of projection operators [35, 36, 37]. These projections often make use of time scale separations. For example, the random motion of a Brownian particle results from its many collisions with surrounding molecules, but these collisions occur very fast and can thus be treated as white noise. Box 2: Illustration of robustness and flexibility resulting from projection onto a low dimensional manifold. **(a)** In this example, 64x64 pixel leaf images represent a high dimensional environment to be sensed. The discrete cosine transform is then used as a projection operator. Past evolutionary history has optimized this projection function to recognize the top left leaf in panel **(a)**, and for this, the top 35 modes capture 90% of the energy. The retained modes are shown in **(b, inset)**.
In **(a)**, we can see a variety of environmental inputs and their reconstructions from their low dimensional representations; the first 10 mode weightings are shown as a color band below each image in **(a)**. Even though this projection operator evolved for the leaf in the top left, it reconstructs the different leaf images reasonably well. In particular, high frequency variation around the original leaf shape is buffered **(a, blue box)**, resulting in canalisation of the representation against such fluctuations in the environmental input. Furthermore, even distinct environmental inputs such as the flowers in **(a, orange box)** are reconstructed fairly well, demonstrating that this projection operator is robust, even in very divergent contexts. **(b)** shows the embedding onto the first two discrete cosine transform components. From this, we can visualize both the robustness and the plasticity associated with this projection operator. The leaves cluster separately from the flowers, revealing flexibility, and the two leaves that only differ in high frequency modes cluster together, showing robustness, with the variation between reconstructed representations being smaller than that of the inputs (See Supplemental Information). Here the first component is correlated with overall size and the second seems to separate leaves from flowers. It is interesting to note that the projection operator was not optimized in any way to distinguish leaves from flowers, and in fact, the eigenfunctions used are the generic ones from the DCT. Over evolutionary time, organisms could optimize not only which modes are retained to better capture the prior distribution over contexts, but also could optimize the shape of the eigenfunctions themselves. This panel depicts an environmental sensing mechanism, but equally important, though not shown here, would be the phenotypic execution mechanism. The structure of the network which maps the low dimensional representation of the environment back into a high dimensional phenotype will also be an important source of variability and affect both robustness and plasticity. For near equilibrium systems, the rigorous relations between the fluctuations arising from the many unobserved degrees of freedom and the evolution of the system's macroscopic observables are known collectively as Fluctuation-Dissipation relations [38, 39]. While biological systems are characteristically far from equilibrium, similar relations have been observed between phenotypic variability and evolutionary response [40, 41]. In fact, even in systems far from equilibrium, relations of this sort can arise similarly when there is a separation of both observed and unobserved degrees of freedom and of time-scales [42]. However, the origin of low dimensional dynamics and of fluctuation-dissipation like relationships in biological systems remains a mystery. A recent intriguing proposal for the origin of such relations and the emergence of low dimensionality in biological systems is the evolution of robustness. If the same evolutionary forces that increase robustness to noise also tend to increase robustness to genetic perturbations, this could account for both phenomena [18]. Recently, Murugan and colleagues have presented a model describing how mutational perturbations may be constrained by global epistasis to excite only a few slow or _soft_ modes, with examples from protein elasticity and gene regulatory dynamics [43].
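To make the Box 2 illustration concrete, the following is a minimal sketch (under our own assumptions, not the authors' implementation) of using the 2-D discrete cosine transform as a projection operator: a set of modes is chosen from a reference input, and any new input is mapped to the weights of those modes and reconstructed from them. The 64x64 input size and 35 retained modes follow the numbers quoted in Box 2.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_projection_mask(reference_image, n_modes=35):
    """Choose which DCT modes to retain, based on a reference input
    (the 'top left leaf' of Box 2 in this analogy)."""
    coeffs = dctn(reference_image, norm="ortho")
    idx = np.argsort(np.abs(coeffs).ravel())[::-1][:n_modes]
    mask = np.zeros(coeffs.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(coeffs.shape)

def project_and_reconstruct(image, mask):
    coeffs = dctn(image, norm="ortho")
    low_dim = coeffs[mask]                       # low-dimensional representation
    reconstruction = idctn(np.where(mask, coeffs, 0.0), norm="ortho")
    return low_dim, reconstruction

# Two inputs differing only in high-frequency detail map to nearby points in
# the retained-mode space (robustness), while genuinely different shapes map
# to distant points (flexibility).
```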
Their work shows that such a relationship between mutationally induced and physically induced deformations is expected mathematically for protein elasticity. The existence of such slow modes may also be related to the seemingly universal emergence of _stiff_ and _sloppy_ modes from parameter space compression [44, 45]. In _C. elegans_, development has been shown to be controlled by a massive gene expression oscillator that is comprised of \(\approx 3700\) genes [46, 47]. The existence of such an oscillator evokes a direct analogy to projection operator techniques, as this network only has a few excitable oscillatory modes, similar to a Fourier transform or the cosine transform employed in Box 2. Recently, the components of a gene regulatory network that comprises a central clock to control this oscillator have been identified [48]. As a follow up to this work, we would like to assess how mutations in these core components affect how fluctuations in gene expression are projected onto the main oscillatory modes that control molting, and how this in turn manifests on the developmental manifold of _C. elegans_. In this work we have focused on variability resulting from combinations of dietary and genetic perturbations, and we have done so in only fixed environments. In the future, it will be interesting to perform experiments in which the developmental trajectory is perturbed by some impulse or step function and to observe the resulting relaxation. _C. elegans_ development is highly temperature dependent, and it would be interesting to first assay the developmental trajectories at different temperatures and then to perturb development using various temperature shift protocols. In addition, if we could find perturbations that tend to generate displacement in specific directions on the phenotypic manifold, we could test quantitatively for fluctuation response type relationships. The projection operator perspective leads one naturally to consider how organisms might learn their particular projection function. Evolution by mutation and selection will surely play a large part in this process, especially when an organism must adapt to changes to the prior distribution of variations, but there is no reason, in principle, that this operator cannot be shaped also by within generation "learning" and other non-genetic and epi-genetic feedback mechanisms. ## Conclusions Biological systems are remarkably robust, and yet the same mechanisms that underlie this robustness have also generated the incredible diversity of life on earth. In this work we attempt to reconcile these seemingly opposing forces by studying the structure of variability in the development of an animal in the contexts of different genetic backgrounds and environments. We have used high resolution imaging data of the growth of the nematode _C. elegans_ in environmental and genetic contexts to map the variability of development. We find that traits which are correlated within a particular context predict whether the mean values of those traits will be correlated among different contexts. Correlations between traits indicate that the true dimensionality of the system may be lower, and we find a parsimonious low dimensional representation of the variability, in which the contributions of genetic variation to the phenotype are separable from those of environmental variation by a simple linear model. We present a framework in which the emergence of low dimensionality provides a basis for both robustness and plasticity.
Concentration of dimensionality seems to be an intrinsic property of the chemical and molecular networks that generate living systems, and of complex dynamical systems in general. While beyond the scope of this work, we hope that the connection between projection operators in physical systems, which explain how reproducible coarse grained dynamics arise from the stochastic influences of innumerable microscopic components, may inform the theory in biology of how reproducible developmental dynamics arise from the interactions of similarly large numbers of bio-molecules. ## Methods Imaging hardwareThe imaging hardware consists of a single 3.2 MP monochrome camera (Flea3, Teledyne FLIR) mounted to the arm of an XY-plotting robot (Eleksdraw, Eleksmaker) which moves the arm using two stepper motors controlled by a GRBL stepper motor controller that interprets G-code sent via a USB serial connection using the MATLAB (R2018b, Mathworks) fprintf command. Returning to the same position was accurate to within a few hundred microns, and wells were recentered periodically by fitting a circle to an intensity-thresholded image of the well and zeroing the offset. Illumination was provided from above by an LED light panel. To maximize contrast, oblique illumination was blocked by a sheet of black acrylic in which an array of square holes was laser cut to allow the direct illumination to pass through. A 50mm f/1.4 lens (NMV-50M1, Navitar) was attached to the camera via a 40mm lens tube (CML40, Thorlabs). In this system, one pixel in the image corresponded to \(2.67\mu m\), giving an effective magnification of \(0.93x\). Detailed instructions and a parts list with suppliers are available upon request. Multiwell platesMultiwell plates were made by gently placing a pre-cut acrylic form into a standard 30mm Falcon petri dish filled with 5ml of liquefied NGM-Gelrite medium. The multi-well acrylic forms, 24mm square with a 2x2 grid of 6mm diameter circular wells 4mm from the edge and 10mm center to center, were cut from 3mm thick black acrylic using an LS 6090 PRO Laser Cutter (HPC Laser Ltd). After placing the multiwell plate into the molten solid media so that it rested on the surface, the plate was allowed to set and to dry for 1 hour covered at 20\({}^{\circ}\)C, and seeded with \(2\mu l\) of bacteria corresponding to approximately 18 million cells, as measured using a Petroff-Hausser Counting Chamber (Hausser Scientific). Temperature controlPlates were kept in an insulated, temperature controlled box during imaging, which itself was in a temperature controlled room set to 20\({}^{\circ}\)C. The actual average temperature in the room was 19.7\({}^{\circ}\)C. Temperatures were measured with a custom thermometer; the signal from a linear thermistor (Omega Engineering) was difference-amplified against a known voltage corresponding to 20\({}^{\circ}\)C. The amplified signal (Gain = 2) was recorded in MATLAB (Mathworks) from the analog input of a DAQ (Labjack). This was fed into a Proportional Integral Derivative (PID) control script whose output was a voltage that controlled a Push-Pull current amplifier driving a Peltier effect element (Custom Thermoelectric). Fans were used to distribute air from the heat sinks on each face of the Peltier element either into the enclosure or as exhaust. Temperature was maintained to within 70 mK of the set point (20\({}^{\circ}\)C). Image processingImages were captured directly into MATLAB using the built-in video input object class.
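As an aside to the Temperature control paragraph above, the following is a minimal sketch of a generic PID loop of the kind described there; the gains, sampling interval, voltage limit, and the `read_temperature()` / `write_control_voltage()` functions are all illustrative assumptions standing in for the authors' DAQ calls, not their actual values or code.

```python
import time

def pid_loop(read_temperature, write_control_voltage,
             setpoint=20.0, kp=2.0, ki=0.1, kd=0.5, dt=1.0, v_limit=5.0):
    """Hold a chamber at `setpoint` by driving a Peltier element with a
    clamped control voltage computed from the temperature error."""
    integral, prev_error = 0.0, 0.0
    while True:
        error = setpoint - read_temperature()
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        # Clamp to the range accepted by the current amplifier.
        output = max(-v_limit, min(v_limit, output))
        write_control_voltage(output)
        prev_error = error
        time.sleep(dt)
```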
Moving objects were extracted from each image using background subtraction, to generate a region of interest with the largest amount of detected motion, as measured by the largest pixel difference. Within this region of interest, objects were detected by contrast with the background by applying a threshold to a Laplacian-of-Gaussian filtered image. Filtering was performed using the MATLAB _imfilter_ function with the _fspecial_ function, with a filter size of 15x15 and a filter standard deviation of 1.5. Connected components were extracted from this thresholded image and the locations and properties of connected components were recorded from the resulting black and white image. Area (in pixels) and centroid location were calculated using the MATLAB function _regionprops_, and length was computed using the MATLAB function _bwmorph_ to extract the skeleton and the function _bwgeodesic_ to compute the length. These properties were used to classify each blob as a worm or not, based on the output of a pre-trained support vector machine. All images were also saved for later inspection, which was used to determine the egg-hatching and egg-laying times manually. All code used for analysis and data processing is available on Github, including data from processed images (raw images are available on request). Nematode culture and strains_C. elegans_ was grown on NGM agar plates and fed _E. coli_ HB101 for standard maintenance at 20\({}^{\circ}\)C. Bristol N2 was used as the wild-type strain [49]. In addition to N2, two wild isolate strains were used, AB1 isolated in Adelaide, Australia, and PS2025 isolated in Pasadena, California. In addition, two single gene mutants of N2 were used, _sid-2(mj465)_, a mutant that is not competent for RNAi by feeding, and _c28h8.3(mj649)_, a catalytic mutant of a putative helicase. Bacterial culture and strainsIn addition to _E. coli_ HB101, three other bacteria were used as food sources. _Comamonas aquatica_ DA1877 was used as it has been previously shown to increase the rate of _C. elegans_ development by providing supplemental vitamin B12 [50, 51]. We also chose _Bacillus pumilus_ and _Pseudomonas fragi_ from the wild bacteria collection [23] as we had previously observed large developmental differences on these food sources. Synchronization by coordinated egg layingTo generate populations of synchronized animals without bleaching and starvation, young egg-laying adults (50-75 hours post hatching) were gently picked with a platinum wire to a fresh NGM plate seeded with the appropriate bacteria and allowed to lay eggs for a fixed duration, after which the adults were removed from the plate and the eggs were collected. The egg-laying rate of animals at this stage is \(\approx 6\) eggs/animal/hour. Synchronization could be tightened by shortening the duration of egg-laying, and the number of synchronized eggs could be increased by using more egg-laying adults. Nematode growth media - gelriteNGM gelrite plates were made by replacing the agar in the normal NGM recipe with Gellan Gum (Sigma Aldrich), a polymer derived from algae. The recipe is given in Table 1. In addition, peptone and cholesterol were omitted to prevent bacterial growth on the plate, so that the only available bacterial food was that which was initially inoculated. Additive salts were prepared in 1 M stock solutions, and the \(KH_{2}PO_{4}\) stock solution was adjusted to pH 6. Phylogenetic tree of wild isolatesThe phylogenetic tree of _C.
elegans_ wild isolates was generated from the full CeNDR phylogenetic tree [24] which was hard-filtered by isotype using the _prune_ function in MATLAB. Logistic fitsLogistic fits were calculated using the MATLAB implementation of the Levenberg-Marquardt [52] non-linear least squares algorithm within the _lsqcurvefit_ function. Logistic fits were performed on the raw data as well as the rescaled data. The rescaled fits were nearly identical to the raw-fit parameters transformed according to Eq. 2 (See Supplemental Information Fig S1). \begin{table} \begin{tabular}{|l|l|} \hline Ingredient & Amount \\ \hline \multicolumn{2}{|l|}{Autoclave Together} \\ \hline NaCl & 3.0 g/l \\ Gellan Gum & 8.0 g/l \\ \hline \multicolumn{2}{|c|}{Add Aseptically} \\ \hline \(CaCl_{2}\) & 1 mM \\ \(MgCl_{2}\) & 1 mM \\ \(KH_{2}PO_{4}\) & 25 mM \\ \hline \end{tabular} \end{table} Table 1: NGM Gelrite Non-linear PCANonlinear principal components analysis attempts to find an optimal autoencoder that can recreate the input data after passing it through a bottleneck layer with fewer components than the input. The number of components in the bottleneck layer corresponds to the number of desired principal components. The NLPCA implementation we use is from the NLPCA toolbox for MATLAB [27] and uses a multilayer perceptron architecture with a hyperbolic tangent activation function in the hidden layers. Linear RegressionEach phenotypic output can be decomposed as a genetic contribution, an environmental contribution, and some residual error. Linear regression was performed using the _fitlm_ function in MATLAB. If \(y\) is the phenotypic variable of interest, models of the form \[y(G,E)=\beta_{0}+\sum_{i}\beta_{1,i}G+\sum_{j}\beta_{2,j}E+\epsilon\] were fit, where \(y\in(\phi_{1},\phi_{2},\hat{l},\hat{r},\hat{A})\), \(G\) and \(E\) are indicator variables which take the value of \(1\) or \(0\) depending on the strain \(i\) and food \(j\) in that condition, and \(\epsilon\) are the residuals to be minimized. _fitlm_ uses an iteratively reweighted least squares algorithm. For example, the best fit linear model for \(\phi_{1}\) indicates that the average value of \(\phi_{1}\) is \(0.022\), that changing to _Bacillus pumilus_ moves \(\phi_{1}\) by \(0.115\), and that switching to _Pseudomonas fragi_ moves \(\phi_{1}\) by \(-0.200\). F-statistics were calculated by the MATLAB _anova_ function, which is given by \[F=\mathbb{V}[\mathbb{E}(y_{i})]/\mathbb{E}[\mathbb{V}(y_{i})]\] again, calculated for each prediction variable \(y\in(\phi_{1},\phi_{2},\hat{l},\hat{r},\hat{A})\) and decomposed according to environment or genotype \(i\in E,G\). AcknowledgementsThe authors would like to thank Marie-Anne Felix for providing us with wild-isolates of _C. elegans_ as well as wild bacteria. We would also like to thank Stanislas Leibler, Bing Kan Xue, Maros Pleska, Jon Chuang, and Ricardo Rao for helpful discussion, and in particular S.L. for advice on the presentation of the material and B.K.X for suggesting the rescaling and the linear decomposition of the variance.
2306.16643
Cautious explorers generate more future academic impact
Some scientists are more likely to explore unfamiliar research topics while others tend to exploit existing ones. In previous work, correlations have been found between scientists' topic choices and their career performances. However, literature has yet to untangle the intricate interplay between scientific impact and research topic choices, where scientific exploration and exploitation intertwine. Here we study two metrics that gauge how frequently scientists switch topic areas and how large those jumps are, and discover that 'cautious explorers' who switch topics frequently but do so to 'close' domains have notably better future performance and can be identified at a remarkably early career stage. Cautious explorers who balance exploration and exploitation in their first four career years have up to 19% more citations per future paper. Our results suggest that the proposed metrics depict the scholarly traits of scientists throughout their careers and provide fresh insight, especially for nurturing junior scientists.
Xingsheng Yang, Zhaoru Ke, Qing Ke, Haipeng Zhang, Fengnan Gao
2023-06-29T02:52:08Z
http://arxiv.org/abs/2306.16643v2
# Cautious explorers generate more future academic impact. ###### Abstract Some scientists are more likely to explore unfamiliar research topics while others tend to exploit existing ones. In previous work, correlations have been found between scientists' topic choices and their career performances. However, literature has yet to untangle the intricate interplay between scientific impact and research topic choices, where scientific exploration and exploitation intertwine. Here we study two metrics that gauge how frequently scientists switch topic areas and how large those jumps are, and discover that 'cautious explorers' who switch topics frequently but do so to 'close' domains have notably better future performance and can be identified at a remarkably early career stage. Cautious explorers who balance exploration and exploitation in their first four career years have up to 19% more citations per future paper. Our results suggest that the proposed metrics depict the scholarly traits of scientists throughout their careers and provide fresh insight, especially for nurturing junior scientists. ## Introduction While advancing their academic pursuits, scientists choose their research topics, and these choices shape their individual careers[1, 2, 3] as well as the evolution of science[4, 5, 6]. Like phenomena seen in various human activities and the broader natural world[7, 8, 9], some scientists choose to exploit their current domains, following a conservative strategy, while the more risk-taking ones prefer to explore domains that are less familiar to them[10, 11]. To maximize prospective academic output while reducing uncertainty, it is arguably an optimal strategy to keep sensible balances between prudent production and risky innovation[9, 10, 11, 12, 13, 14]. 
This contradiction may be resolved if confounding factors around both the switching patterns and academic performance are taken into account, and we elaborate the point in the example below. Considering versatile ways of interpreting switching, we assume the convention that _areas_ and _topics_ are specific levels of academic categories, with areas being broader (an area may contain many topics; see Methods). We define the _exploration propensity_ (EP) of a scientist to measure her likelihood of switching to an unfamiliar area, estimated by the proportion of papers that explore unfamiliar areas among all her papers (Fig. 1a, Methods, and section S2.1). We further define a paper's 5 (or 10)-year log-citations, denoted by log-c5 (or log-c10), by the logarithm of the number of citations it receives within five (or ten) years after publication (Methods and sections S1.1.3 and S7.4.1), and we typically measure a scientist's performance by averaging her papers' log-c5. Figure 2a presents an apparent negative correlation between the EP and overall career performance, which seemingly implies that someone who stays focused will have better performance. However, this implication can be easily challenged. For instance, the negative correlation can be explained the other way around--scientists who enjoy past successes are more likely to stick with their existing areas (Fig. 2b). When we look at a specific group of scientists with similar past performance (and thus the past performance is controlled for), the correlation between their past EP and future performance is almost non-existent (Fig. 2c). Furthermore, the complex switching behaviour has more stories to tell than can be captured by one-dimensional metrics that only consider the binary choice of whether to switch to new areas. Under these metrics, a drastic switch from acoustics to lasers appears identical to the one from statistics to machine learning. The complexities of the switching behaviour and unaccounted-for confounding factors call for an in-depth study, which we present here. To this end, we examine a PubMed dataset formulated in the Microsoft Academic Graph[20] including 29,776,639 papers in biomedicine, the American Physical Society (APS) dataset that covers 678,961 physics papers, and the American Chemical Society (ACS) dataset with 1,325,257 chemistry papers (Methods). The main text focuses on the APS dataset, while all the reported results also hold for the PubMed dataset (section S4.6) and the ACS dataset (section S4.7). Figure 1: **Computing exploration propensity (EP) and exploration distance (ED).** Suppose a scientist has a set of papers, denoted by \(S\). **(a)** EP is calculated as the proportion of exploratory papers in \(S\). A paper \(\text{Paper}_{t}\) is exploratory if it covers at least one research area that has not been studied by the scientist in a past period, i.e., if it has at least one area not covered by \(\text{LB}\big{(}\text{Paper}_{t}\big{)}\)–the papers by the scientist in a look-back period (the past five papers in this example). **(b)** ED measures how different (in terms of topic similarities) each paper of the scientist is from his or her previous papers, on average.
Specifically, \(\text{Paper}_{t}\)'s distance is the distance between its topic set and that of \(\text{LB}\big{(}\text{Paper}_{t}\big{)}\), where we average the pairwise distances of all possible topic pairs between these two topic sets. Details on how to compute the EP and ED are in Methods. ### Results #### Cautious explorers We are interested in explaining the differences in scientists' future performance by the variations in their past research area switching behaviours measured by EP. To this end, we conduct regression analysis correlating scientists' 'past' EP up to a _split point_ with their 'future' performance (Methods), where we control for 'past' performance (log-citations), 'past' number of papers, and the area and year of the first paper. The regression results (table S2, column (S3) on the left) confirm the highly intuitive relationship between future performance and past performance[21, 22, 23], and indicate that EP has a statistically significant positive effect on scientists' future performance (\(P=2.2\times 10^{-13}\)). For those more likely to explore new areas (i.e., with higher EPs), their future papers have significantly higher average log-citations, as shown in the marginal effect plot in Fig. 3a, where future performance increases with respect to increasing EP after all other variables are fixed at their respective averages. This finding resolves the inconsistency in reported correlations between switching and academic performance, either seemingly negative[3, 15] or positive[16, 17, 18]. It moderates the stronger claims about extreme exploration at the paper level[19] and reveals that the established understanding in the literature has drawn, at best, an incomplete picture. Figure 2: **Interplay between switching and career performance.** **(a)** EPs and career performance. We calculate the EP and log-c5 per paper for each scientist's entire career and divide the scientists into groups by aligning their EPs to the nearest integer multiples of 0.1. Each blue point marks the mean of the log-c5 per paper of the scientists in that group, with 95% confidence bands. The career performance shows a decreasing trend as the EP increases. **(b)** Scientists with higher past performance (log-c5 per past paper) have a lower probability of changing research areas for their next paper. Each time a scientist publishes a new paper, we compute her past mean log-c5 to put the publication into a group of publications with similar past performance among all authors. For each publication in a particular group, we decide whether it is exploratory and calculate the proportion of exploratory papers in the group as the group's probability of changing research area for the next paper (section S3.2 and fig. S8(a)). **(c)** Past EPs and future performance of a group of scientists whose past performance is controlled for. We regard the first ten years of each scientist's career as 'past', select the scientists whose past performance is in the small range of [1.74,1.90], and plot future performance against past EP, in a fashion like (a). The vanishing correlation suggests that past performance can be one confounding factor not to be ignored. Further analyses under varying temporal split points suggest a robust relationship between switching and performance (table S3). Strikingly, the results are valid even at very early career stages of two years or just two published papers.
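To make the two metrics of Fig. 1 concrete, the following is a minimal sketch (our own reading, not the authors' released code) of computing EP and ED for one scientist, assuming each paper is represented by its set of areas and its set of topics in publication order, and assuming a pairwise topic distance function `topic_dist(a, b)` like the one detailed in the paper's Methods. How the very first paper(s), which have no look-back history, are counted is also an assumption here.

```python
from itertools import product

def exploration_propensity(paper_areas, look_back=5):
    """paper_areas: list of sets of area labels, in publication order."""
    if len(paper_areas) < 2:
        return 0.0
    exploratory = 0
    for t in range(1, len(paper_areas)):
        window = paper_areas[max(0, t - look_back):t]
        seen = set().union(*window)
        # Exploratory: at least one area not covered in the look-back period.
        if paper_areas[t] - seen:
            exploratory += 1
    return exploratory / (len(paper_areas) - 1)

def exploration_distance(paper_topics, topic_dist, look_back=5):
    """paper_topics: list of sets of topic labels, in publication order."""
    dists = []
    for t in range(1, len(paper_topics)):
        window = paper_topics[max(0, t - look_back):t]
        past_topics = set().union(*window)
        pairs = list(product(paper_topics[t], past_topics))
        if pairs:
            # Average pairwise distance between the paper's topics and the
            # topics covered in the look-back period.
            dists.append(sum(topic_dist(a, b) for a, b in pairs) / len(pairs))
    return sum(dists) / len(dists) if dists else 0.0
```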
We may surmise that those junior scientists who, as early as the starting years of their PhD studies, venture into domains other than their primary ones are probably different from their peers in qualities such as open-mindedness and breadth of knowledge that are not easily reflected in conventional scholarly performance indicators.

Switching between research topics can be bold or cautious: a scientist may switch between similar topics, such as from statistics to machine learning, or to something very different, such as from acoustics to lasers[18, 19, 1]. Thus, we also propose a complementary measure called the _exploration distance_ (ED), the average distance of a scientist's papers to her previous papers (Fig. 1b, Methods, and section S2.2). Adding ED to our regression setup as one more independent variable, we find that it has statistically significant negative effects on the future performance (\(P=3.7\times 10^{-9}\)), while the positive effects of the EP remain statistically significant (table S2, column (S4) on the left). The results still hold across comprehensive choices of split points with respect to both career length (2-15 years) and the number of papers (2-15) (table S4). These results indicate that a scientist with high EP but low ED, whom we categorize as a 'cautious explorer', tends to have future papers with higher average citations (Figs. 3b and 3c). Since it is generally expected that those with higher EP have higher ED and vice versa (Pearson correlation between EP and ED: 0.6039), individuals with high EP and yet low ED deviate from the norm and achieve a balance between exploration and exploitation. Both the EP and ED give E-values[24] (table S5) much larger than the values commonly considered safe for reporting findings (larger E-values suggest more reliable findings; see Methods). Besides, the regression results are consistent when we perturb the EP and ED with Gaussian noise (section S7.8). Note that similar conclusions hold consistently if we change the measurement of _academic performance_ to the log-c10 per future paper or the log-citations of a scientist's most impactful future paper[25] (section S7.4). We also control for possible confounding factors, e.g., topic contributions from co-authors[26], as well as high-EP-low-ED scientists preferring hot areas[27], combining novelty and conventionality in papers[28], seeking advantageous collaborations[29, 30, 31], or utilizing institutional resources[32] that may yield more academic impact (Methods and section S6). To further consolidate the discovery, we convert the original data into dynamic panel data and perform author and time fixed-effect regressions (Methods and section S4.3). This approach allows us to account for the unobserved differences between individual scientists in the regression model. Passing the above robustness checks, our findings on EP and ED in the regression analyses remain statistically significant.

Figure 3: **Marginal effect of the exploration metrics.** **(a)** We regress the future performance against the EP as the only metric while controlling for the usual variables. The marginal effect of the EP on the future performance is graphed. **(b)** Similar to (a), we run the regression with the EP and ED as the metrics and plot the marginal effect of the EP on the future performance. **(c)** The same as (b), except that the marginal effect of the ED on the future performance is shown.
To better understand these high-EP-low-ED scientists, as well as other behavioural patterns quantified by EP and ED, we divide the scientists into four groups along two dimensions, one dividing EP into the upper or lower half of all EPs, and the other dividing ED into the upper or lower half. As anticipated, the high-EP-low-ED group is relatively small, consisting of only 14% of the population. Figure 4 visualizes one scientist for each group (chosen such that the four scientists started their careers in the same area for illustration purposes, see section S5). Here each node represents a topic, and any two topics that have co-occurred in at least one paper are connected by an edge, the length of which is the topic distance between the two. Nodes sharing the same colour are in the same area defined by the APS, and if a scientist has multiple papers with a particular topic, the corresponding node is proportionally larger. In Fig. 4, the high-ED representatives (right column) have visibly more spread-out topics and those in the high-EP groups (top row) have more colours, which indicates that they explore more areas. For the high-EP-low-ED group of cautious explorers, the representative scientist has colourful nodes of moderate sizes that are clustered together, which implies that this scientist makes steady moves when choosing topics, yet these topics span a variety of areas.

#### Quantifying the effects via propensity score weighting

The above analysis revealed a clear relationship between future performance and EP/ED, but we are further interested in precisely quantifying the difference between the four groups defined above. We first compute the citations per paper for each scientist's future career and obtain the group averages. In Fig. 5a, we see that 'cautious explorers' have the highest future performance across split points from 2 to 15 career years; when the split point is only two career years, for example, the advantage over the second place is almost 20%. However, this line of thinking does not take the inter-group differences in other variables into account. To remedy this, we borrow the standard method of _propensity score weighting_ from causal inference that handles multiple treatments[33], which in our scenario correspond to the four high/low EP/ED groups. The method weights each scientist according to her estimated likelihood of being in her own group to make the groups comparable, and calculates the pairwise average treatment effect (ATE) between groups (Methods and section S4.2.3). We carry out our study across diverse split points consistent with those in the previous regression analyses.

Figure 4: **Sample scientist from each of the four high/low EP/ED groups.** Each node, coloured or grey, represents a topic, and each edge linking two nodes indicates that the corresponding topics have occurred together in at least one paper, with its length showing the distance between the topics (Methods and section S2.2.2). In each subplot, only the topics covered by the corresponding scientist's papers are coloured and expanded, where the colours correspond to the area containing the topic/node. Any topics/nodes of the same colour (excluding grey in the background) are under the same area. The node size reflects how many of the scientist's papers are on that topic. Among the four groups, the high-EP-low-ED (cautious explorers) and low-EP-high-ED ones are relatively rare combinations, each taking up about 14% of the entire population, while the other two combinations each represent roughly 36%.
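To make the four-way grouping concrete, the following is a minimal Python sketch of the split into high/low EP/ED groups, assuming a hypothetical per-scientist table with columns `ep` and `ed`; the thresholds are the medians (the 'upper and lower halves' used here). It is an illustration only, not the code used in this study.

```python
import pandas as pd

# Hypothetical per-scientist table; 'ep' and 'ed' are the exploration
# propensity and exploration distance computed from the 'past' papers.
scientists = pd.DataFrame({
    "ep": [0.1, 0.6, 0.7, 0.2],
    "ed": [0.3, 0.2, 0.8, 0.9],
})

ep_threshold = scientists["ep"].median()
ed_threshold = scientists["ed"].median()

scientists["group"] = (
    scientists["ep"].gt(ep_threshold).map({True: "high-EP", False: "low-EP"})
    + "-"
    + scientists["ed"].gt(ed_threshold).map({True: "high-ED", False: "low-ED"})
)
# 'Cautious explorers' correspond to the "high-EP-low-ED" label.
print(scientists["group"].value_counts())
```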
Figure 5b illustrates the pairwise ATEs across different split points in career years, against the baseline high-EP-high-ED groups. We see that the high-EP-low-ED groups (cautious explorers) are always significantly better than the others, even when only considering the EP and ED metrics from the first two years of the scientists' careers. This implies that combining both serves as a powerful indicator of more successful scientific careers ahead. We also observe that the low-EP-high-ED groups have negative ATEs compared with the baseline of high-EP-high-ED 'bold explorers', and the low-EP-low-ED groups do not have a clear advantage/disadvantage over the baseline, with the ATEs being around zero. We further compare the cautious explorers with the apparently opposite low-EP-high-ED groups by changing the baseline (Fig. 5c). The 'cautious explorers' have statistically significant positive ATEs against the new baseline, since the corresponding confidence intervals only contain positive values. If the split point is set to four career years, the ATE of being in the high-EP-low-ED group is as much as 19% more citations per future paper (percentage translated from 0.1738 on the logarithmic scale, 95% CI = [0.1252, 0.2225], table S7). In a simplified setup where cautious explorers are compared against the other three groups combined, the ATE of being a cautious explorer is about 10% more citations per future paper (Fig. 5c). These findings pass thorough robustness checks and are consistent on the PubMed dataset (section S4.6) and the ACS dataset (section S4.7).

Figure 5: **Better future performance of cautious explorers.** **(a)** We compute the average future citations for each of the four high/low EP/ED groups with varying career years as split point. The cautious explorer (high-EP-low-ED) groups consistently have the highest future performance, while the opposite (low-EP-high-ED) groups have the lowest. **(b)** We select the high-EP-high-ED groups as the baseline and calculate the ATEs between the baseline and the other three groups. The y-axis marks the ATEs of the three groups over the baseline with 95% confidence intervals, corresponding to the future citations per paper on the logarithmic scale with translated percentages in parentheses. The 'cautious explorers' have the largest ATEs across all groups. **(c)** We compare the 'cautious explorers' against the low-EP-high-ED baseline; the ATEs reported in orange are significantly larger than zero, with the largest ATE being 0.1738 (19%) at the split point of four career years. The violet line shows the (statistically significant) ATEs with cautious explorers as the treatment group and everyone else as the control. When the split point is four years, cautious explorers compared to everyone else have an ATE of about 0.094, which translates to 10% more future citations per paper. **(d)** We try different thresholds on EPs and EDs to form the four groups with the split point of 10 years. There is an increasing trend in ATEs when larger EPs and smaller EDs are required to form the cautious explorer group.

As the regressions suggest that a higher EP and lower ED are associated with better future performance, we next study how the inter-group differences change if we adjust the thresholds to obtain more disparate group divisions. Instead of using the previous 'upper and lower halves' threshold, we vary it to 'upper and lower \(Q\)%' to obtain the high/low groups, with \(Q\) being 40, 30, and 25, respectively.
We see in Fig. 5d that when we form the groups with more extreme EPs and EDs, the 'cautious explorer' group displays an increasing advantage over the other groups. We conduct further analyses almost parallel to the above via propensity score matching[34]--a common (and less refined) causal inference technique--and they give consistent findings (sections S4.2 and S7.6).

### Discussion

It is natural to ask how the EP and ED evolve through scientists' careers and how scientists of different generations behave under these two metrics. Figure 6a displays the average EP and ED of scientists in their careers. The EP shows a decreasing trend while the ED grows over time, despite the early-stage sharp drop of the EP that may be partially associated with the way it is calculated (when there are relatively fewer past papers in the early stage, the EP can be high). Curiously, this suggests that junior scientists are more cautious explorers than the future senior scientists that they become. In Fig. 6b, we compare the EPs and EDs of three groups of scientists who started their careers in different periods of time (i.e., 1985-1989, 1990-1994, and 1995-1999). Kolmogorov–Smirnov (K–S) tests on the distributions of the EP and ED, respectively, suggest that scientists from these three periods have similar distributions of EPs, while scientists of newer generations become more cautious as indicated by having smaller EDs, which is consistent with existing findings[13]. This effect may be related to the growing trend of training junior scientists to become experts in ever more specialized fields, which is possibly a direct consequence of the expanding knowledge in almost every field[35].

How do memberships of the four high/low EP/ED groups change over time? In Fig. 6c, we track the scientists who started their careers before 2000 and remained active beyond 2009. We compute the group memberships at four time points--the ends of 2000, 2003, 2006 and 2009--using all data prior to the respective time point and observe how the memberships evolved. We see that over 60% of the 'cautious explorers' in 2000 had remained in the same group in 2003, and the ratio was more than 72% when we look at the transition from 2006 to 2009. The two groups in the middle are relatively stable across the years, with the ratios of staying being over 83%. Meanwhile, members seldom flow between completely opposite groups.

Are 'cautious explorers' a certain kind of people, or is 'cautious exploration' a 'strategy' that can be adopted for better performance? The individual fixed-effect regressions mentioned earlier lean towards the latter hypothesis. One way to study this is by interpreting one's future EP and ED as a proxy for the future 'strategy', as opposed to using past EP and ED as predictors, and we further examine how one's future performance correlates with the strategy selected.

Figure 6: **Temporal perspectives of EP and ED.** **(a)** The average EP and ED of scientists in each year of their careers with 95% confidence bands. We select scientists with careers of at least 15 years (58% of the population) and examine their first 15 years. For each year, we calculate each scientist's career EP and ED until the end of that year and compute the average. Notably, junior scientists have higher EPs and lower EDs than they do in later career stages. **(b)** The EP and ED distributions of scientists who are selected from those specified in (a) and whose careers began in 1985-1989, 1990-1994, and 1995-1999, respectively. The EP and ED are calculated within their first 15 career years. The three distributions of EP are similar (K–S tests, \(P>0.48\)), while the distributions of ED are different (\(P<10^{-5}\)). This suggests that scientists have tended to become more cautious in choosing research topics over the past decades. **(c)** Member exchanges among the four groups in the 2000s. We select the 11,791 scientists who have at least two papers before 2000 and at least one paper after 2009, and divide them into groups in 2000, 2003, 2006, and 2009, respectively, where we set the threshold for having a high EP (ED) at the 50th percentile. For the 'cautious explorers' in 2000, 37.21% of them were still 'cautious explorers' by 2009, and 29.48% of them are in this group in all four snapshots.

The regression results are consistent with the fixed-effect analysis (Methods, section S4.3). In the absence of large-scale real-world experiments, we seek further observational analyses. Fig. 6c suggests that most scientists intentionally or unintentionally maintain consistent research agendas. However, as noticed earlier, a very small portion of scientists exhibit drastic behavioural changes, such as transitioning from low-EP-high-ED to the completely opposite high-EP-low-ED group of cautious explorers and vice versa. We speculate that these 'drastic changers' alter their research strategies on purpose. As in Fig. 6c, we divide the scientists into the four high/low EP/ED groups, before and after a split point of 10 career years, respectively. Interestingly, 8.6% of the low-EP-high-ED scientists drastically turn into cautious explorers after the split point, and their citations per paper increase by 4.8%. On the other hand, 6.9% of cautious explorers become 'drastic changers' in the opposite direction, and their performance drops by 19.0%. Beyond this simple calculation, we also conduct a PSW analysis to estimate their performance change against their counterparts who remained in the same groups after the split point. Scientists who drastically changed to cautious explorers saw a 34.1% increase in citations per paper, while the opposite group experienced a 26.2% drop. These analyses, which are detailed in the Methods and section S4.5, further shed light on the possibility that cautious exploration could be a strategy connected to greater research impact, yet not adopted by many.

EP and ED help us take a first step towards disentangling the complex relations between switching and performance. We have discovered that cautious explorers have better future performance, and this is valid even when we look at early-career scientists who have published just two papers. The conclusion is consistent whether the split points are defined by years into the career or by the number of publications. By dividing scientists into four groups (high/low EP/ED) after quantifying their switching behaviours, we look into the balance between exploration and exploitation[7, 8, 9], specifically in scientific research and its connection to academic achievements. Previous studies investigated these behaviours from the perspectives of risk-taking for impact and risk aversion for production[10, 11], which we capture collectively with the EP and ED metrics. Our study predicts that a junior scientist with higher exploration propensity will be more impactful in the future, but someone who balances risk-taking with shorter topic moves would enjoy an even larger advantage.
In contrast to existing studies on correlations between switching behaviour and performance, we rule out extensive possible confounding factors and discover quantifiable predictive power. As such, we suggest that these two metrics could help track the behavioural change of the participants in the evolution of science and shed light on individuals' upcoming careers, especially those of junior scientists. Although the EP and ED metrics may be informative gauges of scientists, evidence from a purely data-analytic viewpoint is not enough to decide whether they are indicators of certain inherent traits of scientists or strategies that lead to different academic impacts. Answering these questions can potentially assist research-related activities such as self-assessment and planning, research resource allocation, and training curricula design. Further research may look in a more fundamental direction, possibly from neurocognitive and brain science perspectives[36, 37, 9]. Moreover, an open question is whether other creative activities, such as filmmaking, invention, and entrepreneurship, also have these uniquely successful 'cautious explorers.'

## References

* [1] Jia, T., Wang, D. & Szymanski, B. K. Quantifying patterns of research-interest evolution. _Nat. Hum. Behav._**1**, 0078 (2017).
* [2] Kleinberg, J. & Oren, S. Mechanisms for (mis)allocating scientific credit. _Algorithmica_**84**, 344-378 (2022).
* [3] Zeng, A. _et al._ Increasing trend of scientists to switch between topics. _Nat. Commun._**10**, 3439 (2019).
* [4] Sinatra, R., Deville, P., Szell, M., Wang, D. & Barabasi, A.-L. A century of physics. _Nat. Phys._**11**, 791-796 (2015).
* [5] Shi, F., Foster, J. G. & Evans, J. A. Weaving the fabric of science: Dynamic network models of science's unfolding structure. _Soc. Netw._**43**, 73-85 (2015).
* [6] Gates, A. J., Ke, Q., Varol, O. & Barabasi, A.-L. Nature's reach: narrow work has broad impact. _Nature_**575**, 32-34 (2019).
* [7] March, J. G. Exploration and exploitation in organizational learning. _Organ. Sci._**2**, 71-87 (1991).
* [8] Mehlhorn, K. _et al._ Unpacking the exploration-exploitation tradeoff: a synthesis of human and animal literatures. _Decision_**2**, 191-215 (2015).
* [9] Cohen, J. D., McClure, S. M. & Yu, A. J. Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. _Philos. Trans. R. Soc. B Biol. Sci._**362**, 933 (2007).
* [10] Kuhn, T. S. _The Essential Tension: Selected Studies in Scientific Tradition and Change_ (Univ. Chicago Press, 1979).
* [11] Foster, J. G., Rzhetsky, A. & Evans, J. A. Tradition and innovation in scientists' research strategies. _Am. Sociol. Rev._**80**, 875-908 (2015).
* [12] Zuckerman, H. Theory choice and problem choice in science. _Sociol. Inq._**48**, 65-95 (1978).
* [13] Rzhetsky, A., Foster, J. G., Foster, I. T. & Evans, J. A. Choosing experiments to accelerate collective discovery. _Proc. Natl. Acad. Sci._**112**, 14569-14574 (2015).
* [14] Yin, Y., Wang, Y., Evans, J. A. & Wang, D. Quantifying the dynamics of failure across science, startups and security. _Nature_**575**, 190-194 (2019).
* [15] _WWW '18_, 373-378 (2018).
* [16] Yu, X., Szymanski, B. K. & Jia, T. Become a better you: correlation between the change of research direction and the change of scientific performance. _J. Informetr._**15**, 101193 (2021).
* [17] Pramanik, S., Gora, S. T., Sundaram, R., Ganguly, N. & Mitra, B. On the migration of researchers across scientific domains. _Proc. Int. AAAI Conf. Web Soc. Media_**13**, 381-392 (2019).
* [18] Huang, S., Lu, W., Bu, Y. & Huang, Y. Revisiting the exploration-exploitation behavior of scholars' research topic selection: evidence from a large-scale bibliographic database. _Inf. Process. Manag._**59**, 103110 (2022).
* [19] Hill, R., Yin, Y., Stein, C., Wang, D. & Jones, B. F. Adaptability and the Pivot Penalty in Science. Preprint at [https://doi.org/10.48550/arXiv.2107.06476](https://doi.org/10.48550/arXiv.2107.06476) (2021).
* [20] Sinha, A. _et al._ An overview of Microsoft Academic Service (MAS) and applications. _Proc. 24th Int. Conf. World Wide Web_ (2015).
* [21] Merton, R. K. The Matthew effect in science. _Science_ (1968).
* [22] Clauset, A., Larremore, D. B. & Sinatra, R. Data-driven predictions in the science of science. _Science_ (2017).
* [23] Kong, X. _et al._ The gene of scientific success. _ACM Trans. Knowl. Discov. Data (TKDD)_ (2020).
* [24] VanderWeele, T. J. & Ding, P. Sensitivity analysis in observational research: introducing the E-Value. _Ann. Intern. Med._**167**, 268 (2017).
* [25] Sinatra, R., Wang, D., Deville, P., Song, C. & Barabasi, A.-L. Quantifying the evolution of individual scientific impact. _Science_**354**, aaf5239 (2016).
* [26] Xu, F., Wu, L. & Evans, J. Flat teams drive scientific innovation. _Proc. Natl. Acad. Sci._**119**, e2200927119 (2022).
* [27] Wei, T. _et al._ Do scientists trace hot topics? _Sci. Rep._**3**, 1-5 (2013).
* [28] Uzzi, B., Mukherjee, S., Stringer, M. & Jones, B. Atypical combinations and scientific impact. _Science_**342**, 468-472 (2013).
* [29] Wuchty, S., Jones, B. F. & Uzzi, B. The increasing dominance of teams in production of knowledge. _Science_ (2007).
* [30] Zhang, C.-T. A proposal for calculating weighted citations based on author rank. _EMBO Rep._**10**, 416-417 (2009).
* [31] Lariviere, V., Gingras, Y., Sugimoto, C. R. & Tsou, A. Team size matters: collaboration and scientific impact since 1900. _J. Assoc. Inf. Sci. Technol._**66**, 1323-1332 (2015).
* [32] Allison, P. D. & Long, J. S. Departmental effects on scientific productivity. _Am. Sociol. Rev._**55**, 469-478 (1990).
* [33] Lee, B. K., Lessler, J. & Stuart, E. A. Improving propensity score weighting using machine learning. _Stat. Med._**29**, 337-346 (2010).
* [34] Ho, D. E., Imai, K., King, G. & Stuart, E. A. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. _Polit. Anal._**15**, 199-236 (2007).
* [35] Jones, B. F. The burden of knowledge and the "death of the renaissance man": is innovation getting harder? _Rev. Econ. Stud._**76**, 283-317 (2009).
* [36] Hills, T. T., Todd, P. M., Lazer, D., Redish, A. D. & Couzin, I. D. Exploration versus exploitation in space, mind, and society. _Trends Cogn. Sci._**19**, 46-54 (2015).
* [37] Daw, N. D., O'Doherty, J. P., Dayan, P., Seymour, B. & Dolan, R. J. Cortical substrates for exploratory decisions in humans. _Nature_**441**, 876-879 (2006).
* [38] Martin, T., Ball, B., Karrer, B. & Newman, M. E. J. Coauthorship and citation patterns in the Physical Review. _Phys. Rev. E_**88**, 012814 (2013).
* [39] Pan, R. K., Sinha, S., Kaski, K. & Saramaki, J. The evolution of interdisciplinarity in physics research. _Sci. Rep._**2**, 551 (2012).
* [40] Grover, A. & Leskovec, J. node2vec: Scalable feature learning for networks. In _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ 855-864 (ACM, 2016).
* [41] Radicchi, F., Fortunato, S. & Castellano, C. Universality of citation distributions: toward an objective measure of scientific impact. _Proc. Natl. Acad. Sci._**105**, 17268-17272 (2008).
* [42] Brzezinski, M. Power laws in citation distributions: evidence from Scopus. _Scientometrics_**103**, 213-228 (2015).
* [43] Blundell, R. & Bond, S. Initial conditions and moment restrictions in dynamic panel data models. _J. Econom._**87**, 115-143 (1998).
* [44] Friedman, J. H. Greedy function approximation: a gradient boosting machine. _Ann. Stat._**29**, 1189-1232 (2001).

## Methods

### Datasets

We have studied three large-scale datasets, covering the disciplines of physics, chemistry, and biomedicine. In the main text, we present the analysis of the APS (American Physical Society) dataset, and the comparable chemistry and biomedicine datasets are analysed in the Supplementary Information. The APS dataset contains all papers from the journals in the APS from 1893 to 2020. Each paper is associated with up to five 6-digit PACS codes, and the PACS codes have a hierarchical structure, where the first digit represents the top level of classifications. The chemistry dataset includes papers published in the ACS (American Chemical Society) journals and collected by the Microsoft Academic Graph (MAG), which assigns each paper several field codes under a hierarchical classification scheme. The biomedicine dataset is constructed from the PubMed data collected by MAG. Besides the MAG codes, papers in this PubMed dataset can be mapped to the hierarchically organized Medical Subject Headings (MeSH), as an alternative source of classification codes. For the APS dataset, we perform author name disambiguation using the approaches of other studies of the APS data[38, 25] and obtain 395,678 unique authors. The authorships in the ACS and PubMed datasets have already been disambiguated by MAG. In all three datasets, we keep authors with at least 10 papers. These steps result in 25,245 authors with 281,958 papers in the APS dataset, 62,316 authors with 747,061 papers in the ACS dataset, and 1,218,355 authors with 13,072,174 papers in the PubMed dataset (1,260,125 authors with 13,649,286 papers if MeSH is used for PubMed). Details on the datasets and processing procedures are presented in the Supplementary Information (section S1).

### The metrics: EP and ED

EP measures how frequently a scientist switches to unfamiliar areas. These switches are represented by the number of exploratory papers published by the scientist. Specifically, 'areas' of a paper are represented by the first two digits of the paper's PACS codes in the APS dataset (section S1.1), and an exploratory paper covers at least one area that is different from those of the scientist's past papers in the _look-back_ period, which can either be defined as her past \(J\) papers or the papers in the last \(K\) years. The EP is then calculated as the fraction of the exploratory papers among all the scientist's papers. In the study presented in the main text, we compare each paper to the five papers before it to decide whether it is exploratory and study the outcomes. In the Supplementary Information (section S7.5), we test 1 to 15 papers and 1 to 15 years as alternative look-back periods for the calculation of EP, and we also test infinity as the look-back period, which means we include all past papers in the comparisons.
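To make the definition concrete, the following is a minimal Python sketch of the EP with a look-back window of the past five papers; the input format (a chronologically ordered list of per-paper area sets) and the function name are illustrative assumptions for this sketch, not the code used in the study.

```python
def exploration_propensity(area_sets, look_back=5):
    """EP: fraction of exploratory papers among papers 2..L.

    `area_sets` is a chronologically ordered list of sets, one set of
    research areas (e.g., 2-digit PACS prefixes) per paper.
    """
    if len(area_sets) < 2:
        return 0.0  # EP is undefined for a single paper; 0.0 is a placeholder
    exploratory = 0
    for i in range(1, len(area_sets)):
        past = set().union(*area_sets[max(0, i - look_back):i])
        if not area_sets[i] <= past:  # at least one area unseen in the look-back window
            exploratory += 1
    return exploratory / (len(area_sets) - 1)


# Example: papers 2 and 3 each introduce a new area, paper 4 does not, so EP = 2/3.
print(exploration_propensity([{"03"}, {"03", "05"}, {"42"}, {"42", "03"}]))
```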
ED is the average distance of a scientist's papers to her previous papers, measured by research topic similarities, in the same look-back period as in the definition of the EP. For each paper, we calculate its distance to the scientist's last \(J\) papers or papers within the last \(K\) years (consistent with the choices for the EP) and take the average over all papers. As in the calculation of the EP, we calculate the distance of each paper to the five papers before it, and we vary the look-back periods in the Supplementary Information (section S7.5). Specifically, to compute the distance between a paper and the set of past papers, we measure the distance between two sets of topics, one from the focal paper and the other from the past papers in the look-back period, and the paper distance is the average distance of all possible topic pairs between these two topic sets. To calculate the distance between any given pair of topics, we construct a network where nodes represent said topics, and weighted edges indicate the frequency with which the two topics occur together in papers. There are two primary methods for determining this distance. The first involves analysing the degree to which the neighbours of the nodes overlap after appropriate normalizations[39]. The second forms low-dimensional vector representations of the nodes in the network through a graph embedding technique known as node2vec[40], and calculates the cosine distance between the two nodes in question. Recall that each unique six-digit PACS code is a topic in the APS dataset. More details about the calculations and alternative definitions of EP and ED are in the Supplementary Information (sections S2 and S7.3).
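As an illustration, here is a minimal Python sketch of the ED for one scientist, assuming a precomputed pairwise topic-distance function (for example, cosine distances between node2vec embeddings of the topic co-occurrence network); the data layout and names are assumptions for the sketch, not the study's actual code.

```python
from itertools import product

def exploration_distance(topic_sets, topic_dist, look_back=5):
    """ED: mean distance of each paper to the papers in its look-back window.

    `topic_sets` is a chronologically ordered list of sets of topic codes,
    one per paper; `topic_dist(a, b)` returns a distance between two topics
    (e.g., the cosine distance of their node2vec embeddings).
    """
    paper_distances = []
    for i in range(1, len(topic_sets)):
        past_topics = set().union(*topic_sets[max(0, i - look_back):i])
        pairs = list(product(topic_sets[i], past_topics))
        if pairs:
            # Paper distance: average over all topic pairs between the two sets.
            paper_distances.append(sum(topic_dist(a, b) for a, b in pairs) / len(pairs))
    return sum(paper_distances) / len(paper_distances) if paper_distances else 0.0


# Toy distance for the example: 0 for identical topics, 1 otherwise.
toy_dist = lambda a, b: 0.0 if a == b else 1.0
print(exploration_distance([{"03.65"}, {"03.65", "05.30"}, {"42.50"}], toy_dist))
```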
### Regression Analyses

We design regressions to study the relationship between the proposed metrics and the future performance. The data of any scientist is split into two parts, where the first part before the split point corresponds to her 'past' for computing independent variables, and the second consists of the 'future' for an assessment of her performance in the rest of her career as the dependent variable. The independent variables include the EP, the number of papers, and the scientist's past performance represented by the log-c5 per paper, and the future performance is evaluated by the average log-c5 of her future papers. We also control for the year and area of the scientist's first paper, since papers of different research areas and times often display distinct citation patterns[41]. All data points used in the same regression share a _split point_, which is either the first \(N\) career years (years that any individual has spent in her scientific career) or the first \(M\) papers. Throughout the main text, the results are presented under the split point of ten career years, unless specified otherwise. The following regression (eq. (S3)) studies the relation between the scientists' past EP and their future performance:

\[\text{LogCit}_{\text{future}}=\beta_{0}+\beta_{1}\,\text{LogCit}_{\text{past}}+\beta_{2}\,P_{\text{past}}+\beta_{3}\,\text{year}_{\text{first}}+\beta_{4}\,\text{area}_{\text{first}}+\beta_{5}\,\text{EP}_{\text{past}}+\text{Noise}.\]

For each choice of \(N\) or \(M\) for the split point, we process the data of any scientist with the corresponding splitting and run a regression. Citations of scientific publications are known to follow heavy-tailed distributions[42], and taking logarithms of the citations drastically reduces the undesirable variability in their numeric values (section S1.1.3). In particular, \(\text{LogCit}_{\text{past}}\) is the average logarithm of the citations that the scientist's past papers receive, \(P_{\text{past}}\) is the number of her published papers in the past, \(\text{year}_{\text{first}}\) is the year of her first paper understood as a categorical variable, and \(\text{area}_{\text{first}}\) is the area of her first paper, again understood as a categorical variable. The dependent variable, \(\text{LogCit}_{\text{future}}\), is computed using the data dated after the split point by averaging the logarithms of the number of citations per paper in the scientist's future career. Furthermore, we construct another regression (eq. (S4)) to study the effect of including the ED as an independent variable:

\[\text{LogCit}_{\text{future}}=\beta_{0}+\beta_{1}\,\text{LogCit}_{\text{past}}+\beta_{2}\,P_{\text{past}}+\beta_{3}\,\text{year}_{\text{first}}+\beta_{4}\,\text{area}_{\text{first}}+\beta_{5}\,\text{EP}_{\text{past}}+\beta_{6}\,\text{ED}_{\text{past}}+\text{Noise},\]

where the newly added \(\text{ED}_{\text{past}}\) is computed from the scientist's past papers. Similarly, we run one regression for each choice of split point. All the regression results are reported in the Supplementary Information (section S4.1). Note that alternative ways of measuring future performance are investigated as robustness checks in the Supplementary Information (section S7.4).

Beyond the usual regression analysis, and in part to study the question of whether cautious exploration is a strategy, we borrow ideas from econometrics and perform an individual fixed-effect analysis. The additional individual fixed-effect parameter is introduced to account for the effect of possible unobservable variables at the individual level. To form dynamic panel data for this analysis, we divide the timeline into several evenly spaced periods and, based on each scientist's publication record in each period alone, calculate the scientist's average log-citations, the EP and ED, and other covariates. The setup here compartmentalizes each period to alleviate the complex time dependencies among all quantities involved and is different from the usual regression analysis above. We proceed to solve the following regression equation using the classic Arellano-Bond estimator--one of the most popular system GMM estimators[43]:

\[\text{LogCit}_{i,t}=\beta_{0}+\beta_{1}\,\text{LogCit}_{i,t-1}+\beta_{2}\,\text{EP}_{i,t-1}+\beta_{3}\,\text{ED}_{i,t-1}+\beta_{4}\,P_{i,t-1}+\beta_{5}\,\text{CareerYear}_{i,t-1}+u_{i}+\eta_{t}+\text{Noise},\]

where \(\text{LogCit}_{i,t}\) is the average logarithm of the citations of scientist \(i\)'s papers in period \(t\), \(\text{EP}_{i,t-1}\) and \(\text{ED}_{i,t-1}\) measure scientist \(i\)'s EP and ED in period \(t-1\), \(P_{i,t-1}\) and \(\text{CareerYear}_{i,t-1}\) are the number of papers and the last career year of scientist \(i\) in period \(t-1\), and \(u_{i}\) and \(\eta_{t}\) denote the individual fixed effects and time fixed effects, respectively. Statistical tests are conducted to validate the model choice. The regression results and further details are documented in table S10 and in section S4.3, respectively.
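Returning to the cross-sectional specifications (S3) and (S4), a minimal sketch using the statsmodels formula API is shown below; the data frame and column names are hypothetical, and the sketch omits the per-split-point looping, the full set of controls, and the diagnostics of the actual study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-scientist table: 'past' covariates and 'future' outcome,
# computed on either side of the split point (the values are made up).
df = pd.DataFrame({
    "logcit_future": [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.0],
    "logcit_past":   [1.0, 0.7, 1.3, 0.8, 1.0, 1.2, 0.6, 0.9],
    "n_papers_past": [12, 8, 20, 9, 15, 11, 7, 14],
    "year_first":    ["1990", "1995", "1990", "1995", "1990", "1995", "1990", "1995"],
    "area_first":    ["03", "42", "42", "03", "03", "42", "42", "03"],
    "ep_past":       [0.4, 0.1, 0.6, 0.2, 0.5, 0.3, 0.2, 0.4],
    "ed_past":       [0.3, 0.2, 0.2, 0.5, 0.4, 0.3, 0.6, 0.1],
})

# Analogue of eq. (S3): future performance on past EP plus the controls.
m_s3 = smf.ols("logcit_future ~ logcit_past + n_papers_past"
               " + C(year_first) + C(area_first) + ep_past", data=df).fit()

# Analogue of eq. (S4): the same specification with the ED added.
m_s4 = smf.ols("logcit_future ~ logcit_past + n_papers_past"
               " + C(year_first) + C(area_first) + ep_past + ed_past", data=df).fit()

print(m_s3.params["ep_past"], m_s4.params["ep_past"], m_s4.params["ed_past"])
```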
We further examine the possibility of 'cautious exploration' being a strategy from a different perspective. We quantify a scientist's (future) switching strategy with future EP and ED and replace past EP and ED in the original regression (eq. (S4)) to analyse how one's future performance is correlated with the research strategy chosen. The results, presented in Table S11, show that future EP and ED contribute to future impact in a similar manner to past EP and ED. This finding suggests that 'cautious exploration' could potentially be a strategy worth considering for achieving higher academic impact.

### Propensity score weighting

Our study has four groups differentiated by whether the EP/ED of each scientist lies in the top or bottom half of the metrics, respectively, as opposed to the usual two-group setting in causal inference. We wish to systematically quantify the differences between the scientific careers of those in the high/low EP/ED groups by exploiting causal inference techniques that control for the inter-group discrepancies. For the multiple 'treatment' scenario considered, each group corresponds to a treatment, and we pick one of the treatments as the baseline group, the choice of which makes little difference and often depends on what we are looking at. In practice, whether a scientist is in a certain treatment group is intertwined with her other features, which results in systematic inter-group differences. The _propensity score weighting_ scheme assigns scientists weights based on their estimated odds of being in their respective groups, so that the groups become similar after reweighting. We apply the highly versatile nonparametric generalized boosted model (GBM)[44] from machine learning to estimate the propensity score weights. We then conduct a weighted regression where the weight on each data point (scientist) equals the reciprocal of her estimated propensity score of being in her group. Taking one of the groups as the baseline, the regression uses membership indicators of the remaining groups as independent variables (covariates) and the average log-c5 of the future papers as the dependent variable (response). The obtained regression coefficient of each group membership indicator, together with the corresponding confidence intervals, equivalently measures the _average treatment effect (ATE)_ of being in that group relative to the baseline. The ATE measures the (average) difference in outcomes between the _hypothetical_ cases that all scientists in question, regardless of which group they are in, were in the treatment group and that all were in the baseline group.
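As a rough illustration of this weighting scheme (not the exact pipeline used in the study), the sketch below estimates multinomial propensity scores with a gradient-boosted classifier standing in for the GBM, and runs an inverse-propensity-weighted regression of future log-citations on group indicators; all column names and the simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-scientist data: past covariates, group label, future outcome.
df = pd.DataFrame({
    "logcit_past": rng.normal(1.0, 0.3, n),
    "n_papers_past": rng.integers(5, 30, n),
    "group": rng.choice(["hiEP_loED", "hiEP_hiED", "loEP_loED", "loEP_hiED"], n),
    "logcit_future": rng.normal(1.0, 0.3, n),
})

# Multinomial propensity scores from a gradient-boosted classifier.
X = df[["logcit_past", "n_papers_past"]]
clf = GradientBoostingClassifier().fit(X, df["group"])
probs = pd.DataFrame(clf.predict_proba(X), columns=clf.classes_)

# Each scientist is weighted by the inverse of the probability of her own group.
df["w"] = [1.0 / probs.loc[i, g] for i, g in enumerate(df["group"])]

# Weighted regression of the future outcome on group indicators; the
# coefficients are ATEs relative to the (alphabetically first) baseline group.
ate = smf.wls("logcit_future ~ C(group)", data=df, weights=df["w"]).fit()
print(ate.params)
```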
### Ruling out other factors

We have calculated the E-values[24] to evaluate the sensitivity of the statistical significance of EP and ED in the regressions to possible unknown confounding factors, where large E-values indicate that more unaccounted-for confounding would be needed to explain away an effect estimate, or, roughly speaking, that the findings are more reliable against unmeasured confounding. While we have controlled for many easily conceivable factors, and a quick calculation of the E-values in the above regression analysis (table S5) indicates that our results have taken enough confounding factors into account, it is still conceptually possible that other less visible factors may have more than negligible roles. For instance, after the split point, (A) perhaps the cautious explorers are more likely to switch to hot areas, which often results in more citations[27]? Or (B) are they more likely to publish papers with high novelty and high conventionality, which were shown to be superior[28]? We have formulated more such hypotheses to propose factors that may explain away the discovered phenomenon, as follows: (C) Do cautious explorers work with larger teams such that more citations follow[29, 31], or do their scholarly impacts mainly come from research in which they take secondary roles[30]? (D) Do they change institutions more frequently[32], or do they work in Ivy League schools such that they enjoy more resources and opportunities to produce high-impact work? (E) Is cautious exploration the behaviour of the focal scientist, or is it contributed by co-authors who bring in their preferences of research topics[26]? To clarify such questions, we conduct several regression analyses where the independent variables further include variables indicating the factors of concern listed above, along with the usual ones studied in the main text. When such additional variables are considered, or even if we include all the variables of these hypotheses in one of the regression studies, our results (section S6) are robust: the regression coefficients of EP and ED have the same signs and are always statistically significant, although the explanatory power of EP and ED combined may be weakened slightly. This implies that although more factors may be at play, the differences between scientists indicated quantitatively by the EP and ED metrics are sound.

## Code availability

The code for this study is accessible at [https://cautious-explorers.github.io/cautious-explorers/](https://cautious-explorers.github.io/cautious-explorers/).

## Data availability

The processed data for analysis and plotting, as well as the pointers to the original data, are available at [https://cautious-explorers.github.io/cautious-explorers/](https://cautious-explorers.github.io/cautious-explorers/).
**Supplementary Information for**

_Cautious explorers generate more future academic impact_

###### Contents

* S1 Data Description
* S1.1 The APS Dataset
* S1.1.1 Author Name Disambiguation
* S1.1.2 Data Selection
* S1.1.3 Log-Citations
* S1.1.4 Gender Assignment
* S1.1.5 Limitations of the APS dataset
* S1.2 The PubMed Dataset
* S1.3 The ACS Dataset
* S2 Exploration Metrics
* S2.1 Exploration Propensity
* S2.2 Exploration Distance
* S2.2.1 Topic Co-occurrence Graph
* S2.2.2 Topic Distance
* S2.2.3 Paper Distance and Exploration Distance
* S2.3 Exploration Metrics in Relation to Other Factors
* S2.3.1 Gender
* S2.3.2 Team Size
* S2.3.3 Ivy League Experience
* S3 Preliminary Analysis of the EP and ED
* S3.1 Correlational Analysis
* S3.2 Relations Between Past Performance and Future Explorations
* S4 Experiments
* S4.1 Regression
* S4.2 Propensity Score Matching and Weighting
* S4.2.1 The EP
* S4.2.2 The ED
* S4.2.3 The EP and ED combined and PSW
* S4.3 Fixed Effect Analysis
* S4.4 Future EP/ED and Future Performance
* S4.5 Performance of "Drastic Changers"
* S4.6 Results on the PubMed Dataset
* S4.7 Results on the ACS Dataset
* S5 Case Study
* S6 Analysing Other Possible Latent Factors
* S6.1 Hypothesis A: Research Area
* S6.2 Hypothesis B: Paper Novelty
* S6.3 Hypothesis C: Advantageous Collaborations
* S6.4 Hypothesis D: Changing Institutions
* S6.5 Combining Hypotheses A-D
* S6.6 Mediation Analysis
* S7 Robustness Checks
* S7.1 Different Subsets of Scientists
* S7.1.1 The Minimum Number of Publications Requirement
* S7.1.2 Removing Scientists with Asian Last Names
* S7.2 Controlling for the Genders of Scientists
* S7.3 Alternative Definitions of the EP and ED
* S7.3.1 Different PACS code digits
* S7.3.2 Different Topic Graphs
* S7.3.3 Different Node Distance Metrics
* S7.3.4 Different Paper Distance
* S7.4 Citation Measurements
* S7.4.1 10-Year Citations
* S7.4.2 Normalized Citations
* S7.4.3 Percentiles among Peers
* S7.4.4 Maximum Future Citations
* S7.5 Different Look-back Windows
* S7.6 Propensity Score Matching
* S7.6.1 Varying Grouping Quantiles
* S7.6.2 Shuffling after Matching--The Null Model
* S7.7 Propensity Score Weighting
* S7.7.1 Shuffling after Weighting--The Null Model
* S7.8 Perturbing Independent Variables

List of Figures

* 1 An illustrative sample of the classification scheme of the MAG dataset.
* 2 Relationship between the co-occurrence weight and the neighbourhood overlapping similarity.
* 3 Distributions of EPs and EDs for male and female scientists.
* 4 The average EPs and EDs for scientists with different team sizes.
* 5 Distributions of EPs and EDs for Ivy versus non-Ivy scientists.
* 6 The distributions of EPs and EDs for scientists with at least 10 papers.
* 7 Association between scientists' ED and academic performance.
* 8 Relationships between the prior performance and the next explorations.
* 9 Average treatment effects with different baseline groups and varying numbers of papers as split point.
* 10 Average treatment effects with different baseline groups and varying career years as split point.
* 11 Average treatment effects for different treated groups and varying numbers of papers as split point.
* 12 Average treatment effects for different treated groups and varying career years as split point.
* 13 Average treatment effects for varying split points in a simplified setup.
* 14 Average treatment effects with varying numbers of papers as split point in the PubMed\({}_{1}\) dataset.
* 15 Average treatment effects with varying career years as split point in the PubMed\({}_{1}\) dataset.
* 16 Average treatment effects with varying quantiles in the PubMed\({}_{1}\) dataset.
* 17 Average treatment effects with varying numbers of papers as split point in the PubMed\({}_{2}\) dataset.
* 18 Average treatment effects with varying career years as split point in the PubMed\({}_{2}\) dataset.
* 19 Average treatment effects with varying quantiles in the PubMed\({}_{2}\) dataset.
* 20 Average treatment effects with varying numbers of papers as split point in the ACS dataset.
* 21 Average treatment effects with varying career years as split point in the ACS dataset.
* 22 Average treatment effects with varying quantiles in the ACS dataset.
* 23 Possible mechanisms to explain why cautious explorers have more impact in the future.
* 24 Distributions of the ED calculated with 2, 4, and 6 digits of PACS codes.

List of Tables

* 1 Pearson correlation coefficients between code distance matrices calculated using papers from different time periods
* 2 Regression results with varying regression equations
* 3 Regression results of Model (S3) with varying split points
* 4 Regression results of Model (S4) with varying split points
* 5 E-values for the regression analyses with varying split points
* 6 PSM results with varying career years as split point
* 7 Average treatment effect in different groups with varying career years as split point
* 8 PSM results with varying numbers of papers as split point
* 9 Average treatment effect with varying numbers of papers as split point
* 10 Regression results of the system GMM
* 11 Regression results of Model (S8) with varying split points
* 12 Regression results on the PubMed\({}_{1}\) dataset
* 13 Regression results on the PubMed\({}_{1}\) dataset with an alternative performance measurement
* 14 Regression results on the PubMed\({}_{2}\) dataset
* 15 Regression results on the ACS dataset
* 16 Regression results on the ACS dataset with an alternative performance measurement
* 17 Normalized regression coefficients of the EP and ED with different factors as the dependent variable
* 18 Coefficients of the EP and ED when controlling for different factors
* 19 Pearson correlation coefficients between confounding factors and other independent variables
* 20 Pearson correlation coefficients between confounding factors and other independent variables with 10 papers as split point
* 21 Mediation effects of different factors on the future impact
* 22 Mediation effects of different factors on the future impact with 10 papers as split point
* 23 Regression coefficients of the EP and ED on different subsets of scientists, with at least 5 or 20 publications
* 24 Regression coefficients of the EP and ED on the subset excluding scientists with common Asian last names
* 25 Regression coefficients of the EP and ED when controlling for scientists' genders
* 26 Regression coefficients of the EP and ED with varying definitions of EP and ED

## S1 Data Description

### The APS Dataset

We perform our analyses primarily on the APS dataset1, which is provided by the American Physical Society (APS) and includes information on all papers published in the Physical Review series of journals between 1893 and 2020, totalling more than 670,000 publications. For each paper, the dataset contains information such as the paper's digital object identifier (DOI), the date of publication, the name of the journal of publication, and the name and affiliation(s) of every author.
In addition, the citations between publications in the Physical Review journals and the PACS codes of each paper are provided.

Footnote 1: [https://publish.aps.org/datasets](https://publish.aps.org/datasets)

The Physics and Astronomy Classification Scheme2 (PACS) is the classification scheme used for papers in the APS journals, which we apply in the definitions of the exploration propensity (EP) and the exploration distance (ED) in our study. The PACS codes in each paper are submitted by the authors and reviewed by the editors. The scheme first came into use in the early 1970s and was discontinued around 2015. More than 90% of the publications were assigned PACS codes between 1976 and 2015, and one publication can have up to five PACS codes. A PACS code takes the form of six digits with a hierarchical structure. The highest level in a PACS code is marked by the first digit, ranging from 0 (general) to 9 (geophysics, astronomy and astrophysics). The second level is marked by the first two digits, such as 04 (general relativity and gravitation) and 32 (atomic properties and interactions with photons). The third level is recorded by the first four digits, e.g., 91.25 (geomagnetism and paleomagnetism; geoelectricity). Overall, there are 10 categories in the top level, 73 in the second, and 948 in the third. The fourth and fifth levels are designed to be represented by the first five digits and the entire six digits, respectively (e.g., 91.25.F for 'rock and mineral magnetism' and 91.25.fd for 'environmental magnetism'). All the 5,971 unique PACS codes cover the third level, 80% of them the fourth, and 10% the fifth. These unique 6-digit PACS codes can be seen as a mixture of the finest possible levels--20% of them being third-level, 70% fourth-level, and 10% fifth-level. We call each 6-digit PACS code a research _topic_ and use the first two digits to represent a broader research _area_.

#### S1.1.1 Author Name Disambiguation

Since the APS does not maintain a list of unique author identifiers, name disambiguation is required to associate scientists with their publications. The name disambiguation procedure for scientific publications usually starts with assigning a unique identifier to each author of any publication, then proceeds to merge the identifiers that appear to correspond to the same person. We infer the authorship from the author names and metadata available in each publication, which is similar to the name disambiguation methods in prior works[14, 21]. First, we compile a list of 2,333,999 authors by treating each author from any publication as if each were unique; then, we merge the records of the supposedly identical authors into one entry in the following steps.

1. We first combine the author records with the same last names and compatible first names into one group. By "compatible", we mean that either the first names match exactly when the complete first names are available, or the first names have the same initials when only initials are available.
2. In each group, we measure the degree of similarity of authors from the following perspectives: whether they worked in similar and/or related institutions, whether they cited each other, had the same co-authors, and published in journals of similar research areas. Records with similarities above a certain threshold are merged.

With the above merging process, we obtain 395,678 authors.
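To illustrate the grouping step only (the similarity-based merging of step 2 is omitted), here is a minimal sketch of the first-name compatibility rule; the record format and function name are hypothetical and do not reproduce the actual disambiguation code.

```python
from collections import defaultdict

def compatible(first_a, first_b):
    """First names are compatible if they match exactly, or if at least one
    of them is only an initial and the initials agree."""
    a, b = first_a.strip(".").lower(), first_b.strip(".").lower()
    if len(a) > 1 and len(b) > 1:
        return a == b
    return a[:1] == b[:1]

# Hypothetical author records: (record_id, last_name, first_name).
records = [(0, "zhang", "Wei"), (1, "zhang", "W"), (2, "zhang", "Wen"), (3, "smith", "J")]

# Group records by last name, then keep pairs with compatible first names
# as candidates for the similarity-based merging of step 2.
by_last = defaultdict(list)
for rec in records:
    by_last[rec[1]].append(rec)

candidate_pairs = [
    (r1[0], r2[0])
    for recs in by_last.values()
    for i, r1 in enumerate(recs)
    for r2 in recs[i + 1:]
    if compatible(r1[2], r2[2])
]
print(candidate_pairs)  # [(0, 1), (1, 2)] -- "Wei" and "Wen" are not compatible
```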
To evaluate the accuracy of our disambiguation procedure, we randomly sample author pairs from all the author pairs that have the same last names and compatible first names. In detail, we sample 200 pairs of author identifiers that our procedure classifies as identical and 200 pairs of author identifiers that our procedure considers different. Out of these 400 pairs, we evaluate the outcomes of our procedure by manually looking up names on ResearchGate3 and Google Scholar4 to recover the ground truth. As such, we obtain estimates of the false positive rate--the fraction of pairs that are merged wrongly--to be about 5% and the false negative rate--the fraction of pairs that should have been merged but were not--to be around 10%. Compared to other related works[14, 21], our error rates are low. Besides, we note that the errors produced by disambiguation are not evenly distributed among authors. Previous studies have pointed out that name disambiguation is more difficult for authors with Asian names[21]. In fact, in the evaluation just above, 90% of the errors are related to authors with Asian names. We have repeated our main analysis on the subset excluding the authors with Asian names, and have found that our experimental results remain essentially the same (see Section S7.1.2).

#### S1.1.2 Data Selection

After disambiguating the author names, we obtained the publication records of all authors. We only consider the authors and their publications that meet the following criteria: (1) the publications were published during 1976-2015, when it was required to submit the PACS codes along with each publication; (2) we exclude the authors with any publication that has an incomplete record in the dataset, in the sense that not all fields (publish date, PACS codes) related to the paper were included; (3) the authors should be associated with at least ten papers that were published between 1976 and 2015. Criteria (1) and (2) ensure the accuracy of the author's EP and ED calculations and allow us to trace the author's initial performance; criterion (3) ensures that the author had a substantial scientific career rather than, say, only participating in some projects briefly. Note that enforcing (2) removes about 6% of all the authors, a small proportion, and 281,916 papers published by 25,237 scientists are left in the dataset after this step of screening.

#### S1.1.3 Log-Citations

The most prominent problems with citation-based performance measures are that the number of citations for papers grows naturally as time elapses, and that the heavy-tailed distribution of citations leads to erratic performance measurements[4]. As such, for any paper, we use the logarithm of the cumulative citations it received within five years after its publication to measure its scientific impact. In the following sections, we call it "5-year log-citations", denoted by \(\log c_{5}\). Taking logarithms of the citation numbers makes them much less problematic, as the effect of occasionally extraordinarily large citation numbers in averaging is mitigated, and since the logarithm transformation is strictly monotone increasing on \((0,\infty)\), the ordering of the papers' importance (as measured by 5-year citations) is preserved.
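As a small illustration, assuming each paper is represented by its publication year and the years of the citations it receives, \(\log c_{5}\) per paper and a scientist's average could be computed as below; how papers with zero citations in the window are treated is not specified in the text, so excluding them here is an assumption of the sketch.

```python
import math

def log_c5(pub_year, citing_years, window=5):
    """5-year log-citations of one paper: log of the number of citations
    received within `window` years after publication."""
    c = sum(1 for y in citing_years if pub_year <= y <= pub_year + window)
    return math.log(c) if c > 0 else None  # zero-citation handling is an assumption

# Hypothetical papers: (publication year, years of the citations received).
papers = [(2001, [2002, 2003, 2003, 2010]), (2005, [2006, 2007])]
scores = [s for s in (log_c5(p, c) for p, c in papers) if s is not None]
avg_log_c5 = sum(scores) / len(scores)  # the scientist's performance measure
print(avg_log_c5)
```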
#### S1.1.4 Gender Assignment

In our study, we infer a scientist's gender from the name. Specifically, we use a commercial service genderize.io5 widely used in scientific studies[11], which has built a database of names mapping first names to binary gender labels by integrating information from publicly available census datasets. For each first name, genderize.io gives a gender prediction and provides both an estimated probability of its labelling being correct and the count of samples in its database. We only accept its gender labelling for first names with at least 10 samples and a probability of at least 60% of being correct in its database, so that relatively precise gender assignments are carried out. After this process, we obtain 17,140 male and 1,904 female scientists in the APS dataset.

Footnote 5: [https://genderize.io/](https://genderize.io/)

#### S1.1.5 Limitations of the APS dataset

Although the APS dataset has been widely investigated in scientific studies, there are several limitations that should be kept in mind. First, scientists may have published their manuscripts in any journal, but we only study their publications in the APS journals. Similarly, papers may receive citations from journals not in the APS journal list, but we only count the citations between pairs of APS journals. Second, the PACS codes were submitted by scientists and might later be reviewed and modified by editors, so they might not have been objectively written down in accordance with the classification scheme. Besides, the PACS codes and their exact meanings were subject to revisions by the APS from time to time. Nevertheless, the PACS was for a long time an enduring and recommended classification standard for the APS journals. Third, some papers had incomplete records--some are missing the PACS codes or the scientists' names. For the record, we note that over 95% of all the publication records of the scientists with at least 10 papers during 1976-2015 have PACS codes on record. More than 99% of the publications have complete records of all the other information (paper DOI, publish date, number of citations, authors' names and affiliations, and journal of publication). Our analyses are performed only on those papers which have records of both PACS codes and authors' names.

### The PubMed Dataset

We also examine the PubMed dataset, which covers biomedicine publications. This dataset is constructed from the Microsoft Academic Graph (MAG)[22], by requiring the URLs of the papers to be from PubMed. It originally contains 29,776,639 papers. For the PubMed dataset, there are two distinct coding systems that can be used to identify the areas and topics of papers. The first is a classification scheme provided by MAG. It is inferred based on paper content and other publicly available information[22] and applies to all papers in MAG. The second is the hierarchically organized Medical Subject Headings (MeSH)6 specific to the PubMed dataset and provided by the National Library of Medicine (NLM). In this study, we not only use the MAG coding system to infer the areas and topics of papers within PubMed, but also utilize the MeSH system to build a separate dataset for robustness check purposes. Specifically, MAG's classification scheme consists of 6 levels (Figure S1). For papers in PubMed, there are 45 categories in the first level, 39,666 in the second level, 65,949 in the third, 91,552 in the fourth, and 17,643 in the fifth. We use the first-level fields as _areas_ for the computation of the EP and the second-level fields as _topics_ for the ED.
As a sizeable portion of the publications (about 17.1%) has no record of second-level fields, we fill these missing values with the fields of other levels in the following steps.

1. If the publication has records of its third-level fields, we find the corresponding second-level fields of each of its third-level fields, and consider these corresponding second-level fields as the "true" second-level fields of the publication.
2. If a publication does not have records of its third-level fields, we use the fourth-level fields to obtain the corresponding third-level fields and then the third to the second.
3. Those with only fields of levels beyond the third in their records can be dealt with similarly in a recursive fashion.

We also utilize the MeSH for inferring PubMed papers' areas and topics. This involves mapping a MeSH code to the MeSH tree7 and obtaining the corresponding area and topic from its ancestor and descendant, respectively. Given a MeSH code, we use its ancestor at the first level (the MeSH tree starts at the zeroth level) as the area it represents, and the descendant(s) at the tenth level as the topic(s). If all leaf nodes of a code are at levels less than ten, we just use these leaf nodes as the code's topics. Note that the NLM modifies the structure of the MeSH tree annually, and we decide to use the most recent (2016) MeSH tree. By doing this, 99.94% of the papers have at least one MeSH code. In total, we have 14 areas and 39,660 topics using MeSH, which are of a comparable order of magnitude to the results from using MAG's classification scheme. Footnote 7: [https://meshb-prev.nlm.nih.gov/treeView](https://meshb-prev.nlm.nih.gov/treeView)

The MAG provides citations between publications, similar to the APS dataset. Recalling \(\log c_{5}\), we count only a paper's citations in the five-year period after its publication, and take the logarithm to measure the paper's impact. Note that here we are able to count a paper's citations from all papers that cite it, instead of from only those published in the APS journals as in the APS dataset (see Section S1.1.5). All the publication entries in the PubMed dataset have field IDs and publication dates, and more than 99% of them contain author IDs. Similarly to the APS dataset, we consider only papers published before the end of 2016 to make sure that the 5-year log-citations (after their publications) are calculable, and only authors with at least 10 publications are studied. (In the case of using 10-year log-citations, discussed later, we only consider papers published before the end of 2011 for similar reasons.) Finally, using the MAG and MeSH coding systems respectively, we arrive at the PubMed\({}_{1}\) dataset comprising 1,218,355 scientists and 13,072,174 papers, and the PubMed\({}_{2}\) dataset with 1,260,125 scientists and 13,649,286 papers. The divergence in the number of papers between the two datasets is attributable to differences in code coverage rates for the papers.

### The ACS Dataset

We also construct a dataset of chemistry papers published in the 86 American Chemical Society (ACS) journals, from 1879 to 2021, by filtering the MAG dataset. Originally, it contains 1,325,257 papers. It is then processed in the same way as the PubMed dataset, and the papers' areas and topics are decided by the MAG classification scheme (the MeSH system is only available for PubMed). In total, the processed dataset has 21 areas and 16,257 topics, covering 62,316 scientists and 747,061 papers.
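Across all three datasets, a paper's impact is measured by the 5-year log-citations defined in Section S1.1.3. As a concrete illustration, a minimal sketch of how \(\log c_{5}\) could be computed from a paper's citation events is given below; the field names are hypothetical, and mapping an uncited paper to \(\log(0+1)=0\) is our own assumption rather than a convention stated in the text:

```python
import math
from datetime import date, timedelta


def log_citations(pub_date: date, citation_dates: list[date], window_years: int = 5) -> float:
    """Logarithm of the cumulative citations received within `window_years` of publication."""
    cutoff = pub_date + timedelta(days=round(365.25 * window_years))
    count = sum(1 for d in citation_dates if pub_date <= d < cutoff)
    return math.log(count + 1)  # assumption: uncited papers are mapped to 0


# Example: a 2005 paper cited three times within five years and once afterwards.
cites = [date(2006, 1, 1), date(2007, 6, 1), date(2009, 12, 31), date(2011, 1, 1)]
print(log_citations(date(2005, 3, 1), cites))  # log(3 + 1) ≈ 1.386
```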
## S2 Exploration Metrics ### Exploration Propensity The exploration propensity (EP) measures a scientist's likelihood of switching to an unfamiliar area. It is calculated as the proportion of her exploratory papers that explore unfamiliar research areas among all her papers. An exploratory paper is a paper that covers at least one area that is different from the areas of the papers in the _look-back_ period, which can either consist of her past \(J\) papers before this paper or the papers in the \(K\) years before the publication year. When \(J\) is set to be 1, the EP is the probability that a scientist explores different areas between any two consecutive publications. When \(J\) is \(\infty\), we consider all papers before it when deciding whether a paper is exploratory or not, even if the previously explored area might have seen so much progress that it is quite different from the area a few years ago. One example would be "machine learning", which nowadays mainly consists of deep learning and is drastically different from what machine learning used to be several years ago. By default, we use \(J=5\) and in the robustness check, we vary \(J\) from 1 to 15 and \(K\) from 1 to 15 as well. Besides, we also set them to \(\infty\) to consider all the past papers for the robustness checks. Formally, assuming that a scientist has published \(L\) papers, we need to decide how many of them are exploratory. We denote her \(i\)-th paper as \(p_{i}\) and when \(p_{i}\) is not the first paper (\(i\in\{2,\ldots,L\}\)), we compare its set of areas \(A_{i}\) with \(A_{i,m}\), which is the set of past areas from her \(m\) papers published in the _look-back_ period. If \(A_{i}\not\subset A_{i,m}\), \(p_{i}\) is an exploratory paper since it contains areas unseen in the _look-back_ period. The first paper, \(p_{1}\), does not have previous papers, and thus we do not determine whether it is exploratory. Therefore, \((L-1)\) out of \(L\) papers can be determined, and we calculate the EP as the fraction of exploratory papers among these \((L-1)\) papers. Specifically, for the APS dataset, we use the first two digits of the PACS codes as the area identifiers, of which there are 73 unique ones, while for the PubMed dataset, we use the first-level tags of 45 different areas. If we use the entire six digits of PACS codes (research "topics", as described in Section S1.1) when determining exploratory papers, most papers will be exploratory, and a large portion of scientists will appear to explore all the time with high EPs. The other extreme is to use the first digit that represents one of the 10 top level classifications as a PACS code's area, and the coarse-grained classifications would result in very few switches and many scientists with EPs being zero. We therefore use the first two digits, and we try four digits in Section S7.3.1. ### Exploration Distance To complement the EP, we calculate the exploration distance (ED) of a scientist by an average distance of her papers to their corresponding previous papers. A paper's distance to another is determined by the average distance between the two papers' topics. And to compute the distance between two topics, the straightforward approach based on counting topic co-occurrences in the same papers [26] is not applicable in our scenario, since 98.31% of the six-digit PACS code pairs in the APS dataset do not co-occur, despite that they may be indirectly related. 
If the two-digit PACS area codes are used instead of the six-digit topic codes for the co-occurrences, only crude area-level distances can be captured, and finer details in topics are omitted. Therefore, it is more appropriate to use six-digit PACS codes to include subtle (dis)similarity at the topic level. As such, we adopt the construction of a co-occurrence-based code similarity graph using the entire dataset and measure the pairwise code distances according to how the pair's neighbours overlap [16, 18, 34]. After such a graph is constructed, the distance between any pair of papers can be determined, based on which we calculate the EDs of the scientists. We elaborate on these operations in the following subsections.

#### S2.2.1 Topic Co-occurrence Graph

Following [18], we consider all papers in the focal dataset and construct a weighted graph, in which each node represents a unique topic and an edge connects two topics that have appeared in the same papers. The edge weight between topics \(i\) and \(j\) is \(w_{ij}=\sum_{p\in H}{(n_{p}-1)^{-1}}\), where \(H\) denotes the set of all papers containing \(i\) and \(j\), and \(n_{p}\) is the number of topics in paper \(p\). \((n_{p}-1)\) is the number of co-occurring topics that each topic has in paper \(p\), and each co-occurrence receives the weight \(1/(n_{p}-1)\) from this paper. Therefore, \(w_{ij}\) depicts the co-occurrences of \(i\) and \(j\), with each co-occurrence weighted inversely by the number of topic co-occurrences that \(i\) or \(j\) has in the corresponding paper. The strength of each topic, \(s_{i}=\sum_{j}w_{ij}\), is equal to the number of papers containing that topic, excluding those with single topics. With this graph, the similarity between topics \(i\) and \(j\) can be calculated using the _weighted overlap_ indicator that considers their overlapping neighbours.

#### S2.2.2 Topic Distance

In an unweighted graph, the similarity between two nodes can be measured by the number of their overlapping neighbours divided by the number of nodes in the union of their neighbour sets [17]. Pan et al. [18] extend this to a weighted graph and define the _weighted overlap_ similarity between \(i\) and \(j\) as: \[O_{ij}=\frac{W_{ij}}{s_{i}+s_{j}-2w_{ij}-W_{ij}}\] (S1) where \(w_{ij}\) is the weight of the edge between nodes \(i\) and \(j\), \(s_{i}\) is the sum of weights of all \(i\)'s edges, and \(W_{ij}\), defined by \(\sum_{k\in\Lambda_{i}\cap\Lambda_{j}}\frac{w_{ik}+w_{kj}}{2}\), represents the weight from the overlapping neighbours of \(i\) and \(j\), where \(\Lambda_{i}\) is the set of \(i\)'s neighbouring nodes. The subtraction of \(2w_{ij}\) in the denominator is to exclude the effect of direct links between \(i\) and \(j\), if there are any. By definition, \(O_{ij}=0\) if \(i\) and \(j\) do not have any overlapping neighbours, while \(O_{ij}=1\) if \(i\) and \(j\) have the same set of neighbours. We then define the topic distance between \(i\) and \(j\) as \(\text{TD}_{ij}=1-O_{ij}\), and \(\text{TD}_{ij}\in[0,1]\) by definition. We choose to compute neighbourhood overlap for the ED instead of measuring how nodes are directly connected (i.e., the weighted co-occurrence on the graph edges), mainly because very few (1.69%) pairs of nodes are directly connected. For these pairs of connected codes, the similarities measured from the neighbourhood overlap correlate positively with the weighted co-occurrence (Fig. S2), indicating that the neighbourhood overlap measurement is a reasonable replacement.
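As an illustration of Eq. (S1), a toy implementation of the co-occurrence graph and the resulting topic distance is sketched below; the topic labels are made up, and this is not the code used to produce the reported numbers:

```python
from collections import defaultdict
from itertools import combinations


def build_graph(papers):
    """papers: list of topic lists; returns w[(i, j)] = sum over papers of 1 / (n_p - 1)."""
    w = defaultdict(float)
    for topics in papers:
        uniq = sorted(set(topics))
        if len(uniq) < 2:
            continue  # single-topic papers contribute no co-occurrence
        for i, j in combinations(uniq, 2):
            w[(i, j)] += 1.0 / (len(uniq) - 1)
    return w


def topic_distance(i, j, w):
    """TD_ij = 1 - O_ij, with the weighted overlap O_ij of Eq. (S1)."""
    get = lambda a, b: w.get((min(a, b), max(a, b)), 0.0)
    nodes = {a for edge in w for a in edge}
    s = {a: sum(get(a, b) for b in nodes if b != a) for a in nodes}
    common = [k for k in nodes if k not in (i, j) and get(i, k) > 0 and get(j, k) > 0]
    W_ij = sum((get(i, k) + get(k, j)) / 2 for k in common)
    denom = s[i] + s[j] - 2 * get(i, j) - W_ij
    return 1.0 - (W_ij / denom if denom > 0 else 0.0)


w = build_graph([["74.20", "74.25", "71.10"], ["74.20", "71.10"], ["74.25", "71.10", "72.15"]])
print(topic_distance("74.20", "74.25", w))  # ≈ 0.29 on this toy graph
```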
Some PACS codes may disappear in certain years [18], and if the code graph is constructed from papers in a short period of time, it may miss some codes, such that the distances involving these missing codes cannot be computed for other years. To avoid this, we use papers of all years to build the graph, so that all the codes that have ever appeared are in this graph. Here we show that the code distances are relatively stable, regardless of whether we construct the graph with all papers or just the papers from a certain period of time, by adopting the analysis from Hidalgo et al. [9]. First, we construct node graphs using papers from different time periods. For each period, we compute the distances for all possible node pairs and form a distance matrix only for the nodes that appear across all periods. We then calculate pairwise correlations for all the matrices, including the one built with all papers as used in our method. The correlations in Table S1 are fairly high, indicating the stability of node distances over time. The first row of Table S1 shows that our distance matrix that uses all papers aligns well with the distance matrices built from papers of various time periods, and the correlations are mostly above 0.75. The only exception is 1981-1985 (0.544), during which the published papers only account for 1.21% of all the papers.

#### S2.2.3 Paper Distance and Exploration Distance

Before diving into the ED, we need to compute paper distances for the papers in consideration. Similar to Section S2.1, where we decide whether a paper \(p_{i}\) is exploratory by comparing its set of areas (\(A_{i}\)) with that of the \(m\) papers before it (\(A_{i,m}\)), the paper distance is the average of pairwise topic distances between \(p_{i}\)'s topics and those of the \(m\) papers: \[\text{PD}_{p_{i}}=\frac{1}{|T_{i}|\times|T_{i,m}|}\sum_{t_{j}\in T_{i}}\sum_{t _{k}\in T_{i,m}}\text{TD}_{t_{j}t_{k}},\] (S2) where \(t_{j}\) is a topic in \(p_{i}\)'s topic set \(T_{i}\), and \(t_{k}\) is a topic in the \(m\) past papers' topic set \(T_{i,m}\). The ED of a scientist is computed as the average paper distance over all her published papers in a certain period, which often corresponds to the part before the split point.
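Putting Sections S2.1-S2.2.3 together, a minimal sketch of both metrics for a single scientist is given below; it assumes a chronologically ordered publication list and a two-argument topic-distance callable (e.g. the function above with the graph bound via `functools.partial`), and the field names are hypothetical:

```python
def exploration_metrics(papers, topic_distance, look_back=5):
    """papers: chronologically ordered dicts with 'areas' (2-digit PACS) and 'topics' (6-digit PACS).
    Returns (EP, ED) for one scientist with a look-back window of J = look_back papers."""
    exploratory, paper_distances = [], []
    for i in range(1, len(papers)):
        window = papers[max(0, i - look_back):i]
        past_areas = set().union(*(p["areas"] for p in window))
        past_topics = [t for p in window for t in p["topics"]]
        # exploratory if the paper carries an area unseen in the look-back window (A_i not a subset of A_{i,m})
        exploratory.append(not set(papers[i]["areas"]) <= past_areas)
        # Eq. (S2): average pairwise topic distance between this paper and the look-back window
        dists = [topic_distance(tj, tk) for tj in papers[i]["topics"] for tk in past_topics]
        paper_distances.append(sum(dists) / len(dists))
    ep = sum(exploratory) / len(exploratory)          # fraction of exploratory papers among the L - 1 decidable ones
    ed = sum(paper_distances) / len(paper_distances)  # average paper distance
    return ep, ed
```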
### Exploration Metrics in Relation to Other Factors

We examine the relationships between the two exploration metrics and the factors that are often associated with systematic differences in human behaviours connected to academic career performance. Some factors, such as the number of years into a career and different periods of time as starting points of careers, have been discussed in the main text.

#### S2.3.1 Gender

Using the gender-inference tool described in Section S1.1.4, we obtain 17,140 male and 1,904 female scientists in the APS dataset and plot the distributions of EP and ED for the two genders in Figure S3. Applications of the Kruskal-Wallis (KW) test suggest that there are no statistically distinguishable differences between the EP distributions of the two genders (\(P=0.3886\)), while they seem to differ (statistically) in their ED distributions (\(P=0.0045\)). The average EDs of men and women are 0.570 and 0.561, respectively. In light of this, we will include gender as an independent variable in the robustness check in Section S7 to see the role it plays in scientists' future performance.

#### S2.3.2 Team Size

Team sizes affect the extent to which scientists explore [29]. For each scientist, we calculate her average team size across all her papers, where the team size of a paper refers to its number of authors. We then plot the EP and ED distributions over different team sizes in Figure S4 and find that the two metrics peak when the team size is around 5 and 6.

#### S2.3.3 Ivy League Experience

We identify scientists who have worked for or studied at Ivy League universities according to their affiliated institutions in their publications. Figure S5 shows the distributions of EPs and EDs for Ivy versus non-Ivy scientists, and the KW tests, as expected, suggest that these two groups of scientists have different exploration patterns--the ones from the Ivy group have larger EPs (\(P=1.40\times 10^{-12}\)) and EDs (\(P=3.60\times 10^{-13}\)).

## S3 Preliminary Analysis of the EP and ED

The existing literature has reported correlations between scientists' academic performance and their tendencies to switch to new topics; among others, Zeng et al. [33] found a weak negative correlation between the scientists' 5-year citations per paper and their degree of concentration on certain topics, measured (reversely) by the switching probabilities. In what follows, we study the EP and ED and explore the relationships between them and their relations with confounding factors. We start the section by plotting the histograms of career-long EPs and EDs of all scientists in Fig. S6.

### Correlational Analysis

As pointed out in Fig. 2 in the main text, we observed a negative correlation between the exploration propensity and the scientists' academic performance (Pearson correlation coefficient: \(-0.1072\))--as a scientist's overall exploration propensity increased, her academic performance became worse. As our study unveils in the main text and in the sections below, having a higher EP should in fact be associated with better academic performance in the future, and we remark that this is an illustrative example of the failure of correlational analysis that statisticians have always warned us about. Furthermore, there is a more substantial negative correlation between the scientists' exploration distances and their overall performance (Pearson correlation coefficient: \(-0.2852\)) (Fig. S7).
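For reference, the correlational analysis above boils down to two Pearson correlations (and, for the gender comparison in Section S2.3.1, Kruskal-Wallis tests); a minimal SciPy sketch with a hypothetical per-scientist table and column names:

```python
import pandas as pd
from scipy.stats import kruskal, pearsonr

df = pd.read_csv("scientists.csv")  # hypothetical file: one row per scientist

# Fig. S7-style correlations between the career-long metrics and the mean 5-year log-citations
r_ep, p_ep = pearsonr(df["ep"], df["logc5_per_paper"])
r_ed, p_ed = pearsonr(df["ed"], df["logc5_per_paper"])
print(f"EP: r={r_ep:.4f} (P={p_ep:.2g});  ED: r={r_ed:.4f} (P={p_ed:.2g})")

# Fig. S3-style Kruskal-Wallis comparison of the ED distributions of the two genders
h, p = kruskal(df.loc[df["gender"] == "male", "ed"], df.loc[df["gender"] == "female", "ed"])
print(f"KW test on ED by gender: H={h:.2f}, P={p:.4f}")
```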
### Relations Between Past Performance and Future Explorations

When conceiving new projects, a scientist may determine the direction of the next paper(s) based on her experience in areas and topics that she has worked on. Two hypotheses arise. By some psychological accounts, engaging in mastery experience builds a person's self-efficacy [13], i.e., a scientist becomes more confident in tackling new challenges after achieving successes in the past, so she is more willing to explore new topics; alternatively, it may prompt the scientist to commit to the areas of her existing expertise and enter a scientific comfort zone, so she would focus on topics that she is already experienced in and more comfortable with. Either seems plausible. We call the later of any two consecutive papers of a scientist a _new publication_, and this is the opportunity where she could choose to switch topics and decide how far away the next topic would be from previous ones. We observe the connection between the scientist's performance, measured by the 5-year log-citations per paper prior to each new publication, and the outcomes of the publications, measured by (the average of) whether an area switch takes place and (the average of) how far away the next paper's topics are from the last one. We find that scientists with better past performance are more likely to continue exploiting areas of familiarity by either not exploring or exploring closer topics, and vice versa; see Fig. S8.

Figure S8: **Relationships between the prior performance and the next explorations.** For each publication of any scientist (except the first one), we calculate the prior research performance by averaging the 5-year log-citations of all her publications prior to this publication, then we determine whether she explores in this publication, and calculate how far she explores. For all publications of similar prior research performance that are grouped together by having the same nearest integer multiple of 0.1, we calculate the probability of exploration by counting how many times explorations take place and calculate the average distance of the next paper from the past (five) papers during all these publications. (a) Scientists' past performance versus the probability of changing research area in the next paper. Each red point marks the mean of the \(\log c_{5}\) per paper of the scientists in that group, with 95% confidence bands. The light grey bars in the figure indicate the distribution of scientists' past performance at every (new) publication. Note that part of (a) has appeared in the main text. (b) Scientists' past performance versus the next paper distance. Scientists are less likely to be bold in their explorations as the prior performance improves.

## S4 Experiments

Given the negative correlation between a scientist's academic performance and her exploration propensity throughout her academic career (Section S3.1) and our observation of the relation between the prior performance and the future explorations (Section S3.2), we hypothesize that the prior exploratory metrics can be used to explain and predict future successes. To test the validity of this idea, we design two types of tasks: choosing each scientist's (i) \(i\)-th (\(i=2,\ldots,15\)) career year or (ii) \(i\)-th (\(i=2,\ldots,15\)) article as the split point, and using the exploration metrics prior to that point to predict her academic performance after the split point.
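A minimal sketch of this split-point construction for one scientist follows; the field names are hypothetical, the split here is by career year, and splitting at the \(i\)-th paper is analogous:

```python
from statistics import mean


def split_performance(papers, split_year=10):
    """papers: dicts with 'year_into_career' (1-based) and 'logc5', ordered by publication date.
    Returns the mean 5-year log-citations per paper before and after the split point."""
    past = [p["logc5"] for p in papers if p["year_into_career"] <= split_year]
    future = [p["logc5"] for p in papers if p["year_into_career"] > split_year]
    if not past or not future:
        return None  # assumption: scientists with an empty side of the split are skipped
    return mean(past), mean(future)
```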
For each type of task, we first apply regression analyses to quantitatively verify the idea and then study the relations further via propensity score matching and weighting. ### Regression Keeping in mind the systematic differences among the scientists, we introduce the following regression analyses to study the explanatory and predictive powers of the aforementioned exploratory metrics on future performance while controlling for other features. We construct the regression equation to study the relation between the future academic performance and the variable EP\({}_{\text{past}}\)--the EP before the split point--while controlling for the other independent variables: \[\begin{split}\text{LogCit}_{\text{future}}&=\beta _{0}+\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}\\ &+\beta_{3}\text{year}_{\text{first}}+\beta_{4}\text{area}_{ \text{first}}+\beta_{5}\text{EP}_{\text{past}}+\text{Noise},\end{split}\] (S3) where LogCit\({}_{\text{future}}\) represents scientists' average academic performance after the split point, and LogCit\({}_{\text{past}}\), \(P_{\text{past}}\), year\({}_{\text{first}}\), and area\({}_{\text{first}}\) record the average performance and the number of papers before the split point, and the publishing year and areas of the first paper, respectively. Note that \(\text{year}_{\text{first}}\) and \(\text{area}_{\text{first}}\) are both categorical variables and handled as such. The above regression output is reported in the column of Model (S3) in Table S2, where the past EP has a statistically significant positive impact on the future performance. The result is consistent with the (later) propensity score weighting/matching results in Section S4.2. We then add the scientists' average ED before the split point into the mix of independent variables on top of (S3): \[\begin{split}\text{LogCit}_{\text{future}}=\beta_{0}& +\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}+ \beta_{3}\text{year}_{\text{first}}\\ &+\beta_{4}\text{area}_{\text{first}}+\beta_{5}\text{EP}_{\text {past}}+\beta_{6}\text{ED}_{\text{past}}+\text{Noise}.\end{split}\] (S4) The corresponding column in Table S2 indicates that after controlling for the performance and EP in the past (before the split point), the effect of the ED on the future performance is statistically significantly negative. This is later confirmed again by the propensity score weighting results: scientists who explore new topics at shorter distances enjoy certain advantages in the future. We also propose the rudimentary regression model \[\text{LogCit}_{\text{future}}=\beta_{0}+\beta_{1}\text{EP}_{\text{past}}+ \beta_{2}\text{ED}_{\text{past}}+\text{Noise},\] (S5) to investigate the basic relations between the future performance and \(\text{EP}_{\text{past}}\) and \(\text{ED}_{\text{past}}\) jointly. The output in Table S2 is consistent with what we have known, albeit in a crude fashion. As illuminated in the above, scientists with higher EPs and lower EDs have better academic performances as measured by 5-year log-citations per paper. Similar to the main text, we go a step further and partitioned the scientists into four groups with four different kinds of patterns in exploratory metrics and put the group membership in the following regression study. Following the same framework as above and after calculating the EP and ED for every scientist, we label each scientist either high/low EP/ED, depending on whether her EP/ED metric lie in the top half or the bottom half of all. 
Then high-EP-low-ED scientists go to group A, low-EP-low-ED ones B, high-EP-high-ED ones C and the rest (low-EP-high-ED ones) D. Then we propose the next regression formula by \[\begin{split}\text{LogCit}_{\text{future}}=\beta_{0}& +\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}\\ &+\beta_{3}\text{year}_{\text{first}}+\beta_{4}\text{area}_{ \text{first}}+\beta_{5}\text{group}+\text{Noise},\end{split}\] (S6) where the variable group is the categorical variable indicating to which of the four groups a scientist belongs with group D as the baseline. The regression result is shown in Table S2. We find that group A have the best future performance, followed by C and B, and finally D. This again supports our analyses in the main text via propensity score weighting about differences among the four groups. To verify the robustness of our results, we run regressions of Model (S3) and Model (S4) with different split points, from 2 to 15 in number of papers and from 2 to 15 in career years, although some care is necessary to handle the EP that only have values of 0 and 1 when the split point is set to be two papers. The regression results of Model (S3) are shown in Table S3, where all the coefficients of the EP are (statistically) significantly positive. The more refined regression analyses with Model (S4) are shown in Table S4, and again all the coefficients of the EP are (statistically) significantly positive and all the coefficients of the ED are (statistically) significantly negative. The consistency of results under exhaustive choices of split points suggests the robustness of our results. For the potentially unobserved confounding factors, we use E-values of the sensitivity analysis [24] to measure their potential impact on the above regression results. Specifically, we use the "EValue" package8 in R to obtain the E-values of \(\text{EP}_{\text{past}}\) and \(\text{ED}_{\text{past}}\), respectively. See the package documentation for detailed references and Table S5 for exact numbers. It suffices to conclude that in the experiments with split points set in terms of career years, both \(\text{EP}_{\text{past}}\) and \(\text{ED}_{\text{past}}\) have large E-values, which provides further evidence of the soundness of our discoveries on top of Section S6. Footnote 8: [https://cran.r-project.org/web/packages/EValue/index.html](https://cran.r-project.org/web/packages/EValue/index.html) ### Propensity Score Matching and Weighting To further solidify the reported relations between the proposed exploration metrics and future performance, we conduct more analyses via propensity score matching (PSM) [10] and propensity score weighting (PSW) [12]. We design tasks similar to our regression analyses, where the data are pre-processed similarly with the length of career years and the number of papers serving as split points. However, when applying the PSM, we have to study the EP and ED separately due to the limitation of the methodology itself. #### s4.2.1 The EP Specifically, for each split point, the analysis on the EP via PSM is conducted in the following steps: 1. First, we divide the scientists into a treatment group with their EPs lying in the top half of all EPs and a control group with their EPs in the bottom half, where the EPs are calculated (as usual) with each scientist's data before the split point. 2. 
Then PSM is used to match scientists in the treatment and control groups, where the covariates (to be matched) are the individual's performance, the number of papers before the split point, and the PACS code and year of the first paper. After tuning the caliper hyper-parameter, we obtain the matched groups, both being subsets of the original treatment and control groups, respectively. The caliper is chosen manually such that the obtained groups after matching are statistically indistinguishable from each other in terms of their past performance and other covariates, and thus are _balanced_. 3. Finally, we compare the differences in the future performance of scientists between the matched treatment and control groups. In particular, we use the implementation of the MatchIt9 package from R to carry out the matching process in the above second step. We mostly follow the default settings recommended by the package for tuning parameters, except for the caliper. Footnote 9: [https://cran.r-project.org/web/packages/MatchIt/](https://cran.r-project.org/web/packages/MatchIt/)

The results of the above analyses are detailed in the EP columns of Tables S6 and S8. The future performance of the scientists in the treatment group is statistically significantly (\(P<0.01\)) higher than that of the scientists in the control group, regardless of the choice of the split point. For example, taking the default split point of ten career years, we have 14,159 individuals in the treatment and control groups in total, and after the matching 4,983 pairs of scientists remain. We have ensured that the covariates, such as the past performance, are statistically indistinguishable between the groups after the matching. On average, the future performance of the scientists in the treatment (high-EP) group is higher than that of those in the control (low-EP) group by 4.71%.

#### S4.2.2 The ED

We do similar analyses for the ED. The methodology is virtually the same, except in two parts. The first is that the variable of interest is no longer the EP, and the second is that we add the EP to the covariates to be matched via PSM. The latter is to ensure that the obtained subsets of both groups have similar (and thus comparable) distributions of the EP. As expected, the future performance of scientists in the high-ED treatment group is (statistically) significantly lower than that of those in the low-ED control group, regardless of which split point is chosen. See the ED columns of Tables S6 and S8 for more details on the results. We point out that under the default split point of 10 career years, the future performance of the scientists in the treatment group is lower than that of those in the control group by 6.40%, on average.

#### S4.2.3 The EP and ED combined and PSW

The above results suggest that those with high EP (low ED) will have an edge in future academic performance over their otherwise similar counterparts with low EP (high ED). But do the scientists with both high EP and low ED perform better than others? And if so, by how much? Recall that in the main text and Section S4.1, we have divided the scientists into four groups A, B, C and D, each with different exploration habits quantified by both the EP and ED. We first conduct an elementary analysis via PSM by finding a one-to-one matching between groups A and D, similar to what is done in Sections S4.2.1 and S4.2.2. The matching results show that the treatment group A performs statistically significantly better than the control group D, as expected.
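For concreteness, a rough Python stand-in for the matching step is sketched below; the analyses above use R's MatchIt, so this nearest-neighbour matching on a logistic propensity score with a caliper is only illustrative, the column names are hypothetical, and categorical covariates (e.g. the first paper's area) would need to be one-hot encoded first:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression


def propensity_match(df: pd.DataFrame, covariates: list[str], caliper: float = 0.05):
    """1:1 nearest-neighbour matching on the propensity score, without replacement.
    df needs a binary 'treated' column (e.g. top-half EP) plus the covariate columns."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])
    controls = df[df["treated"] == 0].copy()
    pairs = []
    for idx, row in df[df["treated"] == 1].iterrows():
        if controls.empty:
            break
        gaps = (controls["ps"] - row["ps"]).abs()
        best = gaps.idxmin()
        if gaps[best] <= caliper:           # discard matches falling outside the caliper
            pairs.append((idx, best))
            controls = controls.drop(best)  # each control is used at most once
    return pairs
```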
Again, we refer to the corresponding columns in Tables S6 and S8. However, the PSM is in itself no longer sufficient to deal with the multiple-group scenario, and as a result, we introduce the propensity score weighting [15]. For simplicity of exposition, we start by observing two groups of data, one 'treatment' group and the other 'control'. In practice, whether a scientist is in the treatment group is intertwined with her other features, resulting in systematic difference between the two groups. The PSW scheme assigns scientists weights with the generalized boosted model (GBM), which depends on the estimated probability of each scientist being in either the treatment or the control group given her measured features. The GBM algorithm is an iterative scheme to combine the predictive power of a large collection of simple regression trees with proper weighting. At each iteration, the algorithm seeks to add to the model a small refinement in the form of a simple regression tree to improve the fitness of the current model to the data measured by the Bernoulli log-likelihood, where the simple regression tree is a recursive algorithm to find a partitioning of the covariate space to estimate the relation between the covariate space and the treatment assignment. At each recursion step, the simple regression tree algorithm finds a further level of partitioning of the current model such that the prediction error is minimized. The GBM keeps iterating to find a maximum-likelihood fit, if a proper stopping criterion is specified. Then we calculate the average treatment effect (ATE) of being in different groups, by performing a weighted linear regression on just the dummy variable indicating the group membership, with one of the groups as baseline. Alternatively, we can also compute the average treatment effect on the treated (ATT) by virtually the same procedure as above, except for minor differences in obtaining the weights via GBM. The ATE and ATT are both quantities in causal inference to measure such causal effects with subtle differences. Roughly speaking, the ATE measures the population (i.e. all scientists) difference in outcomes between the treatment and control, while the ATT measures the difference in outcomes if the scientists in the treatment group were in the control group. Specifically, in our case, the covariates for calculating the propensity scores are the same as the independent variables in Section S4.2.1. The obtained ATEs for varying definitions of split points are reported in Tables S7 and S9 and Figs. S9 and S10, and they consistently suggest that group A outperforms any other groups and particularly by a significant margin group D, while groups B and C have similarly better performance than group D. Similarly, we compute the ATTs as a robustness check, and the results are presented in Figs. S11 and S12. Note that the results on the ATTs are virtually the same as those on the ATE. We also compare group A (as the treatment) against all other three groups combined (as the control) with the same methodology, and the ATEs of being group A under varying split points are shown in Fig. S13. Figure S9: **Average treatment effects with different baseline groups and varying numbers of papers as split point.** For completeness, each plot has a different group as the baseline group: (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. 
Figure S10: **Average treatment effects with different baseline groups and varying career years as split point.** (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. Note that (b) has appeared in the main text. Figure S11: **Average treatment effects for different treated groups and varying numbers of papers as split point.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. Figure S12: **Average treatment effects for different treated groups and varying career years as split point.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. ### Fixed Effect Analysis In this section, we control for individual fixed effects by constructing panel data, to explore from another perspective the connection between past EP and ED and future performance. Specifically, following the Methods section in the main text, we divide the 21 calendar years (1995 - 2015) into 7 three-year periods (or 30 calendar years (1986 - 2015) into five 5-year periods) and perform the calculations of all relevant variables. Then, we solve the following regression equation using the Arellano-Bond estimator [2]: \[\begin{split}\text{LogCit}_{i,t}&=\beta_{0}+\beta _{1}\text{LogCit}_{i,t-1}+\beta_{2}\text{EP}_{i,t-1}+\beta_{3}\text{ED}_{i,t-1}+ \beta_{4}\text{P}_{i,t-1}\\ &\qquad+\beta_{5}\text{Careeryear}_{i,t-1}+\mu_{i}+\eta_{t}+\text {Noise}.\end{split}\] (S7) Note that the above Eq. (S7) is the same as the fixed-effect regression equation in the Methods section in the main text. Here, we treat Careeryear as an endogenous variable, i.e., a variable correlating with the observational noise in each time period. The results in the Table S10 show that after controlling for individual fixed effects and a range of other covariates, the coefficients on EP and ED remain statistically significant, which is consistent with the findings in Table S2. Note that we also conduct the Hansen test and the AR tests [2], respectively, to justify our model in Eq. (S7). The Hansen test tells us that our model does not suffer from the over-identifying problem, which indicates our choice of the independent variables and endogenous variables is sound; the AR tests show the first-order autoregressive structure is valid, which permits us to employ system GMM in solving Eq. (S7). ### Future EP/ED and Future Performance Following the Methods section in the main text, we define the following regression: \[\begin{split}\text{LogCit}_{\text{future}}&=\beta _{0}+\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}+\beta_{3} \text{year}_{\text{first}}\\ &\qquad+\beta_{4}\text{area}_{\text{first}}+\beta_{5}\text{EP}_ {\text{future}}+\beta_{6}\text{ED}_{\text{future}}+\text{Noise}.\end{split}\] (S8) The results are shown in Table S11, and hint at a certain extent of inference in the relationship between the future performance and the adopted strategy. ### Performance of "Drastic Changers" As mentioned in the main text, most scientists intentionally or unintentionally maintain consistent research agendas, while a very small portion of scientists exhibit drastic behavioural changes, such as transitioning from low-EP-high-ED to the completely opposite high-EP low-ED (cautious explorers) and vice versa. We may guess that these 'drastic changers' alter their research strategies on purpose. Here we examine how their performance correlates with these drastic changes. 
To be more specific, for the default split point of 10 career years, we focus on group A and group D and observe the performance changes of these scientists who adopt the opposite strategies after the split point, i.e., scientist who were in the high-EP-low-ED (low-EP-high-ED) group before the split point and joined the low-EP-high-ED (high-EP-low-ED) group after the split point. We compare their performance after the split point to 1) their own performance before the split point and 2) the after-split performance of their counterparts, who were in the same group as the scientists before the split point and remain in the original group after the split point. In the comparison of within-personal performance, scientists in group D (group A) who employ the opposite strategy after the split point, i.e., turning into group A (group D), comprises only 8.6% (6.9%) of the group. This dramatic change results in an increase of 4.77% (a decrease of 18.97%) on average in the individuals' performance after the split point, as compared to their performance before the split. We also conduct a comparative analysis of the future performance between scientists who change their research strategy and their counterparts who do not. Specifically, we focus on scientists in group D (group A) prior to the split point. Scientists who remain in the same group after the split point are assigned to the control group, while those who adopt the opposite strategies are allocated to the treatment group. We employ the PSW technique to measure the difference in after-split performance between the control and treatment groups while controlling for the usual covariates. The results indicate that the average treatment effect between the control group and the treatment group is 0.293 (-0.304) with a \(P\) value of \(4.55\times 10^{-8}\) (\(1.33\times 10^{-6}\)), which translated to a percentage is 34.14% (-26.23%). ### Results on the PubMed Dataset To figure out how the past EPs and EDs of scientists are connected with future scientific impact in biomedicine, we conduct similar analyses using regression and propensity score weighting on the PubMed dataset. Recall the difference between PubMed\({}_{1}\) and PubMed\({}_{2}\). In this section, we perform the same regression and PSW analyses on both PubMed datasets, and the results are consistent. We therefore do not distinguish the two in the descriptions below. First, we run the regression (S4) on the dataset with varying split points, from 2 to 15 in number of papers and in career years, respectively. As is shown in Tables S12 and S14, the regression coefficients of the EP and ED with all split points are (statistically) significant and have the right signs, which is consistent with our findings in the APS dataset (Table S4). The result suggests that cautious explorers have higher future impact in biomedicine as well. When measuring scientists' future impact, we also employ another method--the maximum 5-year log-citations after the split point as an alternative, and the results are robust (Table S13). Furthermore, we utilize propensity score weighting to verify the advantage of being in group A, where the groups A, B, C and D are defined similarly. The experiments are similar to the above regression process, and the calculated ATEs for varying split points consistently show that group A performs better than any other group (see Figs. S14, S15, S17 and S18). 
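The ATE estimates reported in these figures follow the weighting recipe of Section S4.2.3: propensity scores from a boosted model, inverse-probability weights, and a weighted regression on the group dummy. A minimal two-group sketch is given below; it uses scikit-learn's gradient boosting as an illustrative stand-in for the R GBM/twang workflow, and the column names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import GradientBoostingClassifier


def ate_via_psw(df, covariates):
    """ATE of a binary 'treated' indicator on 'logcit_future' via inverse-probability weighting."""
    e = GradientBoostingClassifier().fit(df[covariates], df["treated"]).predict_proba(df[covariates])[:, 1]
    e = np.clip(e, 0.01, 0.99)                                   # guard against extreme weights
    weights = df["treated"] / e + (1 - df["treated"]) / (1 - e)  # ATE weights
    X = sm.add_constant(df["treated"].astype(float))
    fit = sm.WLS(df["logcit_future"], X, weights=weights).fit()
    return fit.params["treated"], fit.pvalues["treated"]
```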
In the studies on the APS dataset in the main text, we see that group A has an expanding advantage as the thresholds on constructing high/low EP/ED groups (A, B, C and D) become extreme. As such, we test different thresholds in the PubMed dataset, and the results in Figs. S16 and S19 verify that the conclusions are similar to those before. Figure S14: **Average treatment effects with different baseline groups and varying numbers of papers as split point in the PubMed\({}_{1}\) dataset.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. Figure S15: **Average treatment effects with different baseline groups and varying career years as split point in the PubMed dataset.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. Figure S17: **Average treatment effects with different baseline groups and varying numbers of papers as split point in the PubMed\({}_{2}\) dataset.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. Figure S18: **Average treatment effects with different baseline groups and varying career years as split point in the PubMed2 dataset.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. ### Results on the ACS Dataset In this section, we investigate the relationship between the past EPs and EDs of scientists and their subsequent scientific impact in the field of chemistry. To this end, we also employ regression and propensity score weighting techniques to analyse the ACS dataset. First, we conduct a regression (S4) on the ACS dataset, utilizing a range of split points varying from 2 to 15 in the number of papers and career years. The results, as presented in Table S15, indicate that the regression coefficients of EP and ED are (statistically) significant and have the correct signs. The results are in agreement with those obtained from the APS dataset (Table S4) and PubMed dataset (Table S12). Our findings indicate that cautious explorers have a higher future impact in chemistry. The results are robust when we employ "maximum 5-year log-citations" as an alternative method for measuring scientists' future impact (Table S16). In addition, we employ propensity score weighting to investigate the advantage of being in group A. The experimental procedures are similar to those of the aforementioned regression analysis, and the ATEs for different split points consistently demonstrate that group A outperforms all other groups (Figs. S20 and S21). Furthermore, we also demonstrate that group A exhibits a widening advantage as the thresholds used to construct high- and low-EP/ED groups become increasingly extreme in the ACS dataset (Fig. S22). Figure S20: **Average treatment effects with different baseline groups and varying numbers of papers as split point in the ACS dataset.** Note that (a) has group D as the baseline, (b) group C, (c) group B, and (d) group A. ## S5 Case Study We select a representative case for each of the four groups in the main text. The representatives start their careers in similar topics and have comparable numbers of publications. Specifically, the areas of their first publications in the APS journals are the same, which all correspond to the PACS codes with their first two digits being "74"; the differences between their numbers of publications in the first ten years of their careers are no more than one. 
For clear illustrations, we select those specific cases so that individuals with high (low) EDs work on topics that have a longer (shorter) average distance between their PACS codes within their own respective groups (cf. Section S2.2.2), and those with high (low) EPs have more (fewer) explored two-digit areas, compared with other individuals within the same groups. Besides, the number of explored areas (marked by the first two digits of PACS codes) and explored topics (marked by the first six digits of PACS codes) should be comparable between the two high-EP (low-EP) individuals with different EDs, and the average distances between the PACS codes of their papers should be comparable between the two high-ED (low-ED) individuals with different EPs.

## S6 Analysing Other Possible Latent Factors

As we mentioned in the main text and the Methods section, several conceivable mechanisms may provide alternative explanations for why cautious explorers have greater future impact. Here, we study some possible hypotheses, from the viewpoints of (A) hot research areas, (B) paper novelty, (C) advantageous collaborations and (D) changing institutions, and demonstrate how we may exclude these hypotheses. The related results in the entire section are summarized in Fig. S23 and Tables S17 and S18. As usual, the _past_ in this section corresponds to all the information of any scientist before the split point, while the _future_ to that after the split point; the split point in all the experiments in this section is 10 career years or 10 papers. The independent variables (covariates) are inherited from Section S4.1.

### Hypothesis A: Research Area

As cautious explorers explored new areas more frequently, they could be chasing hot areas or simply have greater exposure to hot areas, and thus garner higher future impact. The question is in fact two-fold: 1. Are cautious explorers more likely to publish in hot areas in the future? 2. Will working in hot areas in the future explain why cautious explorers have higher future impact? To answer the first question, we examine the effect of having different past exploration patterns, measured by the EP and ED, on the popularity of the areas of their future publications. Recall that a research _area_ corresponds to the first two digits of a PACS code. For a specific year, we gauge the popularity of an area--\(\text{Popularity}_{\text{area}}\)--by the proportion of papers that are associated with the area among all the papers published in the same year. Since a paper may have several (unique) areas, as identified by the first two digits of its PACS code(s), we define the area popularity of a paper--\(\text{Popularity}_{\text{paper}}\)--either by (1) the average area popularity of all the associated PACS codes, or (2) the maximal area popularity of those codes. With either definition, we run a regression analysis where the dependent variable is the average area popularity of a scientist's future papers--\(\text{Popularity}_{\text{future}}\)--against the usual independent variables as in Section S4.1. The regression equation is as follows: \[\begin{split}\text{Popularity}_{\text{future}}&=\beta_{0}+\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}\\ &+\beta_{3}\text{year}_{\text{first}}+\beta_{4}\text{area}_{\text{first}}+\beta_{5}\text{EP}_{\text{past}}+\beta_{6}\text{ED}_{\text{past}}+\text{error},\end{split}\] (S9) where the future area popularity is the average area popularity of a scientist's future papers.
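All of the controlled regressions in this supplement share this template; as an illustration, Eq. (S9) can be fit with the statsmodels formula interface as sketched below (the dataframe and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scientists_split10.csv")  # hypothetical: one row per scientist at the 10-year split

model = smf.ols(
    "popularity_future ~ logcit_past + p_past + C(year_first) + C(area_first) + ep_past + ed_past",
    data=df,
).fit()
print(model.summary())  # the coefficients of ep_past and ed_past are the quantities of interest
```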
We find that cautious explorers are more likely to publish high area-popularity papers in the future (\(P<0.001\), see Table S17 and Fig. S23), since the above regression reports that both the EP and ED have statistically significant coefficients and the coefficient of the EP (ED) is positive (negative). The results hold regardless of which definition of the Popularitypaper is used. Finally, we carry out a regression analysis of the future scientific impact when controlling for the Popularityfuture, and find that the EP and ED remain statistically significant (\(P<0.001\), see Table S18), though area popularity may explain some of the difference. The regression equation is as follows: \[\begin{split}\text{LogCit}_{\text{future}}&=\beta _{0}+\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}\\ &+\beta_{3}\text{year}_{\text{first}}+\beta_{4}\text{area}_{ \text{first}}+\beta_{5}\text{EP}_{\text{past}}+\beta_{6}\text{ED}_{\text{ past}}\\ &+\text{Popularity}_{\text{future}}+\text{error}.\end{split}\] (S10) ### Hypothesis B: Paper Novelty Uzzi et al. [23] suggested that the highest-impact publications have a combination of high novelty and high conventionality. Cautious explorers may be more creative as they explored more frequently, so they may publish papers with higher novelty. At the same time, as they tended to explore closer areas, their publications may be somewhat more conventional in a sense. As such, publishing this type of paper may explain why cautious explorers have greater future impact. To test the hypothesis, we evaluate the following two questions: 1. Do cautious explorers tend to publish papers with both high novelty and high conventionality in the future? 2. Will publishing paper with both high novelty and high conventionality in the future explain why cautious explorers have more future scientific impact? We estimate the effect of the past exploration behaviour (exploration propensity and exploration distance) on the probability of publishing papers with both high novelty and high conventionality by a regression analysis, similar to those in the above Section S6.1. We calculate the novelty and conventionality of each paper and label the papers of high novelty and high conventionality in the same way as [23]. Then we run regressions where the dependent variable is the probability of a scientist publishing papers of both high novelty and conventionality. Similarly to the discovery in Section S6.1, we find that cautious explorers are more likely to publish papers of both high novelty and conventionality in the future (\(P<0.001\), see Table S17 and Fig. S23). Finally, we perform a regression analysis on the future impact while controlling for the probability of publishing papers with both high novelty and conventionality in the future. The results are again similar to those in Section S6.1 and suggest that the coefficients of EP and ED remain statistically significant (\(P<0.001\), see Table S18) with the correct signs, though publishing papers of both high novelty and conventionality may explain some of the difference in the future impact. ### Hypothesis C: Advantageous Collaborations Are cautious explorers prone to seek more advantageous collaborations? Some studies showed that teams often produced works of higher impact [30] than individuals, and having more distinguished co-authors may be partially responsible for greater future scientific impact. 
Besides, we measure scientists' EP and ED based on the codes of their papers, and these codes may be "brought in" by their co-authors. Through measuring the EP and ED, are we quantifying the scientists' behaviour or their co-authors' behaviour? We need to further quantify the co-authors' contribution to the paper topics. Here, we study whether the higher future impact can be attributed to 1) cautious explorers having distinguished co-authors who contribute more to the papers, 2) co-authors bringing in their PACS codes, or 3) cautious explorers working with larger teams.

To begin with, we assess an author's contribution by whether she takes an important role in leading the research or managing correspondence. Similar to Wang et al. [27], we consider a scientist to be a _lead author_ if she is the first or last author of the publication. Then we study whether the probability of having lead-author publications differs between cautious explorers and other scientists. We find that having a higher ED in the past predicts having a higher proportion of lead-author publications in the future (\(P<0.001\), see Table S17 and Fig. S23). Finally, we measure the future impact of a scientist by only considering papers of which she was the lead author and thus has contributed the most to the publication. Our conclusions on the EP and ED are the same under this setting (\(P<0.001\), see Table S18).

For the second question, we measure the co-author's "contribution to topics" as a function of the co-author's likely importation of codes from her prior work to a focal paper [31]. Following this idea, we first compute the number of unique codes that the focal author brings to the focal paper from her past 5 papers, denoted as \(I_{focal}\). We then compute this code importation metric for all other authors combined except the focal author, denoted as \(I_{other}\). The other co-authors' contribution is computed as \(\frac{I_{other}}{I_{focal}+I_{other}}\). Note that here we try both 2 digits and 6 digits of PACS codes, to represent "area" contribution and "topic" contribution respectively, in the context of our main text. We next perform the usual regression routine similar to Eq. (S9) and Eq. (S10), only on the newly constructed factor--co-authors' area/topic contribution in the future--and find that scientists with low EP and low ED are more likely to have co-authors with higher contribution in the future. When we set the future impact as the dependent variable and control for the co-authors' area/topic contribution in the future on top of the usual independent variables, the coefficients of EP and ED remain statistically significant (\(P<0.001\), see Table S18).

We also study whether the future team sizes have much to do with the EP and ED. We define the team size of each paper by the number of its authors. Then we run regressions where the dependent variable is the average team size of the scientist's future publications. We find that a lower ED is a statistically significant predictor of a larger average future team size (\(P<0.001\), see Table S17 and Fig. S23). Then we conduct one more regression analysis on the future scientific impact controlling further for the average future team size, and the obtained coefficients of the EP and ED remain statistically significant (\(P<0.001\), see Table S18) and have the correct signs.
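A minimal sketch of the code-importation contribution just described is given below; the data structures are hypothetical, `codes` may hold either 2-digit areas or 6-digit topics, and returning 0 when no codes are imported at all is our own convention:

```python
def coauthor_contribution(focal_paper_codes, focal_history, coauthor_histories, n_prior=5):
    """Fraction of the imported codes on a focal paper attributable to the co-authors.

    focal_history / coauthor_histories: earlier papers of each author as lists of code sets,
    most recent last; only the last `n_prior` papers of each author are considered."""
    def imported(history):
        return {c for paper in history[-n_prior:] for c in paper} & set(focal_paper_codes)

    i_focal = len(imported(focal_history))
    i_other = len(set().union(*(imported(h) for h in coauthor_histories)))
    total = i_focal + i_other
    return i_other / total if total else 0.0
```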
### Hypothesis D: Changing Institutions

Cautious explorers may work in different institutions in the future, and being in institutions of higher reputation [1] may lead to greater future scientific impact. Besides, they often have better opportunities at institutions of great reputation, e.g., Ivy League schools. We measure a scientist's tendency of switching institutions by dividing the number of her institutions by her number of publications. Using the tendency of changing institutions and whether one works in the Ivy League in the future as the dependent variable in two separate regressions, we find that being a cautious explorer is a statistically significant predictor of working with more institutions in the future (\(P<0.001\), see Table S17 and Fig. S23), and that having a higher ED predicts well whether a scientist would work in the Ivy League in the future (\(P<0.001\), see Table S17 and Fig. S23). A further regression analysis, where these two factors are controlled for on top of the usual independent variables, finds that the coefficients of EP and ED remain statistically significant (\(P<0.001\), see Table S18).

### Combining Hypotheses A-D

To see if combining all the above hypotheses would explain away our main findings, we take all the corresponding variables mentioned above and perform a regression analysis with the following equation: \[\begin{split}\text{LogCit}_{\text{future}}&=\beta_{0}+\beta_{1}\text{LogCit}_{\text{past}}+\beta_{2}P_{\text{past}}+\beta_{3}\text{year}_{\text{first}}+\beta_{4}\text{area}_{\text{first}}\\ &+\beta_{5}\text{EP}_{\text{past}}+\beta_{6}\text{ED}_{\text{past}}+\beta_{7}\mathbf{X}_{i,\text{attribute}}+\text{Noise},\end{split}\] (S11) where \(\mathbf{X}_{i,\text{attribute}}\) is a vector collecting all the attributes we studied in Section S6. We see in Table S18 that the coefficients of the EP and ED are always statistically significant, i.e., the established positive relation between having higher future research impact and being a cautious explorer cannot be explained away by the above hypotheses, separately or jointly.

Figure S23: **Possible mechanisms to explain why cautious explorers have more impact in the future.** We present the regression analyses to investigate whether scientists with different EPs and EDs behave differently in these dimensions in the future: area popularity of future publications, probability of publishing both novel and conventional papers, team size, probability of publishing lead-author papers, co-author's area and topic contribution, tendency of changing institutions, and whether she would work in an Ivy League university. (a-b) show results with 10 career years as the split point, and (c-d) show results with 10 papers as the split point. The normalized coefficients are calculated by the equation mentioned in the caption of Table S17. (*\(P<0.1\); **\(P<0.05\); ***\(P<0.01\). Error bars are based on standard errors.)

### Mediation Analysis

We examine the correlation between the above confounding factors and the past EP/ED and find that several of the confounding factors we study in this section are correlated with the exploration metrics (see Table S19 and Table S20).
To accurately assess the direct and mediating effects between the EP/ED and the future impact, we employ mediation analysis for the hypotheses above, which enables us to assess both the direct path coefficients linking the treatment variables (the EP or ED) to the outcome variable (the future impact) and the indirect path coefficients mediated by intermediate variables (the confounding factors). By doing so, we can examine the presence of a mediating effect of the confounding factors, as well as quantify the respective magnitudes of the direct effect of the exploration metrics and the mediating effect of the confounding factors on future impact. In detail, we include LogCit\({}_{\text{past}}\), \(P_{\text{past}}\), year\({}_{\text{first}}\), and area\({}_{\text{first}}\) as control variables, following the approach in the vanilla regression model (Eq. (S4)). We separately set either the EP or the ED as the treatment variable (while controlling for the other), and each confounding factor as the mediating variable, aiming to determine the magnitude of the direct and mediating effects. To calculate these effects, we utilize the "Mediation" package10 in Python (see the sketch below). The results of the mediation analysis are presented in Table S21 and Table S22. Notably, the average causal mediation effects (ACME) consistently exhibit small magnitudes, indicating a minor role of mediation in the overall effect. In contrast, the average direct effect (ADE) consistently shows statistical significance (\(P<0.01\)) and accounts for a substantial portion of the total effect, affirming the existence of a direct relationship between EP/ED and future impact. Hence, the correlation between the exploration metrics and future impact cannot be explained away by these mediating factors. Footnote 10: https://www.statsmodels.org/stable/generated/statsmodels.stats.mediation ## S7 Robustness Checks ### Different Subsets of Scientists #### S7.1.1 The Minimum Number of Publications Requirement In the main text, we select scientists with at least 10 publications to make sure they are not short-term researchers, and this selection inevitably screens out some scientists it should not. Here, we repeat our analysis under different selection rules (authors with at least 5 publications, and authors with at least 20 publications) with the other settings at their defaults (10 career years or 10 papers as the split point), and find that the results remain statistically significant in the regressions (see Table S23). #### S7.1.2 Removing Scientists with Asian Last Names As mentioned in Section S1.1.1, the disambiguation process may introduce identification errors, especially for scientists with Asian names. Therefore, we follow the approach suggested by other studies [21] and exclude scientists whose last names are among the 200 most common Asian last names, using a list collected from Wikipedia. After such a removal, we obtain a smaller dataset with 21,963 scientists and perform the same regression analysis (see Equation (S4)). We find that the coefficients of the EP and ED are still statistically significant (see Table S24), and this suggests that our conclusions are robust against possible inaccuracies in name disambiguation. ### Controlling for the Genders of Scientists In Section S2.3.1, we find that men tend to have higher EDs. Here, we check whether the difference in future impact between scientists with different EPs and EDs is influenced by gender as a confounding factor.
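Returning to the mediation analysis above, a minimal sketch of a single treatment-mediator pair using the statsmodels Mediation class; the dataframe `df` and its column names are illustrative assumptions:

```python
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

# df: pandas DataFrame, one row per scientist (assumed to exist).
# Example: ED as treatment, future team size as mediator.
controls = "ep_past + logcit_past + p_past + year_first + C(area_first)"

outcome_model = smf.ols(
    f"logcit_future ~ ed_past + team_size_future + {controls}", data=df)
mediator_model = smf.ols(
    f"team_size_future ~ ed_past + {controls}", data=df)

med = Mediation(outcome_model, mediator_model,
                exposure="ed_past", mediator="team_size_future")
result = med.fit(n_rep=1000)   # reports ACME, ADE, total effect, proportion mediated
print(result.summary())
```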
We add the gender of each scientist as a control variable to the regression in Equation (S4). Through the gender assignment process mentioned in Section S1.1.4, we obtain a smaller set of scientists, each of whom is assigned a (predicted) gender. We run regressions on this dataset of 19,044 scientists--a subset of the original dataset--and find that even when we control for gender, the regression coefficients of the EP and ED remain consistent with our main findings (see Table S25). ### Alternative Definitions of the EP and ED In this section, we change the definitions of EP and ED to test the robustness of the findings on the exploration metrics. First, we utilize different PACS code digits (2/4/6) for the calculation of EP and ED. Then, we replace the major components of computing the ED with other possibilities by constructing alternative topic graphs, trying a classic graph node distance measurement, and computing paper distance differently. #### S7.3.1 Different PACS code digits We have always used _areas_--the first two digits of PACS codes in the APS dataset--to calculate the EP, and _topics_--the entire six digits--for the ED, which we call the 2-6 combination of PACS code digits for calculating the EP and ED. For the ED, we choose the intuitively more informative six digits since we want to capture finer-grained distances, and we prefer the ED to be discriminative. As a comparison, we plot the distributions of the ED calculated with the first 2, 4, and 6 digits in Figure S24, and they agree with our presumption: the distributions of the ED resulting from using 2 and 4 digits are more skewed and concentrated, i.e., many more scientists have similar EDs, and the ED--as a metric--would therefore carry less information. Besides the 2-6 combination used in the paper, we try the 2-2, 2-4, 4-4 and 4-6 combinations. In Table S26, we see that the EP always contributes statistically significantly to future performance when calculated with the first 2 or 4 PACS digits. The table also shows that choosing the first 2 or 4 digits to calculate the ED is less than ideal compared with choosing the entire 6 digits. As such, the way the ED is computed in our study is chosen to make the metric more informative. #### S7.3.2 Different Topic Graphs In the main text, we use the co-occurrences of PACS codes in the same papers to construct the topic graph for measuring topic distances. Here, we try other popular alternatives, including the citation graph [7, 20] and the co-citing graph [28, 33]. * Citation graph: Unlike the co-occurrence-based topic graph in the main text, the citation graph is directed. The edge from topic \(i\) to \(j\) indicates that at least one paper \(p_{i}\) of topic \(i\) cites a paper \(p_{j}\) of topic \(j\). Specifically, each such citation contributes \(w^{\prime}_{i\to j}\) to the total edge weight from topic \(i\) to \(j\) by \[w^{\prime}_{i\to j}=\frac{1}{n_{p_{i}}\times r_{p_{i}}\times n_{p_{j}}},\] (S12) where \(n_{p_{i}}\) and \(r_{p_{i}}\) denote the number of topics and the number of references in \(p_{i}\), respectively. Then, the total weight from topic \(i\) to topic \(j\) equals the sum of \(w^{\prime}_{i\to j}\) over all such citations: \(w_{i\to j}=\sum w^{\prime}_{i\to j}\).
To calculate the distance between topic \(i\) and \(j\), we adopt the _weighted overlap_ metric from the main text on the obtained _directed_ citation graph, considering the incoming and outgoing links of the node pair separately and taking the average of the two. This approach is similar to that in [5], with the difference being that we take the average to scale the similarity to \([0,1]\), making it consistent with our other similarity metrics. Specifically, the _weighted overlap_ of topic \(i\) and \(j\) based on their outgoing links is denoted as \(O^{\text{out}}_{ij}\) and that based on incoming links as \(O^{\text{in}}_{ij}\), and both are calculated by \[O^{\text{out}}_{ij}=\frac{W^{\text{out}}_{ij}}{s^{\text{out}}_{i}+s^{\text{out}}_{j}-w_{i\to j}-w_{j\to i}-W^{\text{out}}_{ij}},\] (S13) \[O^{\text{in}}_{ij}=\frac{W^{\text{in}}_{ij}}{s^{\text{in}}_{i}+s^{\text{in}}_{j}-w_{i\to j}-w_{j\to i}-W^{\text{in}}_{ij}},\] (S14) where \(s^{\text{out}}_{i}\) denotes the sum of weights of all of \(i\)'s outgoing edges and \(W^{\text{out}}_{ij}\) is the weight of \(i\) and \(j\)'s overlapping outgoing neighbours. The _weighted overlap_ between topic \(i\) and topic \(j\) in a directed graph is then calculated by \[O^{\prime}_{ij}=\frac{O^{\text{out}}_{ij}+O^{\text{in}}_{ij}}{2}.\] (S15) * Co-citing graph: An edge between topic \(i\) and \(j\) indicates that papers of topic \(i\) and papers of topic \(j\) cite the same paper(s). Each time a paper of topic \(i\) and a paper of topic \(j\) cite the same paper(s), it contributes \(w^{\prime}_{ij}\) to the total edge weight between topic \(i\) and topic \(j\), which is calculated by \[w^{\prime}_{ij}=\frac{1}{n_{p_{i}}\times n_{p_{j}}},\] (S16) where \(n_{p_{i}}\) (\(n_{p_{j}}\)) is the number of topics in \(p_{i}\) (\(p_{j}\)). The total weight between topic \(i\) and topic \(j\) is \(w_{ij}=\sum w^{\prime}_{ij}\). The results in Table S27 tell us that these different constructions of topic graphs do not affect our conclusions. #### S7.3.3 Different Node Distance Metrics We try two alternative node distance metrics: one is essentially an unweighted version of the metric used in the main text, and the other is based on graph embeddings learnt by a neural network. In the main text, we have computed the _weighted overlap_ of two nodes' neighbours as the two nodes' distance on the graph, as described in Section S2.2.2. It can be seen as a weighted-graph version of the classic unweighted Jaccard similarity [19], and the similarity between node \(i\) and \(j\) can be defined by \[\mathrm{Jac}_{ij}=\frac{|\Lambda_{i}\cap\Lambda_{j}|}{|\Lambda_{i}\cup\Lambda_{j}|},\] (S17) where \(\Lambda_{i}\) and \(\Lambda_{j}\) are their sets of neighbouring nodes, respectively. Here we adopt the unweighted version and take \((1-\mathrm{Jac}_{ij})\) to convert it to a distance metric. Using this alternative distance metric, the regression results in Table S28 suggest the same conclusions. The other node distance metric is based on continuous lower-dimensional feature representations for the nodes learnt by node2vec [6], which takes into account higher-order connections between nodes. The distance between two nodes can then be calculated as the cosine distance between the corresponding feature vectors. As detailed in [6], we select various hyperparameters for node2vec, such as dimension \(d=128\), walks per node \(r=10\), walk length \(l=80\), context size \(k=20\), return parameter \(p=4\), and in-out parameter \(q=0.25\), based on link prediction experiments.
We then run node2vec on our topic co-occurrence graph to acquire node representations. Topic similarity is computed using the following equation: \[\text{Node2VecSim}_{ij}:=\text{MinMax}(\cos(f(i),f(j))),\] (S18) where MinMax and \(\cos\) refer to the min-max scaling and cosine similarity operations, respectively, and \(f(i)\) corresponds to the representation vector of node \(i\) acquired above. We then obtain the distance metric by subtracting Node2VecSim\({}_{ij}\) from 1. We observe that when utilizing the ED calculated through node2vec for regression, we have robust results consistent with our main findings; see Table S28. #### S7.3.4 Different Paper Distance When computing the ED of a scientist, we have calculated a paper \(p_{i}\)'s distance from the \(m\) papers before it. The method used in the main text (see Section S2.2.3) first constructs two sets, where \(T_{i}\) contains the topics of \(p_{i}\) and \(T_{i,m}\) consists of all topics from the \(m\) papers before \(p_{i}\). The task then becomes calculating the distance between these two sets. For a topic \(t_{j}\) in \(T_{i}\), the method calculates its distance to all possible topics in \(T_{i,m}\) and takes the average to obtain \(t_{j}\)'s distance to the set \(T_{i,m}\). This is repeated over all topics in \(T_{i}\) and their distances to \(T_{i,m}\) are averaged to obtain the paper distance between \(p_{i}\) and the \(m\) papers before \(p_{i}\). As an alternative, we use the classic Hausdorff distance [8] to calculate the distance between two sets, which takes the minimum distance from \(t_{j}\) to all topics in set \(T_{i,m}\) as \(t_{j}\)'s distance to \(T_{i,m}\), and finds the maximum among all the node-to-set distances to be \(p_{i}\)'s paper distance from the \(m\) papers before it. It is calculated by \[\text{PD}_{p_{i}}=\max_{t_{j}\in T_{i}}\min_{t_{k}\in T_{i,m}}\text{TD}_{t_{j}t_{k}}.\] (S19) Table S29 shows consistent regression results using the Hausdorff distance, compared with those in the main text. ### Citation Measurements In the main text and above, we measure a paper's impact by taking the logarithm of the number of citations it receives within five years after it is published (5-year log-citations, \(\log c_{5}\)), and here we try 10-year log-citations as an alternative. Furthermore, considering that papers published in different periods of time [21] and in different research areas [32] may have different trends of popularity, we take into account possible factors that may affect the outcomes of interest. Finally, adopting a different angle on measuring academic achievement as suggested in [21], we can focus on a scientist's highest-impact work in the future, i.e., the paper with the highest \(\log c_{5}\) or \(\log c_{10}\). #### 7.4.1 10-Year Citations We try the 10-year log-citations (\(\log c_{10}\)) to measure the impact of papers over a longer period, as in [21]. Note that when using 10-year citations as the citation measurement, only papers published before the end of 2010 are used in the regression. Table S30 shows that our conclusions remain the same. #### 7.4.2 Normalized Citations We normalize the citations with two existing methods to measure a paper's impact relative to its peers, to offset the influence of papers being published in different years and having different areas.
Specifically, the first method [32] divides a paper's number of citations by the estimated number of citations that a paper published in the same year with the same research areas would receive. Since the focal paper \(p\) may have several research areas (that may have duplicates), we compute \(e_{i}\) as the 5-year log-citations averaged over all papers published in the same year that are associated with area \(i\), and compute the arithmetic mean of all the \(e_{i}\) of \(p\) for the estimated log-citations \(e\) that an average peer paper of \(p\) would receive \[\begin{split} e_{i}&=\frac{1}{m}\sum_{j}^{m}c_{i,j},\\ e&=\frac{1}{n}\sum_{i}^{n}e_{i},\end{split}\] (S20) where \(c_{i,j}\) is the 5-year log-citations received by paper \(j\) in area \(i\), \(m\) is the number of papers in \(i\) in the same year and of the same area as the paper \(p\), and \(n\) is the number of areas paper \(p\) covers. Paper \(p\)'s normalized 5-year log-citations \(c_{\text{norm}}\) is then calculated by \(c_{\text{norm}}=c_{\text{raw}}/e\), where \(c_{\text{raw}}\) is the (raw) log-citations of \(p\). To accommodate the fact that papers are often associated with more than one area, the second method [25] proposes that \(c_{i,j}\) should be weighted by \(f_{i,j}\), which is the fraction that area \(i\) takes up among all areas of paper \(j\). The normalizing factor \(e\) admits the harmonic mean of \(e_{i}\): \[\begin{split}& e_{i}=\frac{\sum_{j}^{m}c_{i,j}f_{i,j}}{\sum_{j}^{m}f_{ i,j}},\\ & e=\bigg{(}\frac{\sum_{i}^{n}e_{i}^{-1}}{n}\bigg{)}^{-1},\\ & c_{\text{norm}}=c_{\text{raw}}/e.\end{split}\] (S21) Table S30 shows that under these two methods of normalizing citations, the main findings remain true. #### 7.4.3 Percentiles among Peers One way to take care of the influences on the paper citations from both research areas and time periods is comparing a paper's citations with other papers in the same area and published in the same year, and using its relative percentile as the impact measurement [21]. Here we calculate the percentiles under \(\log c_{5}\) and if a paper's 5-year log-citations is at the \(k\)-th percentile, it indicates that \(k\%\) of the papers from the same area and year have fewer \(\log c_{5}\) citations than the paper in question. Since papers are usually assigned with more than one area, we calculate the maximal and mean percentile in all its areas. Analyses show that under these two percentile measurements, our conclusions still hold (Table S30). #### 7.4.4 Maximum Future Citations In this section, we use the maximum 5-year log-citations or the maximum 10-year log-citations among all the scientist's publications after the split point to measure her future impact. This measurement [21] is meant to capture a scientist's highest impact--another measurement of academic success--in the future. Table S30 shows that we observe similar findings that are consistent with those reported above, if we adopt this alternate way of measuring a scientist's future accomplishments. ### Different Look-back Windows When calculating the EPs and EDs, we backtrack the focal paper's past \(J\) papers or past \(K\) years, which is called the _look-back_ period, to decide whether it is an exploratory paper or to compute its paper distance. In the main text, we have used \(J=5\) papers, and here we repeat the experiments by varying the look-back period to be the past \(J\) papers and the last \(K\) years from 1 to 15 as well as setting it to \(\infty\). 
The results in Tables S31 and S32 are consistent with those in the main text. ### Propensity Score Matching #### S7.6.1 Varying Grouping Quantiles In Sections S4.2.1 and S4.2.2, we select the scientists with either of their exploration metrics (EP and ED) in the top 50% of all relevant values to be the _high_ group, and those with their metrics in the bottom 50% the _low_ group. Here we change the definition of being _high_ (_low_) in either metric to being in the top (bottom) 40%, 30%, 25%, 20% and 15%, respectively, and repeat the experiments. The results are consistent with those in the main text (see Tables S33 and S34). #### S7.6.2 Shuffling after Matching--The Null Model In order to verify that the differences between the matched groups are not due to the random nature of the quantities involved, we create synthetic datasets by shuffling scientists and their papers after the groups are matched, so that the resulting future performances are randomized. This will be referred to as the null model. We calculate the gap between the randomized future performance of the treatment group (with "high" metric) and that of the control group (with "low" metric) to see whether the synthetic datasets give a gap more extreme (larger in absolute value in this case) than the actually observed one (cf. \(P\)-values in permutation tests). This can be repeated many times; if more extreme gaps occur frequently, our findings are not robust. Specifically, we uniformly at random reassign scientists to papers and make sure that each scientist is assigned the same number of papers as she actually had (in a certain year) and each paper has the same number of authors as it actually had. We repeat the experiment 1,000 times, and observe no such 'extreme random cases' (see the paper-level rows in Table S35), which suggests the robustness of our findings. Alternatively, instead of randomly shuffling the author-paper relations, it is possible to shuffle the future performances of scientists uniformly at random. Such results are reported in the author-level rows of Table S35, and at worst 4 out of 1,000 repetitions report more extreme intergroup gaps than the one actually observed. Again, this strongly indicates that our results are robust against such randomness. ### S7.7 Propensity Score Weighting #### S7.7.1 Shuffling after Weighting--The Null Model Similar to what was done in Section S7.6.2, we may create synthetic datasets as the null model by reshuffling uniformly at random either the author-paper relations--the paper-level reshuffling--or the future performances of scientists--the author-level reshuffling. With both types of reshuffling, we repeat the propensity score weighting methodology of Section S4.2.3 on the synthetic datasets. The results are summarized in Table S36. Simply put, the high/low EP/ED groups are indistinguishable in future performance under the null model, which again supports the reliability of our findings. ### Perturbing Independent Variables To further ensure the robustness of our regression results against potential variations in the independent variables, we introduce artificial Gaussian noise \(\epsilon\sim N(0,\sigma^{2})\) and incorporate it into each scientist's independent variables of EP, ED, and LogCit\({}_{\text{past}}\) separately.
Through a series of experiments, we systematically adjust the value of \(\sigma\) to determine the maximum level of perturbation that can be introduced while still maintaining consistent regression results. The detailed regression outcomes are presented in Table S37, where we see the regression results are robust when adding different levels of perturbation to the independent variables.
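A minimal sketch of this perturbation check, assuming the regression of Eq. (S4) is fit with statsmodels; the dataframe `df`, the column names, and the noise levels below are illustrative assumptions:

```python
import numpy as np
import statsmodels.formula.api as smf

# df: pandas DataFrame, one row per scientist (assumed to exist).
rng = np.random.default_rng(0)
base = "logcit_future ~ ep_past + ed_past + logcit_past + p_past + year_first + C(area_first)"

for column in ["ep_past", "ed_past", "logcit_past"]:
    for sigma in [0.01, 0.05, 0.1]:              # illustrative perturbation levels
        noisy = df.copy()
        noisy[column] = noisy[column] + rng.normal(0.0, sigma, size=len(noisy))
        fit = smf.ols(base, data=noisy).fit()
        print(column, sigma,
              fit.params[["ep_past", "ed_past"]].round(3),
              fit.pvalues[["ep_past", "ed_past"]].round(4))
```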
2304.00658
Improving Meeting Inclusiveness using Speech Interruption Analysis
Meetings are a pervasive method of communication within all types of companies and organizations, and using remote collaboration systems to conduct meetings has increased dramatically since the COVID-19 pandemic. However, not all meetings are inclusive, especially in terms of the participation rates among attendees. In a recent large-scale survey conducted at Microsoft, the top suggestion given by meeting participants for improving inclusiveness is to improve the ability of remote participants to interrupt and acquire the floor during meetings. We show that the use of the virtual raise hand (VRH) feature can lead to an increase in predicted meeting inclusiveness at Microsoft. One challenge is that VRH is used in less than 1% of all meetings. In order to drive adoption of its usage to improve inclusiveness (and participation), we present a machine learning-based system that predicts when a meeting participant attempts to obtain the floor, but fails to interrupt (termed a `failed interruption'). This prediction can be used to nudge the user to raise their virtual hand within the meeting. We believe this is the first failed speech interruption detector, and the performance on a realistic test set has an area under curve (AUC) of 0.95 with a true positive rate (TPR) of 50% at a false positive rate (FPR) of <1%. To our knowledge, this is also the first dataset of interruption categories (including the failed interruption category) for remote meetings. Finally, we believe this is the first such system designed to improve meeting inclusiveness through speech interruption analysis and active intervention.
Szu-Wei Fu, Yaran Fan, Yasaman Hosseinkashi, Jayant Gupchup, Ross Cutler
2023-04-02T23:52:24Z
http://arxiv.org/abs/2304.00658v2
# Improving Meeting Inclusiveness using Speech Interruption Analysis ###### Abstract. Meetings are a pervasive method of communication within all types of companies and organizations, and using remote collaboration systems to conduct meetings has increased dramatically since the COVID-19 pandemic. However, not all meetings are inclusive, especially in terms of the participation rates among attendees. In a recent large-scale survey conducted at Microsoft, the top suggestion given by meeting participants for improving inclusiveness is to improve the ability of remote participants to interrupt and acquire the floor during meetings. We show that the use of the virtual raise hand (VRH) feature can lead to an increase in predicted meeting inclusiveness at Microsoft. One challenge is that VRH is used in less than 1% of all meetings. In order to drive adoption of its usage to improve inclusiveness (and participation), we present a machine learning-based system that predicts when a meeting participant attempts to obtain the floor, but fails to interrupt (termed a 'failed interruption'). This prediction can be used to nudge the user to raise their virtual hand within the meeting. We believe this is the first failed speech interruption detector, and the performance on a realistic test set has an area under curve (AUC) of 0.95 with a true positive rate (TPR) of 50% at a false positive rate (FPR) of \(<1\%\). To our knowledge, this is also the first dataset of interruption categories (including the failed interruption category) for remote meetings. Finally, we believe this is the first such system designed to improve meeting inclusiveness through speech interruption analysis and active intervention. Keywords: speech interruption analysis, meeting inclusiveness, remote collaboration, machine learning. To label these clips accurately, we developed a crowdsourced labeling system. We validated the accuracy of the labeling thoroughly, a critical step for ensuring the model achieves the desired accuracy. To the best of our knowledge, this is the first labeled dataset that categorizes the different types of interruptions in meetings. Production CMC systems with millions of users have very strict accuracy requirements to ensure a smooth user experience. From our production data, we observed 40 interruptions for typical half-hour meetings with 4 participants. We established the FPR of our detector needed to be \(\leq\) 1%; this ensures that an average user would see one false positive every 10 half-hour meetings. Furthermore, we found that failed interruptions represent 10% of speech overlaps, so setting our TPR to \(>\) 40% ensures that we capture at least two failed interruptions per meeting on average. Meeting such a strict criterion required us to evaluate several architectural choices. Failed interruptions are usually short speech utterances. The core challenge is to disambiguate such events from other short speech utterances such as agreements (e.g., "yeah", "makes sense") and acknowledgments (e.g., "uh-huh", "hmmm"). Given the small number of labeled clips (40K), it would be impossible to train a model from scratch. Therefore, we leveraged recent advances in self-supervised learning (SSL) representations that are learned on tens of thousands of hours of unlabeled data ((Brock et al., 2017), WavLM (He et al., 2017)).
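The operating point quoted above (TPR at a fixed FPR budget) can be read off a standard ROC analysis of held-out detector scores; a minimal sketch with scikit-learn, where `labels` and `scores` are assumed to be the binary failed-interruption ground truth and the model's probabilities:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def tpr_at_fpr(labels, scores, max_fpr=0.01):
    """Largest TPR achievable while keeping FPR <= max_fpr, and the matching score threshold."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    feasible = np.where(fpr <= max_fpr)[0]
    best = feasible[np.argmax(tpr[feasible])]
    return tpr[best], thresholds[best]

# Illustrative usage on a held-out validation set:
# auc = roc_auc_score(labels, scores)
# tpr, threshold = tpr_at_fpr(labels, scores, max_fpr=0.01)
```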
Our contributions in this paper are: * We frame the problem of detecting failed interruptions and create the first known dataset for categorizing interruptions in remote meetings with a labeling accuracy of 95%. * We provide the first predictor of failed speech interruption we are aware of. The failed speech interruption predictor achieves an AUC of 0.95 with a TPR \(>\) 50% and FPR \(<\) 1% in a realistic test set, which is good enough for a production CMC system. * We believe this is the first functional system designed to improve meeting inclusiveness through speech interruption analysis. In Section 2 we review related work in this area. In Section 3 we show that when the VRH feature in Microsoft Teams is used, it statistically increases inclusiveness. Section 4 describes the training dataset and test set used by our model, and Section 5 describes the model architecture and training. Section 6 and 7 give the results and some possible applications of the model, respectively. We provide conclusions and future work in Section 8. ## 2. Related Work Characterizing the conversational process is a well-studied topic. Schegloff presents a detailed account of the differences between "turn-taking" and "interruptions" (Selena et al., 2017). This work defines an interruption and cites clear examples of what should not be considered interruptions (e.g., signaling the current speaker to continue, gestures such as laughter, annoyance). A formal structure for organized turn-taking is presented in (Selena et al., 2017). This work defines the notion of a "false start" as turn-taking violations that need to be repaired. These repair rules are intended to address the intent to acquire the floor. However, the work assumes that the repair mechanisms will be followed (which isn't observed in practice). Margariti et al. present a quantitative analysis of turn-taking experienced in CMC systems (Margariti et al., 2017). They find that if a single speech overlap occurs, the odds of an overtake are roughly the same as the original speaker keeping the turn. However, when the same interrupter attempts multiple times, it is far more likely that an overtake will occur. The work does not differentiate between a failed interruption versus an interjection (also known as a "backchannel"); both result in a failed overtake. Sellen studies the impact on turn-taking and interruptions using CMC systems (audio-only and video) compared to same-room conversations (Sellen et al., 2017). Sellen's results show that simultaneous starts (leading to failed interruptions) occurred at the same rate in same-room conditions as they did in technology-mediated solutions. The work also states that a turn (or floor) has been acquired if a speaker is not interrupted for more than 1.5 seconds, a helpful quantitative definition used in our labeling process. Sellen's work, however, assumes good network and device conditions. The effect of network latency on "conversational interactivity" in VoIP systems is studied by Hammer et al. (Hammer et al., 2017). This work introduces the notion of active and passive interruptions. An active interruption is an intentional interruption whereas a passive interruption is unintended and occurs due to the delayed effect of hearing the remote speaker. The authors report that delay impacts passive interruption far more significantly than an active interruption. Every 100ms delay leads to a 15% relative increase in passive interruption rate. 
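As a toy illustration of Sellen's 1.5-second floor-acquisition rule mentioned above (a definition used later in our labeling process), one can check whether an interrupter gets an uninterrupted stretch of speech; the interval representation below is an assumption for illustration only, not the paper's implementation:

```python
def acquired_floor(interrupter_segment, other_segments, hold=1.5):
    """Toy check: does the interrupter get `hold` seconds of speech with no one else talking?

    interrupter_segment: (start, end) of the interrupter's utterance, in seconds.
    other_segments: list of (start, end) speech segments from all other participants.
    """
    start, end = interrupter_segment
    # Clip other participants' speech to the interrupter's utterance and merge overlaps.
    clipped = sorted((max(s, start), min(e, end))
                     for s, e in other_segments if s < end and e > start)
    merged = []
    for s, e in clipped:
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    # Longest stretch of the utterance that is free of overlapping speech.
    edges = [start] + [t for s, e in merged for t in (s, e)] + [end]
    longest_clear = max(edges[i + 1] - edges[i] for i in range(0, len(edges), 2))
    return longest_clear >= hold
```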
Echo suppression artifacts in double-talk scenarios lead to attenuation, which in turn makes it harder to interrupt in CMC systems (Selena et al., 2017). International Computer Science Institute (ICSI) and AMI multi-party meeting datasets provide a good starting point (ICSI (Selena et al., 2017), AMI (He et al., 2017)) for analysing meeting interruptions. The ICSI dataset contains 75 real meetings and 72 hours of speech. All the meetings have a mixed audio file and one file for each speaker (headset, open microphone, etc.). It comes with detailed word-level transcriptions and annotations capturing interruptions along with backchannels. AMI is a 100-hour meeting corpus recorded in three rooms with different acoustic properties using non-native speakers. Speakers in ICSI and AMI wear headsets while being present in the same room. While each speaker is represented as a unique channel, cross-talk between speakers is not eliminated, as the microphone associated with one participant picks up speech from other participants. Remote-only meetings will not have this same issue. To address this bias, we collected our own data using our conferencing solution with remote-only participants. As a result, we primarily use these data sources to augment our training data; we do not use them to construct our test set, as our typical remote-only meeting scenarios do not have cross-talk. Figure 1. The virtual raise hands (VRH) option present in Microsoft Teams. These datasets do not contain labels for failed interruptions. To the best of our knowledge, there does not exist a labeled audio dataset that captures the failed interruption category. In this work, we created detailed instructions for crowd-workers to categorize the different interruption types (including failed interruptions). Detecting speech overlaps in VoIP applications is primarily done through voice activity detectors (VADs) (Bordes et al., 2016; Bordes et al., 2017; Bordes et al., 2018; Bordes et al., 2019). Baron used a prosody-based approach to predict interruptions and "jump-in" points. Pitch and pause features significantly outperformed language models in predicting jump-in points (Bordes et al., 2017). Makhervaks et al. build a model for detecting "hotspots" (i.e., regions of high interest) during meetings (Makhervaks et al., 2018). The model comprises features derived from speech activity, language model embeddings and prosody. Acknowledgements (or agreements) such as "hmm", "yeah" are commonly referred to as "Backchannels" in the literature (Kennedy et al., 2017; Kiennedy et al., 2017). Ruede et al. categorize backchannels using word embeddings. In (Kennedy et al., 2017), Kennedy et al. built a support-vector-based model for categorizing laughter using mel-frequency cepstral coefficients (MFCCs). Keyword spotting models represent a special case of acoustic event detection aimed at detecting only a small set of words (Kiennedy et al., 2017; Bordes et al., 2019). To the best of our knowledge, no work has aimed at categorizing the different types of speech overlaps in remote meetings. Disambiguating failed interruptions from backchannels is at the core of this challenge, as those two event types have similar acoustic characteristics and are very similar in duration. Yang et al. and Fitzgerald et al. built models to detect events they term "disfluencies" (Fitzgerald et al., 2017; Ruede et al., 2018). These include utterance repetitions, revisions, and false starts.
In both these works, the assumption is that the disfluencies are repaired, which does not lead to a failed interruption. Creating a model to detect disfluencies (i.e. failed interruptions) is a complex endeavor. This task can benefit from applying self-supervised approaches applied to other similar downstream tasks such as keyword spotting (Bordes et al., 2019). There is a growing body of SSL representations (embeddings) trained from raw audio data (Wav2Vec (Bordes et al., 2017), HuBERT (Hu et al., 2017), WavLM (Bordes et al., 2017)). In terms of the downstream task of automatic speech recognition (ASR), each of these SSL approaches has been able to improve on the previous state-of-the-art, demonstrating the effectiveness of these embeddings for extracting language context. In our work, we use these embeddings generated from raw audio as an input to our interruption classifier. There is only one work in measuring meeting inclusiveness we are aware of (Kiennedy et al., 2017). The authors conducted a survey of 16K employees (3.3K valid responses) in a large technology company and developed a multivariate model to extract the relationship between meeting effectiveness, inclusion, and their contributing factors. The study showed that 80% of the meetings in this company were inclusive, showing a significant room for improvement. The model indicates that participation is the main contributor to the perceived meeting inclusiveness, 47% larger than the next important factor. Additionally, the survey results included that the top feature request from participants to improve meeting inclusiveness was a "better ability for remote participants to interrupt." Gender has been studied as a factor in conversations and participation. Eecke et al. shows that women are more often interrupted in conversations than men, and that men interrupt women more often than they interrupt men (Fitzgerald et al., 2017). Leaper et al. (Leaper et al., 2019) conducts a meta-analysis that shows men talk more than women in conversations. James et al. provides a review of the extensive research on this topic of gender bias in conversation (James et al., 2019). ## 3. Analysis of Virtual Raise Hand and Inclusiveness The VRH feature is designed to facilitate the participation of remote participants in a popular video-conferencing application. The goal of this section is to quantify the impact of VRH usage in meeting inclusiveness. Here, VRH usage refers to any interaction with the VRH feature during the meeting by any of the participants (either raising a hand or lowering a hand oneself or on behalf of someone else). The impact is estimated by comparing the meeting inclusiveness with and without VRH usage for comparable sets of meetings. In our system, VRH usage information is available for all meetings through call technical telemetry. Meeting inclusiveness is measured using a machine learning (ML) model that predicts an inclusiveness score based on the rest of the call telemetry (except for VRH usage). This approach enables a generalizable estimation based on a large sample of real meetings instead of artificial study groups. Next, we will describe the development of ML predictor for inclusiveness score and its application to estimate the impact of VRH. ### Predictive Model for Inclusiveness The initial survey results from (Kiennedy et al., 2017) motivated the development of an in-app end-of-call questionnaire that measures user-perceived meeting effectiveness and inclusiveness for a randomly selected set of meetings. 
The survey includes two 5-scale questions (3 being neutral): 1. How effective was this meeting at achieving its goals? 2. How included did you feel during the meeting? The in-app survey produced more than 40K ratings that when joined with meeting telemetry were used to fit a predictive model for meeting inclusiveness and effectiveness. The model consumes 28 engineered features derived from call quality, reliability, meeting size, audio participation (speaking during the call), video or screen share usage, meeting duration, and meeting time. All these features proved to be important in describing the rating variations via a Graphical model reported in (Kiennedy et al., 2017). Similar to (Kiennedy et al., 2017), the in-app data also showed that participation is one of the dominant features to predict and describe inclusiveness. The model used for predictive purposes in this work is a light-GBM (Kiennedy et al., 2017) binary classifier that generates the probability of a user providing a 4-star or 5-star rating to the inclusiveness question. Performance on the test set is measured via cross-validation with 50 random test-train splits and shows the AUC to be 0.75+/-0.02 for the test set. To predict the user ratings for calls with no user rating, we convert the predicted probability to a binary score by applying a threshold that ensures the FPR is not larger than 5%. FPR is defined as the rate of incorrect classification when the actual user rating for inclusiveness is not 4- or 5-stars. At 5% FPR, the model has 32% TPR and can only flag 32% of 4- or 5-star calls with the available telemetry. Therefore, the predicted baseline of inclusiveness score by the model is much lower than the actual user ratings (if available). However, the model is still sensitive enough to estimate the lift in predicted scores because of using VRH feature. This is discussed in the next section. ### Raise Hand Impact on Predicted Meeting Inclusiveness We compared the predicted inclusive score between meetings with and without VRH engagement. Predictions are binary values generated by the model described in the previous section. Since VRH is a feature available to all users and is not subject to any controlled experimentation (Krishnan et al., 2017), it is not possible to conduct a causal analysis. Instead we apply Propensity Score stratification (Bordes and McAllister, 2010) as a pseudo-experiment method. Propensity score (PS) methods are effective techniques for estimating causal relations in the absence of controlled experimentation (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2018). (Krishnan et al., 2018) showed that if the propensity scores are set properly, they can eliminate more than 90% of the bias induced by confounding factors. Confounding factors are any attribute of the data that may have a different distribution in the treatment and control samples, and additionally, their impact on the outcome variable can be mixed with the effect of the outcome variable. In this study, the outcome variable is meeting inclusiveness, the treatment group is the set of calls where VRH is used (called VRH calls), and the control group is the set of calls without VRH engagement. In this study, the main confounding factors are meeting size, meeting duration, and the choice of media (e.g., video or screen share). For example, VRH is mostly used in meetings with 4 or more participants that are longer than 30 minutes. 
Therefore, the distribution of meeting size and meeting duration in VRH calls is significantly different from average calls. To reduce this bias, we generate PS values using a logistic regression model that predicts VRH activity as a function of confounding factors. These PS values range between 0 and 1 and are used to generate five equal-sized bins. Within each bin, the distribution of confounding factors is statistically similar and provides comparable sets for estimating the VRH impact. According to (Krishnan et al., 2017) increasing the number of bins can improve the accuracy of inference but the margin becomes smaller with more than 10 bins. In our analysis, the results were mostly consistent between 5 and 10 bins with minimal gain in the variance of the predicted delta, i.e., a smaller 95% confidence interval (CI). Using PS stratification, the impact of VRH on the predicted inclusive score is a 3.4% absolute increase with 95% CI of (2.9%, 3.9%). This means that for meetings with more than 2 participants, using VRH can improve the predicted inclusive score by an absolute 3.4% on average at a 95% confidence level based on the ML model for inclusiveness. ## 4. Dataset In this section, we detail our efforts to create an accurate train and test set for developing our interruption detector. We define the speech overlap categories in Section 4.1 and specify our dataset requirements in Section 4.2. Next, we tabulate the data sources used in Section 4.3. In Section 4.4, we outline our labeling process to provide an account of the nuances in labeling such a dataset. Finally, in Section 4.5, the train and test sets are described. ### Speech Overlap Categories The speech overlap categorization assumes that the floor is held by a speaker. An interrupter is a second participant that speaks to create overlapping speech; they may or may not intend to obtain the floor. * **Backchannel**: A short period of conversation when a speaker conveys attention, understanding, or agreement in the background. The intention is not to obtain the floor. For example: "yeah", "Mm-hmm", "uh-huh". * **Failed Interruption**: The interrupter attempts to obtain the floor by speaking at the same time as the current speaker, but they fail to obtain the floor. * **Interruption (or successful interruption)**: The interrupter overlaps with the first speaker before the sentence of the first speaker is complete. The interrupter successfully obtains the floor and the attention of all the other participants. * **Laughter**: The interrupter laughs while the first speaker is talking. The interrupter does not get the floor and does not intend to get the floor. * **Other**: This represents audio overlap scenarios without clear speech content. This could represent overlaps with no intelligible words (e.g., garbled speech, throat clearing), or background noise (e.g., mouse clicks). ### Dataset Requirements The CMC system (Microsoft Teams) we are studying can separate the audio of each remote participant. As a result, we only considered data sets where the audio of each participant was captured in a separate channel. This design choice simplifies the solution as a mixed channel dataset would require the speakers to first be separated (i.e., speaker diarization). Our scenario represents multi-party business meetings, hence we were interested in conversational dynamics arising from remote meetings with three (3) or more participants. We restrict ourselves to the English language but captured different accents. 
We prioritized accents from the following locales: United States (US), Great Britain (GB), India (IN), and Germany (DE). These locales were obtained based on countries with the highest product usage. \begin{table} \begin{tabular}{c c c|c c c c c} \hline Data Source & Raw Audio Hours & Labeled By & Backchannel & Failed Interruption & Interruption & Laughter & Other & Total \\ \hline AMI & 100 & Crowdsourced & 8,760 & 2,270 & 5,630 & 1,650 & 150 & 18,450 \\ ICSI & 72 & Crowdsourced & 4,650 & 1,060 & 3,570 & 50 & 100 & 9,420 \\ Original Data & 77 & Crowdsourced & 3,840 & 1,140 & 2,570 & 1,430 & 200 & 9,180 \\ Original Data & 15 & Expert & 1,310 & 320 & 860 & 340 & 150 & 2,970 \\ \hline All & 264 & All & 18,560 & 4,790 & 12,630 & 3,470 & 600 & 40,020 \\ \hline \end{tabular} \end{table} Table 1. Number of Clips by Source and Label The dataset should have an equal representation of male and female speakers. The speakers in the dataset are required to be older than 18, since these conversations represent business meetings. For our current version of the dataset, we restricted ourselves to speakers in low noise conditions with headsets and good networks. In the future, we plan to capture data with varying network (latency), device (e.g., open speakers), and acoustic (e.g., noise) conditions. To ensure that we can validate the quality of predictions from our detector, we set a requirement for the accuracy of the labels in the test set to be 95% as measured by the Fleiss' Kappa metric (Kipper, 2018). ### Data Sources In our development process, we use both qualified publicly available datasets and our original datasets as input data sources. #### 4.3.1. Public Data Source We use the AMI Meeting Corpus (Bordes et al., 2017) and the ICSI Meeting Corpus (Krizhevsky et al., 2017) as our public data sources. Both corpora meet our scenario requirements; however, there are a few limitations. Both corpora were collected with participants sitting in the same room with headsets on. This results in two problems: first, all meetings are face-to-face, which could be different from online communication; second, with people sitting in the same room, the recordings include cross-talk, as one person's microphone picks up other people's voices. We use these datasets to augment our training data. #### 4.3.2. Original Data Source To mitigate the challenges from the public datasets and create accurate test sets for remote meeting scenarios, we captured data using our conferencing product. Another goal of capturing this data was to improve the dataset diversity. We have collected 90 hours of meeting recordings in 148 meetings. Each meeting has 3 or 4 participants. The participants join remotely using our product, and discuss one or more business topics with some natural contention (e.g., prioritization, product design, etc.). In the data collection stage, we also ensure that we have sufficient speaker diversity for gender, accent, and age. In Table 2, we show the detailed demographic information of our original data at the meeting-participant level. ### Labeling Process There are two major steps in creating labeled data: 1. Detect speech overlap using an accurate VAD to create candidate clips, and 2. Label candidate clips using an accurate labeling procedure. These clips were used to create the train and test sets. The training dataset was labeled using a crowdsourcing service (Bordes et al., 2017).
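The label-quality requirement above is stated in terms of Fleiss' Kappa; a minimal sketch of how agreement across the crowd annotators could be computed with statsmodels, where the vote matrix is an illustrative assumption:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# votes[i, j] = category chosen by annotator j for clip i
# (0=backchannel, 1=failed interruption, 2=interruption, 3=laughter, 4=other)
votes = np.array([
    [1, 1, 1, 0, 1, 1, 1],   # illustrative rows: 7 annotators per clip
    [0, 0, 0, 0, 3, 0, 0],
    [2, 2, 2, 2, 2, 2, 2],
])
counts, _ = aggregate_raters(votes)   # clips x categories count table
print(fleiss_kappa(counts))
```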
We had experts label 3,000 clips to create a test set with an accuracy of \(\geq\) 95%, and to help validate crowdsourcing label quality using golden sets. #### 4.4.1. Candidate Clips For each meeting, we scan each channel with an accurate VAD (Bordes et al., 2017) to locate the timestamps where speech overlap occurs. Given that we don't want to trigger our feature too often, especially in active discussions, we also only keep overlaps in which the interrupter has been silent for at least 3 seconds before jumping in and the interrupter's utterance needs to be at least 0.3 seconds long. Then, for labeling and training purposes, we export a 10-second stereo audio clip for each speech overlap detected. The 5th second of each clip is the start point of the speech overlap. To improve labeling accuracy the clips are created in stereo, with the interrupter's audio on the right channel, while all the other audios are merged into the left channel. #### 4.4.2. Crowdsourcing Labeling We provided detailed instructions to the crowd-workers for their labeling task. The same instructions were used by experts as a validation of the instructions. These instructions specified set-up instructions (e.g., headset usage), training modules, qualification tests, and golden clips to validate the quality of the expert labeling (Kipper, 2018). These instructions outlined examples to help disambiguate areas of confusion. One noteworthy challenge was to provide labels for clips that had multiple categories (e.g., a backchannel followed by a failed interruption). We follow the practice from the ImageNet paper to create a hierarchy of categories and represent one category per task (Krizhevsky et al., 2017). The hierarchy consisted of splitting the overlaps into two levels. Level 1 comprised on successful interruption versus no-interruption. For the no-interruption bucket, the following precedence was followed if multiple categories were present: failed interruption > backchannel > laughter > other. This precedence was based on the cost of mis-classification (failed interruptions are sparser and hence get higher precedence). We evaluated the quality of the labels by varying the number of annotators and converged on using 7 unique votes from annotators. #### 4.4.3. Expert Labeling We randomly selected 3,000 clips from our original data source and had them labeled by internal experts. In particular, 450 of these clips are labeled by five experts. Multiple rounds of discussions were conducted to ensure complete alignment and refinements of the instructions. Overall, 92% of these clips reach a 5-out-of-5 agreement and a Fleiss's Kappa of 95%. The rest of the 2,550 clips are labeled by up to two experts. The quality of the 2,550 clips was validated by randomly sampling 150 clips and having all experts label them to ensure we met the accuracy bar. The expert labeling effort was critical in creating an accurate test set and evaluating the crowdsourced labeling system. We iterated on this process over multiple rounds to ensure the labeling process had a 100% coverage of clips with speech overlap. Among those candidate clips, we achieved an accuracy of 95%. 
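A minimal sketch of the candidate-clip selection rules in Section 4.4.1, operating on per-speaker VAD segments; the segment representation and helper names are illustrative assumptions, and the production pipeline may differ:

```python
def candidate_overlaps(speaker_segments, min_silence=3.0, min_utterance=0.3):
    """Yield (interrupter, overlap_start) pairs satisfying the clip-selection rules above.

    speaker_segments: dict mapping speaker id -> time-sorted list of (start, end) VAD segments.
    """
    for interrupter, segments in speaker_segments.items():
        others = [seg for spk, segs in speaker_segments.items()
                  if spk != interrupter for seg in segs]
        prev_end = 0.0
        for start, end in segments:
            long_enough = (end - start) >= min_utterance
            silent_before = (start - prev_end) >= min_silence
            someone_talking = any(s <= start < e for s, e in others)
            if long_enough and silent_before and someone_talking:
                # the 10-second stereo clip is exported with `start` at the 5-second mark,
                # interrupter on the right channel and everyone else mixed on the left
                yield interrupter, start
            prev_end = end
```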
\begin{table} \begin{tabular}{c|c c c} \hline \hline Demographic Info & Sub-categories & Count & Percentage \\ \hline Gender & Female & 292 & 53\% \\ & Male & 251 & 46\% \\ & Non-binary & 7 & 1\% \\ \hline \multirow{2}{*}{\begin{tabular}{c} Accent \\ \end{tabular} } & US & 256 & 47\% \\ & GB & 111 & 20\% \\ & IN & 111 & 20\% \\ & DE & 63 & 11\% \\ & Other & 9 & 2\% \\ \hline \multirow{2}{*}{\begin{tabular}{c} Age Group \\ \end{tabular} } & 18-24 & 98 & 18\% \\ & 25-34 & 184 & 34\% \\ & 35-44 & 107 & 19\% \\ & 45+ & 161 & 29\% \\ \hline \multirow{2}{*}{ \begin{tabular}{c} Total \\ \end{tabular} } & Total & 550 & 100\% \\ \hline \hline \end{tabular} \end{table} Table 2. Demographic Information of Original Data Source ### Train and Test Sets The total number of clips labeled by crowd-workers and experts is shown in Table 1. **Test Set**: For the test set, we use high-quality expert-labeled clips. We randomly select 250 clips from each category (not considering other) to create a fixed test set of 1,000 clips. **Train Set**: The train set comprises clips labeled by crowd-workers. We found that the 70% agreement level (i.e., 5 out of 7 annotators provided the same label) provided better model performance compared to the majority vote. This threshold provided a good trade-off between high label quality and training data volume. We did comparisons on 1,100+ clips with the expert labels as ground truth to arrive at this decision. As a result, we only used clips that could reach the 70% consensus level. ## 5. Model The overall model structure is shown in Figure 2. In model training, we only use the audio after the speech overlap starts. Therefore, the model input is the last 5 seconds of the two-channel clips described in Section 4.4.2. The right channel contains only the voice of the interrupter, and the left channel consists of the mixed voice of all other participants. A feature extractor is then applied to the raw waveform to obtain useful features. Because SSL representations for speech have achieved state-of-the-art performance on several downstream tasks, in this study, we also extract the high-level embeddings through pretrained SSL models. Before feeding the embeddings to the classifier, a pooling operation along the time axis is used to reduce the input dimension. Here we use attention pooling (AP) (Wang et al., 2017) to obtain an utterance-level representation \(\mathbf{U}\in R^{d\times 1}\) from a sequence of frame-level representations \(\mathbf{H}\), where \(\mathbf{H}\in R^{d\times M}\) is the embedding extracted from the SSL model, \(d\) is the dimension of the feature, and \(M\) is the number of frames. The attention weight \(\mathbf{Q}\in R^{1\times M}\) of each frame is first calculated using: \[\mathbf{Q}=\text{softmax}(\mathbf{WH}) \tag{1}\] where \(\mathbf{W}\in R^{1\times d}\) can be treated as a template to decide which frame is more important. Finally, the utterance-level representation \(\mathbf{U}\) can be obtained through a weighted sum: \[\mathbf{U}=\mathbf{H}\mathbf{Q}^{T} \tag{2}\] The utterance-level representation \(\mathbf{U}\) is then fed into a 5-layer feed-forward neural network (also called DNN), with the number of nodes for each layer as (Shen et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019) and LeakyReLU (Leaky et al., 2015) as the activation function. Cross entropy is used as the loss function, with stochastic gradient descent as the optimizer and a learning rate of 0.0015.
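A minimal PyTorch sketch of the attention pooling in Eqs. (1)-(2) followed by a small feed-forward classifier; the layer widths and class count are illustrative assumptions, since the exact values are not recoverable from the converted text:

```python
import torch
import torch.nn as nn

class AttentionPoolingClassifier(nn.Module):
    def __init__(self, d, n_classes=4, hidden=(256, 128, 64, 32)):
        super().__init__()
        self.w = nn.Linear(d, 1, bias=False)      # the template W in Eq. (1)
        layers, prev = [], d
        for width in hidden:                      # illustrative hidden-layer widths
            layers += [nn.Linear(prev, width), nn.LeakyReLU()]
            prev = width
        layers.append(nn.Linear(prev, n_classes))
        self.dnn = nn.Sequential(*layers)

    def forward(self, h):
        # h: (batch, M frames, d); note the frame-major layout, i.e. H transposed.
        q = torch.softmax(self.w(h).squeeze(-1), dim=-1)   # Eq. (1): weights over frames
        u = torch.einsum("bmd,bm->bd", h, q)               # Eq. (2): U = H Q^T
        return self.dnn(u)

# loss = nn.CrossEntropyLoss()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.0015)
```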
## 6. Results ### Performance Comparison Between Different Input Features and Classifier Models In this section, we first show the results of different feature extraction methods. In addition to the SSL embeddings, the results of using MFCC, magnitude spectrogram, and openSMILE acoustic features (Hu et al., 2017) are also presented as baselines (for the openSMILE features, we did not apply AP, as they are already an utterance-level representation). Because in this study we care more about the performance of detecting the _failed interruption_ class, in Table 3, both AUC and TPR at 1% FPR are based on the _failed interruption_ class. From the table, it can be observed that the performance of conventional features such as MFCC and magnitude spectrogram is far from our goal. The reason may be that a suitable feature for this task should be **speaker/noise independent** (Hu et al., 2017) and contain **semantic information**. It may be hard to extract useful information and discard irrelevant information from traditional features with limited training data. Comparing the three SSL-based embeddings with base model size (i.e., \(Wav2vec2_{Base}\), \(HuBERT_{Base}\) and \(WavLM_{Base+}\)), WavLM performs the best, which is consistent with the results shown in the SUPERB benchmark (Wang et al., 2019) for other speech tasks. For these three embeddings, we calculate the weighted sum of the representations from different transformer layers with learnable weights as the input to the attention-pooling module. We found that this can significantly improve the performance compared to using only the embedding from the last transformer layer. To further improve the performance, we apply \(WavLM_{Large}\) for feature extraction and get a TPR \(>\) 50% at FPR=1%. Next, we fix \(WavLM_{Large}\) as our feature extractor and examine the performance with different numbers of input channels and classifier models. From Figure 3 and the first two rows in Table 4, we can observe that without the information from the left channel (mixed voice from all other participants), the performance drops seriously, which may be caused by the confusion between failed interruption and successful interruption (we will verify this in the next section). Figure 2. Proposed model structure for speech interruption detection. We then want to see the effect of AP on the model performance. Removing AP from the downstream task means that the flattened frame-level representations H are directly fed into the DNN classifier. This not only increases the model size of the classifier significantly, but, as shown in the third row of Table 4, the TPR also decreases by 20%. The reason may be that the input dimension is too large compared to the amount of training data. ### Confusion Matrix Analysis To further analyze the prediction results of \(WavLM_{Large}\), we present the confusion matrix in Table 5 and Table 6 for two-channel and one-channel input, respectively. We included another column called 'Below 1% FPR-threshold' to represent the cases where failed interruption should be the predicted class by the \(argmax(.)\) function, but the confidence score is smaller than the threshold for a 1% FPR. From Table 5, it can be observed that the most confusing case for the model is to distinguish between backchannel and failed interruption. We argue that this is because both of them have very similar voice activity patterns (i.e., for the interrupter: a short period of speaking; for the other participants: continued talking).
In other words, the model cannot simply tell them by audio energy distribution, it has to infer the intention through semantic information from SSL embeddings. By comparing Table 5 and Table 6, when the ground truth is failed interruption, the model with one channel input is more easily misclassified as a successful interruption, which verifies our assumption made in the previous section that left channel (mixed voice from all other participants) can help the model to distinguish between successful and failed interruption. ### Relation to Other Speech Tasks Speech interruption classification is a relatively new topic. As a result, we want to know its relation to traditional speech tasks. As mentioned in Section 6.1, the SSL-based embeddings come from the weighted sum of different transformer layers with learnable weights. The learned weights can hence give us some information about which layers are more important for a certain task. In Figure 4, we take the learned layer weights from \(WaLM_{Base+}\) as an example to compare our weights with those learned in other speech tasks of the SUPERB benchmark. From the figure, we can observe that the pattern of our learned weights is most similar to the one learned in the Keyword Spotting (KS) task. This implies that the input features used for the two tasks are similar to each other, and they are the most related tasks. We conjecture that this is because when the model tries to distinguish between backchannel and failed interruption, it relies on some mechanism similar to KS (e.g., in the case of backchannel, the interrupter usually says something like: "yeah", and "Mm-hmm", etc.) \begin{table} \begin{tabular}{c|c|c c} \hline \hline & & \multicolumn{2}{c}{Failed Interruption:} \\ \hline Input & dimension of \(H\) (\(d\times M\)) & \# parameters & AUC & TPR@ 1\% FPR \\ \hline MFCC & (2\(\times\)40) \(\times\) 401 & 0.4M & 0.667 & 3.48\% \\ Spectrogram & (2\(\times\)257) \(\times\) 313 & 0.4M & 0.736 & 6.24\% \\ openSMILE & (2\(\times\)88) \(\times\) 1 & 0.4M & 0.816 & 11.56\% \\ \(Wa2vec2Base\) & (2\(\times\)768) \(\times\) 249 & 95M & 0.925 & 32.28\% \\ \(HuBERT_{Base}\) & (2\(\times\)768) \(\times\) 249 & 95M & 0.934 & 31.52\% \\ \(WaLM_{Base+}\) & (2\(\times\)768) \(\times\) 249 & 95M & 0.943 & 37.80\% \\ \(WaLM_{Large}\) & (2\(\times\)1024) \(\times\) 249 & 316M & 0.949 & 50.93\% \\ \hline \hline \end{tabular} \end{table} Table 3. Classifier results for the _Failed Interruption_ class with different input features (Each number is the average result of 10 different runs). Dimension of \(H\) and the number of parameters for the end-to-end model (SSL model + classifier) are also shown (Note that for the dimension \(M\), the frame shift for SSL embedding is 20 ms and we use the default setting from torchaudio library for MFCC and spectrogram). Figure 3. Learning curve with different numbers of input channels \begin{table} \begin{tabular}{c|c|c c} \hline \hline & & \multicolumn{2}{c}{Failed Interruption:} \\ \hline Input channels & classifier model & AUC & TPR@ 1\% FPR \\ \hline 2 channels & AP+DNN & 0.949 & 50.93\% \\ right channel & AP+DNN & 0.918 & 37.93\% \\ 2 channels & DNN & 0.912 & 31.05\% \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison between different number of input channels and classifier models with \(WaLM_{Large}\) as the feature extractor ## 7. Applications As an example application of the failed speech interruption detection, we can nudge the failed interrupter to use the VRH feature, as shown in Figure 5. 
In addition, we can also remind the speaker who didn't yield that meetings are more inclusive when everyone has a chance to speak, especially if they are repeatably not yielding in that meeting. Finally, the failed speech interruption detections can be logged in the CMC system's calling telemetry and analyzed to further improve understanding of meeting effectiveness and inclusiveness, to conduct AB tests to evaluate new features in the CMC system to reduce failed speech interruptions, and to improve the predictive models of effectiveness and inclusiveness. ## 8. Conclusions In this paper, we describe a method to improve meeting inclusiveness through speech overlap analysis. We introduce the challenge of "failed interruptions" in remote meetings based on our findings. We created the first accurate labeled dataset to address this challenge in CMC systems. By leveraging recent advances in self-supervised learning representations in speech, we built a detector that achieves a TPR of more than 50% with a FPR of less than 1%. The dataset needs to be expanded to support multiple languages and scenarios such as varying network and device conditions. We plan to use this dataset to host a challenge and release the baseline model for stimulating active research on this topic. We are also integrating the model into Microsoft Teams and conducting AB tests to measure the improvement of inclusiveness and overall meeting effectiveness. Finally, we are developing a full-duplex machine learning-based acoustic echo canceller (e.g., see (Wang et al., 2018)) that also helps remote participants interrupt in a meeting and better participate in meetings. ## Acknowledgements We acknowledge Juhee Cho, Alex Chzhen, and Chinmaya Madan from the Teams team for their contributions in the model and data infrastructure. The Azure cognition team provided strong guidance with the model development effort. Specifically, we thank Yu Wu, Zhou Chen, and Dimitrios Dimitriadis for the productive discussions. \begin{table} \begin{tabular}{c|c c c c c} \hline & \multicolumn{5}{c}{Predicted} \\ \hline Ground Truth & Backchannel & \begin{tabular}{c} Failed \\ Interruption \\ \end{tabular} & Interruption & Laughter & \begin{tabular}{c} Below \\ 1\% FPR-Threshold \\ \end{tabular} \\ \hline Backchannel & 201 & 4 & 3 & 20 & 22 \\ Failed Interruption & 42 & 85 & 35 & 12 & 76 \\ Interruption & 8 & 4 & 207 & 11 & 20 \\ Laughter & 29 & 0 & 0 & 221 & 0 \\ \hline \end{tabular} \end{table} Table 6. Confusion matrix of the prediction results from \(WaLM_{large}\) for one channel input. Figure 4. The learned weights from different layers of the \(WaLM_{Base+}\) transformer model for different tasks. Figure 5. Example application of the failed speech interruption detector \begin{table} \begin{tabular}{c|c c c c c} \hline & \multicolumn{5}{c}{Predicted} \\ \hline Ground Truth & Backchannel & \begin{tabular}{c} Failed \\ Interruption \\ \end{tabular} & Interruption & Laughter & \begin{tabular}{c} Below \\ 1\% FPR-Threshold \\ \end{tabular} \\ \hline Backchannel & 219 & 6 & 8 & 11 & 6 \\ Failed Interruption & 47 & 131 & 19 & 7 & 46 \\ Interruption & 9 & 1 & 225 & 7 & 8 \\ Laughter & 37 & 0 & 1 & 212 & 0 \\ \hline \end{tabular} \end{table} Table 5. Confusion matrix of the prediction results from \(WaLM_{large}\) for two channels input.
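As an illustration of the pooling step described in Section 5, Equations (1) and (2) can be sketched as a small PyTorch module. This is a minimal, illustrative sketch rather than the implementation used above; the class name, batch layout, and example dimensions are our own assumptions.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Attention pooling over frame-level SSL features, following Eqs. (1)-(2):
    Q = softmax(W H) and U = H Q^T, where H has shape (d x M)."""

    def __init__(self, feature_dim: int):
        super().__init__()
        # W in R^{1 x d}: a learnable template deciding which frames matter most.
        self.w = nn.Linear(feature_dim, 1, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, M, d) frame-level representations from the SSL encoder.
        scores = self.w(h)                # (batch, M, 1)
        q = torch.softmax(scores, dim=1)  # attention weights over the M frames
        u = torch.sum(q * h, dim=1)       # (batch, d) utterance-level representation
        return u

# Example: pool 249 frames of 1024-dimensional features into one utterance vector.
pooling = AttentionPooling(feature_dim=1024)
frames = torch.randn(8, 249, 1024)
utterance = pooling(frames)  # shape (8, 1024)
```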
2308.11480
Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection
Improving the reliability of deployed machine learning systems often involves developing methods to detect out-of-distribution (OOD) inputs. However, existing research often narrowly focuses on samples from classes that are absent from the training set, neglecting other types of plausible distribution shifts. This limitation reduces the applicability of these methods in real-world scenarios, where systems encounter a wide variety of anomalous inputs. In this study, we categorize five distinct types of distribution shifts and critically evaluate the performance of recent OOD detection methods on each of them. We publicly release our benchmark under the name BROAD (Benchmarking Resilience Over Anomaly Diversity). Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts. In other words, they only reliably detect unexpected inputs that they have been specifically designed to expect. As a first step toward broad OOD detection, we learn a generative model of existing detection scores with a Gaussian mixture. By doing so, we present an ensemble approach that offers a more consistent and comprehensive solution for broad OOD detection, demonstrating superior performance compared to existing methods. Our code to download BROAD and reproduce our experiments is publicly available.
Charles Guille-Escuret, Pierre-André Noël, Ioannis Mitliagkas, David Vazquez, Joao Monteiro
2023-08-22T14:52:44Z
http://arxiv.org/abs/2308.11480v1
# Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection ###### Abstract Improving the reliability of deployed machine learning systems often involves developing methods to detect out-of-distribution (OOD) inputs. However, existing research often narrowly focuses on samples from classes that are absent from the training set, neglecting other types of plausible distribution shifts. This limitation reduces the applicability of these methods in real-world scenarios, where systems encounter a wide variety of anomalous inputs. In this study, we categorize five distinct types of distribution shifts and critically evaluate the performance of recent OOD detection methods on each of them. We publicly release our benchmark under the name BROAD (Benchmarking Resilience Over Anomaly Diversity). Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts. In other words, they only reliably detect unexpected inputs that they have been specifically designed to expect. As a first step toward broad OOD detection, we learn a generative model of existing detection scores with a Gaussian mixture. By doing so, we present an ensemble approach that offers a more consistent and comprehensive solution for broad OOD detection, demonstrating superior performance compared to existing methods. Our code to download BROAD and reproduce our experiments is publicly available1 Footnote 1: Code to get BROAD is available at [https://github.com/ServiceNow/broad](https://github.com/ServiceNow/broad). Code to reproduce our experiments is available as an OpenOOD [83] fork at [https://github.com/ServiceNow/broad-openood](https://github.com/ServiceNow/broad-openood). ## 1 Introduction A significant challenge in deploying modern machine learning systems in real-world scenarios is effectively handling out-of-distribution (OOD) inputs. Models are typically trained in closed-world settings with consistent data distributions, but they inevitably encounter unexpected samples when deployed in real-world environments. This can not only degrade the user experience but also potentially result in severe consequences in safety-critical applications [39; 70]. There are two primary approaches to enhancing the reliability of deployed systems: OOD robustness, which aims to improve model accuracy on shifted data distributions [17; 20], and OOD detection [82; 13], which seeks to identify potentially problematic inputs and enable appropriate actions (e.g., requesting human intervention). When correct classification is achievable, robustness is often considered preferable since it allows the system to operate with minimal disruption. Robustness has been investigated for various types of distribution shifts [67; 26; 29]. However, attaining robustness can be challenging, and in some cases, impossible when no known "correct" answer exists for the model to provide. OOD detection research has primarily focused on a single type of distribution shift. Despite differences in terminology and minor variations, the main settings of OOD detection in the literature--including open set recognition (OSR), anomaly detection, novelty detection, and outlier detection--primarily utilize novel classes as OOD inputs (see Yang et al. [82] for a comprehensive analysis of their differences). Previous research has explored the detection of various distribution shifts such as adversarial attacks [2, 32] and artificially generated images [35, 51, 49]. 
However, these studies rarely categorize them as OOD samples. While there are a few works that simultaneously detect novel labels and adversarial attacks [43, 24], the broad detection of diverse types of distribution shifts remains largely unaddressed. This limitation significantly hinders the application of these methods in real-world scenarios. If the ultimate aim is to enhance reliability in the face of unexpected inputs that may vary in nature, then relying on homogeneous benchmarks with similar distribution shifts will likely result in methods that are overspecialized and perform unreliably on _out-of-distribution distribution shifts_. This presents a considerable limitation to existing methods, given the inherently unpredictable nature of out-of-distribution samples. Real-world systems may encounter a wide range of anomalous inputs, on which their performance are likely to degrade. Consequently, the capacity for OOD detection systems to detect an array of distribution shifts that accurately represent the variety of real-world occurrences becomes a critical imperative. These concerns are confirmed in Figure 2, which displays the distributions of maximum softmax (MSP) [27], ViM [80], and MDS [44] scores on several shifted distributions relative to clean data Figure 1: An overview of BROAD: illustrating the benchmarks employed for each distribution shift category, with ImageNet-1K serving as the in-distribution reference. Figure 2: Figure illustrating the score distributions of MSP, ViM, and MDS across varying datasets. While all three methods successfully discriminate between ImageNet and iNaturalist, their effectiveness fluctuates across other types of distribution shifts. (ImageNet-1k). Although all scores effectively distinguish samples from iNaturalist [33; 76], a common benchmark for detecting novel classes, their performance on other types of distribution shifts is inconsistent. Furthermore, OOD detection methods often require tuning or even training on OOD samples [47; 44; 46], exacerbating the problem. Recent research has attempted the more challenging task of performing detection without presuming access to such samples [54; 24; 80]. Nevertheless, they may still be inherently specialized towards specific distribution shifts. For example, CSI [73] amplifies the detection score by the norm of the representations. While this improves performance on samples with novel classes (due to generally lower norm representations), it may conversely impair performance in detecting, for instance, adversarial attacks, which may exhibit abnormally high representation norms. The scarcity of diversity in OOD detection evaluations in previous studies may be attributed to the perceived preference for OOD robustness when OOD samples share classes with the training set. Nevertheless, this preference may not always be well-founded. Firstly, previous works have indicated a potential trade-off between in-distribution accuracy and OOD robustness [75; 88], although a consensus remains elusive [85]. On the other hand, many OOD detection systems serve as post-processors that do not impact in-distribution performances. Additionally, there are practical scenarios where the detection of OOD inputs proves valuable, regardless of robustness. For instance, given the increasing prevalence of generative models [66; 62; 68], deployed systems may need to differentiate synthetic images from authentic ones, independent of performance [49; 40]. 
Lastly, other types of shifts exist where labels belong to the training set, but correct classification is undefined, rendering robustness unattainable (see section 2.5). Motivated by these observations, our work concentrates on broad OOD detection, defined as the simultaneous detection of OOD samples from diverse types of distribution shifts. Our primary contributions include: * A comprehensive benchmarking of recent OOD detection methods on BROAD, along with an analysis of their respective performance. * The introduction of Benchmarking Resilience Over Anomaly Diversity (BROAD), an extensive OOD detection benchmark comprising datasets from five distinct types of distribution shifts relative to ImageNet: novel classes, adversarial perturbations, synthetic images, corruptions, and multi-class inputs. We also introduce CoComageNet, a subset of COCO [48]2. Footnote 2: The code and datasets are publicly available * The development and evaluation of a judiciously designed generative ensemble method based on a Gaussian mixture of existing detection statistics to achieve broad detection against all types of distribution shifts, resulting in significant gains over existing methods in broad OOD detection. Section 2 elucidates BROAD while section 3 introduces studied methods as well as our generative ensemble method based on Gaussian mixtures. In section 4, we evaluates different methods against each distribution shift. Finally, section 5 provides a synopsis of related work, while ## 2 Distribution Shift Types in BROAD In this study, we employ ImageNet-1K [15] as our in-distribution. While previous detection studies have frequently used CIFAR [42], SVHN [60], and LSUN [86] as detection benchmarks, recent work has highlighted the limitations of these benchmarks, citing their simplicity, and has called for the exploration of detection in larger-scale settings [28]. Consequently, ImageNet has emerged as the most popular choice for in-distribution. Our benchmark, BROAD, encompasses five distinct types of distribution shifts, each represented by one to four corresponding datasets, as summarized in Figure 1. ### Novel Classes The introduction of novel classes represents the most prevalent type of distribution shift in the study of OOD detection. In this scenario, the test distribution contains samples from classes not present in the training set, rendering accurate prediction unfeasible. For this particular setting, we employ three widely used benchmarks: iNaturalist [33; 76], ImageNet-O [30], and OpenImage-O [80; 41]. ### Adversarial Perturbations Adversarial perturbations are examined using two well-established attack methods: Projected Gradient Descent (PGD)[55] and AutoAttack[12]. Each attack is generated with an \(L_{\infty}\) norm perturbation budget constrained to \(\epsilon=0.05\), with PGD employing 40 steps. In its default configuration, AutoAttack constitutes four independently computed threat models for each image; from these, we selected the one resulting in the highest confidence misclassification. A summary of the models' predictive performance when subjected to each adversarial scheme can be found in Table 1. The relative detection difficulty of white-box versus black-box attacks remains an open question. Although white-box attacks are anticipated to introduce more pronounced perturbations to the model's feature space, black-box attacks might push the features further away from the in-distribution samples. 
To elucidate this distinction and provide a more comprehensive understanding of detection performance, we generate two sets of attacks using both PGD and AutoAttack: one targeting a ResNet50 [25] and the other a Vision Transformer (ViT) [18]. Evaluation is performed on both models, thereby ensuring the inclusion of two black-box and two white-box variants for each attack. Common practice in the field focuses on the detection of successful attacks. However, identifying failed attempts could be advantageous for security reasons. To cater to this possibility, we appraise detection methods in two distinct scenarios: the standard Distribution Shift Detection (DSD), which aims to identify any adversarial perturbation irrespective of model predictions, and Error Detection (ED), which differentiates solely between successfully perturbed samples (those initially correctly predicted by the model but subsequently misclassified following adversarial perturbation) and their corresponding original images. ### Synthetic Images This category of distribution shift encompasses images generated by computer algorithms. Given the rapid development of generative models, we anticipate a growing prevalence of such samples. To emulate this shift, we curated two datasets: one derived from a conditional BigGAN model [7], and another inspired by stable diffusion techniques [68]. In the case of BigGAN, we employed publicly available models3 trained on ImageNet-1k and generated 25 images for each class. For our stable diffusion dataset, we utilized open-source text-conditional image generative models4. To generate images reminiscent of the ImageNet dataset, each ImageNet class was queried using the following template: Footnote 3: [https://github.com/lukemelas/pytorch-pretrained-gans](https://github.com/lukemelas/pytorch-pretrained-gans) Footnote 4: [https://huggingface.co/stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) High quality image of a {class_name}. This procedure was repeated 25 times for each class within the ImageNet-1k label set. Given that a single ImageNet class may have multiple descriptive identifiers, we selected one at random each time. ### Corruptions The term _corruptions_ refers to images that have undergone a range of perceptual perturbations. To simulate this type of distribution shift, we employed four distinct corruptions from ImageNet-C [26]: defocus blur, Gaussian noise, snow, and brightness. All corruptions were implemented at the maximum intensity (5 out of 5) to simulate challenging scenarios where OOD robustness is difficult, thus highlighting the importance of effective detection. Analogous to the approach taken with adversarial perturbations, we implemented two distinct evaluation scenarios: Distribution Shift \begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Clean Acc. & \begin{tabular}{c} White-box \\ \end{tabular} & \begin{tabular}{c} Black-box \\ \end{tabular} \\ \hline RN50 & 74.2\% & 39.3\% & 28.2\% & 68.0\% & 43.3\% \\ ViT & 85.3\% & 0.4\% & 50.8\% & 77.1\% & 65.8\% \\ \hline \hline \end{tabular} \end{table} Table 1: Prediction accuracy of the two evaluated models across the range of perturbation settings examined in our study. Detection (DSD), aiming to identify corrupted images irrespective of model predictions, and Error Detection (ED), which discriminates between incorrectly classified OOD samples and correctly classified in-distribution samples, thus focusing solely on errors introduced by the distribution shift. 
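These two evaluation scenarios can be made concrete with a short sketch. Assuming per-sample detection scores (higher meaning more likely OOD) and per-sample correctness indicators are already available, the DSD and ED AUCs could be computed along the following lines; the function and variable names are ours, and scikit-learn is only one possible choice of tooling.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dsd_auc(scores_id, scores_ood):
    """Distribution Shift Detection: all OOD samples vs. all in-distribution samples."""
    labels = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    scores = np.concatenate([scores_id, scores_ood])
    return roc_auc_score(labels, scores)

def ed_auc(scores_id, correct_id, scores_ood, correct_ood):
    """Error Detection: misclassified OOD samples vs. correctly classified ID samples."""
    pos = scores_ood[~correct_ood]  # errors introduced by the distribution shift
    neg = scores_id[correct_id]     # correctly handled clean samples
    labels = np.concatenate([np.zeros(len(neg)), np.ones(len(pos))])
    scores = np.concatenate([neg, pos])
    return roc_auc_score(labels, scores)

# Example with random placeholders standing in for real detection scores.
rng = np.random.default_rng(0)
s_id, s_ood = rng.normal(0, 1, 1000), rng.normal(1, 1, 1000)
c_id, c_ood = rng.random(1000) < 0.85, rng.random(1000) < 0.4
print(dsd_auc(s_id, s_ood), ed_auc(s_id, c_id, s_ood, c_ood))
```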
### Multiple Labels In this study, we propose CoComageNet, a new benchmark for a type of distribution shift that, to the best of our knowledge, has not been previously investigated within the context of Out-of-Distribution (OOD) detection. We specifically focus on _multiple labels_ samples, which consist of at least two distinct classes from the training set occupying a substantial portion of the image. Consider a classifier trained to differentiate dogs from cats; an image featuring both a dog and a cat side-by-side presents an ambiguous label, and classifying it as either dog or cat would be erroneous. In safety-critical applications, this issue could result in unpredictable outcomes and requires precautionary measures, such as human intervention. For instance, a system tasked with identifying dangerous objects in images could misclassify an image featuring both a knife and a hat as safe by identifying the image as a hat. The CoComageNet benchmark is constructed as a subset of the CoCo dataset [48], specifically, the 2017 training images. We first identify 17 CoCo classes that have equivalent counterparts in ImageNet (please refer to appendix A for a comprehensive list of the selected CoCo classes and their ImageNet equivalents). We then filter the CoCo images to include only those containing at least two different classes among the selected 17. We calculate the total area occupied by each selected class and order the filtered images based on the portion of the image occupied by the second-largest class. The top 2000 images based on this metric constitute CoComageNet. By design, each image in CoComageNet contains at least two distinct ImageNet classes occupying substantial areas. Although CoComageNet was developed to study the detection of multiple label images, it also exhibits other less easily characterized shifts, such as differences in the properties of ImageNet and CoCo images, and the fact that CoComageNet comprises only 17 of the 1000 ImageNet classes. To isolate the effect of multiple labels, we also construct CoComageNet-mono, a similar subset of CoCo that contains only one of the selected ImageNet classes (see appendix A for details). As shown in appendix A, detection performances for all baselines on CoComageNet-mono are near random, demonstrating that detection of CoComageNet is primarily driven by the presence of multiple labels. Finally, to reduce the impact of considering only a subset of ImageNet classes, we evaluate detection methods using in-distribution ImageNet samples from the selected classes only. \begin{table} \begin{tabular}{l c c c c c c c c c|c c c} \hline \hline & \multicolumn{3}{c}{Novel classes} & Adv. 
Attacks & \multicolumn{3}{c}{Synthetic} & \multicolumn{3}{c}{Corruptions} & \multicolumn{3}{c}{Multi-labels} & \multicolumn{3}{c}{Average} \\ & ViT & RN50 & ViT & RN50 & ViT & RN50 & ViT & RN50 & ViT & RN50 & ViT & RN50 \\ \hline CADet \(m_{\text{m}}\) & 20.91 & 66.79 & 67.12 & 62.4 & 59.82 & 55.65 & 79.67 & 87.15 & 54.24 & 56.88 & 56.35 & 65.77 \\ ODIN & 91.73 & 73.58 & 52.29 & 54.44 & 62.74 & 61.49 & 79.68 & 88.52 & 70.75 & 64.46 & 71.44 & 68.5 \\ MAX LOGITS & 95.25 & 73.67 & 59.73 & 59.62 & 66.08 & 57.65 & 83.60 & 90.87 & 71.63 & 62.79 & 75.26 & 68.92 \\ Logits norm & 51.93 & 52.62 & 37.39 & 51.82 & 38.25 & 59.47 & 39.99 & 82.81 & 36.32 & 48.05 & 40.78 & 58.95 \\ MSP & 90.56 & 67.25 & 58.45 & 61.17 & 64.78 & 55.59 & 78.62 & 86.71 & 71.93 & **67.52** & 72.87 & 67.65 \\ MDS\({}_{i}\) & 53.35 & 63.52 & 67.73 & 55.04 & 54.92 & 56.18 & 31.47 & 76.52 & 63.43 & 36.81 & 54.18 & 57.61 \\ MDS\({}_{i}\) & **97.38** & 72.32 & 74.75 & 68.91 & 68.98 & 55.41 & 83.29 & 75.24 & 63.41 & 38.92 & 77.56 & 62.16 \\ MDS\({}_{\text{all}}\) & 89.17 & 72.66 & **85.64** & 71.49 & 72.45 & 60.89 & **95.55** & 89.42 & 26.06 & 30.01 & 73.77 & 64.89 \\ ReAct & 95.47 & 79.70 & 60.71 & 64.16 & 66.03 & 54.24 & 83.67 & 89.82 & 71.97 & 63.91 & 75.53 & 69.83 \\ GraADNorm & 90.85 & 75.53 & 65.17 & 56.52 & 71.19 & 65.87 & 85.00 & 83.99 & 69.59 & 54.45 & 76.56 & 68.29 \\ EBO & 95.52 & 73.8 & 59.72 & 59.59 & 65.91 & 57.72 & 83.83 & 91.14 & 71.27 & 61.55 & 75.25 & 68.76 \\ \(D_{a}\) & 91.27 & 67.95 & 58.62 & 61.44 & 64.95 & 55.65 & 81.57 & 87.43 & 72.49 & 67.15 & 73.78 & 67.92 \\ DiCE & 55.7 & 74.45 & 78.29 & 58.76 & 77.84 & 59.43 & 86.67 & 91.38 & 61.23 & 59.97 & 71.95 & 68.8 \\ ViM (ours) & 95.76 & **81.55** & 56.85 & 62.91 & 61.01 & 53.26 & 79.79 & 87.00 & 68.45 & 49.01 & 72.37 & 66.75 \\ Ens-V (ours) & 94.97 & 94.22 & 82.67 & 74.85 & **78.45** & 60.55 & 92.76 & 91.08 & 73.27 & 53.78 & **84.42** & 71.93 \\ Ens-R (ours) & 95.00 & 80.42 & 80.79 & **75.21** & 76.56 & 62.38 & 92.17 & 90.56 & **74.79** & 60.79 & 83.86 & 73.87 \\ Ens-F (ours) & 95.08 & 79.16 & 79.05 & 69.32 & 75.02 & 59.89 & 91.57 & **91.59** & 72.55 & 61.41 & 82.65 & 72.27 \\ \hline \hline \end{tabular} \end{table} Table 2: Distribution shift detection AUC for Visual Transformer and ResNet-50 across different types of distribution shifts. ## 3 Detection Methods In this study, our focus is predominantly on methods that do not require training or fine-tuning using OOD samples. This consideration closely aligns with real-world applications where OOD samples are typically not known _a priori_. Additionally, the practice of fine-tuning or training on specific types of distribution shifts heightens the risk of overfitting them. **Evaluated Methods:** We assess a the broad OOD detection capabilities of a variety of methods including React[72], ViM[80], GradNorm[34], EBO[50], Dice[71], DOCTOR[23], CADet[24], ODIN[47], and Mahalanobis Distance (MDS)[44]. Furthermore, we explore three statistics widely applied in post-hoc OOD detection: maximum softmax probabilities (MSP), maximum of logits, and logit norm. In the case of CADet, we solely utilize the intra-similarity score \(m_{\text{in}}\) with five transformations to minimize computational demands. For DOCTOR, we employ \(D_{\alpha}\) in the Totally Black Box (TBB) setting, disregarding \(D_{\beta}\) as it is functionally equivalent to MSP in the TBB setting when rescaling the detection threshold is accounted for (resulting in identical AUC scores). 
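Several of the statistics above are simple functions of the classifier logits. Below is a minimal sketch of MSP, maximum of logits, logit norm, and an EBO-style energy score; sign conventions vary across papers, and the helper name is our own.

```python
import torch
import torch.nn.functional as F

def post_hoc_scores(logits: torch.Tensor, temperature: float = 1.0):
    """Simple post-hoc statistics computed from classifier logits of shape (batch, n_classes).
    Here, larger values are taken to indicate 'more in-distribution'."""
    probs = F.softmax(logits, dim=-1)
    msp = probs.max(dim=-1).values         # maximum softmax probability
    max_logit = logits.max(dim=-1).values  # maximum of logits
    logit_norm = logits.norm(dim=-1)       # L2 norm of the logits
    # EBO-style score: negative free energy, i.e. T * logsumexp(logits / T).
    neg_energy = temperature * torch.logsumexp(logits / temperature, dim=-1)
    return {"msp": msp, "max_logit": max_logit,
            "logit_norm": logit_norm, "neg_energy": neg_energy}

# Example on random logits standing in for a real model's outputs.
scores = post_hoc_scores(torch.randn(16, 1000))
```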
Odin typically depends on the fine-tuning of the perturbation budget \(\epsilon\) and temperature \(T\) on OOD samples. To bypass this requirement, we use default values of \(\epsilon=0.0014\) and \(T=1000\). These default parameters, having been tuned on a variety of datasets and models, have demonstrated robust generalization capabilities. Nevertheless, it should be noted that the choice of these values, despite being considered reasonable, does represent a caveat, as they were initially determined by tuning OOD detection of novel classes. In its standard form, the Mahalanobis detection method computes the layer-wise Mahalanobis distance, followed by training a logistic regressor on OOD samples to facilitate detection based on a weighted average of these distances. To eliminate the need for training on OOD samples, we consider three statistics derived from Mahalanobis distances: the Mahalanobis distance on the output of the first layer block (\(\text{MDS}_{l}\)), the Mahalanobis distance on the output of the last layer (\(\text{MDS}_{l}\)), and the Mahalanobis distance on the output of all layers averaged with equal weights (\(\text{MDS}_{\text{all}}\)). For the Vision Transformer (ViT), we focus on MDS on the class token, disregarding patch tokens. **Generative Modeling for Detection:** Consider \(\mathcal{X}\) as a data distribution with a support set denoted as \(X\), and let \(h:X\rightarrow\mathbb{R}^{d}\) be a map that extracts features from a predetermined neural network. The function \(h(x)\) can be defined arbitrarily; for instance, it could be the logits that the network computes on a transformation of a sample \(x\), or the concatenation of outputs from different layers, among other possibilities. However, generative modeling in the input space (i.e., when \(h\) is the identity function) is generally infeasible due to the exceedingly high dimensionality and intricate structure of the data. A generative model of \(h\) is tasked with learning the distribution \(p_{x\sim\mathcal{X}}(h(x))\), using a training set \((x_{i})_{i\leq N}\) that comprises independently sampled instances from \(\mathcal{X}\). Given a test sample \(y\), although it is intractable to directly infer \(p_{x\sim\mathcal{X}}(y=x)\), it is feasible to compute \(p_{x\sim\mathcal{X}}(h(y)=h(x))\), which can then be directly utilized as a detection score. A significant number of detection methods devise heuristic scores on \(h\) with the aim of maximizing detection performances on specific benchmarks, while often arbitrarily discarding information that could potentially be beneficial for other distribution shifts. In contrast, generative models learn an Figure 3: Covariance matrices of detection scores in-distribution for ViT (left) and ResNet-50 (right). estimator of the likelihood of \(h(x)\) without discarding any information. Their detection performances are only constrained by the information extracted by \(h\) and, naturally, their proficiency in learning its distribution. This inherent characteristic makes generative models particularly suitable for broad Out-of-Distribution (OOD) detection. By learning the comprehensive distribution of \(h\), these models negate the bias associated with engineering detection scores against specific distribution shifts. 
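As a simple instance of this idea, one can fit a single Gaussian to in-distribution features \(h(x)\) and use its log-density, equivalently a Mahalanobis-type distance up to constants, as the detection score. The following is a minimal sketch under that assumption, with the features assumed to be already extracted; the regularisation constant and function names are our own.

```python
import numpy as np

def fit_gaussian(feats_train):
    """Fit a single Gaussian to in-distribution features h(x) of shape (N, d)."""
    mean = feats_train.mean(axis=0)
    cov = np.cov(feats_train, rowvar=False) + 1e-6 * np.eye(feats_train.shape[1])
    prec = np.linalg.inv(cov)
    return mean, prec

def gaussian_score(feats_test, mean, prec):
    """Negative squared Mahalanobis distance: higher means 'more in-distribution'."""
    diff = feats_test - mean
    return -np.einsum("nd,de,ne->n", diff, prec, diff)

# Example with random features standing in for penultimate-layer activations.
rng = np.random.default_rng(0)
mean, prec = fit_gaussian(rng.normal(size=(5000, 64)))
scores = gaussian_score(rng.normal(size=(10, 64)), mean, prec)
```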
**Gaussian Mixtures Ensembling:** Gaussian Mixture Models (GMMs) are a versatile tool for learning a distribution of the form: \[x\sim\sum_{i}^{n}\pi_{i}\mathcal{N}(\mu_{i},\Sigma_{i}),\] Where \(n\) is the number of components, \(\pi\), \(\mu\) and \(\Sigma\) are the parameters of the GMM and are learned with the Expectation-Maximization (EM) algorithm. The utilization of GMMs for generative modeling of neural network behaviors to facilitate detection has been previously reported [9]. Methods that are based on the Mahalanobis distance bear similarity to this approach insofar as the layer-wise Mahalanobis score can be interpreted as the likelihood of the layer output for class-dependent Gaussian distributions, which are learned from the training set. Despite these advantages, such methods encounter the formidable challenge of learning generative models of the network's high dimensional representation space, a task made more difficult due to the curse of dimensionality. In response to this challenge, we propose the learning of a Gaussian mixture of the scores computed by existing OOD detection methods. While this approach still relies on heuristic scores, it presents an ensemble method that is able to amalgamate their respective information, while maintaining the dimension of its underlying variables at a significantly low level. As a result, it achieves a favorable tradeoff between the generative modeling of high dimensional feature spaces and the heuristic construction of one-dimensional detection scores. In addition to integrating their detection capabilities, this approach is adept at identifying atypical realizations of the underlying scores, even in situations where the marginal likelihood of each score is high, but their joint likelihood is low. To make our method as general as possible, **we do not assume access to OOD samples to select which scores to use** as variables of our GMM. We present in Figure 3 the covariance matrices of the different scores on a held-out validation set of ImageNet. To minimize redundancy, we avoid picking multiple scores that are highly correlated on clean validation data. To decide between highly correlated scores, we opt for the ones with highest in-distribution error detection performance (see first two columns of Table 3). Moreover, we discard logit norm and \(\text{MDS}_{\text{f}}\) due to their near-random error detection performance in-distribution. Given that score correlation varies between ViTs and ResNets, as evidenced in Figure 3, we derive two distinct sets of scores. We also propose a computationally efficient alternative based on methods with minimal overhead costs: **Ensemble-ViT** (Ens-V) = {GradNorm, ODIN, \(\text{MDS}_{\text{all}}\), \(\text{MDS}_{\text{l}}\), CADet, Dice, MSP, Max logits}, **Ensemble-ResNet** (Ens-R) = {GradNorm, ODIN, \(\text{MDS}_{\text{all}}\), \(\text{MDS}_{\text{i}}\), CADet, ReAct, ViM, \(D_{\alpha}\)}, **Ensemble-Fast** (Ens-F) = {MSP, Max logits, \(\text{MDS}_{\text{all}}\), \(\text{MDS}_{\text{l}}\), EBO}. We employ a held-out validation set of 45,000 samples and discard all that are incorrectly classified to train the GMM. This is essential as misclassified samples may produce atypical values of the underlying scores despite being in-distribution, which is demonstrated by the high in-distribution error detection AUC of selected scores. 
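A minimal sketch of this ensembling step is given below, assuming the individual detection scores have already been computed for each sample; the use of scikit-learn, the default of ten components, and all variable names are our own choices rather than the exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_score_ensemble(val_scores, val_correct, n_components=10):
    """Fit a GMM on the stacked detection scores of correctly classified
    in-distribution validation samples. val_scores has shape (N, n_scores)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0)
    gmm.fit(val_scores[val_correct])
    return gmm

def ensemble_score(gmm, test_scores):
    """Low joint likelihood of the underlying scores indicates an OOD sample."""
    return -gmm.score_samples(test_scores)  # higher = more likely OOD

# Example: 8 underlying scores (e.g., GradNorm, ODIN, MDS variants, CADet, ...).
rng = np.random.default_rng(0)
val = rng.normal(size=(45000, 8))
correct = rng.random(45000) < 0.8
gmm = fit_score_ensemble(val, correct)
ood_scores = ensemble_score(gmm, rng.normal(size=(100, 8)))
```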
Finally, we train the GMM for a number of components \(n\in\{1,2,5,10,20\}\) and select \(n=10\) which maximizes the in-distribution error detection performances (see appendix C). ## 4 Evaluation We assess performance using the widely accepted area under the curve (AUC) metric for two distinct pretrained models: ResNet-50 (RN50) and Vision Transformer (ViT). All evaluations are conducted on a single A100 GPU, with the inference time normalized by the cost of a forward pass (cf. App. B). Our empirical results in the Distribution Shift Detection (DSD) setting, which aims to detect any OOD sample, are presented in Table 2. Results for the error detection setting, where the objective is to detect misclassified OOD samples against correctly classified in-distribution samples, are exhibited in Table 3. The results for each distribution shift type are averaged over the corresponding benchmark. Detailed performances and model accuracy, where applicable, for each dataset are offered in Appendix D. In the error detection setting, we conduct evaluations against adversarial attacks, corruptions, and in-distribution. The latter pertains to predicting classification errors on authentic ImageNet inputs. Please note that error detection is inapplicable to novel classes and multi-labels where correct classifications are undefined, and we do not consider error detection on synthetic images as it lacks clear motivation. **Existing methods:** A striking observation is the inconsistency of recent detection methods in the broad OOD setting. Methods that excel on adversarial attacks tend to underperform on multi-label detection, and vice versa. Each of the baselines exhibits subpar performance on at least one distribution shift, and almost all of them are Pareto-optimal. This underscores the necessity for broader OOD detection evaluations to inform the design of future methods. We observe that while detection performances are generally superior when utilizing a ViT backbone, a finding consistent with previous studies [80], the difference is method-dependent. For instance, MDS\({}_{\text{I}}\) ranks as the best baseline on ViT (when averaged over distribution shift types), but it is the third-worst with a ResNet-50. We further observe that many methods significantly outperform a random choice in the detection of synthetic images, regardless of the generation methods used (see Appendix D). This suggests that despite recent advancements in generative models, the task remains feasible. Interestingly, the performance of various methods relative to others is remarkably consistent between the DSD and error detection settings, applicable to both adversarial attacks and corruptions. This consistency implies a strong correlation between efficiently detecting OOD samples and detecting errors induced by distribution shifts, suggesting that there may not be a need to compromise one objective for the other. **Ensemble:** Our ensemble method surpasses all baselines when averaged over distribution shift types, evident in both the DSD and error detection settings, and consistent across both ViT and ResNet-50 backbones. With the exception of error detection with ResNet-50, where Doctor-alpha yields comparable results, our ensemble method consistently demonstrates significant improvements over the best-performing baselines. Specifically, in the DSD setting, Ens-V and Ens-R secure improvements of 6.86% and 4.04% for ViT and ResNet-50, respectively. 
While the ensemble detection rarely surpasses the best baselines for a specific distribution shift type, it delivers more consistent performances across types, which accounts for its superior averaged AUC. This finding endorses the viability of our approach for broad OOD detection. Despite the notable computational overhead for Ens-V and Ens-R (up to the cost of a forward pass for Ens-V with ResNet-50, as detailed in Appendix B), the inference of Ens-F atop a \begin{table} \begin{tabular}{l c c c c c c||c c} \hline \hline & \multicolumn{2}{c}{In-distribution} & \multicolumn{2}{c}{Adv. Attacks} & \multicolumn{2}{c||}{Corruptions} & \multicolumn{2}{c}{Average} \\ \cline{2-9} & ViT & RN50 & ViT & RN50 & ViT & RN50 & ViT & RN50 \\ \hline CadeT\(m_{\text{in}}\) & 54.63 & 56.50 & 70.02 & 67.85 & 81.55 & 90.11 & 68.73 & 71.49 \\ ODIN & 75.02 & 75.79 & 56.98 & 62.96 & 90.83 & 95.02 & 74.28 & 77.92 \\ Max logit & 80.64 & 77.55 & 67.67 & 68.37 & 95.55 & **96.84** & 81.29 & 80.92 \\ Logit norm & 36.83 & 50.11 & 34.05 & 55.94 & 33.65 & 84.37 & 34.84 & 63.47 \\ MSP & **89.16** & 86.31 & 70.29 & 74.04 & 95.75 & 95.93 & 85.07 & 85.43 \\ MDS\({}_{\text{f}}\) & 48.23 & 51.25 & 68.18 & 56.66 & 30.70 & 76.47 & 49.04 & 61.46 \\ MDS\({}_{\text{I}}\) & 74.92 & 55.53 & 82.98 & 72.39 & 96.39 & 75.85 & 84.76 & 67.92 \\ MDS\({}_{\text{all}}\) & 54.93 & 54.69 & 89.44 & 74.65 & **99.1** & 89.90 & 81.16 & 73.08 \\ REAct & 77.18 & 73.28 & 68.00 & 69.32 & 94.81 & 95.01 & 80.00 & 79.20 \\ GradNorm & 68.01 & 58.07 & 70.92 & 62.49 & 94.00 & 92.73 & 77.64 & 71.10 \\ EBO & 78.35 & 76.02 & 66.63 & 67.63 & 95.01 & 96.61 & 80.00 & 80.09 \\ \(D_{\alpha}\) & 89.00 & **86.50** & 70.37 & 73.97 & 95.96 & 96.26 & 85.11 & 85.58 \\ Dice & 56.91 & 70.00 & 80.6 & 65.71 & 89.13 & 95.53 & 75.55 & 77.08 \\ ViM & 75.72 & 73.74 & 63.37 & 69.83 & 91.64 & 93.03 & 76.91 & 78.87 \\ Ens-V (ours) & 82.61 & 72.81 & **89.52** & 80.47 & 98.27 & 94.73 & 90.13 & 82.67 \\ Ens-R (ours) & 83.69 & 77.24 & 88.17 & **83.79** & 97.84 & 95.92 & 89.90 & 85.65 \\ Ens-F (ours) & 85.84 & 79.20 & 88.72 & 82.60 & 98.41 & 96.49 & **90.99** & **86.10** \\ \hline \hline \end{tabular} \end{table} Table 3: Error detection AUC for Visual Transformer and ResNet-50 in-distribution and on two distribution shifts. forward pass only adds a modest 19% to 25% overhead, thus striking a reasonable balance between cost and performance. Interestingly, Ens-F trails only slightly in terms of performance in the DSD setting. In the error detection setting, Ens-F unexpectedly delivers the best results for both ViT and ResNet. ## 5 Related Work In this work, we study the detection of out-of-distribution (OOD) samples with a broad definition of OOD, encompassing various types of distribution shifts. Our work intersects with the literature in OOD detection, adversarial detection, and synthetic image detection. We also provide a brief overview of uncertainty quantification methods that can be leveraged to detect errors induced by distribution shifts. **Label-based OOD detection** has been extensively studied in recent years under different settings: anomaly detection [8; 64; 69], novelty detection [58; 65], open set recognition [56; 21; 6], and outlier detection [78; 31; 3]. Most existing methods can be categorized as either density-based [45; 37], reconstruction-based [16; 84], classification-based [79; 77] or distance-based [87; 74]. 
Methods can further be divided based on whether they require pre-processing of the input, specific training schemes, external data or can be used a post-processors on any trained model. See Yang et al. [82] for a complete survey. **Adversarial detection** is the task of detecting adversarially perturbed inputs. Most existing methods require access to adversarial samples [2; 90; 52; 59; 53; 4; 57], with some exceptions [32; 5; 24]. Since adversarial training does not transfer well across attacks [36], adversarial detection methods that assume access to adversarial samples are also unlikely to generalize well. Unfortunately, Carlini and Wagner [10] have shown that recent detection methods can be defeated by adapting the attack's loss function. Thus, attacks targeted against the detector typically remain undetected. However, adversarial attacks transfer remarkably well across models [11; 22], which makes deployed systems vulnerable even when the attacker does not have access to the underlying model. Detectors thus make systems more robust by requiring targeted attack designs. **Synthetic image detection** is the detection of images that have been artificially generated. Following the rapid increase in generative models' performances and popularity [66; 62; 68], many works have addressed the task of discriminating synthetic images from genuine ones [49]. They are generally divided between image artifact detection [49; 14; 89] and data-drive approaches [81]. Since generative models aim at learning the genuine distribution, their shortcomings only permit detection. As generative models improve, synthetic images may become indistinguishable from genuine ones. **Uncertainty quantification** (UQ) for deep learning aims to improve the estimation of neural network confidences. Neural networks tend to be overconfident even on samples far from the training distribution [61]. By better estimating the confidence in the network's predictions, uncertainty quantification can help detect errors induced by distribution shifts. See Abdar et al. [1], Kabir et al. [38], Ning and You [63] for overviews of UQ in deep learning. **Detection of multiple types of distribution shifts** has been addressed by relatively few prior works. The closest work in the literature is probably Guille-Escuret et al. [24] and Lee et al. [44] which aims at simultaneously detecting novel classes and adversarial samples. In comparison, this work evaluates detection methods on five different types of distribution shifts. To the best of our knowledge, it is the first time that such broad OOD detection is studied in the literature. ## 6 Conclusion We have evaluated recent OOD detection methods on BROAD, a diversified benchmark spanning 5 different distribution shift types, and found their performances unreliable. Due to the literature focusing on specific distribution shifts, existing methods often fail to detect samples of certain out-of-distribution shifts. To design systems capable of detecting a broad range of unexpected inputs, we have proposed an ensemble method based on Gaussian mixtures to combine the respective strengths of existing detection scores, and found it to obtain significant gains compared to previous works, even when limiting overhead computations to 25%. We encourage future work to consider more varied types of OOD samples for their detection evaluations, so that future methods will not see their success limited to unexpected inputs that are expected.
2306.07996
Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies
The accurate modelling of the Point Spread Function (PSF) is of paramount importance in astronomical observations, as it allows for the correction of distortions and blurring caused by the telescope and atmosphere. PSF modelling is crucial for accurately measuring celestial objects' properties. The last decades brought us a steady increase in the power and complexity of astronomical telescopes and instruments. Upcoming galaxy surveys like Euclid and LSST will observe an unprecedented amount and quality of data. Modelling the PSF for these new facilities and surveys requires novel modelling techniques that can cope with the ever-tightening error requirements. The purpose of this review is three-fold. First, we introduce the optical background required for a more physically-motivated PSF modelling and propose an observational model that can be reused for future developments. Second, we provide an overview of the different physical contributors of the PSF, including the optic- and detector-level contributors and the atmosphere. We expect that the overview will help better understand the modelled effects. Third, we discuss the different methods for PSF modelling from the parametric and non-parametric families for ground- and space-based telescopes, with their advantages and limitations. Validation methods for PSF models are then addressed, with several metrics related to weak lensing studies discussed in detail. Finally, we explore current challenges and future directions in PSF modelling for astronomical telescopes.
Tobias Liaudat, Jean-Luc Starck, Martin Kilbinger
2023-06-12T19:01:50Z
http://arxiv.org/abs/2306.07996v3
Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies ###### Abstract The accurate modelling of the Point Spread Function (PSF) is of paramount importance in astronomical observations, as it allows for the correction of distortions and blurring caused by the telescope and atmosphere. PSF modelling is crucial for accurately measuring celestial objects' properties. The last decades brought us a steady increase in the power and complexity of astronomical telescopes and instruments. Upcoming galaxy surveys like _Euclid_ and LSST will observe an unprecedented amount and quality of data. Modelling the PSF for these new facilities and surveys requires novel modelling techniques that can cope with the ever-tightening error requirements. The purpose of this review is three-fold. First, we introduce the optical background required for a more physically-motivated PSF modelling and propose an observational model that can be reused for future developments. Second, we provide an overview of the different physical contributors of the PSF, including the optic- and detector-level contributors and the atmosphere. We expect that the overview will help better understand the modelled effects. Third, we discuss the different methods for PSF modelling from the parametric and non-parametric families for ground- and space-based telescopes, with their advantages and limitations. Validation methods for PSF models are then addressed, with several metrics related to weak lensing studies discussed in detail. Finally, we explore current challenges and future directions in PSF modelling for astronomical telescopes. point spread function, inverse problems, weak gravitational lensing, image processing, super-resolution. ## 1 Introduction Any astronomical image is observed through an optical system that introduces deformations and distortions. Even the most powerful imaging system introduces distortions to the observed object. How to characterise these distortions is a subject of study known as PSF modelling. Specific science applications, like weak gravitational lensing (WL) in cosmology (see Kilbinger (2015); Mandelbaum (2018) for reviews), require very accurate and precise measurements of galaxy shapes. A crucial step of any weak lensing mission is to estimate the PSF at any position of the observed images. If the PSF is not considered when measuring galaxy shapes, the measurement will be biased, resulting in unacceptably biased WL studies. Furthermore, the PSF can be the predominant source of systematic errors and biases in WL studies. This fact makes PSF modelling a vital task. Forthcoming astronomical telescopes, such as the _Euclid_ space telescope (Laureijs et al., 2011), the _Roman_ space telescope (Spergel et al., 2015; Akeson et al., 2019), and the Vera C. Rubin Observatory (LSST Science Collaboration et al., 2009; Ivezic et al., 2019), raise many challenges for PSF models as the instruments are getting more complex and the imposed scientific requirements tighter. These factors have triggered and continue to trigger developments in the PSF modelling literature. PSF modelling is an interdisciplinary problem which requires knowledge of optics, inverse problems, and the target science application, in our case, weak gravitational lensing studies. The objective is to estimate the PSF at target positions, e.g., galaxy positions, from degraded star observations and complementary sources of information. Figure 1 shows an illustration of the problem. 
The PSF modelling problem is challenging as the model should account for the different variations of the PSF in the field of view, i.e., spatial, spectral and temporal. This review is related to these three scientific fields, discusses in detail the PSF and aims to help understand the different PSF modelling choices. We start by introducing optical concepts required to analyse optical imaging systems that are required to understand the more physically-based PSF models in section 2. Then, motivated by the optical introduction, we describe the adopted general observational model in section 3. section 4 introduces the different contributors to the PSF at the optical and detector level. section 5 gives an overview of state-of-the-art PSF modelling techniques and leads to section 6, which includes comments on the desirable properties of a PSF model. We end the review by describing different techniques for validating PSF models in section 7 and concluding in section 8. In addition, we include Table 1, which summarizes the notation and the different coordinates used throughout this article. Figure 1: An illustration of a field of view showing the PSF modelling problem. First, the PSF model should be estimated from the stars. The model should then be used to estimate the PSF at the target positions, e.g., galaxy positions. ## 0.2 Introduction to optics A rigorous treatment of the optics involved in the formation of the PSF on complex optical systems could be the sole topic of a review article. In this section we introduce simplified optical concepts for understanding the PSF, how to model it, and certain implicit assumptions usually adopted. This review follows the optic formalism of Goodman (2005). For a profound and rigorous description of optical theory, we refer the reader to the seminal book of Born and Wolf (1999) or the more concise work seen in Gross (2005). We refer to Schmidt (2010) for more information on practical wave propagation. If the reader is familiar with the Fourier optics literature, we recommend continuing to section 0.3. ### Scalar diffraction theory When studying the PSF, we are examining how an optical system with a specific instrument contributes to and modifies our observations. To understand how the optical system interacts with the propagation of light, we need to dig into the nature of light, an electromagnetic (EM) wave. To make a fundamental analysis, one would need to use Maxwell's equations, solve them with the optical system under study, and obtain the electric and magnetic fields. Solving a set of coupled partial differential equations is an arduous task. Several approximations can be made, if some conditions are met, to alleviate the mathematical burden of solving Maxwell's equations without introducing much error into the analysis. Diffraction theory provides a fundamental framework for analysing light propagation through an optical system. This is especially the case when working with EM waves in the optical range when the optical image is close to the focus region. The _Huygens-Fresnel principle_(Huygens, 1690; Fresnel, 1819; Crew et al., 1900) states that every point of a wavefront may be considered a secondary disturbance giving rise to spherical wavelets. At any later instant, the wavefront may be regarded as the envelope of all the disturbances. Fresnel's contribution to the principle is that the secondary wavelets mutually interfere. This principle provides a powerful method of analysis of luminous wave propagation. 
In Figure 1(a), the propagation of an incident plane wavefront through an obstacle, a single slit, is shown. The secondary \begin{table} \begin{tabular}{c l} \hline Variable & Description \\ \hline \multicolumn{2}{c}{_Coordinates_} \\ \hline \((x,y)\) & Pupil plane or output aperture plane coordinates \\ \((u,v)\) & Image or focal plane coordinates \\ \((\xi,\eta)\) & Object plane coordinates \\ \((\bar{u},\bar{v})\) & Pixel coordinates, the discrete counterpart of the image plane \\ \(\mathbf{p}_{i}\) & 3D spatial coordinate \\ \(\lambda\) & Wavelength \\ \(t\) & Time \\ \hline \multicolumn{2}{c}{_Notation_} \\ \hline \(\mathcal{I},\mathcal{H},\ldots\) & Calligraphic uppercase variables are continuous functions \\ \(I,H,\ldots\) & Uppercase variables are matrices \\ \(c_{m},b_{1}^{k},\ldots\) & Lowercase variables are scalars \\ \(I_{\mathrm{img}}(\bar{u},\bar{v};t|u_{i},v_{i})\in\mathbb{R}\) & Pixel value at position \((\bar{u},\bar{v})\) for the image \(I_{\mathrm{img}}\) with its \\ & centroid at position \((u_{i},v_{i})\) observed at time \(t\). \\ \(I_{\mathrm{img},(\cdot|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\) & Observed image with its centroid at position \((u_{i},v_{i})\) \\ \hline \end{tabular} \end{table} Table 1: Coordinates and notation used throughout this article. wavelets constitute the plane wavefront before the obstacle. Then, the wavefront shape is modified due to the obstacle, following the Huygens-Fresnel principle. Figure 2: **(a) Illustration of the Huygens-Fresnel principle and the modification of a wavefront due to an obstacle. Reproduced from Liaudat (2022). (b) Illustration of the different diffraction regions behind an aperture.** The secondary waves mutually interfere constructively or destructively, according to their phases. The analysis of the light propagation in a homogeneous medium is simple as the spherical wavelets interfere without obstacles, and the total wavefront propagates spherically in the medium. However, suppose the wave encounters an obstacle. In that case, the secondary waves in the vicinity of the boundaries of the obstacle will interfere in ways that are not obvious from the incident wavefront. Gustav Kirchhoff was a pioneer in providing a solid mathematic foundation to the _Huygens-Fresnel principle_ using Green's theorem (Kirchhoff, 1883). He started by deriving the _integral theorem by Kirchhoff_, which expresses the solution of the homogeneous wave equation at an arbitrary point in terms of the values of the solution and its first derivative at all points on an arbitrary closed surface surrounding the point (Born and Wolf, 1999, SS4.3.1). Then, he studied the diffraction by a planar screen, a simple setup which allows making assumptions simplifying the integral theorem. The assumptions are commonly known as the _Kirchhoff boundary conditions_(Kirchhoff, 1883). Although accurate in practice, these boundary conditions were proved inconsistent and were eliminated Sommerfeld (1896), giving origin to the _Rayleigh-Sommerfeld diffraction theory_(Rayleigh, 1897; Sommerfeld, 1964; Born and Wolf, 1999, SS8.11) that we will continue to describe below. Let us consider a diffractive aperture in a plane \((x,y)\) illuminated in the positive \(z\) direction. We will study the diffracted wave in a parallel plane \((u,v)\) at a normal distance \(z\) from the first plane. The \(z\)-axis is orthogonal to both planes and intersects them at their origins. Figure 3 illustrates the coordinate system described above. 
The diffracted wave is written as \[\mathcal{U}(\mathbf{p}_{1})=\frac{z}{\mathrm{j}\lambda}\iint_{\Sigma}\mathcal{U}(x,y;0)\frac{\exp\left[\mathrm{j}\,k\,r_{01}\right]}{r_{01}^{2}}\,\mathrm{d}x\mathrm{d}y\,, \tag{1}\] where \(\mathrm{j}\) denotes the imaginary unit, \(\lambda\) is the wavelength, \(k=2\pi/\lambda\), \(\mathbf{p}_{0}=(x_{0},y_{0};0)\), \(\mathbf{p}_{1}=(u_{1},v_{1};z)\), \(r_{01}=\|\mathbf{p}_{1}-\mathbf{p}_{0}\|_{2}\), \(\Sigma\) is the aperture in the \((x,y)\) plane, and \(\mathcal{U}\) is the electric field. The incident wave is \(\mathcal{U}(\mathbf{p}_{0})\) and the diffracted wave is \(\mathcal{U}(\mathbf{p}_{1})\).

Figure 3: Illustration of the coordinate system for the diffraction equations. Figure adapted from Liaudat (2022).

There are two approximations in the derivation of Equation 1. The first approximation is that we are considering a _scalar theory of diffraction_, a scalar electric and magnetic field, and not the fields in their complete vectorial form. The second approximation is that the observation distance is much greater than the wavelength, i.e., \(r_{01}\gg\lambda\). The scalar theory provides a full description of the EM fields in a dielectric medium that is linear, isotropic, homogeneous, and non-dispersive. However, even if the medium verifies these properties, if some boundary conditions are imposed on a wave, e.g., an aperture, some coupling is introduced between the EM field components and the scalar theory is no longer exact. Nevertheless, the EM fields are modified only at the edges of the aperture, and the effects extend over only a few wavelengths into the aperture. Therefore, if the aperture is large compared to the wavelength, the error introduced by the scalar theory is negligible. Refractive optical elements can also induce polarisation of the EM field. The level of accuracy desired will determine if the bias introduced can be neglected or needs to be taken into account. Although the current formulation is powerful in representing the diffraction phenomena, it is still challenging to work with the integral from Equation 1. As a consequence, we will explore further approximations to the _Rayleigh-Sommerfeld diffraction theory_ that will give origin to the _Fresnel diffraction_ and _Fraunhofer diffraction_.

### The Fresnel approximation

The Fresnel approximation is based on the binomial expansion of the square root in the expression \(\sqrt{1+b}\) for some \(b\), i.e., \(\sqrt{1+b}=1+\frac{1}{2}b-\frac{1}{8}b^{2}+\cdots\). The distance \(r_{01}\) can be expressed as \[r_{01}=z\sqrt{1+\left(\frac{u_{1}-x_{0}}{z}\right)^{2}+\left(\frac{v_{1}-y_{0}}{z}\right)^{2}}\,, \tag{2}\] which can be approximated, using the first two terms of the binomial expansion, as \[r_{01}\approx z\left(1+\frac{1}{2}\left(\frac{u_{1}-x_{0}}{z}\right)^{2}+\frac{1}{2}\left(\frac{v_{1}-y_{0}}{z}\right)^{2}\right)\,. \tag{3}\] The \(r_{01}\) appearing in the exponential of Equation 1 has much more influence on the result than the \(r_{01}^{2}\) in the divisor. Therefore, we use Equation 3 to approximate \(r_{01}\) in the exponential, and for the divisor we approximate \(r_{01}^{2}\approx z^{2}\).
Then, we can express the diffracted field as \[\mathcal{U}(u_{1},v_{1};z)=\frac{\exp\left[\mathsf{j}kz\right]}{\mathsf{j} \lambda z}\iint_{\Sigma}\mathcal{U}(x,y;0)\exp\left[\mathsf{j}\,\frac{k}{2z} \left[\left(u_{1}-x\right)^{2}+\left(v_{1}-y\right)^{2}\right]\right]\, \mathrm{d}x\mathrm{d}y\,, \tag{4}\] and if we expand the terms in the exponential, we get \[\mathcal{U}(u_{1},v_{1};z) =\frac{\exp\left[\mathrm{j}\mathrm{k}z\right]}{\mathrm{j}\lambda z} \exp\left[\mathrm{j}\,\frac{k}{2z}\left(u_{1}^{2}+v_{1}^{2}\right)\right] \tag{5}\] \[\iint_{\Sigma}\left\{\mathcal{U}(x,y;0)\exp\left[\mathrm{j}\, \frac{k}{2z}\left(x^{2}+y^{2}\right)\right]\right\}\exp\left[-\mathrm{j}\, \frac{2\pi}{\lambda z}\left(u_{1}x+v_{1}y\right)\right]\mathrm{d}x\mathrm{d}y\,.\] The Fourier transform (FT) expression can be recognised with some multiplicative factors in Equation 5. The diffracted wave is the FT of the product of the incident wave and a quadratic phase exponential. In this case, we have approximated the spherical secondary waves of the Huygens-Fresnel principle by parabolic wavefronts. The approximation in the Fresnel diffraction formula is equivalent to the _paraxial approximation_. This last approximation consists of a _small-angle approximation_ as it restricts the rays to be close to the optical axis. This restriction also allows us to approximate Equation 2 with Equation 3. The region where the approximation is valid is known as the _region of Fresnel diffraction_. In this region, the major contributions to the integral come from points \((x,y)\) for which \(x\approx u\) and \(y\approx v\), meaning that the higher-order terms in the expansion that we are not considering are unimportant. The _region of Fresnel diffraction_ can be seen as the coordinates \((u,v,z)\) that verify \[z^{3}\gg\frac{\pi}{4\lambda}\left(\left(u-x\right)^{2}+\left(v-y\right)^{2} \right)^{2}\,,\quad\forall(x,y)\in\Sigma. \tag{6}\] ### The Fraunhofer approximation We continue to present a further approximation that, if valid, can significantly simplify the calculations. The Fraunhofer approximation assumes that the exponential term with a quadratic dependence of \((x,y)\) is approximately unity over the aperture. The region where the approximation is valid is the _far field_ or _Fraunhofer region_. Figure 2b illustrates the different diffraction approximations as a function of the aperture's distance. The required condition to be in this region reads \[z\gg\frac{k\left(x^{2}+y^{2}\right)}{2}\,\quad\forall(x,y)\in\Sigma. \tag{7}\] The Fraunhofer diffraction formula is given by \[\mathcal{U}(u_{1},v_{1};z)=\frac{\exp\left[\mathrm{j}\mathrm{k}z\right]}{ \mathrm{j}\lambda z}\exp\left[\mathrm{j}\,\frac{k}{2z}\left(u_{1}^{2}+v_{1}^{2 }\right)\right]\iint_{\Sigma}\left\{\mathcal{U}(x,y;0)\right\}\exp\left[- \mathrm{j}\,\frac{2\pi}{\lambda z}\left(u_{1}x+yv_{1}\right)\right]\,\mathrm{ d}x\mathrm{d}y\,, \tag{8}\] where we can reformulate the previous equation using the FT as follows \[\mathcal{U}(u_{1},v_{1};z)=\frac{\exp\left[\mathrm{j}\mathrm{k}z\right]}{ \mathrm{j}\lambda z}\exp\left[\mathrm{j}\,\frac{k}{2z}\left(u_{1}^{2}+v_{1}^{ 2}\right)\right]\text{FT}\left\{i_{\Sigma}(x,y;0)\mathcal{U}(x,y;0)\right\} \left(\frac{u_{1}}{\lambda z},\frac{v_{1}}{\lambda z}\right)\,, \tag{9}\] where \(i_{\Sigma}\) is an indicator function over the aperture taking values in \(\{0,1\}\). It is also possible to consider image vignetting and multiply the indicator with a weight function so that the resulting function takes values in \([0,1]\). 
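To make the Fraunhofer result of Equations 9 and 10 more tangible, the following minimal numpy sketch computes the far-field intensity pattern of a circular aperture as the squared modulus of its Fourier transform. The grid size, aperture radius and normalisation are illustrative assumptions and are not tied to any particular instrument.

```python
import numpy as np

# Minimal sketch of Equation 10: in the Fraunhofer region the intensity is the
# squared modulus of the Fourier transform of the aperture function i_Sigma.
# Grid size and aperture radius are arbitrary illustrative values.
n = 512
x = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(x, x)

aperture = ((xx**2 + yy**2) <= 0.25).astype(float)   # circular aperture of radius 0.5

# Centre the aperture for the FFT, transform, and shift the result back.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(field) ** 2                        # Airy-like diffraction pattern
intensity /= intensity.max()                          # peak normalised to unity

print(intensity.shape, float(intensity.max()))
```

The central lobe and surrounding rings of this pattern correspond to the Airy profile expected for an unobscured circular pupil.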
Cameras are sensitive to the light's intensity reaching their detectors. The instantaneous intensity of an EM wave is equal to its squared absolute value. Therefore, we can write the intensity of the diffracted wave as \[\mathcal{I}(u_{1},v_{1};z)=\left|\mathcal{U}(u_{1},v_{1};z)\right|^{2}=\frac{1}{\lambda^{2}z^{2}}\left|\text{FT}\left\{i_{\Sigma}(x,y;0)\mathcal{U}(x,y;0)\right\}\left(\frac{u_{1}}{\lambda z},\frac{v_{1}}{\lambda z}\right)\right|^{2}\,, \tag{10}\] which is significantly simpler than the original Rayleigh-Sommerfeld expression from Equation 1.

### The effect of lenses

The study of the diffraction phenomena is necessary but not sufficient to describe the effects of an optical system. Optical imaging systems are generally based on lenses or mirrors, which have the ability to form images. To simplify the analysis we study the effect of a single positive (converging) lens illuminated with monochromatic illumination and compute the impulse response of such a system. The coordinate system used for this analysis is shown in Figure 4.

Figure 4: Illustration of the coordinate system of the imaging systems we are studying. The central imaging system can be a single positive lens or the generalised _black box_ concept of an imaging system. The image plane coordinates are \((u,v)\), the input and output aperture plane coordinates are \((x,y)\), and the object plane coordinates are \((\xi,\eta)\). This figure has been adapted from Goodman (2005).

Let us write the output wave as a function of the input wave using the superposition integral as follows \[\mathcal{U}_{\mathrm{img}}(u,v)=\iint_{-\infty}^{+\infty}\mathcal{H}(u,v;\xi,\eta)\,\mathcal{U}_{\mathrm{obj}}(\xi,\eta)\mathrm{d}\xi\mathrm{d}\eta\,, \tag{11}\] where \(\mathcal{H}\) is the field's value at image coordinates \((u,v)\) due to a unitary point-source object at position \((\xi,\eta)\). We describe a monochromatic input wave reaching the entrance pupil of the lens coming from a point source located at \((\xi,\eta)\) in the object plane, which is located at a distance \(z_{o}\) from the lens. Following the _paraxial approximation_ we can write the wave at the entrance pupil as follows \[\mathcal{U}_{l}(x,y)=\frac{1}{\mathrm{j}\lambda z_{o}}\exp\left[\mathrm{j}\frac{k}{2\,z_{o}}\left((x-\xi)^{2}+(y-\eta)^{2}\right)\right]\,, \tag{12}\] and the wave at the output pupil as follows \[\mathcal{U}_{l^{\prime}}(x,y)=\mathcal{U}_{l}(x,y)\mathcal{P}(x,y)\exp\left[-\mathrm{j}\frac{k}{2f}\left(x^{2}+y^{2}\right)\right]\,, \tag{13}\] where \(f\) is the focal length of the lens and \(\mathcal{P}\) is the pupil function of the lens, which accounts for the finite dimension of the lens, i.e., the obscured and unobscured areas. We have implicitly assumed that the pupil function is constant for any \((u,v)\) position considered. This assumption does not hold for wide-field imagers where there are obscurations involved in the pupil function. We continue by using the Fresnel diffraction formula from subsection 2.2 to compute the diffraction effect from the lens' exit pupil to the image plane.
Replacing \(\mathcal{U}\) in Equation 5 with the output lens wave \(\mathcal{U}_{l^{\prime}}\) to compute the impulse response, we obtain \[\mathcal{H}(u,v;\xi,\eta)=\frac{1}{\lambda^{2}z_{o}z_{i}}\overbrace{\exp\left[\mathrm{j}\frac{k}{2\,z_{i}}\left(u^{2}+v^{2}\right)\right]}^{\text{(II)}}\,\overbrace{\exp\left[\mathrm{j}\frac{k}{2\,z_{o}}\left(\xi^{2}+\eta^{2}\right)\right]}^{\text{(III)}} \tag{14}\] \[\iint_{-\infty}^{\infty}\mathcal{P}(x,y)\underbrace{\exp\left[\mathrm{j}\frac{k}{2}\left(\frac{1}{z_{o}}+\frac{1}{z_{i}}-\frac{1}{f}\right)\left(x^{2}+y^{2}\right)\right]}_{\text{(I)}}\exp\left[-\mathrm{j}k\left(\left(\frac{\xi}{z_{o}}+\frac{u}{z_{i}}\right)x+\left(\frac{\eta}{z_{o}}+\frac{v}{z_{i}}\right)y\right)\right]\mathrm{d}x\mathrm{d}y.\] The previous formula for the impulse response of a positive lens is hard to exploit in a practical sense due to the quadratic phase terms. However, several approximations can be exploited to remove them:

* We start studying the term (I) inside the integrand. We consider the image plane to coincide with the focal plane, i.e., \(z_{i}=f\), and the imaged object to be very far away from the entrance pupil. Consequently, the term (I) is approximately one. The part of the exponent which is close to zero is the following one \[\frac{1}{z_{o}}+\frac{1}{z_{i}}-\frac{1}{f}\approx 0,\] (15) which, in the case of equality, is known as the _lens law_ of geometrical optics.
* The term (II) only depends on the image coordinates \((u,v)\). The term can be ignored as we are interested in the intensity distribution of the image and it is not being integrated in Equation 11.
* The term (III) depends on the object coordinates, is integrated in the convolution operation in Equation 11 and might therefore significantly change the imaged object. We can neglect the influence of this term if its phase changes by a small amount, i.e., a small fraction of a radian, within the region of the object that mostly contributes to the image position \((u,v)\). A deeper discussion about the validity of the term (III) approximation can be found in Goodman (2005, §5.3.2) and references therein.

We can now apply the previous approximations to the calculation of the impulse response of an optical system with a positive lens. Under Fresnel diffraction, we simplify Equation 14 to obtain \[\mathcal{H}(u,v;\tilde{\xi},\tilde{\eta})\approx\frac{1}{\lambda^{2}z_{o}f}\iint_{-\infty}^{+\infty}\mathcal{P}(x,y)\exp\left[-\mathrm{j}\frac{2\pi}{\lambda f}\left((u-\tilde{\xi})x+(v-\tilde{\eta})y\right)\right]\mathrm{d}x\mathrm{d}y\,, \tag{16}\] where \(m=-f/z_{o}\) is the magnification of the system, which can be positive or negative depending on whether the image is inverted or not, and the normalised (or reduced) object-plane coordinates are \(\tilde{\xi}=m\,\xi\) and \(\tilde{\eta}=m\,\eta\). The diffraction pattern is centred on the image coordinates, \(u=m\,\xi\) and \(v=m\,\eta\), which are the transformed coordinates of the impulse response's position \((\xi,\eta)\). The impulse response obtained in Equation 16 is Fraunhofer's diffraction pattern centred at \((u=\tilde{\xi},v=\tilde{\eta})\), up to a scaling factor of \(1/\lambda z_{o}\). This result is a consequence of the choice of \(z_{i}\), such that it verifies the lens law, allowing us to drop the quadratic phase terms in the integral. We have obtained a simple formulation for the impulse response, but the optical system we studied is not used in practice to carry out galaxy imaging surveys.
We need to extend the analysis to more general optical systems.

### Analysis of a general optical imaging system

Let us now analyse a general optical imaging system composed of one or many lenses or mirrors of possibly different characteristics. We treat the optical system as a _black box_ characterised by the transformations applied to an incident _object_ scalar wave, \(\mathcal{U}_{\rm obj}\), into an output _image_ wave, \(\mathcal{U}_{\rm img}\). Figure 4 illustrates the new interpretation of the optical system, where we have replaced the previous single lens system with a _black box_. In this general model, we assume that the effect of the optical system between the entrance and exit pupils is well described by geometrical optics, which is an affine transformation. We also assume that all the diffraction effects can be associated with one of the two pupils, input or output (see Goodman (2005, §6.1) for more discussion on both assumptions). We choose the latter one and consider the diffraction of the output wave between the output pupil and the image plane. For the moment, our analysis continues to assume an ideal monochromatic illumination. The _ideal image_, \(\mathcal{U}_{\rm g}\), is defined as the input image after applying the effect of the geometrical optics inside the _black box_ and writes \[\mathcal{U}_{\rm g}\left(\tilde{\xi},\tilde{\eta}\right)=\frac{1}{|m|}\mathcal{U}_{\rm obj}\left(\frac{\tilde{\xi}}{m},\frac{\tilde{\eta}}{m}\right)\;,\quad\mbox{and}\quad\tilde{\xi}=m\xi,\quad\tilde{\eta}=m\eta\,, \tag{17}\] where \(m\) is the magnification factor of the optical system, and we expressed the images in _reduced coordinates_. Our analysis is based on the impulse response developed in the previous section. The approximations applied and the use of reduced object coordinates have made the system spatially invariant. This fact translates to having \(\mathcal{H}(u,v;\tilde{\xi},\tilde{\eta})=\mathcal{H}(u-\tilde{\xi},v-\tilde{\eta})\), as the approximated impulse response from Equation 16 depends only on the difference between the image coordinates and the reduced object coordinates. The impulse response writes \[\mathcal{H}(u-\tilde{\xi},v-\tilde{\eta})=\frac{a}{\lambda f}\iint_{-\infty}^{+\infty}\mathcal{P}(x,y)\exp\left[-\mathrm{j}\frac{2\pi}{\lambda f}\left((u-\tilde{\xi})x+(v-\tilde{\eta})y\right)\right]\mathrm{d}x\mathrm{d}y\,, \tag{18}\] where \(a\) is a constant amplitude that does not depend on the optical system under study. The superposition integral in Equation 11 relates the waves at the object and image positions with the impulse response in a _spatially variant_ system. However, if the system is _spatially invariant_, the equation can be reformulated as the convolution equation, which writes \[\mathcal{U}_{\rm img}(u,v)=\iint_{-\infty}^{+\infty}\mathcal{H}(u-\tilde{\xi},v-\tilde{\eta})\,\mathcal{U}_{\rm g}(\tilde{\xi},\tilde{\eta})\;\mathrm{d}\tilde{\xi}\,\mathrm{d}\tilde{\eta}\,. \tag{19}\] The previous equation can be rewritten with the usual convolution notation as \[\mathcal{U}_{\text{img}}(u,v)=\left(\mathcal{U}_{\text{g}}\star\mathcal{H}\right)\left(u,v\right). \tag{20}\] In this general case of a system without aberrations and under the aforementioned approximations, we see that the output image is formed by a geometrical-optics transformation followed by a convolution with an impulse response from the Fresnel diffraction of the exit aperture.
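As an illustration of why the spatial invariance obtained above matters in practice, the toy numpy sketch below evaluates the superposition integral of Equation 11 by direct summation for a shift-invariant kernel and checks that it agrees with the FFT-based convolution of Equation 20. The grid sizes and the Gaussian impulse response are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy check that, for a shift-invariant impulse response, the superposition
# integral (Equation 11) reduces to a convolution (Equation 20).
rng = np.random.default_rng(0)
n, m = 16, 7                                   # object grid and kernel size (toy values)
obj = rng.random((n, n))                       # ideal image U_g on a small grid

yy, xx = np.mgrid[-(m // 2):m // 2 + 1, -(m // 2):m // 2 + 1]
h = np.exp(-(xx**2 + yy**2) / 2.0)             # shift-invariant impulse response H
h /= h.sum()

# Direct superposition: each object sample deposits a shifted copy of H.
direct = np.zeros((n + m - 1, n + m - 1))
for i in range(n):
    for j in range(n):
        direct[i:i + m, j:j + m] += obj[i, j] * h

# The FFT-based convolution gives the same result up to floating-point error.
print(np.allclose(direct, fftconvolve(obj, h, mode="full")))   # True
```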
#### 2.5.1 Introducing optical aberrations

In the previous development, we considered an ideal optical system without any aberrations, known as _diffraction-limited_. An aberrated optical system produces the imperfect convergence of rays, which can be expressed equivalently in wavefront space by deviations from the ideal reference sphere. The aberrations produce leads and lags in the wavefront with respect to the ideal sphere, see Figure 5. A complementary interpretation, from Goodman (2005), is that we start with the previous diffraction-limited system producing converging spherical wavefronts. Then, we add a phase-shifting plate representing the system's aberrations. The plate is located in the aperture after the exit pupil and affects the output wave's phase. To characterise the aberrations, we will use the _generalised pupil function_ that generalises the pupil function \(\mathcal{P}\) from Equation 18 and writes \[\mathcal{P}_{\text{gen}}(x,y;u,v)=\mathcal{P}(x,y;u,v)\exp\left[\text{j}\frac{2\pi}{\lambda}\mathcal{W}(x,y;u,v)\right]\,, \tag{21}\] where \(\lambda\) is the central wavelength of the incident wave, \(\mathcal{P}\) is the pupil function including the telescope's obscurations, and \(\mathcal{W}\) represents the optical path differences (OPD) between a perfectly spherical and the aberrated wavefront. We also refer to the OPD as the wavefront error (WFE). Figure 5 illustrates the concept of WFE. It is common to represent the WFE using a Zernike polynomial decomposition (Noll, 1976) as they are orthogonal in the unit disk and we generally use circular apertures in telescopes and optical systems in general. Figure 6b shows the first Zernike polynomials.

Figure 5: Illustration of the wavefront error in a one-dimensional projection of an ideal setting where the optical system is represented as a single lens. Figure reproduced from Liaudat et al. (2023).

The aberrations, \(\mathcal{W}\), and the pupil function, \(\mathcal{P}\), depend on the object's position in the focal plane as is seen in the \((u,v)\) coordinate dependence in Equation 21. Large telescopes with wide focal planes have spatially varying aberrations. The path travelled by the light rays changes considerably between distant points in the focal plane, changing the aberrations, \(\mathcal{W}\), too. The obscurations and the aperture, represented by the pupil function \(\mathcal{P}\), also change with the focal plane position. For example, Figure 6a illustrates the obscurations from the _Euclid_ telescope. One can notice a circular aperture with several obscurations in it, a secondary mirror and three arms supporting the mirror. What we observe in Figure 6a is a 2D projection of a 3D structure. This projection changes as a function of the focal plane position we are analysing, making the function \(\mathcal{P}\) dependent on the \((u,v)\) coordinates. In the impulse response of the optical system without aberrations from Equation 18, we had a spatially invariant system. This invariance allowed us to use the convolution rather than the superposition integral, which is a computationally practical formulation. If we now consider aberrations, we must inject the generalised pupil function appearing in Equation 21 into the impulse response formula from Equation 18. The addition of the \((u,v)\) dependency in \(\mathcal{P}_{\text{gen}}\) makes the impulse response \(\mathcal{H}\) spatially variant again.
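The following short numpy sketch illustrates Equation 21 for a single focal-plane position: an unobscured circular pupil is multiplied by a phase factor built from a defocus-like wavefront error, and the corresponding aberrated intensity PSF is obtained as the squared modulus of its Fourier transform (anticipating Equation 34). The wavelength, the 50 nm defocus coefficient and the grid size are illustrative assumptions.

```python
import numpy as np

# Sketch of the generalised pupil function (Equation 21) at a fixed (u_i, v_i)
# and the resulting aberrated intensity PSF. All numerical values are toy choices.
n = 256
x = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(x, x)
rho2 = xx**2 + yy**2

pupil = (rho2 <= 1.0).astype(float)              # unobscured circular pupil P
lam = 700e-9                                     # central wavelength [m] (illustrative)
wfe = 50e-9 * (2.0 * rho2 - 1.0) * pupil         # defocus-like OPD map W (Zernike Z4 shape)

pupil_gen = pupil * np.exp(1j * 2.0 * np.pi / lam * wfe)   # generalised pupil P_gen

psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_gen)))) ** 2
psf /= psf.sum()                                 # intensity PSF normalised to unit flux
print(psf.shape, float(psf.max()))
```

Increasing the amplitude of the OPD map broadens the core and redistributes flux into the wings, which is the qualitative behaviour expected from an aberrated system.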
The impulse response \(\mathcal{H}\), the main topic of this review, is strongly spatially variant in systems with a large focal plane. Nevertheless, we can consider \(\mathcal{H}\) spatially invariant in its _isoplanatic region_. This region consists of close-by points in the focal plane, where the light has travelled similar paths, giving only small deviations of \(\mathcal{H}\). We are assuming a certain regularity in \(\mathcal{H}\), due to the optical system under study, that allows the deviations to be small. In other words, we consider \(\mathcal{H}\) to be locally spatially invariant or spatially invariant in patches. Figure 7 illustrates the idea of an _isoplanatic region_. This local invariance assumption limits the size of the imaged objects under study, as they should have a certain size range with respect to the support of \(\mathcal{H}\) so that all the objects being imaged lie within the aforementioned region.

Figure 6: **(a)** An illustration of _Euclid_'s pupil function, which can be seen in Venancio et al. (2020), in the \((x,y)\) plane for a given position in the \((\xi,\eta)\) plane. **(b)** Example of the first Zernike polynomial maps.

Figure 7: Illustration of isoplanatic regions. Two rays from the same isoplanatic region travel through almost the same turbulence and suffer almost the same distortions. Figure reproduced from [13].

We consider the generalised pupil function evaluated at the centroid of the imaged object, \((u_{i},v_{i})\), and denote the locally spatially invariant generalised pupil function as follows \[\mathcal{P}^{*}_{\text{gen}}(x,y|u_{i},v_{i})=\mathcal{P}^{*}(x,y|u_{i},v_{i})\exp\left[\text{j}\frac{2\pi}{\lambda}\mathcal{W}^{*}(x,y|u_{i},v_{i})\right]\,. \tag{22}\] Injecting Equation 22, instead of Equation 21, into the impulse response in Equation 18 gives \[\mathcal{H}(u-\tilde{\xi},v-\tilde{\eta}|u_{i},v_{i})=\frac{a}{\lambda f}\iint_{-\infty}^{+\infty}\mathcal{P}^{*}(x,y|u_{i},v_{i})\exp\left[\text{j}\frac{2\pi}{\lambda}\mathcal{W}^{*}(x,y|u_{i},v_{i})\right]\] \[\exp\left[-\text{j}\frac{2\pi}{\lambda f}\left((u-\tilde{\xi})x+(v-\tilde{\eta})y\right)\right]\text{d}x\text{d}y\,, \tag{23}\] where we have made the system spatially invariant again, allowing us to exploit the convolution formula in Equation 19. We have considered aberrations that only depend on the object's position in the focal plane, also known as achromatic aberrations. However, depending on the optical system under study, there might be wavelength-dependent aberrations. For example, some refractive components, or some components implementing complex thin film coatings, may introduce spurious spectral dependences to the optical system's response. If this is the case, we can add a wavelength dependence to the WFE function \(\mathcal{W}\) to account for these effects.

#### 2.5.2 Polychromatic illumination: the coherent and the incoherent case

We studied until now a system with ideal monochromatic light. It is time to shift to polychromatic light as telescopes have filters with finite bandwidths and hence allow multiple frequencies of light. For a more rigorous analysis of polychromatic illumination, we refer the reader to the theory of partial coherence [13, 14, 15]. Even if we study the system's behaviour for light of a particular wavelength, this is practically never the case, as real illumination is never perfectly monochromatic, even for lasers. Therefore, we consider a narrowband polychromatic illumination centred at a given wavelength \(\lambda\).
The _narrowband assumption_ states that the bandwidth occupied is small with respect to the central wavelength. For polychromatic light, we follow Goodman (2005) and consider a time-varying phasor of the field, \(\mathcal{U}_{\mathrm{img}}(u,v;t)\), where its intensity is given by the time integration of its instantaneous intensity \[\mathcal{I}_{\mathrm{img}}(u,v)=\left\langle\left|\mathcal{U}_{\mathrm{img}}(u,v;t)\right|^{2}\right\rangle_{t}=\frac{1}{T}\int_{-T/2}^{T/2}\left|\mathcal{U }_{\mathrm{img}}(u,v;t)\right|^{2}\mathrm{d}t\,, \tag{24}\] where \(T\) is the detector integration time that is considered much greater than the optical wave period. We can generalise the field expression from Equation 19 by considering that light is polychromatic and that the impulse response \(\mathcal{H}\) is wavelength independent due to the narrowband assumption. The field then writes \[\mathcal{U}_{\mathrm{img}}(u,v;t)=\iint_{-\infty}^{+\infty}\mathcal{H}\left(u- \tilde{\xi},v-\tilde{\eta}\right)\mathcal{U}_{\mathrm{g}}\left(\tilde{\xi}, \tilde{\eta};t-\tau\right)\ \mathrm{d}\tilde{\xi}\mathrm{d}\tilde{\eta}\,, \tag{25}\] where \(\tau\) represents the delay of the wave propagation from \((\tilde{\xi},\tilde{\eta})\) to \((u,v)\). Continuing with the polychromatic analysis, we rewrite the intensity from Equation 24 as \[\mathcal{I}_{\mathrm{img}}(u,v)=\iint_{-\infty}^{+\infty}\mathrm{d}\tilde{\xi }_{1}\mathrm{d}\tilde{\eta}_{1}\iint_{-\infty}^{+\infty}\mathrm{d}\tilde{\xi} _{2}\mathrm{d}\tilde{\eta}_{2}\mathcal{H}\left(u-\tilde{\xi}_{1},v-\tilde{ \eta}_{1}\right)\mathcal{H}^{*}\left(u-\tilde{\xi}_{2},v-\tilde{\eta}_{2} \right)\mathcal{J}_{\mathrm{g}}\left(\tilde{\xi}_{1},\tilde{\eta}_{1};\tilde{ \xi}_{2},\tilde{\eta}_{2}\right), \tag{26}\] where \(\mathcal{H}^{*}\) is the conjugate of \(\mathcal{H}\), \(\mathcal{J}_{\mathrm{g}}\) is known as the _mutual intensity_ which describes the spatial coherence of \(\mathcal{U}_{\mathrm{g}}\) at two points and writes \[\mathcal{J}_{\mathrm{g}}\left(\tilde{\xi}_{1},\tilde{\eta}_{1};\tilde{\xi}_{2 },\tilde{\eta}_{2}\right)=\left\langle\mathcal{U}_{\mathrm{g}}\left(\tilde{ \xi}_{1},\tilde{\eta}_{1};t\right)\mathcal{U}_{\mathrm{g}}^{*}\left(\tilde{ \xi}_{2},\tilde{\eta}_{2};t\right)\right\rangle\,. \tag{27}\] We can distinguish two types of illuminations, _coherent_ and _incoherent_. _Coherent_ illumination refers to waves whose phases vary in a perfectly correlated way. This illumination is approximately the case of a laser. In _incoherent_ illumination, the wave's phases vary in an uncorrelated fashion. Most natural light sources can be considered incoherent sources. The _mutual intensity_ is helpful to represent both types of illumination. In the case of coherent light, we obtain, \[\mathcal{J}_{\mathrm{g}}^{\mathrm{co}}\left(\tilde{\xi}_{1},\tilde{\eta}_{1}; \tilde{\xi}_{2},\tilde{\eta}_{2}\right)=\mathcal{U}_{\mathrm{g}}\left(\tilde{ \xi}_{1},\tilde{\eta}_{1}\right)\mathcal{U}_{\mathrm{g}}^{*}\left(\tilde{\xi} _{2},\tilde{\eta}_{2}\right)\,, \tag{28}\] where \(\mathcal{U}_{\mathrm{g}}\left(\tilde{\xi}_{1},\tilde{\eta}_{1}\right)\) and \(\mathcal{U}_{\mathrm{g}}\left(\tilde{\xi}_{2},\tilde{\eta}_{2}\right)\) are time-independent phasor amplitudes relative to their time-varying counterpart. As both time-varying phasors are synchronized, we have taken a reference phasor and normalized them against their amplitude with respect to a reference point that can be the origin \((0,0)\). 
For example, \[\mathcal{U}_{\mathrm{g}}\left(\tilde{\xi}_{1},\tilde{\eta}_{1};t\right)= \mathcal{U}_{\mathrm{g}}\left(\tilde{\xi}_{1},\tilde{\eta}_{1}\right)\frac{ \mathcal{U}_{\mathrm{g}}\left(0,0;t\right)}{\left\langle\left|\mathcal{U}_{ \mathrm{g}}\left(0,0;t\right)\right|^{2}\right\rangle^{\frac{1}{2}}}\,. \tag{29}\] Substituting Equation 28 into Equation 26 we obtain \[\mathcal{I}_{\rm img}^{\rm co}(u,v) =\left|\mathcal{U}_{\rm img}^{\rm co}(u,v)\right|^{2}=\left|\iint_{- \infty}^{+\infty}\mathcal{H}\left(u-\tilde{\xi},v-\tilde{\eta}\right)\mathcal{U }_{\rm g}\left(\tilde{\xi},\tilde{\eta}\right)\mathrm{d}\tilde{\xi}\mathrm{d} \tilde{\eta}\right|^{2}\,, \tag{30}\] \[\mathcal{I}_{\rm img}^{\rm co}(u,v) =\left|\left(\mathcal{U}_{\rm g}\star\mathcal{H}\right)(u,v) \right|^{2}\,,\] where we observe the _coherent illumination gives a linear system in the complex amplitude of the field \(\mathcal{U}_{\rm g}\)_. The previous result is related to the interference of coherent waves. If we now consider incoherent illumination, the mutual intensity writes \[\mathcal{J}_{\rm g}^{\rm in}\left(\tilde{\xi}_{1},\tilde{\eta}_{1};\tilde{\xi }_{2},\tilde{\eta}_{2}\right)=\kappa\,\mathcal{I}_{g}\left(\tilde{\xi}_{1}, \tilde{\eta}_{1}\right)\delta\left(\tilde{\xi}_{1}-\tilde{\xi}_{2},\tilde{ \eta}_{1}-\tilde{\eta}_{2}\right)\,, \tag{31}\] where \(\kappa\) is a real constant, \(\delta\) is Dirac delta distribution, and \(\mathcal{I}_{\rm g}\) is the intensity of the \(U_{\rm g}\) field. The constant \(\kappa\) is a result of a simplification from statistical optics giving origin to Equation 31. The constant depends on the degree of the extension of coherence when the evanescent-wave phenomenon (Beran and Parrent, 1964) is taken fully into account. If the coherence extends over a wavelength, \(\kappa\) is equal to \(\bar{\lambda}^{2}/\pi\), where \(\bar{\lambda}\) is the mean wavelength. See Goodman (1985, SS5.5.2) for a deeper discussion on incoherent illumination and the \(\kappa\) constant. Replacing Equation 31 in Equation 26 the output (image) intensity writes \[\mathcal{I}_{\rm img}^{\rm in}(u,v) =\left|\mathcal{U}_{\rm img}^{\rm in}(u,v)\right|^{2}=\kappa \iint_{-\infty}^{+\infty}\left|\mathcal{H}\left(u-\tilde{\xi},v-\tilde{\eta} \right)\right|^{2}\mathcal{I}_{\rm g}\left(\tilde{\xi},\tilde{\eta}\right) \mathrm{d}\tilde{\xi}\mathrm{d}\tilde{\eta}\,, \tag{32}\] \[\mathcal{I}_{\rm img}^{\rm in}(u,v) =\kappa\left(\mathcal{I}_{\rm g}\star\left|\mathcal{H}\right|^{ 2}\right)(u,v)=\kappa\left(\mathcal{I}_{\rm g}\star\mathcal{H}_{\rm int} \right)(u,v)\,,\] where \(\mathcal{H}_{\rm int}=\left|\mathcal{H}\right|^{2}\) is the _intensity impulse response_, also known as the PSF. In this case, _an optical system illuminated with incoherent light is linear in intensity_. Equation 32 shows a commonly exploited fact; the output intensity is the convolution of the intensity PSF with ideal image intensity \(\mathcal{I}_{\rm g}\). #### 2.5.3 Usual assumptions adopted in PSF modelling PSF modelling articles generally implicitly assume specific hypotheses. We provide some of them on the following list: * The electric field is assumed to be scalar. * The lens law is verified, the paraxial approximation is valid, and the approximations discussed in subsection 2.4 hold. These approximations allow us to discard quadratic phase terms from Fresnel's diffraction and exploit the simpler Fraunhofer diffraction formula. * The incoming light from natural sources is assumed to be ideally incoherent. 
Then, the optical system is linear in intensity, as seen in Equation 32. * The PSF is considered to be spatially invariant in its isoplanatic region. In other words, PSF is assumed not to change on objects' typical length scales. This assumption allows us to use the convolution equation, i.e. Equation 32, rather than the superposition integral, i.e., Equation 11. Although the previous assumptions are standard, certain precision levels require dropping simplifications. For example, the _Euclid_ mission requirements on the PSF model accuracy as read in Laureijs et al. (2011) and Racca et al. (2016) is of \(2\times 10^{-4}\) for the root mean square (RMS) error on each ellipticity component (\(\delta e_{i}^{\rm PSF}\)), and \(1\times 10^{-3}\) for the relative RMS error on the size (\(\delta R_{\rm PSF}^{2}/R_{\rm PSF}^{2}\)). The PSF model might need to include light polarisation to fulfil these extremely tight PSF requirements. Other assumptions might also be dropped for the precise imaging of widespread objects. This case might require discarding the spatially invariant assumption of the PSF or reducing the size of the isoplanatic region. To conclude, the usual formulation of the PSF, i.e., the intensity of the impulse response, convolving an image seen in many articles, comes from the previous assumptions using the results from Equation 22, Equation 23 and Equation 32. We rewrite this formula as follows \[\mathcal{I}_{\text{img}}(u,v)=\left(\mathcal{H}_{\text{int}}\star\mathcal{I}_{ g}\right)\left(u,v\right), \tag{33}\] where we remind the reader that \((u,v)\) is the image plane, we have dropped the \(\kappa\) term from Equation 32, and \(\mathcal{H}_{\text{int}}\) is the intensity impulse response or PSF that writes \[\mathcal{H}_{\text{int}}(u,v|u_{i},v_{i})=\frac{{a^{\prime}}^{2}} {{\lambda}^{2}\,f^{2}}\bigg{|}\iint_{-\infty}^{+\infty}\mathcal{P}^{*}(x,y|u_{ i},v_{i})\exp\bigg{[}\text{j}\frac{2\pi}{\lambda}\mathcal{W}^{*}(x,y|u_{i},v_{i}) \bigg{]}\\ \exp\bigg{[}-\text{j}\frac{2\pi}{\lambda f}\left(ux+vy\right) \bigg{]}\text{d}x\text{d}y\bigg{|}^{2}, \tag{34}\] where we are studying the PSF for a specific wavelength and focal plane position. ## 3 General Observational Model We consider the PSF as the intensity impulse response, \(\mathcal{H}_{\text{int}}\), of the imaging system under study to a point source. The concept of PSF (Born and Wolf, 1999) is used throughout many imaging applications, including astronomical imaging (Liaudat et al., 2023; Schmitz, 2019), medical imaging (Dougherty and Kawaf, 2001; Joyce et al., 2018), or microscopy (Soulez et al., 2012; Debarnot et al., 2021, 2021). The central idea behind a PSF is that it represents transformations done to the imaged object by the imaging system. The PSF is, in a certain way, a characterisation of the imaging system. Considering incoherent illumination and that the hypotheses from the previous section hold, we can affirm that the optical system behaves linearly as in Equation 33. Consequently, the PSF is considered the impulse response of the optical system and affects the ground truth image through a convolution operation. Focusing on astronomical imaging, the definition of the imaging system can vary between the different use cases and telescopes. For example, in a ground-based telescope, we will consider that the atmosphere belongs to the imaging system we are modelling. However, naturally, the atmosphere will not be considered in a space-based telescope. 
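To make the convolution relation of Equation 33 concrete, the next sketch convolves a toy ground-truth light profile with a toy intensity PSF using an FFT-based convolution. The Gaussian galaxy and the Moffat-like PSF profile are stand-ins chosen for illustration only, not models advocated by this review.

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch of Equation 33 under incoherent illumination: the observed intensity
# is the ground-truth intensity convolved with the intensity impulse response.
n = 96
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

galaxy = np.exp(-(xx**2 + yy**2) / (2 * 6.0**2))        # toy I_GT light profile
psf = (1.0 + (xx**2 + yy**2) / 3.0**2) ** (-2.5)        # toy Moffat-like H_int
psf /= psf.sum()                                        # unit-flux PSF

observed = fftconvolve(galaxy, psf, mode="same")        # I_img = H_int * I_GT
print(observed.shape, float(observed.sum() / galaxy.sum()))   # close to 1, up to edge losses
```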
This article will focus on optical systems, which work with electromagnetic radiation with a wavelength close to the visible spectrum. For example, _Euclid_ VIS instrument's theoretical wavelength range is from \(550\)nm to \(900\)nm. The PSF describes the effects of the imaging system in the imaging process of the object of interest. The PSF is a convolutional kernel, as we have seen in subsection 2.5. However, this convolutional kernel varies spatially, spectrally, and temporally. We give a non-exhaustive list that motivates each of these variations: * _Spatial variations:_ The optical system presents a certain _optical axis_, which is an imaginary line where the system has some degree of rotational symmetry. In simpler words, it can be considered as the direction of the light ray that produces a PSF in the centre of the focal plane for an unaberrated optical system. The angle of incidence is defined as the angle between an incoming light ray and the optical axis. The main objective of the optical systems we study is to make the incoming light rays converge in the focal plane, where there will be some measurement instruments, e.g., a camera. Depending on the angle of incidence, the image will form in different positions in the focal plane. The path of the incoming light will be different for each angle of incidence, and therefore the system's response will be different too. In other words, the PSF will change depending on the angle of incidence or spatial position in the focal plane where the image is forming. Optical systems with wide focal planes, generally associated with wide field-of-views (FOVs), present significant PSF spatial variations. * _Spectral variations:_ Principally due to the diffraction phenomena and its well-known wavelength dependence covered in section 2. Refractive2 components of the optical system under study are also a source of spectral variations (Baron et al., 2022). Other sources of spectral variations are detector electronic components (Meyers and Burchat, 2015a) and atmospheric chromatic effects (Meyers and Burchat, 2015b). Footnote 2: Refraction refers to the change of direction in the propagation of a wave passing from one medium to another. Most of the wave energy is transmitted to the new medium. Reflection refers to the abrupt change of direction of the wave propagation due to a boundary between mediums. In this last case, most of the oncoming wave energy remains in the same medium. * _Temporal variations:_ The state of the telescope changes with respect to time; therefore, the imaged object's transformation also changes. In space-based telescopes, high-temperature gradients cause mechanical dilations and contractions that affect the optical system. In ground-based telescopes, the atmosphere composition changes with time. Consequently, it temporally affects the response of the optical system, i.e., the PSF. The PSF convolutional kernel varies with space, time and wavelength. Once we have set up a specific wavelength and time to analyse our system, we will have a different convolutional kernel for each position in the field of view. Let us refer to the PSF field \(\mathcal{H}_{\text{int}}\) as all the PSF representing an optical system. Then \(\mathcal{H}_{\text{int}}(u,v;\lambda;t|u_{i},v_{i})\) is a specific PSF where \((u_{i},v_{i})\) represents its centroid, i.e., the first order moments. The same notation is maintained from Figure 4, where the \((u,v)\) variables represent the image plane. 
We can define the PSF field as a varying convolutional kernel \(\mathcal{H}_{\text{int}}:\mathbb{R}^{2}\times\mathbb{R}_{+}\times\mathbb{R}_ {+}\times\mathbb{R}^{2}\rightarrow\mathbb{R}\). This definition would accurately describe how the PSF affects the images considering the assumptions from section 2 are valid. We recall that we adopted the approximation that considers the PSF locally invariant in its _isoplanatic region_(Born and Wolf, 1999, SS 9.5.1), see Figure 7 for an illustration. This approximation means that in a vicinity of an observed object, we will consider that the PSF only varies with time and wavelength, thus facilitating the computation of the convolution. The close vicinity, or the isoplanatic region, will be defined as the postage stamp to image the object of interest. The typical galaxies observed for weak lensing have a comparable size with respect to the PSF size (see Mandelbaum et al. (2018, Figure 7) for a distribution of relative galaxy to PSF size in the HSC survey). Consequently, the approximation error is kept low as it is only done for small patches of the focal plane. Let us define our object of interest with the subscript ground truth (GT), \(\mathcal{I}_{\mathrm{GT}}(u,v;\lambda;t|u_{i},v_{i})\), that is the \(I_{g}\) object from section 2, as a continuous light distribution \(\mathcal{I}_{\mathrm{GT}}:\mathbb{R}^{2}\times\mathbb{R}_{+}\times\mathbb{R} _{+}\times\mathbb{R}^{2}\rightarrow\mathbb{R}\). In this review, we are not considering transient objects, i.e., the time dependence scale of the object is comparable with the exposure time used to image it. Therefore, we can ignore the temporal dependency of the GT object, \(\mathcal{I}_{\mathrm{GT}}(u,v;\lambda;t)\neq f(t)\). Let us write our general observational model that relates our GT object of interest, our PSF and our observed image as follows \[I_{\mathrm{img}}(\bar{u},\bar{v};t|u_{i},v_{i})=\mathcal{F}_{p}\left\{\int_{0} ^{+\infty}\mathcal{T}(\lambda)\;(\mathcal{I}_{\mathrm{GT}}\star\mathcal{H}_{ \text{int}})(u,v;\lambda;t|u_{i},v_{i})\;\mathrm{d}\lambda\right\}\circ N(\bar {u},\bar{v};t|u_{i},v_{i})\,, \tag{35}\] where \(\mathcal{F}_{p}\) is a degradation operator discretising the image to \(\mathbb{R}^{p\times p}\) which includes the image sampling from the instrument. The variables \((\bar{u},\bar{v})\) denote the discrete (pixelised) version of the \((u,v)\) variables. Then, \(I_{\mathrm{img}}(\bar{u},\bar{v};t|u_{i},v_{i})\in\mathbb{R}\) corresponds to the instrument's measurement at a single pixel \((\bar{u},\bar{v})\), and \(I_{\mathrm{img},(\cdot;t|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\) to the entire image. The variables \((u_{i},v_{i})\) correspond to the centre location of the target object \(i\). The instrument's transmission is represented by \(\mathcal{T}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\), a function with finite support, and \(N_{(\cdot;t|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\) corresponds to the noise affecting our observation and possibly a modelling error, where \(\circ\) is some composition operator. We have carried out the spectral integration (Hopkins, 1957; Eriksen and Hoekstra, 2018) on the instrument's passband defined in \(\tau\). Although Equation 35 provides a general observational model, it can be unpractical. The continuous functions \(\mathcal{H}_{\text{int}}\), \(\mathcal{T}\), and \(\mathcal{I}_{\mathrm{GT}}\) are practically inaccessible. We make several assumptions to simplify the problem: 1. 
the continuous functions \(\mathcal{H}_{\text{int}}\) and \(\mathcal{I}_{\mathrm{GT}}\) are well approximated by piece-wise constant functions over a regular grid in \(\mathbb{R}^{2}\). We assume \(\mathcal{H}_{\text{int}}\approx H\) and \(\mathcal{I}_{\mathrm{GT}}\approx I_{\mathrm{GT}}\), where \(H,I_{\mathrm{GT}}\in\mathbb{R}^{P\times P}\) with \(P\geq p\). The resolution of these two variables has to be greater or equal to the observation resolution, 2. the noise is additive, i.e. \(\circ\equiv+\), although the formulation could be adapted to consider other types of noise, e.g., Poisson, 3. the degradation operator is approximated by its discrete counterpart, \(\mathcal{F}_{p}\approx F_{p}\), where \(F_{p}:\mathbb{R}^{P\times P}\rightarrow\mathbb{R}^{p\times p}\), that has been discretized in a regular grid. We assume that the degradation operator is _linear_, and that includes pixellation, possibly downsampling, intra-pixel shifts and linear detector effects, 4. we keep the approximation that the PSF is locally constant within the postage stamp of \(P\times P\) values of the target image, 5. the integral can be well approximated by a discretised version using \(n_{\lambda}\) bins. Taking into account the aforementioned assumptions, we can define our practical observational model as follows \[I_{\mathrm{img}}(\bar{u},\bar{v};t|u_{i},v_{i})=F_{p}\left\{\sum_{k=1}^{n_{ \lambda}}T(\lambda_{k})\;(I_{\mathrm{GT}}\star H)(\bar{u},\bar{v};\lambda_{k} ;t|u_{i},v_{i})\;\Delta\lambda_{k}\right\}+N(\bar{u},\bar{v};t|u_{i},v_{i})\,, \tag{36}\] where \(I_{\mathrm{img},(\cdot;t|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\), \(T\) is a discretized version of \(\mathcal{T}\), and \(b^{k}=[b^{k}_{0},b^{k}_{1}]\) is the \(k\)-th wavelength bin centred in \(\lambda_{k}\), with a width of \(\Delta\lambda_{k}=b^{k}_{1}-b^{k}_{0}\). ### Particular case: a star observation The case of star observations is of particular interest, as some stars in the FOV can be considered as a spatial impulse, i.e., \(\mathcal{I}_{\mathrm{star}}(u,v;\lambda|u_{i},v_{i})=\delta(u_{i},v_{i}; \lambda)=f_{(u_{i},v_{i})}(\lambda)\). Therefore, if we plug the impulse in Equation 36, we obtain a degraded observation of the PSF field. These observations will be crucial to constrain the PSF models. Unluckily, we do not always have access to the star's spectral variation, \(f_{(u_{i},v_{i})}(\lambda)\). However, we dispose of complementary photometric observations that can be useful to characterise the spectral variations. These observations provide us with the star's spectral energy distribution (SED), which can be defined as the calibrated flux density as a function of wavelength, usually at low spectral resolution. The photometric observations are done in several spectral bands. Figure 8 shows the bands from the MegaCam instrument at the Canada-France-Hawaii Telescope (CFHT)3. We refer the reader to Hogg (2022) for more information about SEDs and stellar photometry. The SED is a normalised low-resolution sampling of the star's spectral variations. We can write the SED definition we will use as \[\text{SED}_{b^{k}}(\lambda_{k})=\frac{1}{z_{n_{\lambda}(b)}}\int_{b_{0}^{k}}^{b_{1 }^{k}}f_{(u_{i},v_{i})}(\lambda)\ \mathrm{d}\lambda\,, \tag{37}\] where we continued to use the \(b^{k}\) bin definition from Equation 36, and \(z_{n_{\lambda}(b)}\) is a constant used so that the SED is normalised to unity. We have that \(\sum_{k=1}^{n_{\lambda}}\text{SED}_{b^{k}}(\lambda_{k})=1\). 
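Before specialising to the star case in the next paragraph, the following sketch illustrates how the discretised model of Equation 36 can be evaluated numerically: a sum over wavelength bins of transmission- and SED-weighted convolutions, followed by a pixellation operator \(F_{p}\) implemented as block-averaging and by additive noise. The toy chromatic PSFs, the flat transmission, the random SED weights and all grid sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative evaluation of the discretised observational model (Equation 36).
# All profiles and numerical values are toy assumptions for demonstration only.
rng = np.random.default_rng(1)
P, p, n_lambda = 64, 32, 5                        # high-res grid, pixel grid, spectral bins

yy, xx = np.mgrid[-P // 2:P // 2, -P // 2:P // 2]
i_gt = np.exp(-(xx**2 + yy**2) / (2 * 5.0**2))    # toy ground-truth object I_GT

lambdas = np.linspace(550e-9, 900e-9, n_lambda)   # bin centres over a VIS-like passband
T = np.ones(n_lambda)                             # flat instrument transmission (toy)
sed = rng.random(n_lambda)
sed /= sed.sum()                                  # normalised SED-like weights, summing to 1
# Equal-width bins: the Delta-lambda factor is absorbed into the normalised weights.

acc = np.zeros((P, P))
for lam, t_k, s_k in zip(lambdas, T, sed):
    sigma = 2.0 * lam / lambdas[0]                # toy chromatic PSF width
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    h /= h.sum()
    acc += t_k * s_k * fftconvolve(i_gt, h, mode="same")   # spectral sum of convolutions

# F_p: pixellation by block-averaging the high-resolution image down to p x p pixels.
obs = acc.reshape(p, P // p, p, P // p).mean(axis=(1, 3))
obs += 1e-3 * rng.standard_normal(obs.shape)      # additive noise N
print(obs.shape)
```

Replacing the toy object with a point source and the random weights with a measured SED leads to the star observation model derived next.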
We continue by considering that the GT image in Equation 36 is a star, and we use the spectral bins from the SED definition to discretise the spectral integration. Finally, we write the practical star observation model as \[I_{\text{star}}(\bar{u},\bar{v};t|u_{i},v_{i})=F_{p}\left\{\sum_{k=1}^{n_{\lambda}}T(\lambda_{k})\text{SED}_{b^{k}}(\lambda_{k})\ H(\bar{u},\bar{v};\lambda_{k};t|u_{i},v_{i})\ \Delta\lambda_{k}\right\}+N(\bar{u},\bar{v};t|u_{i},v_{i})\,, \tag{38}\] where we consider the star observation \(I_{\text{star},(\cdot;t|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\) as a degraded version of the PSF field \(\tilde{H}_{(\cdot;t|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\).

## 4 PSF field contributors and related degradations

So far, we have described how the PSF interacts with the images we observe and how we can model an observation. However, we have not given much information about the different PSF field contributors and the different degradations represented by \(F_{p}\) in Equation 36 that can occur when modelling observations. We provide a non-exhaustive list of contributors to the PSF field, sources of known degradations, and the atmosphere's effect on our PSF modelling problem.

Figure 8: The \(3\)rd generation set of filters of the MegaCam instrument at the Canada-France-Hawaii Telescope that is currently being used for the Canada-France Imaging Survey. The transmission filter response includes the full telescope and \(1.25\) airmasses of atmospheric attenuation. The full telescope includes mirrors, optics, and CCDs.

### Image coaddition

A fundamental contributor to the PSF is the choice of image coaddition scheme. A _coadded image_ is a composite image created by combining multiple individual exposures of the same region of the sky in some way. This process can help increase the signal-to-noise ratio of the observation. Motivated by the analysis of the LSST data, Mandelbaum et al. (2022) explores different coaddition schemes and studies how they affect the PSF of the resulting coadded image. In particular, Mandelbaum et al. (2022) defines under which schemes the coadded image accepts a well-defined PSF, i.e., the observation can be described by the convolution of an extended object and a uniquely defined coadded PSF. Bosch et al. (2017) describes the strategy for image and PSF coaddition in the HSC survey.

### Dithering and super-resolution

Dithering consists of taking a series of camera exposures shifted by a fraction of a pixel or by a few pixels. There are several advantages of using a dithering strategy, which include the removal of cosmic rays and malfunctioning pixels, improving photometric accuracy, filling the gap between the detectors, and improving the sampling of the observed scene. Dithering allows the images to be estimated on a sampling grid denser than the original pixel grid, in other words, to super-resolve the image. Regarding PSFs, it allows recovering a Nyquist-sampled PSF from undersampled observations. Bernstein (2002) studied the effect of dithering and the choice of pixel sizes in imaging strategies. Naturally, as we will later see, the dithering strategy is helpful for space-based telescopes thanks to their stability. In ground-based telescopes, the atmosphere constantly changes the PSF, making a dithering strategy less effective. However, a dithering strategy can be helpful if the telescope is equipped with adaptive optics technology, which will be described in subsection 4.6.
An example is the Spectro-Polarimetic High contrast imager for Exoplanets REsearch (SPHERE) instrument (Beuzit, J.-L. et al., 2019) built for European Southern Observatory's (ESO) Very Large Telescope (VLT) in Chile. Lauer (1999b) discusses the limiting accuracy effect of undersampled PSFs in stellar photometry and proposes ways to correct it with dithered data (Lauer, 1999a). Fruchter and Hook (2002) presents the widely-used Drizzle algorithm that consists of shifting and adding the dithered images onto a finer grid. Rowe et al. (2011) proposed a linear coaddition method coined Imcom to obtain a super-resolved image from several undersampled images. Hirata et al. (2023) later studied the use of Imcom on simulations (Troxel et al., 2023) from the _Roman_ space telescope, while a companion paper, Yamamoto et al. (2023), explored its implications for weak lensing analyses. Ngole et al. (2015) proposed a super-resolution method coined SPRITE targeting the _Euclid_ mission based on a sparse regularisation technique. More recent PSF models handle the undersampling of the observations directly in their algorithm for estimating a well-sampled PSF field, as we will later see. ### Optic-level contributors These contributors affect the PSF by modifying the wave propagation in the optical system. In other words, they affect the wavefront's amplitude and phase. * _Diffraction phenomena and the aperture size:_ As we have seen in section 2, the diffraction phenomena happening in the optical system plays an essential role in the formation of the PSF. The size of the optical system aperture and the wavelength of the light being studied are of particular interest. Equation 34 shows us that under some approximations, the PSF is the Fourier transform of the aperture. Therefore, the size of the aperture and the PSF are closely related. For example, if we consider an ideal circular aperture, its diffraction pattern is the well-known _Airy disk_. The relation between the width of the PSF and the diameter of the aperture is given by \[\theta_{\text{FWHM}}=1.025\frac{\lambda}{d}\,,\] (39) where \(\theta_{\text{FWHM}}\) is the full width at half maximum (FWHM) expressed in radians, \(\lambda\) is the wavelength of the light being studied, and \(d\) is the diameter of the aperture. The width of the PSF is a fundamental property of an optical system as it defines the resolution of the system. In other words, the PSF size defines the optical system's ability to distinguish small details in the image. * _Optical aberrations_: These aberrations are due to imperfections in the optical elements, for example, a not ideally spherical mirror or a not perfectly aligning of the optical components. The optical aberrations play a significant role in the morphology of the PSF and can be modelled using the WFE introduced in the generalised pupil function from Equation 21. Some aberrations have a distinctive name, e.g., coma, astigmatism, and defocus, and they represent a specific Zernike polynomial (Noll, 1976). * _Surface errors or polishing effects_: One would ideally like perfectly smooth surfaces in mirrors and lenses. However, imperfections arise in the optical surfaces due to imperfect surface polishing. Krist et al. (2011) shows the measurement of surface errors (SFE) in the Hubble Space Telescope (HST). Gross et al. (2006, Section 35.2) gives a more in-depth analysis of surface errors focusing on the tolerancing of SFE. Figure 9a shows the surface errors measured for the Hubble space telescope (HST). 
Krist and Burrows (1995) studied HST's SFE before and after its iconic repair in \(1993\) with parametric and non-parametric (Gerchberg and Saxton, 1972) phase retrieval algorithms.
* _Obscurations:_ Complex optical systems have telescope designs where some elements can obscure part of the pupil. Obscurations are an essential contributor to PSF morphology and result from projecting a \(3\)D structure onto the \(2\)D focal plane. The resulting projection depends on the considered position in the focal plane. Accurate modelling of telescopes with wide-field imagers, e.g., _Euclid_, requires the computation of the obscuration's position dependence arising from the \(3\)D projection. _Euclid_'s obscurations are presented in Figure 6a. Fienup et al. (1993) and Fienup (1994) studied HST's obscuration with phase retrieval algorithms and noticed a misalignment that caused a pupil shift.

Figure 9: **(a)** The surface errors measured for the Hubble Space Telescope, shown with a reduced range of variation (the scale has been reduced from \(\pm 22\)nm to better show details). **(b)** and **(c)** The wavefront error measurements from the JWST cycle \(1\) science operation on the \(30\)th of July of \(2022\), with the total and the reduced range of variation, respectively.

* _Stray and scattered light:_ Optical elements and instruments give rise to stray and scattered light reaching the detectors. Krist (1995) studied the problem for the HST. Storkey et al. (2004) developed methods to clean observations with scattered light from the SuperCOSMOS Sky Survey (Hambly et al., 2001). Sandin (2014) studied the effect of scattered light on the outer parts of the PSF.
* _Material outgassing and ice contamination:_ Material outgassing leads to molecular contamination that alters different properties of the imaging system. Water is the most common contaminant in cryogenic spacecraft, and it then turns into thin ice films. A notable example is the _Gaia_ mission, which suffered from ice contamination (see Gaia Collaboration et al. (2016, Section 4.2.1)) and required several decontamination procedures to slowly remove the ice from the optical system. Euclid Collaboration et al. (2023) studied the ice formation and contamination for _Euclid_. The article also reviews the lessons learned from other spacecraft on the topic of material outgassing. A companion paper of Euclid Collaboration et al. (2023) is expected to be published soon addressing the quantification of the impact of iced optics on _Euclid_'s data.
* _Chromatic optical components:_ These components have a particular wavelength dependence, excluding the natural chromaticity due to diffraction. They are usually spectral filters and depend on the optical system's design. A particular example is a dichroic filter, which ideally serves as a band-pass filter. The _Euclid_ optical system includes a dichroic filter which allows using both instruments, VIS and NISP, simultaneously as their passbands are disjoint. A dichroic filter is made of a stack of thin coatings of specific materials and thicknesses. Even if these components have a high-quality manufacturing process, they can induce significant chromatic variations in reflection affecting the PSF morphology. Baron et al. (2022) proposed a test bench to characterise _Euclid_'s dichroic filter and a numerical model of its chromatic dependence.
* _Light polarisation:_ In section 2, we studied the scalar diffraction theory, thus neglecting light polarisation.
First, the optical system can induce polarisation even when the incoming light is not polarised. Breckinridge et al. (2015) studied the effect of polarisation aberrations on the PSF of astronomical telescopes. The study of polarisation is carried out using Jones matrices (Jones, 1941). These matrices describe a ray's polarisation change when going through an optical system. See McGuire and Chipman (1990, 1991); Yun et al. (2011) for more information on polarisation aberrations. Second, there are some regions in space where the incoming light has been polarised by different sources, e.g., Galactic foreground dust. Lin et al. (2020) studied the impact of light polarisation on weak lensing systematics for the Roman space telescope (Spergel et al., 2015). The study found that the systematics introduced by light polarisation are comparable to Roman's requirements.
* _Thermal variations:_ The thermal variations in a telescope introduce mechanical variations in its structure that affect the performance of the optical system. The origin of the thermal variations is strong temperature gradients due to the sun's illumination. The effect is sometimes referred to as the _telescope's breathing_ (Bely et al., 1993) owing to the periodic pattern that follows the telescope's orbit. Thermal variations can introduce a small defocusing of the system that will change the PSF morphology. This phenomenon was first identified in the HST (Hasan et al., 1993). Nino et al. (2007) studied HST focus variations with temperature and Lallo et al. (2006) studied HST temporal optical behaviour, where temperature variations play a principal role. Later works (Makidon et al., 2006; Sahu et al., 2007; Suchkov and Casertano, 1997) studied the impact of the thermal variations, and consequently PSF variations, on different science applications. A Structural-Thermal-Optical Performance (STOP) test helps predict thermal variations' impact on the optical system. This effect is naturally more significant in space-based telescopes as the temperature gradients in space are considerably more prominent than the ones found on the ground. Space-based telescopes located at the stable \(\mathrm{L}_{2}\) Lagrange point, e.g., _Euclid_ and JWST, are less prone to thermal variations compared to telescopes orbiting the Earth, e.g., HST. As an example, Figures 9b and 9c show the measured optical contribution to the James Webb Space Telescope (JWST) PSF. Rigby et al. (2022) presents a detailed analysis of JWST's state, from its commissioning, including its PSF.

### Detector-level degradations

The detector-level degradations are related to the detectors being used and, therefore, to the intensity of the PSF. They affect the observed images through the degradation operator \(F_{p}\) from Equation 36, and as we will use star images, or eventually other observations, to constrain PSF models, it is necessary to consider their effects. Some of these degradations are non-convolutional and will not be well-modelled by a convolutional kernel. Nevertheless, we expect that image preprocessing steps will mainly correct these effects. However, the correction will not be perfect, and some modelling errors can propagate to the observations.
* _Undersampling and pixellation:_ The EM wave that arrives at the detectors is a continuous function. The discrete pixels in the detectors integrate this function and measure the intensity of the wave over their respective areas.

Figure 10: Example of two different pixellations of the same high-resolution image representing an Airy PSF. The difference between the two pixellations is an intra-pixel shift of \((\Delta x,\Delta y)=(0.35,0.15)\) between them. Figure reproduced from Liaudat (2022).
The difference between the two pixellations is an intra-pixel shift of \((\Delta x,\Delta y)=(0.35,0.15)\) between them. Figure reproduced from Liaudat (2022). We name this process _pixellation_, also known as _sampling_. Some authors, e.g., Anderson and King (2000); Bernstein (2002); Kannawadi et al. (2016), define an _effective_ PSF as the convolution of the optical PSF, i.e., the flux distribution at the focal plane from a point source, with the pixel response of the instrument, e.g., a 2D top-hat function. High et al. (2007) performed an early study on the effects of pixellation in WL and the choice of pixel scale for a WL space-based mission. Krist et al. (2011, Section 3) gives some insight into pixellation effects for HST. Two aspects of pixellation play a crucial role in PSF modelling. First, the sampling is done with the same grid, but it is essential to consider that the continuous function is not necessarily centred on the grid. This difference means that intra-pixel shifts between the different pixellations will be found. Figure 10 shows how two pixel representations of the same light profile change due to two different pixellations. When optimising a PSF model to reproduce some observed stars, the centroids of both images must be the same. Suppose the image centroids are the same, and the underlying model represents the observations satisfactorily. In that case, the residual image between the two pixellated images will be close to zero. If the centroids are not the same, the residual can be far from zero even though the model is a good representation of the observation, as illustrated in the residual image in Figure 10. The second aspect is related to the Nyquist-Shannon sampling theorem. The theorem states the number of samples required to perfectly determine a signal of a given bandwidth. In the telescopes we study, the bandwidth and number of samples are related to the aperture's diameter and pixel size. Depending on the telescope's design, the sampling may not satisfy the Nyquist-Shannon theorem. If the images are undersampled, meaning that the theorem is not satisfied, a super-resolution step is required in the PSF modelling, which is the case for _Euclid_. Using an observation strategy with dithering, as described in subsection 4.2, can significantly mitigate the undesired effects of undersampling and pixellation. Kannawadi et al. (2021) studies ways of mitigating the effects of undersampling in WL shear estimation using metacalibration (Huff and Mandelbaum, 2017; Sheldon and Huff, 2017; Sheldon et al., 2020), which is a method for measuring WL shear from well-sampled galaxy images. Finner et al. (2023) studies near-IR weak-lensing (NIRWL) measurements in the CANDELS fields from HST images. The authors find that the most significant contributing systematic effect to WL measurements is caused by undersampling. * _Optical throughput and CCD Quantum Efficiency (QE):_ The optical throughput of the system is the combined effect of the different elements composing the optical system, such as mirrors and optical elements like coatings (Venancio et al., 2016). The filter being used in the telescope forms part of the optical throughput, as can be seen in Figure 8 for the MegaCam set of filters. Figure 8 also includes the CCD QE, which describes the sensitivity of the CCD to photons of different wavelengths. Commonly, CCDs do not have a uniform response to the different wavelengths.
Therefore, we must multiply the CCD QE with the telescope's optical throughput to compute the total transmission. * _CCD misalignments:_ Ideally, we expect that all the CCDs in the detector lie in a single plane that happens to be the focal plane of the optical system. However, it is not the case in practice, as there might be small misalignments between the CCDs introducing small defocuses that change from CCD to CCD. See Jee and Tyson (2011, Figure 8) for a study of this effect for the Vera C. Rubin Observatory. * _Guiding errors:_ Even if space telescopes are expected to be very stable when doing observations thanks to the attitude and orbit control systems (AOCS), there will exist a small residual motion that is called pointing jitter. The effect on the observation is introducing a small blur that can be modelled by a specific convolutional kernel that depends on the pointing time series. Fenech Conti (2017, Section 4.8.3) proposes to model the effect for _Euclid_ with a Gaussian kernel. * _Charge Transfer Inefficiency (CTI):_ CCD detectors are in charge of converting incoming photons to electrons and collecting them in a potential well in the pixel during an exposure. The charge on each pixel is read when the exposure finishes. The collected electrons are transferred through a chain of pixels to the edge of the CCD, amplified and then read. High energy radiation above the Earth's atmosphere gradually damages CCD detector (Prod'homme et al., 2014, 2014). The silicon damage in the detectors creates traps for the electrons that are delayed during the reading procedure. This effect is known as CTI, producing a trailing of bright objects and blurring the image. This effect is noticeably significant for space telescopes, given the harsh environment. CTI effects are expected to be corrected in the VIS image preprocessing. Rhodes et al. (2010) carried out a study on the impact of CTI on WL studies. Massey et al. (2009) developed a model to correct for CTI for the HST and later improved it in Massey et al. (2014). * _Brighter-fatter effect (BFE):_ The assumption that each pixel photon count is independent of its neighbours does not hold in practice. There is a photoelectron redistribution in the pixels as a function of the number of photoelectrons in each pixel. The BFE is due to the accumulation of charge in the pixels' potential wells and the build-up of a transverse electric field. The effect is stronger for bright sources. Antilogus et al. (2014) studied the effect and observed that the images from the CCDs do not scale linearly with flux, so bright star sizes appear larger than fainter stars. Guyonnet et al. (2015) and Coulton et al. (2018) proposed methods to model and correct this effect. The preprocessing of VIS images is supposed to correct for the BFE, but there might be some residuals. * _Wavelength dependent sub-pixel response:_ There exist a charge diffusion between neighbouring pixels in the CCD. Niemi et al. (2015) studied this effect for an _Euclid_'s VIS CCD and modelled the response of the CCD. Niemi et al. (2015) proposed to model the effect as a Gaussian convolutional kernel where the standard deviations of the 2D kernel are wavelength dependent, \(\sigma_{x}(\lambda)\) and \(\sigma_{y}(\lambda)\). Niemi et al. measured the proposed model with a reference VIS CCD. Krist (2003) studied the charge diffusion in HST and proposed spatially varying blur kernels to model the effect. * _Noise:_ There are several noise sources in the measurements. 
_Thermal noise_(Nyquist, 1928) refers to the signal measured in the detector due to the random thermal motion of electrons which is usually modelled as Gaussian. _Readout noise_(Basden et al., 2004) refers to the uncertainty in the photoelectron count due to imperfect electronics in the CCD. _Dark-current shot noise_(Baer, 2006) refers to the random generation of electrons in the CCD, and even though it is related to the temperature, it is not Gaussian. There are also unresolved and undetected background sources that contribute to the observation noise. Which are the statistics of the predominant noise will depend on the imaging setting of the instrument and its properties. * _Tree rings and edge distortions:_ There exist electric fields in the detector that are transverse to the surface of the CCD. The origin of these fields includes doping gradients or physical stresses on the silicon lattice. This electric field displaces charge, modifying the effective pixel area. Consequently, it changes the expected astrometric and photometric measurements. This electric field also generates concentric rings, _tree rings_, and bright stripes near the boundaries of the CCD, _edge distortions_. Given the close relationship between this effect and the detector, its importance depends strongly on the instrument being used. This effect is unnoticed in the MegaCam used in Canada-France Imaging Survey (CFIS) as it depends on the CCD design. However, it was a major concern in the Dark Energy Camera used in the Dark Energy Survey (DES), as shown by Plazas et al. (2014). Jarvis et al. (2020, Figure 9) illustrates the consequence of tree rings in the PSF modelling. * _Other effects:_ These effects include detector nonlinearity (Plazas et al., 2016; Stubbs, 2014), _sensor interpixel correlation_(Lindstrand, 2019), _interpixel capacitance_(McCullough, 2008; Kannawadi et al., 2016; Donlon et al., 2018), _charge-induced pixel shifts_(Gruen et al., 2015), _persistence_(Smith et al., 2008, 2008, 2010), _reciprocity failure_(_flux-dependent nonlinearity_) (Biesiadzinski et al., 2011; Bohlin et al., 2005), and _detector analogue-to-digital non-linearity_. ### The atmosphere The atmosphere plays a central role in ground-based telescopes' PSFs. See Roddier (1981) for an in-depth study of the subject. How the atmosphere affects our images will strongly depend on the exposure time used to image an object. The PSF induced by the atmosphere for a very short exposure will look like a speckle, while a long exposure will produce a PSF that resembles a 2D Gaussian, or more precisely, a Moffat profile (Moffat, 1969). Figure 12 shows examples of atmospheric PSFs with different exposure times. As a first approximation for long exposures, the effect of the atmosphere on the PSF is that of a spatially varying low-pass filter. Therefore, broadening the PSF and limiting the telescope's resolution. Astronomers usually use the term _seeing_ to refer to the atmospheric conditions of the telescope, and it is measured as the FWHM of the PSF. The loss of resolution due to the atmosphere is one of the main motivations for building space telescopes like _Euclid_ and _Roman_, where the PSF is close to the diffraction limit and very stable. The atmosphere is a heterogeneous medium whose composition changes with the three spatial dimensions and time. The inhomogeneity of the atmosphere affects the propagation of light waves that arrive at the telescope. 
Instead of supposing that the incoming light waves are plane, as emitted by the far-away source under study, these waves already have some phase lags or leads with respect to an ideal plane wave. The atmosphere introduces a WFE contribution to the optical system. These effects can be summarised as an effective phase-shifting plate, \(\Phi_{\text{eff}}(x,y,t)\). However, calculating this effective plate is cumbersome, as it involves having a model of the atmosphere and integrating over the altitude, \(z\), so that we have the spatial distribution, \((x,y)\), of the effective WFEs. The model of the atmosphere is represented by the continuous \(C_{n}^{2}(z)\)(Roddier, 1981) profile, which represents the variations of the refractive index due to the atmospheric turbulence as a function of height. However, the \(C_{n}^{2}(z)\) profile is challenging to model and measure, and even if it is possible, it is computationally expensive to exploit. We can discretise the integral over the altitude into \(M\) thin phase screens of variable strength at different altitudes to simulate the effect of the atmosphere. Each phase screen will have specific properties and move at a different speed in a different direction. These assumptions are known as the frozen-flow hypothesis. Each phase screen will be characterised by its power spectrum, which can be modelled by a von Kármán model of turbulence (Kármán, 1930). The power spectrum of the atmosphere's WFE contribution writes \[\Psi(\nu)=0.023\,r_{0}^{-5/3}\left(\nu^{2}+\frac{1}{L_{0}^{2}}\right)^{-11/6}\, \tag{40}\] where \(\nu\) is a spatial frequency, \(r_{0}\) is the Fried parameter, and \(L_{0}\) is the outer scale. Both parameters, \(r_{0}\) and \(L_{0}\), are generally expressed in meters. The Fried parameter relates to the turbulence amplitude, and the outer scale relates to the correlation length. See Figure 11 for an example of atmospheric phase screens. For lengths longer than \(L_{0}\), the power of the turbulence asymptotically flattens. If we take the limit of \(L_{0}\) to infinity, we converge to the Kolmogorov model of turbulence (Kolmogorov, 1991). See Sasiela (1994) for more information on electromagnetic wave propagation in turbulence. Once the phase screens, \(\Phi_{m}(x,y|u_{i},v_{i})\), have been simulated following Equation 40, the temporal variation of the screens has to be taken into account. The phase screens contribute to the WFE of the PSF, which is why they depend on the pupil plane variables \((x,y)\). The temporal variation is usually modelled with the wind's properties at the phase screen's reference altitude. We describe the wind with two components, \(v_{u}\) and \(v_{v}\), where we have assumed that \(v_{z}=0\). We then obtain the effective phase screen by a weighted average of the phase screens at the different altitudes as \[\Phi_{\text{eff}}(x,y;t|u_{i},v_{i})=\sum_{m=1}^{M}c_{m}\Phi_{m}(x,y;t|u_{i},v_{i})\,, \tag{41}\] where \(\{c_{m}\}\) are some weights. The difficulty of modelling the atmosphere is that its time scales are comparable with the exposure time. Therefore, the PSF we estimate for a given time snapshot will change with respect to another PSF at another snapshot within the same camera exposure.
This change means that we need to integrate the instantaneous PSF over time to model the PSF physically, which corresponds to \[I_{\text{img}}(\bar{u},\bar{v}|u_{i},v_{i})=\int_{t_{o}}^{t_{o}+T_{\text{exp}}}I_{\text{img}}(\bar{u},\bar{v};t|u_{i},v_{i})\ dt\,, \tag{42}\] where \(I_{\text{img}}(\bar{u},\bar{v};t|u_{i},v_{i})\) is the instantaneous image of the object affected by the PSF \(\mathcal{H}(u,v;\lambda;t|u_{i},v_{i})\), \(t_{o}\) is a random initial time and \(T_{\text{exp}}\) is the exposure time. Finally, we need to choose the time step size to discretise the integral from Equation 42. Each instantaneous PSF will look like a speckle. Once we add them up in the integral, the PSF becomes rounder and smoother. Figure 11: Illustration of six von Kármán phase screen layers at different altitudes simulated for LSST. The simulations were produced with the GalSim package (Rowe et al., 2015) using the parameters from Jee and Tyson (2011). Figure 12 shows examples of atmospheric PSFs using different exposure times that were simulated using \(6\) phase screens with the parameters from Jee and Tyson (2011) that correspond to an LSST-like scenario. It is interesting to see how the short-exposure PSF looks like a speckle, and how the profile becomes smoother and smoother as the exposure time increases. As a reference, the exposure time used for the r-band observations in CFIS is \(200\)s (science collaboration, 2016). de Vries et al. (2007) studied the PSF ellipticity change due to atmospheric turbulence as a function of the exposure time. de Vries et al. observed that the amplitude of the PSF ellipticity decreases as the exposure time increases. Another effect that should be considered is _atmospheric differential chromatic refraction_. This effect represents the refraction due to the change of medium from vacuum to the Earth's atmosphere. The effect varies as a function of the zenith angle and wavelength. Meyers and Burchat (2015) performed a study on the impact of the atmospheric chromatic effect on weak lensing for surveys like LSST and DES. Heymans et al. (2012) performed a study on the impact of atmospheric distortions on weak-lensing measurements with real data from CFHT. Heymans et al. characterised the ellipticity contribution of the atmosphere to the PSF for different exposure times. To achieve this, they computed the two-point correlation function of the residual PSF ellipticity between the observations and a PSFEx-like PSF model (described in detail in section 5). Salmon et al. (2009) studied the image quality and the observing environment at CFHT. Xin et al. (2018) carried out a study of the PSF and the variation of its width with time and wavelength for the Sloan Digital Sky Survey (SDSS) (York et al., 2000; Gunn et al., 1998). Jee and Tyson (2011) carried out a simulation study to evaluate the impact of atmospheric turbulence on weak-lensing measurements in LSST. They used the atmospheric parameters from Ellerbroek (2002) that were measured at the LSST site in Cerro Pachón, Chile. There is an ongoing project at LSST DESC4 to leverage atmospheric and weather information at or near the observation site to produce realistically correlated wind and turbulence parameters for atmospheric PSF simulations. Figure 12: Example of atmospheric PSFs with different exposure times. The simulation was done using the atmospheric parameters from Jee and Tyson (2011) for an LSST-like scenario.
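To make the phase-screen recipe above concrete, the following minimal NumPy sketch draws a single von Kármán screen following Equation 40, translates it under the frozen-flow hypothesis, and accumulates the instantaneous speckle PSFs into a long-exposure PSF, in the spirit of Equation 42. It is only an illustration: the parameter values are arbitrary, the amplitude normalisation of the screen is schematic, and a realistic simulation (several layers, proper calibration, chromaticity) is better delegated to a dedicated package such as GalSim.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- grid and turbulence parameters (illustrative values only) ---
n = 256              # pupil-plane grid size [pixels]
pix = 0.05           # pupil-plane sampling [m / pixel]
diam = 8.0           # aperture diameter [m]
r0, L0 = 0.15, 25.0  # Fried parameter and outer scale [m]
wind = (3, 1)        # frozen-flow shift per time step [pixels / step]
n_steps = 100        # number of instantaneous PSFs to accumulate

# --- one von Karman phase screen (Eq. 40), spectral method ---
freq = np.fft.fftfreq(n, d=pix)
fx, fy = np.meshgrid(freq, freq)
psd = 0.023 * r0 ** (-5 / 3) * (fx**2 + fy**2 + 1 / L0**2) ** (-11 / 6)
psd[0, 0] = 0.0                                   # drop the piston mode
noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
screen = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
screen *= 2.0 / screen.std()                      # schematic normalisation (a few rad rms)

# --- circular, unobscured pupil ---
x = (np.arange(n) - n / 2) * pix
xx, yy = np.meshgrid(x, x)
pupil = (xx**2 + yy**2 <= (diam / 2) ** 2).astype(float)

# --- accumulate instantaneous speckle PSFs into a long-exposure PSF (Eq. 42) ---
long_exposure = np.zeros((n, n))
for t in range(n_steps):
    shifted = np.roll(screen, shift=(t * wind[0], t * wind[1]), axis=(0, 1))  # frozen flow
    field = pupil * np.exp(1j * shifted)
    long_exposure += np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2         # speckle image
long_exposure /= long_exposure.sum()              # unit-flux long-exposure PSF
```

Increasing `n_steps` reproduces the qualitative behaviour described above: the single-step image is a speckle pattern, while the accumulated image becomes round and smooth.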
Another way to simulate the atmosphere and the PSF is to use a photon Monte Carlo approach, known as _photon shooting_. This line of work was carried out in Peterson et al. (2015, 2019, 2020) with a publicly available simulator coined PhoSim5 that aims to model the LSST PSF. The method consists of sampling photons from astronomical sources and simulating their interactions with models of the atmosphere, the optics and the detectors. PhoSim is a powerful simulation tool but is not intended to be a model for estimating the PSF from observations. The simulation software GalSim6 (Rowe et al., 2015) also incorporates options to simulate atmospheric PSFs from phase screens, exploiting the methods from Peterson et al. (2015). Footnote 5: bitbucket.org/phosim/phosim_release/wiki/Home Footnote 6: github.com/GalSim-developers/GalSim To conclude, we have seen that it is possible to develop a physical model of the atmosphere based on the optical understanding we have from section 2 and the studies of atmospheric turbulence of Kármán and Kolmogorov. However, this approach has two drawbacks. First, it requires physical measurements of the atmosphere at the telescope's site, which are not always available. Second, it is computationally expensive, as it involves an integration over altitude and time to handle the varying atmospheric properties and to reach the exposure time, respectively. In practice, long exposure times are required to obtain deeper observations, meaning observing the fainter objects that are important for weak-lensing studies. This fact simplifies the PSF modelling task, as the long temporal integration smooths the PSF profile and the PSF spatial variations over the FOV. Therefore, a data-driven approach to modelling the PSF can offer a feasible and effective solution in this scenario. ### Adaptive optics An alternative approach to working with ground-based observations affected by the atmosphere is to use adaptive optics (AO) systems (Beckers, 1993). This technology significantly improves the observation resolution of ground-based telescopes, which is severely limited by the atmosphere, as we have seen in subsection 4.5. An AO system tries to counteract the effect of the atmosphere on the incoming wavefront by changing the shape of a deformable mirror. The key components of an AO system are wavefront sensors (WFS), wavefront reconstruction techniques and deformable mirrors, which operate together inside a control loop. The WFS provide information about the incoming wavefront and usually incorporate a phase-sensitive device. The wavefront reconstruction has to compute a correction vector for the deformable mirrors by estimating the incoming wavefront from the WFS information. The control loop works in real time, sensing and modifying the deformable mirrors so that the wavefront received by the underlying instrument is free from the optical path differences introduced by the atmosphere. Davies and Kasper (2012) provide a detailed review of AO systems for astronomy. The LSST, which will carry out weak-lensing studies, contains an AO system (Neill et al., 2014; Thomas et al., 2016; Angeli et al., 2014). Exoplanet imaging studies have greatly benefited from AO systems and impose strict requirements on these systems. Some examples are the SPHERE instrument at the VLT (Beuzit et al., 2019) and the Gemini Planet Imager (Graham et al., 2007; Macintosh et al., 2014, 2018) at the Gemini South Telescope.
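The closed-loop operation described above can be illustrated with a toy leaky-integrator controller acting on a small number of wavefront modes. Everything in this sketch (the AR(1) turbulence model, the gains and the noise levels) is an illustrative assumption and not a description of any real AO system.

```python
import numpy as np

rng = np.random.default_rng(1)

n_modes = 20            # number of corrected wavefront modes (e.g. Zernike coefficients)
gain, leak = 0.4, 0.99  # integrator gain and leak factor (illustrative values)
command = np.zeros(n_modes)       # current deformable-mirror command
turbulence = np.zeros(n_modes)    # toy temporally correlated atmospheric wavefront

open_loop, closed_loop = [], []
for t in range(2000):
    # AR(1) toy model for slowly evolving atmospheric modes.
    turbulence = 0.99 * turbulence + 0.05 * rng.normal(size=n_modes)
    residual = turbulence - command                             # wavefront left after the mirror
    measurement = residual + 0.01 * rng.normal(size=n_modes)    # noisy wavefront-sensor reading
    command = leak * command + gain * measurement               # leaky-integrator control law
    open_loop.append(np.sum(turbulence**2))
    closed_loop.append(np.sum(residual**2))

print(f"mean open-loop WFE power:   {np.mean(open_loop):.3f}")
print(f"mean closed-loop WFE power: {np.mean(closed_loop):.3f}")  # much smaller than open loop
```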
## 5 Current PSF models Let us now discuss some of the best-known PSF models, which can be divided into two main families, _parametric_ and _non-parametric_, also known as _data-driven_. ### Parametric PSF models This family of PSF models is characterised by trying to build a physical model of the optical system that aims to be as close as possible to the telescope. Once the physical model is defined, a few parameters are estimated using star observations. Such estimation, also called calibration, is required as some events, like launch vibrations, ice contamination and thermal variations, introduce significant variations in the model. These events prevent a complete on-ground characterisation from yielding a successful model. Parametric models are capable of handling chromatic variations of the PSF as well as complex detector effects. Nevertheless, parametric models have only been developed for space missions and are custom-made for a specific telescope. A parametric model is compelling if the proposed PSF model matches the underlying PSF field. However, if there are mismatches between both models, significant errors can arise due to the rigidity of the parametric models. The difficulty of building a physical model for the atmosphere, already discussed in subsection 4.5, makes them impractical for ground-based telescopes. The parametric model Tiny-Tim7 (Krist, 1993; Krist and Burrows, 1995; Krist et al., 2011) has been used to model the PSF of the different instruments on board the HST. The Advanced Camera for Surveys (ACS) on HST was used to image the Cosmic Evolution Survey (COSMOS), which covers a \(2\) deg\({}^{2}\) field that was used to create a widely used space-based weak-lensing catalogue. The first WL shape catalogue used the Tiny-Tim model (Leauthaud et al., 2007). Rhodes et al. (2007) studied the stability of HST's PSF, noticing a temporal change of focus in the images. Besides the parametric Tiny-Tim model, Anderson and King (2000) developed the concept of the _effective PSF_, which is the continuous PSF arriving at the detectors, i.e., Equation 34, convolved with the pixel-response function of the detector. Anderson and King (2000) proposed an algorithm to model the effective PSF iteratively from observed stars. Anderson and King (2006) continued the work on the effective PSF, adding some improvements and detailing a model for the HST instruments ACS and Wide Field Camera (WFC). Hoffmann and Anderson (2017) then carried out a comparison between Tiny-Tim and the effective PSF approach for ACS/WFC. The study shows that the effective PSF approach consistently outperforms the Tiny-Tim PSFs, exposing the limitations of parametric modelling. Anderson (2016) describes the adoption of the effective PSF approach applied to Wide Field Camera 3 (WFC3/IR) observations, which are undersampled. The software Photutils8 (Bradley et al., 2022) provides an implementation of the effective PSF approach from Anderson and King (2000) with the enhancements from Anderson (2016). Schrabback et al. (2010) used a data-driven PSF model based on PCA when studying the COSMOS field. Footnote 7: github.com/spacetelescope/tinytim Footnote 8: github.com/astropy/photutils The early severe aberrations of HST's optical system were an important driver of phase retrieval algorithms. Several efforts to characterise HST were published in an _Applied Optics_ special issue (Breckinridge and Wood, 1993). Solving the phase retrieval problem provides a reliable approach to characterise the optical system accurately; a schematic Gerchberg-Saxton-type iteration is sketched below.
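This sketch assumes that the pupil amplitude (aperture and obscurations) is known and that a single in-focus intensity image is available; the function name and interface are ours, and the actual HST characterisations relied on more elaborate Fienup-type and phase-diversity algorithms.

```python
import numpy as np

def gerchberg_saxton(pupil_amplitude, focal_intensity, n_iter=200, seed=0):
    """Schematic Gerchberg-Saxton loop: recover a pupil-plane phase from the
    known pupil amplitude and a measured focal-plane intensity (single image)."""
    rng = np.random.default_rng(seed)
    focal_amplitude = np.sqrt(focal_intensity)
    phase = rng.uniform(-np.pi, np.pi, size=pupil_amplitude.shape)  # random initial phase
    for _ in range(n_iter):
        field = pupil_amplitude * np.exp(1j * phase)                 # enforce pupil amplitude
        focal = np.fft.fft2(field)
        focal = focal_amplitude * np.exp(1j * np.angle(focal))       # enforce measured modulus
        phase = np.angle(np.fft.ifft2(focal))                        # keep the retrieved phase
    return phase
```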
Fienup (1993) studied new phase retrieval algorithms and Fienup et al. (1993) present several results characterising HST's PSF. Regarding recently launched space telescopes, _Euclid_'s VIS parametric model constitutes the primary approach for _Euclid_'s PSF modelling. The model will soon be published and used to work with observations from _Euclid_. JWST has a Python-based simulation toolkit, webbpsf9 (Perrin et al., 2012, 2014). Recent works developed and compared data-driven PSF models for JWST's NIRCam (Nardiello et al., 2022; Zhuang and Shen, 2023). ### Data-driven (or non-parametric) PSF models The _data-driven_ PSF models, also known as _non-parametric_, rely only on the observed stars to build the model in pixel space. They are blind to most of the physics of the inverse problem. These models assume regularity in the spatial variation of the PSF field across the FOV and usually differ in how they exploit this regularity. Data-driven models can easily adapt to the current state of the optical system. However, they have difficulties modelling the complex PSF shapes occurring in diffraction-limited settings. One limitation shared by all the data-driven models is their sensitivity to the number of stars available to constrain their estimation. A low star number implies that there might not be enough stars to sample the spatial variation of the PSF. When the number of stars in a FOV falls below some threshold, the model built is usually considered unusable for WL purposes. This family of models has been widely used for modelling ground-based telescope PSFs. Nevertheless, they are not yet capable of successfully modelling the chromatic variations in addition to the spatial variations and the super-resolution. We proceed by describing several PSF models in chronological order. The first models, described in more detail, were used to process real data from different surveys, except for Resolved Component Analysis (Ngole et al., 2016). The latter models are worth mentioning but have yet to be used to produce a WL shape catalogue with all the validation and testing it implies.
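Before describing the individual models, the following toy sketch illustrates the recipe that most of them share: learn a small set of pixel features (here plain PCA eigenPSFs) from the observed stars and fit a smooth spatial law (here a simple bivariate polynomial) for the feature weights, which can then be evaluated at any position of interest. The sketch is deliberately simplified and its interface is ours: it assumes centred, flux-normalised stamps and ignores noise weighting, super-resolution, and outlier rejection, all of which the models below treat with care.

```python
import numpy as np

def fit_data_driven_psf(stars, positions, n_comp=4, deg=2):
    """Toy data-driven PSF model: PCA features ("eigenPSFs") for the pixel profiles
    plus a polynomial fit of the feature weights over the field of view.
    stars:     (n_star, p, p) star stamps, assumed centred and flux-normalised
    positions: (n_star, 2) field-of-view positions (u, v)"""
    n_star, p, _ = stars.shape
    flat = stars.reshape(n_star, -1)
    mean = flat.mean(axis=0)
    # PCA via SVD of the centred data: rows of vt are the eigenPSFs.
    u_mat, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    weights = u_mat[:, :n_comp] * s[:n_comp]       # per-star feature weights
    eigenpsfs = vt[:n_comp]

    def design(pos):
        # Bivariate polynomial design matrix in (u, v), as in PSFEx-like weight laws.
        u, v = pos[:, 0], pos[:, 1]
        cols = [u**i * v**j for i in range(deg + 1) for j in range(deg + 1 - i)]
        return np.stack(cols, axis=1)

    coeffs, *_ = np.linalg.lstsq(design(positions), weights, rcond=None)

    def predict(new_positions):
        w = design(new_positions) @ coeffs          # interpolated feature weights
        return (mean + w @ eigenpsfs).reshape(-1, p, p)

    return predict
```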
Recent WL studies no longer use this approach. The WL studies have evolved to more sophisticated galaxy shape measurement techniques that require a full pixel image of the PSF at the position of galaxies. #### 5.2.2 Principal component analysis (PCA) Principal component analysis is a widely-known method for multivariate data analysis and dimensionality reduction. Let us start with a set of star observations in \(\mathbb{R}^{p^{2}}\) that we concatenate in a matrix \(\bar{\mathbf{I}}=[\bar{I}_{1},\ldots,\bar{I}_{n}]\). We have flattened the 2D images into an array to simplify expressions. One would like to represent the observations with \(r\) components \(\{X_{i}\}_{i=1}^{r}\) in \(\mathbb{R}^{p^{2}}\), where \(r\geq n\), assuming that \(p^{2}>n\). The PCA method gives \(r\) orthonormal components in \(\mathbb{R}^{p^{2}}\) which define directions in the \(\mathbb{R}^{p^{2}}\) space where the variance of the dataset \(\bar{\mathbf{I}}\) is maximized. If \(n\) components are used to represent the observations, then the learned components in the PCA procedure represent a basis of the subspace spanned by the observations or the columns of \(\bar{\mathbf{I}}\). The method can be interpreted as a linear transformation to a new representation with orthogonal components. As it is usual to observe regularity in the spatial variations of the PSF, most of the dataset variability can be described with a few components. Then, one can only use the first \(r\) principal components and achieve a dimensionality reduction of the observations. The dimensionality reduction technique allows denoising the model as the observational noise cannot be represented with \(r\) components and only the PSF trends are well described. The PCA method was used to model the PSF for the SDSS (Lupton et al., 2001), although it was referenced as the Karhunen-Loeve transform. Jarvis and Jain (2004) then proposed its use in a WL context. Jee et al. (2007) used PCA to model the spatial and temporal variations of the HST PSF. Jee and Tyson (2011) also used PCA to model the PSF in LSST simulations. HST's COSMOS catalogue (Schrabback, T. et al., 2010) used PCA to model the PSF. PCA showcased the utility and robustness of data-driven methods and the importance of using a pixel representation of the PSF and is the precursor of several of the following models. #### 5.2.3 PSFEx PSFEx10(Bertin, 2011) has been widely used in astronomy for weak-lensing surveys, for example, DES year \(1\)(Zuntz et al., 2018), HSC (Mandelbaum et al., 2017), and CFIS (Guinot et al., 2022). It was designed to work together with the SExtractor(Bertin, E. and Arnouts, S., 1996) software which builds catalogues from astronomical images and measures several properties of the observed stars. PSFEx models the variability of the PSF in the FOV as a function of these measured properties. It builds independent models for each CCD in the focal plane and works with polychromatic observations. It cannot model the chromatic variations of the PSF field. The model is based on a matrix factorisation scheme, where one matrix represents PSF features and the other matrix the feature weights. Each observed PSF is represented as a linear combination of PSF features. The feature weights are defined as a polynomial law of the selected measured properties. This choice allows having an easy interpolation framework for target positions. In practice, the properties that are generally used are both components of the PSF's FOV position. 
The PSF features are shared by all the observed PSFs and are learned in an optimisation problem. The PSF reconstruction at a FOV position \((u_{i},v_{i})\) can be written as Footnote 10: github.com/astromatic/psfex \[\bar{I}^{\text{PSFEx}}_{\text{star}}(\bar{u},\bar{v}|u_{i},v_{i})=F^{\text{ PSFEx}}\underbrace{\left\{\sum_{\begin{subarray}{c}p,q\geq 0\\ p+q\leq d\end{subarray}}u_{i}^{p}\,v_{i}^{q}\,S_{p,q}(\bar{u},\bar{v})\,+\,S_{ 0}(\bar{u},\bar{v})\right\}}_{\bar{H}^{\text{PSFEx}}(\bar{u},\bar{v}|u_{i},v_{ i})}, \tag{43}\] where \(\bar{I}^{\text{PSFEx}}_{\text{star},(\cdot|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\) is the PSFEx reconstruction of the observed star \(\bar{I}_{(\cdot|u_{i},v_{i})}\), \(S_{p,q}\in\mathbb{R}^{P\times P}\) represents the learned PSF features or _eigenPSFs_, \(S_{0}\in\mathbb{R}^{P\times P}\) represents a first guess of the PSF, the polynomial law is defined to be of degree \(d\), and \(F^{\text{PSFEx}}\) represents the degradations required to match the model with the observations. The model's PSF reconstruction is represented by \(\bar{H}^{\text{PSFEx}}\). The first guess can be computed as a function of the median of all the observations. The dimensions \(p\) and \(P\) will be the same if no downsampling operation is included in \(F^{\text{PSFEx}}\). The PSF features are learned in an optimisation problem that aims to minimise the reconstruction error between the PSFEx model and the observations, which reads \[\min_{\begin{subarray}{c}S_{p,q}\\ \forall p,q\geq 0\,,\,p+q\leq d\end{subarray}}\left\{\sum_{i=1}^{n_{\text{obs}}} \left\|\frac{\bar{I}_{(\cdot|u_{i},v_{i})}-\bar{I}^{\text{PSFEx}}_{\text{star},(\cdot|u_{i},v_{i})}}{\hat{\sigma}_{i}}\right\|_{F}^{2}+\sum_{\begin{subarray} {c}p,q\geq 0\\ p+q\leq d\end{subarray}}\|T_{p,q}\,S_{p,q}\|_{F}^{2}\right\}\,, \tag{44}\] where \(\hat{\sigma}_{i}^{2}\) represent the estimated per-pixel variances, \(\bar{I}\) represents the noisy observations, and \(\|\,\cdot\,\|_{F}\) the Frobenius norm of a matrix. The second term in Equation 44 corresponds to a Tikhonov regularisation, where \(T_{p,q}\) represents some regularisation weights to favour smoother PSF models. The PSF recovery at target positions is straightforward. One needs to introduce new positions in the Equation 43 after learning the PSF features \(S_{p,q}\). The recovery at a new FOV position \((u_{j},v_{j})\) simply writes \[\bar{H}^{\text{PSFEx}}(\bar{u},\bar{v}|u_{j},v_{j})=\sum_{ \begin{subarray}{c}p,q\geq 0\\ p+q\leq d\end{subarray}}u_{j}^{p}\,v_{j}^{q}\,S_{p,q}(\bar{u},\bar{v})\,+\,S_{ 0}(\bar{u},\bar{v})\,, \tag{45}\] where \(\bar{H}^{\text{PSFEx}}\) is the model's PSF reconstruction, and \(S_{0}\) and \(S_{p,q}\) were learned during the training procedure. #### 5.2.4 Resolved component analysis (RCA) RCA11(Ngole et al., 2016) is a state-of-the-art data-driven method designed for the space-based _Euclid_ mission (Schmitz et al., 2020). The model builds an independent model for each CCD, can handle super-resolution, and, like PSFEx, is based on a matrix factorisation scheme. However, there are three fundamental changes with respect to PSFEx. The first difference is that, in RCA, the feature weights are defined as a further matrix factorisation, and are also learned from the data. The feature weights are constrained to be part of a dictionary12 built with different spatial variations based on the harmonics of a fully connected undirected weighted graph. 
The graph is built using the star positions as the nodes and a function of the inverse distance between the stars to define the edge weights. The rationale of the graph-harmonics dictionary is to capture localised spatial variations of the PSF that occur in space-based missions exploiting the irregular structure of the star positions. The RCA reconstruction of an observed star is then Footnote 11: github.com/CosmoStat/rca Footnote 12: In the signal processing community, a dictionary is a collection of templates, or basic elements, used to decompose a signal. \[\bar{I}^{\text{RCA}}_{\text{star}}(\bar{u},\bar{v}|u_{i},v_{i})= F^{\text{RCA}}\left\{\bar{H}^{\text{RCA}}(\bar{u},\bar{v}|u_{i},v_{i}) \right\},\text{ where} \tag{46}\] \[\bar{H}^{\text{RCA}}(\bar{u},\bar{v}|u_{i},v_{i})=\sum_{r=1}^{N_ {\text{comp}}}S_{r}(\bar{u},\bar{v})\,A[r,i]=\sum_{r=1}^{N_{\text{comp}}}S_{r} (\bar{u},\bar{v})\,(\alpha V^{\top})[r,i]\,,\] where \(\bar{I}^{\text{RCA}}_{\text{star},(\cdot|u_{i},v_{i})}\in\mathbb{R}^{p\times p}\), \(F^{\text{RCA}}\) corresponds to the degradation model of RCA, including downsampling, intra-pixel shifts among others, \(S_{r}\in\mathbb{R}^{D\times D\,p}\) corresponds to the data-driven feature, i.e., eigenPSF, \(r\) from a total of \(N_{\text{comp}}\) features, \(D\) is the upsampling factor in case a super-resolution step is required, and \(A[r,i]\) is the \(r\)-th feature weight of the \(i\)-th star. The feature weight matrix \(A\) is decomposed into a sparse matrix \(\alpha\) and a dictionary matrix \(V^{\top}\) of graph-based spatial variations; see Ngole et al. (2016) for more information. To regularise the inverse problem, RCA enforces a low-rank solution by fixing \(N_{\text{comp}}\), a positivity constraint on the modelled PSF, a denoising strategy based on a sparsity constraint in the starlet (Starck et al., 2015) domain, which is a wavelet representation basis, and a constraint to learn the useful spatial variations from the graph-harmonics-based dictionary. The optimisation problem that the RCA method targets is \[\min_{S_{k},\alpha_{k}} \left\{\frac{1}{2}\sum_{i=1}^{n_{\text{obs}}}\left\|\bar{I}_{( \cdot|u_{i},v_{i})}-F^{\text{RCA}}\left\{\bar{H}_{(\cdot|u_{i},v_{i})}^{\text{ RCA}}\right\}\right\|_{F}^{2}+\sum_{r=1}^{N_{\text{comp}}}\|w_{r} \odot\Phi S_{r}\|_{1}+\right. \tag{47}\] \[+\left.\iota_{+}\left(\bar{H}_{(\cdot|u_{i},v_{i})}^{\text{RCA}} \right)+\iota_{\Omega}(\alpha)\right\}\quad\text{s.t.}\quad\left.\bar{H}_{i} ^{\text{RCA}}=\sum_{r=1}^{N_{\text{comp}}}S_{r}\left(\alpha V^{\top}\right)[ r,i],\] where \(w_{r}\) are weights, \(\Phi\) represents a transformation allowing the eigenPSFs to have a sparse representation, e.g., a wavelet transformation, \(\odot\) denotes the Hadamard product, \(\iota_{+}\) is the indicator function of the positive orthant, and \(\iota_{\Omega}\) is the indicator function over a set \(\Omega\), which is defined as a set of sparse vectors and is used to enforce sparsity on \(\alpha\). The second difference with respect to PSFEx corresponds to the regularisations used in the objective function from Equation 47, which ends up as being non-convex due to the matrix factorisation and non-smooth due to the \(\|\cdot\|_{1}\) constraint. The optimisation is solved through a block coordinate descent, as it is a multi-convex problem, and proximal optimisation algorithms that tackle the non-smooth subproblems (Beck and Teboulle, 2009; Condat, 2013). The third difference is handling the PSF recovery at a new position \((u_{j},v_{j})\). 
The recovery is carried out by a radial basis function (RBF) interpolation of the learned columns of the \(A\) matrix, issuing a vector, \(\hat{\mathbf{a}}_{j}\in\mathbb{R}^{N_{\text{comp}}}\); see Schmitz et al. (2020) for more details. This way, the spatial constraints encoded in the \(A\) matrix are preserved when estimating the PSF at galaxy positions. The interpolated feature weights \(\hat{\mathbf{a}}_{j}\) can be introduced in the Equation 46 formulation to generate the PSF at the new \(j\) position. The RCA model has yet to be used to generate a WL shape catalogue from real observations. Liaudat et al. (2021) showed that RCA is not robust enough to handle real ground-based observations from CFIS, as some CCDs exhibited significant errors in the PSF shape. #### 5.2.5 The Multi-CCD PSF model (MCCD) MCCD13 (Liaudat et al., 2021) is a state-of-the-art data-driven method originally designed for the ground-based CFIS from CFHT. MCCD can model the full focal plane at once by incorporating the CCD mosaic geometry into the PSF model. The rationale behind MCCD lies in the limitations of PSF models that build independent models for every CCD, e.g., RCA and PSFEx: (i) they are fundamentally limited in the achievable model complexity due to the lack of constraining power of a reduced number of star observations, and (ii) they have difficulty modelling PSF spatial variations that span the entire focal plane, i.e., several CCDs, from independently modelled CCDs. MCCD overcomes these issues by building a PSF model containing two types of variations, global and CCD-specific. Both variations are modelled by a matrix factorisation approach, building on the success of PSFEx and RCA. The global features are shared between all CCDs, and the local CCD-specific features aim to provide corrections to the global features. The feature weights are defined as a combination of the polynomial variations from PSFEx and the graph-based variations from RCA. The model's optimisation is more challenging than for the previous models and is based on a novel procedure built on iterative schemes involving proximal algorithms (Parikh and Boyd, 2014). The MCCD model has proven robust enough to handle real observations from CFIS (Liaudat et al., 2021), giving state-of-the-art results. MCCD has been incorporated into the recent ShapePipe shape measurement pipeline (Farrens et al., 2022), originally designed to process the CFIS survey and generate a WL shape catalogue. The first version of the ShapePipe shape catalogue (Guinot et al., 2022), spanning \(1700\) deg\({}^{2}\), used PSFEx. However, the next version, spanning \(\sim 3500\) deg\({}^{2}\), used the MCCD PSF model and will be released soon. #### 5.2.6 lensfit _lensfit_ (Miller et al., 2007; Kitching et al., 2008; Miller et al., 2013) refers to a Bayesian galaxy shape measurement method for WL surveys. It also includes a data-driven PSF model, which will also be referred to as _lens_fit and is sparsely described throughout the different publications involving the _lens_fit shape measurement (Miller et al., 2013; Kuijken et al., 2015; Giblin et al., 2021). This method has been used with real data to produce the WL shape catalogues of CFHTLenS (Erben et al., 2013; Miller et al., 2013), KiDS+VIKING-450 (Wright et al., 2019), KiDS-450 (Hildebrandt et al., 2016; Fenech Conti et al., 2017), KiDS-1000 (Giblin et al., 2021), and VOICE (Fu et al., 2018). However, the code is not publicly available.
This PSF model differs from the previous ones. PSFEx and RCA learn some features or _eigenPSFs_ that all the PSFs share. The _lens_fit model is fitted on a pixel-by-pixel basis. Each pixel is represented as a polynomial model of degree \(d\) of the FOV positions. The _lens_fit model can use all the observations in one exposure, meaning that it uses several CCDs at once. The model uses the low-order polynomials, up to degree \(n_{\text{c}}<d\), to be fitted independently for each CCD and the rest of the monomials are fitted using the observations from all the CCDs. This multi-CCD modelling is a significant change with respect to previous methods that built independent models for each CCD. The total number of coefficients _per pixel_ is \[N_{\text{coeff}}=\frac{1}{2}\left((d+1)(d+2)+(N_{\text{CCD}}-1)(n_{\text{c}}+1) (n_{\text{c}}+2)\right)\,, \tag{48}\] where \(N_{\text{CCD}}\) is the total number of CCDs in the camera, \(d\) represents the degree of the polynomial varying in the full FOV, and \(n_{c}\) the polynomial that is CCD-dependent. We can write the description of the pixel \((\bar{u},\bar{v})\) of the PSF model for a FOV position \((u_{j},v_{j})\) in CCD \(k\) as follows \[\bar{H}^{\text{lensfit}}(\bar{u},\bar{v}|u_{j},v_{j})=\sum_{ \begin{subarray}{c}p,q\geq 0\\ p+q\leq n_{\text{c}}\end{subarray}}u_{j}^{p}\,v_{j}^{q}\,a_{(p,q),(\bar{u}, \bar{v})}^{k}+\sum_{\begin{subarray}{c}p+q>n_{\text{c}}\\ p+q\leq d\end{subarray}}u_{j}^{p}\,v_{j}^{q}\,b_{(p,q),(\bar{u},\bar{v})}\,, \tag{49}\] where \(a_{(p,q),(\bar{u},\bar{v})}^{k}\) is the coefficient specific of CCD \(k\), pixel \((\bar{u},\bar{v})\), and polynomial \((p,q)\) to be fitted to the observations. The coefficient \(b_{(p,q),(\bar{u},\bar{v})}\) is shared by all the CCDs. One thing to notice in this approach is that as the modelling of the PSF is done pixel-by-pixel, then every observation should share the same pixel grid of the PSF. There is no guarantee that an observation will have its centroid aligned with the chosen pixel grid. Therefore, the PSF model has to be aligned with the observations. Other methods, like PSFEx and RCA, interpolate the model to the observation's centroids. However, _lens_fit interpolates all the observations to the model's pixel grid with a sinc function interpolation which implies interpolating noisy images. This procedure is described in Kuijken et al. (2015). For the KiDS DR2 (Kuijken et al., 2015), the hyperparameters used by _lens_fit are \(n_{c}=1\), \(d=3\), and \(N_{\text{CCD}}=32\) (from CFHT's OmegaCAM instrument), where the images used belong to a \(32\times 32\) pixel grid. When fitting the model's parameters, each star is given a weight that is a function of its SNR with the following empirical formula \[w_{i}=\frac{s_{i}^{2}}{s_{i}^{2}+50^{2}}\,, \tag{50}\] where \(s_{i}\) is the measured SNR of the star \(i\). #### 5.2.7 PSFs In the Full Field-of-View (Plff) PIFF14(Jarvis et al., 2020) is a recently developed PSF model that was used for the DES year \(3\) WL shape catalogue (Gatti et al., 2021) replacing PSFEx that was used for the DES year \(1\) release. The PIFF model targets the LSST survey. Some improvements of PIFF with respect to PSFEx are the ability to use the full focal plane to build the model and modelling the PSF in sky coordinates rather than pixel coordinates. PIFF offers a modular and user-friendly design that will enable further improvements. 
The change of modelling coordinates was motivated by the strong tree ring detector effect observed in the DES instrument, Dark Energy Camera, which introduces astrometric distortions that are easier to correct in sky coordinates. Pixel coordinates refer to the coordinates defined on the pixel grid of the instrument. In contrast, sky coordinates refer to the angles in the celestial sphere, known as right ascension (RA) and declination (DEC). The geometric transformations that allow going back and forth between the pixel and sky coordinates are known as WCS. Footnote 14: github.com/mjarvis/Piff Being a modular PSF modelling code, PIFF allows choosing between different options for the PSF model and the interpolation method. For example, the model can be an analytical profile like a Gaussian, a Moffat or a Kolmogorov profile, or a more general non-parametric profile called PixelGrid. The interpolation method can be a simple polynomial interpolation, K-nearest neighbours method, a Gaussian process (also known as Kriging), or a _Basis-function polynomial interpolation_. Let us clarify the difference between the first and last mentioned interpolations. The simple polynomial interpolation first fits the PSF model's parameters \(\mathbf{p}\) for each observed star. Then, it fits the coefficients of a polynomial of the 2D star positions that will later be used to interpolate. In the _Basis-function polynomial interpolation_, the position polynomial's interpolation coefficients are fitted simultaneously with the model's parameters using all the available stars (from a single CCD or the entire exposure). If this last option is used with the PixelGrid model, it comes closer to the approaches of PSFEx and RCA without the specific characteristics of each model. We have only mentioned position polynomials, but, as in PSFEx, the interpolation polynomial can be built on any parameter of the PSF, as, for example, a colour parameter. The current PIFF PSF model includes an outlier check after the algorithm has converged. The outlier check is based on the chi-squared, \(\chi^{2}\), pixel residual error between the model and the observations. The model implements an iterative refining approach which means that after the model has converged, one (or more) outlier star(s) is(are) removed from the observations. A new iteration then starts with the model being recomputed. Although this approach effectively removes outlier stars not representative of the PSF (because they are binary stars or have some contamination), it can be very computationally demanding. The computing time increases linearly with the number of iterations used, which might be problematic depending on the total area to analyse or the available computing resources. We refer the reader to Jarvis et al. (2020) for more details. The DES year \(3\) shape catalogue (Gatti et al., 2021) used the PIFF model. The model was a PixelGrid with Basis-function polynomial interpolation using a \(3\)rd order polynomial. Even if PIFF has the _potential_ to build a model across several CCD chips, in practice, each model was built independently for each CCD. #### 5.2.8 WaveDiff and differentiable optics approaches The WaveDiff15(Liaudat et al., 2023, 2021b) PSF model was recently developed targeting space telescopes, in particular the _Euclid_ mission (Laureijs et al., 2011). WaveDiff proposes a paradigm shift for _data-driven_ PSF modelling. 
Instead of building a data-driven model of the PSF in the pixel space as the previous models, WaveDiff builds its model in the wavefront error (WFE) space. This change relies on a differentiable optical forward model that allows propagating the wavefront from the pupil plane to the focal plane and then computing the pixel PSF. The model is based on two components, a parametric WFE and a data-driven WFE. The parametric WFE can be based on optical simulations, characterisations of the optical system, or complementary measurements such as phase diversity observations. The parametric part should aim to model very complex dependencies that cannot be inferred from the degraded star observations. Liaudat et al. (2023) proposes a parametric model built using fixed features, namely the Zernike polynomials (Noll, 1976). For several reasons, the parametric WFE model obtained often cannot accurately represent the observed PSF, e.g., the telescope changes over time, there are errors in the parametric model built, and relevant effects were not considered or neglected. Consequently, the data-driven WFE should be able to correct the aforementioned mismatches. This data-driven part is based on a matrix factorisation with spatial variations inspired from PSFEx and RCA. It is crucial to model smooth variations that have a reliable generalisation of the PSF to different positions in the FOV. An overview of the model is presented in Figure 13. Figure 13: Overview of the WaveDiff approach to model the PSF with a differentiable optical forward model. Figure adapted from Liaudat et al. (2023). Estimating the model's parameters in the WFE space is a challenging, ill-posed inverse problem known as phase retrieval (Fienup et al., 1993; Fienup, 1993; Shechtman et al., 2015), i.e., estimating a complex signal from intensity-only observations. The optimisation problem is non-convex and non-smooth, the star observations are not very informative, and there is no guarantee that the WFE model's structure can represent the underlying ground truth WFE. Liaudat et al. (2023) shows that under the aforementioned conditions, targeting the estimation of the ground truth WFE is not the best option. The PSF model's objective is to have a good pixel representation of the PSF. It is, therefore, possible to estimate a WFE manifold far away from the underlying WFE but very close in the pixel space. The data-driven features, the basis of the WFE manifold, are estimated with a stochastic gradient descent method widely used for estimating neural network parameters. The WaveDiff model can handle spatial variations, super-resolve the PSF, and model chromatic variations thanks to the WFE formulation and the optical forward model, which also considers more general degradations as in Equation 38. To the best of our knowledge and at the time of writing, this is the _only_ data-driven PSF model able to cope with the spectral variations of the PSF. WaveDiff shows a breakthrough in performance for data-driven PSF models in a simplified _Euclid_-like setting (Liaudat et al., 2023). WaveDiff is flexible enough to be adapted to different space telescopes. The framework proposed shows an exciting research direction for future data-driven PSF modelling. The WaveDiff model has yet to be tested with real space-telescope observations and needs to incorporate detector-level effects, which are more naturally modelled in pixel space. We refer the reader to Liaudat et al. (2023) and Liaudat (2022) for more details. 
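To make the optical forward model at the heart of such approaches concrete, the following minimal monochromatic sketch maps a pupil-plane wavefront error map to a pixel PSF through a Fraunhofer (Fourier) propagation. The function name, interface and oversampling strategy are ours; in WaveDiff, this kind of forward model is implemented within an automatic-differentiation framework so that gradients with respect to the WFE parameters are available, and a polychromatic PSF is obtained by summing monochromatic PSFs weighted by the object's SED over the instrument passband.

```python
import numpy as np

def wfe_to_psf(wfe, pupil, wavelength, oversample=2):
    """Monochromatic optical forward model: propagate a pupil-plane wavefront
    error map to a pixel PSF.
    wfe:        (n, n) wavefront error map, same length unit as wavelength
    pupil:      (n, n) binary aperture mask (obscurations included)
    wavelength: wavelength in the same unit as wfe"""
    n = pupil.shape[0]
    field = pupil * np.exp(2j * np.pi * wfe / wavelength)    # complex pupil function
    # Zero-pad to sample the focal plane more finely before the Fourier transform.
    padded = np.zeros((oversample * n, oversample * n), dtype=complex)
    padded[:n, :n] = field
    psf = np.abs(np.fft.fftshift(np.fft.fft2(padded))) ** 2  # Fraunhofer propagation
    return psf / psf.sum()                                   # unit-flux PSF
```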
More general approaches based on differentiable optics have been recently emerging, e.g., \(\partial\)Lux16 Desdoigts et al. (2023). Approaches based on automatic differentiation can be useful not only to model the PSF, but have been used to design apodizing phase-plates coronagraphs Wong et al. (2021) and detector calibration Desdoigts et al. (2023). Footnote 16: [https://github.com/LouisDesdoigts/dLux](https://github.com/LouisDesdoigts/dLux) #### 5.2.9 Other PSF models * _Shapelets_: Refregier (2003) proposed a framework to analyse images based on a series of localised basis functions of different shapes named _shapelets_. Images can then be decomposed using these basis functions. Refregier and Bacon (2003) continued the work proposing the _shapelet_ framework to be used for building shear estimates and modelling the PSF. The PSF modelling consists of decomposing the star images in the _shapelet_ basis and then performing an interpolation of the coefficients to positions of interest. Essentially, it is an extension of the approach seen in shape interpolation. Expressing the image in _shapelet_ coefficients allows denoising the star images and provides an easier framework for the galaxy-PSF deconvolution. However, capturing all the PSF structures in a finite expansion over analytical functions is not always possible, leading to lost information. Massey and Refregier (2005) extended the framework from Cartesian to polar _shapelets_. * _Moffatlets and Gaussianlets_: Li et al. (2016) proposed two other series of basis functions to decompose the PSF named _Moffatlets_ and _Gaussianlets_. Li et al. compared the PSF reconstruction using the aforementioned basis with a PCA-based method on LSST-like images. Using analytical basis functions leads to more denoised models, as expected. Nie et al. (2021) continued the approach and forced the principal components being learned in the PCA-like algorithm to be built using the _Moffatlets_ basis. This choice avoids the principal components of learning noise as the _Moffatlets_ basis avoids it. Both analyses are missing a comparison with a reference PSF model like PSFEx to have a reference performance. In addition, the performance comparison is made at the same position as the observed stars, so the model's generalisation to other positions, a fundamental task of the PSF model, is not studied. * _Fourier-based methods_: Zhang (2007) proposed a Fourier-based method for directly measuring the cosmic shear taking into account the PSF, which was further developed in several publications (Zhang, 2011; Zhang et al., 2015; Lu et al., 2017; Zhang et al., 2019). The method is based on the quadrupole moments of the galaxy images (described in detail in subsection 7.2) but is measured in Fourier space. The handling of the PSF is also done in Fourier space. Lu et al. (2017) explores different interpolation approaches for the PSF in the aforementioned Fourier framework. The 2D power spectrum of the observed PSFs is interpolated to target positions. The interpolation is done pixel-by-pixel, and the best-performing method is a well-parametrised polynomial interpolation. An advantage of the Fourier interpolation is that the 2D power spectrum is automatically centred in Fourier space, simplifying the handling of images with intra-pixel shifts or, what is the same, different centroids. Another Fourier-based shear measurement method is Bayesian Fourier Domain (BFD) (Bernstein and Armstrong, 2014; Bernstein et al., 2016) built on the Bayesian formalism. 
However, it does not include a specific PSF model. * _Optimal transport (OT)-based methods_: There exist two approaches involving OT (Peyre and Cuturi, 2019) to tackle the PSF modelling problem. Ngole and Starck (2017) proposes to use Wasserstein barycenters as a non-linear geometric-aware interpolation procedure of a low-dimensional embedding representation of the PSFs. Although elegant, the performance of the approach does not seem to justify its computational burden. In the comparison method, an RBF interpolation of the principal components obtained through PCA achieves a similar performance. The performance of the PCA method is better in terms of ellipticity but slightly worse in terms of pixel error and PSF FWHM. Schmitz (2019) worked on a data-driven PSF model based on RCA that would be able to model the chromatic variations of the PSF through the use of Wasserstein barycenters that were previously developed in Schmitz et al. (2018). The OT-based PSF model coined \(\lambda\)RCA was compared to RCA. The comparison showed a lower pixel and size error for \(\lambda\)RCA, although the ellipticity error was similar or better for RCA. This method assumes that the PSF's chromatic variation is smooth over all the passband. This assumption holds if the only chromatic effect of the PSF is due to the diffraction phenomena, which exhibits a smooth variation with the \(1/\lambda\) dependence in the WFE that was already presented in Equation 34. However, if this is not the case and another non-smooth chromatic variation is present, currently occurring in _Euclid_(Venancio et al., 2016; Baron et al., 2022), there is no straightforward way to adapt the \(\lambda\)RCA model to account for it. * _Wavefront approach_: Soulez et al. (2016) proposed to model the propagation of light through the mirrors of the optical system. The PSF modelling problem is recast into a phase retrieval problem. The article is a proof-of-concept as there are only qualitative results, and many PSF-modelling difficulties remain unaddressed. * _Exploit out-of-focus images_: Some instruments, like the Dark Energy Camera (DECam) (Flaugher et al., 2015), are equipped with wavefront sensors that are helpful for focus, alignment and adaptive optic system (AOS) (Roddier and Roddier, 1993; Roodman, 2010, 2012). The LSST camera (LSST Science Collaboration et al., 2009) will also be equipped with wavefront sensors (Manuel et al., 2010; Xin et al., 2015, 2016; Claver et al., 2012). Roodman et al. (2014) proposed to use the data from the wavefront sensors to constrain the optical contribution of the PSF. The work was continued by Davis et al. (2016) that proposed a wavefront-based PSF model for the DECam instrument using out-of-focus observations. The PSF model is based on Zernike polynomials fitted to out-of-focus stars, also called doughnuts, that contain considerably more wavefront information than in-focus stars. Then, a new fit is done for each exposure based on the measured quadrupole moments of the in-focus star images. It is not easy to understand at which point the quadrupole moments constrain the proposed PSF model and at which point it is the base physical wavefront measured from the out-of-focus images that are the only part driving the performance of the model. Snyder et al. (2016) proposed using the AOS measurements to characterise the atmospheric turbulence in terms of a Zernike decomposition. * _Deep learning approaches_: A model coined PSF-NET was proposed by Jia et al. 
(2020) and is based on two convolutional neural networks (CNNs) trained jointly. One network has to transform high-resolution images into low-resolution images, while the other has to do the opposite. The CNNs are trained in a supervised way expecting that the first network will learn a PSF manifold. However, it is unclear how the approach handles the spatial variation of the PSF, and it has not been tested for WL purposes. Jia et al. (2020) proposed another approach for PSF modelling based on denoising auto-encoders, but the spatial variation of the PSF remains untackled. Another approach is followed by Herbel et al. (2018), where the PSF profile is modelled by a parametric function consisting of a base profile of two Moffat profiles and several parametrised distortions to increase the expressivity. A CNN is trained in a supervised manner to predict the parameters of the parametric profile from a noisy star observation. The neural network provides a good estimation of the parameters, but the spatial variation of the PSF is, again, not addressed. Having a PSF model that can model the observations is important. However, in PSF modelling for WL analysis, a crucial part is to capture the spatial variations of the PSF and that the model outputs the PSF at different positions in the FOV. ## 6 Desirable Properties of PSF Models In the previous section, we reviewed some of the most relevant PSF models developed so far. After studying many models, we can conclude on desirable PSF model properties. The PSF model should: 1. _Have an accurate modelling of the PSF light profile._ This modelling is essential for any target task, as the light profile is the convolutional kernel for a given position. The smoothness and structure in the PSF profile are a consequence of the PSF being the Fourier transform, in Fraunhofer's approximation, of a particular finite-length aperture that limits the frequency content of the PSF. This frequency limitation prevents us from having a Dirac distribution as a PSF, as it would require an infinite frequency content to build it. One difficulty is accurately modelling the PSF's wings, or outer region, which is often below the noise level. In ground-based telescopes, the effect of the atmosphere can be interpreted as a low-pass filter for the PSF, smoothing the PSF light profile. 2. _Produce noiseless estimations of the PSF._ The presence of noise in the PSF estimations is an issue for purely data-driven models, which sometimes tend to overfit noisy observations. Some regularisations have to be introduced in the PSF model parameter optimisation to avoid producing noisy PSFs. A seemingly straightforward solution to this problem is to rely on PSF models based on fixed basis functions like Shapelets or Moffatlets. These models will always output denoised PSFs as their basis functions cannot reproduce the noise. However, they will introduce modelling errors if they cannot accurately model the observed PSF light profiles. 3. _Capture the PSF field's variations._ It is often the case that a good quality PSF is required at the position of a certain object where no direct information of the PSF is available. The PSF model first needs to capture most of the relevant information from the observations at other positions and wavelengths. Then, the model exploits this information and predicts the PSF at the required position and wavelength. The PSF model relies upon its generalisation power as it requires to exploit the PSF field information from other positions. 4. 
_Be able to exploit the structure of the PSF field variations._ This desired property is related to the previous point (c). An exciting approach to obtain a good generalisation power is to learn the structure of the PSF field variations. This structure is a consequence of the physical properties of the telescope's optical system. The subsection below provides a physical understanding of the PSF field structure, which imposes a certain smoothness to the variations. The spatial variations are also structured due to the atmosphere if we study the PSF field of a ground-based telescope. A data-driven PSF model should use a low-complexity representation of the PSF field, which would be able to learn its structure and spatial variations. 5. _Handle discontinuities of the PSF field._ The CCDs misalignments are a source of discontinuity of the PSF's spatial variations. The PSF field is piece-wise continuous, and the borders delimiting the discontinuity are well known as the geometry of the focal plane is accurately known. A straightforward way to handle the discontinuities is to model the PSF for each CCD independently, e.g., PSFEx. Although this is simple to implement, it limits the number of stars available to constrain the PSF model. Another more difficult but potentially more powerful approach is to build a PSF model for the entire focal plane, accounting for the focal plane discontinuities, e.g., MCCD. Another source of discontinuity is segmented mirrors, e.g. JWST's hexagonal mirrors seen in Figure 8(b). 6. _Be robust to outliers and contaminations of the star sample._ Contaminations can come from the selection of stars, the fact that the objects classified as stars are good representations of the PSF and are not small galaxies or binary stars (Kuntzer and Courbin, 2017). Outliers can come from imperfect image preprocessing where detector effects, like CTI, have not been adequately removed. In addition, the model should be robust to different observation conditions such as spatial distributions of the observed stars, SNRs, and the number of observed stars. 7. _Work appropriately with the target task._ The PSF model can be exploited differently according to the target task's objective, e.g., estimating a deconvolved object or estimating some summary statistic of the deconvolved image. The model should be developed with the task in mind, as each task might be more or less susceptible to different kinds of errors. 8. _Be fast._ Upcoming surveys will process a vast amount of observations. Consequently, they put significant pressure on the computing time of PSF models as they need to cope with the data intake. The requirements in terms of computing time can drive many design choices in a PSF model, preventing the use of costly physical simulations. Once the PSF model has been developed with all the aforementioned properties in mind, it is essential to validate the model's performance. The validation should ensure that the expected performance of the model is achieved and help to identify sources of problems and provide directions for the improvement of the model. In the next section, we give an overview of validation methods for PSF models. ## 7 Validation of PSF Models The validation of PSF models is a challenging problem. To derive a validation method, it would be necessary to quantify the impact of PSF modelling errors on the final objective of our analysis. 
We consider, as an example, a weak-lensing-based cosmological analysis, where the objective is to derive constraints on the parameter of the cosmological model under analysis. This exercise is challenging, given the analysis' complexity and the large data volume. Nevertheless, with some simplifying assumptions, it was carried out to set the PSF modelling requirements for the _Euclid_ mission as shown in subsection 7.3. In this analysis, some assumptions on the PSF shape used do not always hold for the high complexity of the PSF in a space-based mission like _Euclid_. Even though it is essential to derive requirements for the PSF model, these do not give much information on the nature of the errors and possible problems the PSF model has. Therefore, it is necessary to derive different diagnoses or null tests. Jarvis et al. (2016) proposed a set of null tests for the DES WL shear catalogues science verification, including the PSF model validation. The most basic rule for any validation of PSF models is to separate the observations in the FOV into two datasets for estimation and testing, i.e., validation, which could be \(80\%\) and \(20\%\), respectively. The first one should be used to estimate the PSF model. The second one should help validate its performance and not be used in the PSF model estimation. This rule tests the PSF model's generalisation power to unseen positions in the FOV. Next, we will describe the most used PSF diagnosis that will help us to validate the performance of the PSF models. ### Pixel-based metrics The most straightforward diagnostic we can think of is to compute the pixel residual of our PSF model. Once trained, the model is used to recover the PSF at test positions. We can then compute the RMSE of the pixel reconstruction residuals. The PSF model can ideally predict the observed test stars without error, and the reconstruction residual would only contain the observational noise. If we work with simulations, we can produce noiseless stars for our testing set, and the RMSE will directly indicate the pixel reconstruction error. Even though pixel-based metrics can give insight into the PSF model performance, they are not easily interpretable regarding scientific impact. Errors in the PSF core or the PSF wings can impact the observed galaxy's estimated shape differently. With the existing methodology, it is difficult to translate a pixel-based metric into a scientifically meaningful quantity in terms of error propagation. When working with real data, the PSF model's validation with pixel-based metrics becomes more complicated. The different noise levels in the data can hide the pixel reconstruction error, making it difficult to compare different PSF models or even assess the performance of a single one. Liaudat et al. (2021) proposed pixel-based reconstruction metrics for real observations. Let us denote with \(I_{\text{star}}(\bar{u},\bar{v}|u_{i},v_{i}),I_{\text{model}}(\bar{u},\bar{v} |u_{i},v_{i})\in\mathbb{R}^{p\times p}\) a test star and the predicted PSFs, respectively, at the FOV position \((u_{i},v_{i})\), where \(p^{2}\) is the total number of pixels in the image. To simplify notation we write \(I(\bar{u},\bar{v}|u_{i},v_{i})=I_{i}\). In star images, most of the PSF flux is concentrated in the centre of the square image, and the noise level can be considered constant in the image. Therefore, we can mask the image to only consider the central pixels within a given radius of \(R\) pixels and compute the pixel RMSE of the masked images. 
We note \(\tilde{I}=I\odot M_{R}\) the masked image, where \(M_{R}\in\{0,1\}^{p\times p}\) is a binary mask, and \(\odot\) is the Hadamard or element-wise product. Let us define the \(\sigma\) value as \[\sigma(\tilde{I}_{i})=\sigma\left(M_{R}\odot I_{i}\right)=\left(\frac{1}{\tilde{p}^{2}}\sum_{\bar{u},\bar{v}=1}^{p}\left(\tilde{I}_{i}(\bar{u},\bar{v})\right)^{2}\right)^{1/2}\, \tag{51}\] where \(\tilde{p}^{2}\) is the number of unmasked pixels, and the sum is done on the unmasked pixel values. The first pixel metric is \(Q_{p_{1}}\) and is defined as \[Q_{p_{1}}=\left(\text{Err}^{2}-\sigma_{\text{noise}}^{2}\right)^{1/2}\, \tag{52}\] where \[\text{Err}^{2}=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\sigma\left(\tilde{I}_{\text{star},i}-\tilde{I}_{\text{model},i}\right)^{2},\text{ and }\sigma_{\text{noise}}^{2}=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\sigma\left(\tilde{I}_{\text{star},i}^{*}\right)^{2}\,, \tag{53}\] where the general noise standard deviation, \(\sigma_{\text{noise}}\), is computed from the pixels on the outer region of the test stars, i.e., \(\tilde{I}^{*}\), which we define as \(\tilde{I}^{*}=I_{\text{star}}\odot M_{R}^{*}\), where \(M_{R}^{*}\) is such that \(M_{R}^{*}+M_{R}=1_{p\times p}\). Subtracting our estimated model, \(I_{\text{model}}\), from an observed star, \(I_{\text{star}}\), should lead to a residual map containing only noise if the model is perfect. The probability of having our model correlated with the noise is minimal. Therefore, the method with the smallest \(Q_{p_{1}}\) can be considered the best from the \(Q_{p_{1}}\) point of view. The next two metrics, \(Q_{p_{2}}\) and \(Q_{p_{3}}\), help quantify the model noise. Let us define \(\sigma_{\text{model},i}^{2}=[\sigma(\tilde{I}_{\text{star},i}-\tilde{I}_{\text{model},i})^{2}-\sigma(\tilde{I}_{\text{star},i}^{*})^{2}]_{+}\), where the operator \([\cdot]_{+}\) sets to zero negative values. Then, both metrics are defined as follows \[Q_{p_{2}}=\left(\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\sigma_{\text{model},i}^{2}\right)^{1/2}\,,\,\,Q_{p_{3}}=\left(\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}(\sigma_{\text{model},i}^{2}-Q_{p_{2}}^{2})^{2}\right)^{1/4}\,. \tag{54}\] The \(Q_{p_{2}}\) metric represents the modelling error expectation for a given star, and the \(Q_{p_{3}}\) metric indicates the fluctuation of the modelling error. A perfect PSF model would give values close to zero for the three metrics (a short numerical sketch of these metrics is given below). We have assumed that there is no background contamination in the observed test stars or that it has been removed.

#### 7.1.1 Chromatic PSF models

Some applications or analyses require a chromatic PSF model, and it is essential to validate the chromaticity of the PSF model. This monochromatic validation means validating the PSF at every single wavelength or validating the monochromatic PSF before it is integrated into the instrument's passband. A PSF model with a good performance in reproducing the polychromatic stars does not necessarily have a good monochromatic performance. Supposing that is the case and even if the spectra of the different objects are known in advance, the PSF errors will be more significant when used with objects with considerably different spectra, e.g., galaxies. Chromatic PSF models will generally be required if the observing instrument has a wide passband, e.g. the _Euclid_'s VIS instrument passband goes from \(550\)nm to \(900\)nm. The pixel RMSE can be computed for monochromatic PSFs in the passband as done in Liaudat et al. (2023).
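Coming back to the pixel-based metrics of Equations 51–54, a minimal NumPy sketch could look as follows; the array shapes, the `circular_mask` helper and the choice of mask radius are illustrative assumptions rather than part of any reference implementation.

```python
import numpy as np

def circular_mask(p, radius):
    # Binary mask M_R selecting pixels within `radius` of the image centre.
    u, v = np.mgrid[:p, :p]
    c = (p - 1) / 2.0
    return (u - c) ** 2 + (v - c) ** 2 <= radius ** 2

def sigma_masked(img, mask):
    # sigma value of Eq. (51): RMS over the unmasked (True) pixels only.
    vals = img[mask]
    return np.sqrt(np.mean(vals ** 2))

def pixel_metrics(stars, models, radius):
    # stars, models: arrays of shape (N_s, p, p) holding matched test stars
    # and the corresponding PSF model predictions.
    p = stars.shape[-1]
    mask_in = circular_mask(p, radius)   # M_R: central region
    mask_out = ~mask_in                  # M_R^*: outer (noise) region

    err2 = np.mean([sigma_masked(s - m, mask_in) ** 2
                    for s, m in zip(stars, models)])
    noise2 = np.mean([sigma_masked(s, mask_out) ** 2 for s in stars])
    q_p1 = np.sqrt(err2 - noise2)        # Eq. (52); assumes err2 >= noise2

    # Per-star model-noise term, clipped at zero as the [.]_+ operator does.
    sig2_model = np.array([
        max(sigma_masked(s - m, mask_in) ** 2
            - sigma_masked(s, mask_out) ** 2, 0.0)
        for s, m in zip(stars, models)
    ])
    q_p2 = np.sqrt(np.mean(sig2_model))                    # Eq. (54)
    q_p3 = np.mean((sig2_model - q_p2 ** 2) ** 2) ** 0.25  # Eq. (54)
    return q_p1, q_p2, q_p3
```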
Validating the monochromatic response with real data is cumbersome, however, as we usually do not have access to the monochromatic PSFs of the instrument under study. Consequently, the monochromatic validation might only be possible with simulations.

### Moment-based metrics

Weak gravitational lensing analyses are interested in measuring the shape of galaxies as the measured ellipticity is an estimator of the shear. Cosmologists have developed formulations to relate the PSF errors, expressed in terms of shape and size metrics (Massey et al., 2012), to the cosmological parameters of interest (Cropper et al., 2013). Therefore, it seems natural to have diagnostic metrics based on the ellipticity and size of the PSF. These metrics are determined using the moments of the polychromatic observation \(\tilde{I}[u,v]\). Following Hirata and Seljak (2003), we redefine the image moments that we will use in practice, including a weight function as follows \[<\mu> =\frac{\int\mu\;\bar{I}(\bar{u},\bar{v})\;w(\bar{u},\bar{v})\;\text{d}\bar{u}\,\text{d}\bar{v}}{\int\bar{I}(\bar{u},\bar{v})\;w(\bar{u},\bar{v})\;\text{d}\bar{u}\,\text{d}\bar{v}}, \tag{55}\] \[M_{\mu\nu} =\frac{\int\bar{I}(\bar{u},\bar{v})\;(\mu-<\mu>)\;(\nu-<\nu>)\;w(\bar{u},\bar{v})\;\text{d}\bar{u}\,\text{d}\bar{v}}{\int\bar{I}(\bar{u},\bar{v})w(\bar{u},\bar{v})\;\text{d}\bar{u}\,\text{d}\bar{v}}, \tag{56}\] where \(\mu,\nu\in\{\bar{u},\bar{v}\}\), \(<\mu>\) denotes the mean of \(\mu\), and \(w(\bar{u},\bar{v})\) is a weight function that helps in noisy settings. The weight function is also useful to compute the moments from diffraction-limited PSFs, e.g., an Airy profile, as it prevents the integral from diverging due to the wings of the PSF. Equation 55 defines the first-order moments, while Equation 56 defines the second-order moments. The ellipticities, or _shape_ metrics, are defined as \[e=e_{1}+\text{i}e_{2}=\frac{(M_{\bar{u}\bar{u}}-M_{\bar{v}\bar{v}})+\text{i}\,2M_{\bar{u}\bar{v}}}{M_{\bar{u}\bar{u}}+M_{\bar{v}\bar{v}}}\,, \tag{57}\] where i is the imaginary unit, and the size metric is defined as \[R^{2}=T=M_{\bar{u}\bar{u}}+M_{\bar{v}\bar{v}}\,. \tag{58}\] One widely used method to estimate these metrics is the adaptive moment algorithm from GalSim's HSM module (Hirata and Seljak, 2003; Mandelbaum et al., 2005). The adaptive moment measurement provides \(\sigma\) as size, which relates to the above-defined size metric as \(R^{2}=2\sigma^{2}\). The measurements are carried out on well-resolved polychromatic images. If the observations are undersampled, as is the case for _Euclid_, a super-resolution step is required for the model. Gillis et al. (2020) proposed alternative metrics, based on the image moments, that target the validation of space-based PSFs with emphasis on the HST PSF. The measurements of the shape parameters based on the image moments are susceptible to image noise. If we are working with real data, we do not have access to ground truth images and are obliged to work with noisy observations. Therefore, we need to average over many objects in order to be able to conclude from the different diagnostics. We continue by presenting different moment-based metrics.

#### 7.2.1 Shape RMSE

We start with a set of test stars and their corresponding PSF estimations. Then, the most direct moment-based metric is to compute the RMSE of the ellipticities and size residuals between the observations and the model prediction.
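As an illustration of these shape and size measurements, Equations 55–58 and the resulting shape RMSE can be sketched in plain NumPy as below; a fixed Gaussian weight of width `sigma_w` is assumed in place of the adaptive weight used by the HSM algorithm, so this is a simplified stand-in rather than the actual measurement code.

```python
import numpy as np

def moments_shape_size(img, sigma_w=2.5):
    # Weighted first- and second-order moments (Eqs. 55-56) with a fixed
    # Gaussian weight centred on the image centre (illustrative choice).
    p = img.shape[0]
    u, v = np.mgrid[:p, :p].astype(float)
    c = (p - 1) / 2.0
    w = np.exp(-((u - c) ** 2 + (v - c) ** 2) / (2.0 * sigma_w ** 2))
    norm = np.sum(img * w)
    u0 = np.sum(u * img * w) / norm
    v0 = np.sum(v * img * w) / norm
    m_uu = np.sum(img * (u - u0) ** 2 * w) / norm
    m_vv = np.sum(img * (v - v0) ** 2 * w) / norm
    m_uv = np.sum(img * (u - u0) * (v - v0) * w) / norm
    e1 = (m_uu - m_vv) / (m_uu + m_vv)      # Eq. (57)
    e2 = 2.0 * m_uv / (m_uu + m_vv)
    r2 = m_uu + m_vv                        # Eq. (58)
    return np.array([e1, e2, r2])

def shape_rmse(stars, models):
    # RMSE of the ellipticity and size residuals over matched test stars
    # and PSF model predictions (arrays of shape (N_s, p, p)).
    res = np.array([moments_shape_size(s) - moments_shape_size(m)
                    for s, m in zip(stars, models)])
    return np.sqrt(np.mean(res ** 2, axis=0))  # (rmse_e1, rmse_e2, rmse_R2)
```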
This metric alone, however, is of limited insight, as it does not provide any information about the composition of the residuals or the estimation biases involved.

#### 7.2.2 Meanshapes

A useful diagnostic is to compute the spatial distribution of the ellipticities and size residuals, which we coin _meanshapes_. See Figure 14 for an example. This diagnostic allows us to inspect if there are spatial correlations in the shape and size residuals, indicating that the PSF model is not capturing certain spatial variations from the PSF field. In order to have a finely sampled distribution, we need to average over many exposures, as the available stars from a single exposure are not enough to observe patterns. The shape measurements are also noisy; therefore, averaging over many exposures allows us to smooth out the residuals and observe systematic modelling errors. In practice, we divide the focal plane into several spatial bins, consider several exposures, and then the value of each bin is built by averaging the residuals of all the stars within that bin. A ground-based survey allows us to average the ellipticity contribution of the atmosphere (Heymans et al., 2012), as it can be considered a random field with a zero mean. Then, the observed ellipticity distribution over the focal plane is due to the telescope's optical system that is consistent in every exposure. It is also possible to plot the same spatial distribution but observe the positions of the stars. Such a plot will help to observe if there are regions of stellar under-density that could eventually affect the PSF model. We have assumed that the ellipticities and size are good descriptors, or summary statistics, of the PSF shape. If this is not the case, the diagnostic could be extended with more accurate descriptors.

Figure 14: _Meanshapes_ plots showing the first component of the ellipticity of the PSF model in **(a)** and of its residuals with the observed stars in **(b)**. The figure shows the \(40\)-CCD mosaic from the MegaCam instrument’s focal plane at the Canada-France-Hawaii Telescope. Figure reproduced from Guinot et al. (2022).

#### 7.2.3 \(\rho\)-statistics

Rowe (2010) proposed to compute the auto- and cross-correlations of the ellipticities and their residuals as a diagnostic. The diagnostic was then expanded by Jarvis et al. (2016) to a combination of ellipticities, sizes, and residuals. The \(\rho\)-statistics are helpful to identify PSF modelling biases and to detect at which scales they most affect the weak lensing analysis. Following Jarvis et al. (2016), we define the \(\rho\)-statistics as follows \[\rho_{1}(\theta)=\left\langle\delta e_{\text{PSF}}^{*}(\mathbf{\theta}^{\prime})\ \delta e_{\text{PSF}}(\mathbf{\theta}^{\prime}+\mathbf{\theta})\right\rangle\,, \tag{59}\] \[\rho_{2}(\theta)=\left\langle e_{\text{PSF}}^{*}(\mathbf{\theta}^{\prime})\ \delta e_{\text{PSF}}(\mathbf{\theta}^{\prime}+\mathbf{\theta})\right\rangle\,, \tag{60}\] \[\rho_{3}(\theta)=\left\langle\left(e_{\text{PSF}}^{*}\frac{\delta R_{\text{PSF}}^{2}}{R_{\text{PSF}}^{2}}\right)(\mathbf{\theta}^{\prime})\ \left(e_{\text{PSF}}\frac{\delta R_{\text{PSF}}^{2}}{R_{\text{PSF}}^{2}}\right)(\mathbf{\theta}^{\prime}+\mathbf{\theta})\right\rangle\,, \tag{61}\] \[\rho_{4}(\theta)=\left\langle\delta e_{\text{PSF}}^{*}(\mathbf{\theta}^{\prime})\ \left(e_{\text{PSF}}\frac{\delta R_{\text{PSF}}^{2}}{R_{\text{PSF}}^{2}}\right)(\mathbf{\theta}^{\prime}+\mathbf{\theta})\right\rangle\,, \tag{62}\] \[\rho_{5}(\theta)=\left\langle e_{\text{PSF}}^{*}(\mathbf{\theta}^{\prime})\ \left(e_{\text{PSF}}\frac{\delta R_{\text{PSF}}^{2}}{R_{\text{PSF}}^{2}}\right)(\mathbf{\theta}^{\prime}+\mathbf{\theta})\right\rangle\,, \tag{63}\] where \({}^{*}\) denotes complex conjugation, \(\mathbf{\theta}\) and \(\mathbf{\theta}^{\prime}\) denote sky positions, \(\theta\) denotes the modulus of \(\mathbf{\theta}\), and \(\delta\) denotes the residual error that can be computed as \(\delta e_{\text{PSF}}=e_{\text{PSF}}-e_{\text{star}}\) in the case of the PSF ellipticity. Suppose that the ellipticities are random fields that are isotropic and statistically homogeneous. In that case, we can compute the correlation \(\rho(\mathbf{\theta},\mathbf{\theta}^{\prime})\) as \(\rho(|\mathbf{\theta}-\mathbf{\theta}^{\prime}|)=\rho(\theta)\), using the modulus \(\theta\). This choice means we are assuming translational and rotational symmetry, a consequence of the Cosmological Principle. We define several \(\theta\) bins in a logarithmic scale, corresponding to \(\ln\theta-\Delta\ln\theta/2<\ln\theta_{ij}<\ln\theta+\Delta\ln\theta/2\), where \(\theta_{ij}=|\mathbf{\theta}_{i}-\mathbf{\theta}_{j}|\) is the distance between two objects at \(\mathbf{\theta}_{i}\) and \(\mathbf{\theta}_{j}\). Consequently, the correlation function at \(\theta\) can be computed using the following unbiased estimator of \(\rho\) that is \[\hat{\rho}(\theta)=\frac{\sum_{i,j}w_{i}w_{j}e_{i}^{\text{A}}{}^{*}e_{j}^{\text{B}}}{\sum_{i,j}w_{i}w_{j}}\,, \tag{64}\] where we are computing, as an example, the correlation of two ellipticities \(e^{\text{A}}\) and \(e^{\text{B}}\), and the weights depend on the SNR of the ellipticity measurements. We carry out the weighted sum over the pairs of objects within each bin. The \(\rho\)-statistics are interesting as they can be propagated to the shear two-point correlation function (2PCF) (Kilbinger, 2015), which allows studying the properties of the weak lensing convergence field. Following Jarvis et al. (2020), we include the PSF errors into the shear 2PCF, making the \(\rho\)-statistics appear, and then express the systematic error in the shear 2PCF.

#### 7.2.4 Other shape metrics

Another shape metric that gives insight into the performance of the PSF model in a weak lensing analysis is the PSF leakage \(\alpha\) from Jarvis et al. (2016). It is related to the linear modelling of the shear bias, in which the bias is decomposed into a multiplicative term and an additive term, the latter being further decomposed into PSF-dependent (leakage) and PSF-independent contributions. The PSF leakage helps quantify how the PSF affects the shear estimation through the shape measurement. It measures the leakage of the PSF shape to the galaxy shapes.

### Weak lensing: PSF error propagation and PSF requirements definition

Pioneering work on the PSF error propagation for WL was done by Paulin-Henriksson et al. (2008), followed by Massey et al. (2012). The proposed framework is based on the second-order moments of the images, i.e., complex ellipticity \(e\) and size \(R^{2}\). It expresses how the PSF, or some other effect, affects the observed ellipticity and size. Let us consider the effect of the PSF on the unweighted moments from Equation 57 and Equation 58, where unweighted represents computing Equation 55 and Equation 56 without the \(w\) weight function.
Then, we obtain \[e_{\text{obs}}=e_{\text{gal}}+\frac{R_{\text{PSF}}^{2}}{R_{\text{PSF}}^{2}+R_{ \text{gal}}^{2}}\left(e_{\text{PSF}}-e_{\text{gal}}\right)\quad\text{and} \quad R_{\text{obs}}^{2}=R_{\text{gal}}^{2}+R_{\text{PSF}}^{2}\,, \tag{65}\] where the subscript \({}_{\text{obs}}\) refers to the quantity measured to the observed galaxy, the subscript \({}_{\text{gal}}\) refers to the intrinsic quantity of the galaxy, and \({}_{\text{PSF}}\) refers to the quantity measured to the PSF. There are intrinsic assumptions in Equation 65, that the observational model is \(I_{\text{obs}}=I_{\text{gal}}*H_{\text{PSF}}\), and that all the moments are well defined. Then, Equation 65 can be rewritten to express the quantity of interest in weak lensing, the intrinsic galaxy ellipticity, as follows \[e_{\text{gal}}=\frac{e_{\text{obs}}\,R_{\text{obs}}^{2}-e_{\text{PSF}}\,R_{\text{ PSF}}^{2}}{R_{\text{obs}}^{2}-R_{\text{PSF}}^{2}}\,. \tag{66}\] The error propagation consists of expanding the previous equation in a first-order Taylor series with respect to the quantities of interest. In this case, it will be the shape and size of the PSF, and the propagation writes \[\widehat{e_{\text{gal}}}\approx e_{\text{gal}}+\frac{\partial e_{\text{gal}}} {\partial\left(R_{\text{PSF}}^{2}\right)}\delta\left(R_{\text{PSF}}^{2} \right)+\frac{\partial e_{\text{gal}}}{\partial e_{\text{PSF}}}\delta e_{ \text{PSF}}\,, \tag{67}\] where \(\delta\) refers to errors in the model with respect to the ground truth. It is straightforward to compute the partial derivatives in Equation 67 from Equation 66. We then obtain the following expression \[\widehat{e_{\text{gal}}}\approx e_{\text{gal}}\left(1+\frac{\delta\left(R_{ \text{PSF}}^{2}\right)}{R_{\text{gal}}^{2}}\right)-\left(\frac{R_{\text{PSF}}^ {2}}{R_{\text{gal}}^{2}}\delta e_{\text{PSF}}+\frac{\delta\left(R_{\text{PSF} }^{2}\right)}{R_{\text{gal}}^{2}}e_{\text{PSF}}\right)\,, \tag{68}\] The previous ellipticity estimator can be used to obtain a shear estimator assuming that the intrinsic galaxy distribution has a zero mean. The estimator can then be used in the linear shear bias parametrization from Jarvis et al. (2016). At this point, we can express the additive and multiplicative biases as a function of the elements from Equation 68. This analysis shows us that the multiplicative bias is related to the size of the PSF with its estimation error and the size of the galaxy. The result was expected if we paid attention to the first term of Equation 68. This framework allows us to consider different types of errors. Massey et al. (2012) uses it to include errors due to non-convolutional detector effects, imperfect shape measurement, and the fact that the shape measurement method used weighted, i.e., Equation 56, instead of unweighted moments. The procedure consists in adding the desired effect to the galaxy ellipticity expression, Equation 66, and then adding their corresponding partial derivatives to the Taylor expansion seen in Equation 67. Cropper et al. (2013) uses this formalism to derive requirements for a WL mission in space. The aforementioned framework was used to derive the current PSF model requirements for the _Euclid_ space mission (Laureijs et al., 2011). The previous formalism is based on _unweighted_ moments. However, in practice, the moments are always computed from noisy images using a compact weight function to ensure that the measurement yields significant results. 
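Before discussing the limitations of this formalism further, the first-order propagation of Equation 68 can be illustrated numerically; the sketch below uses purely illustrative values for the galaxy and PSF quantities and for the model errors.

```python
import numpy as np

def propagated_ellipticity(e_gal, e_psf, r2_psf, r2_gal, d_r2_psf, d_e_psf):
    # First-order propagation of PSF size and ellipticity errors (Eq. 68):
    # a multiplicative term from the size error and an additive term
    # involving both the ellipticity and the size errors.
    mult = 1.0 + d_r2_psf / r2_gal
    add = (r2_psf / r2_gal) * d_e_psf + (d_r2_psf / r2_gal) * e_psf
    return e_gal * mult - add

# Illustrative numbers only (complex ellipticities, sizes in arbitrary units).
e_gal, e_psf = 0.20 + 0.05j, 0.02 + 0.01j
r2_psf, r2_gal = 0.04, 0.16          # galaxy with twice the PSF size in R
d_r2_psf, d_e_psf = 1e-3, 1e-3 + 0j  # PSF model errors on size and shape

e_hat = propagated_ellipticity(e_gal, e_psf, r2_psf, r2_gal, d_r2_psf, d_e_psf)
print(abs(e_hat - e_gal))            # magnitude of the induced ellipticity bias
```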
Indeed, Melchior et al. (2011) and Melchior and Viola (2012) showed that using a weight function mixes the image's moments. Consequently, the second-order moments are affected by higher-order moments even in the absence of noise, thus exposing the Paulin-Henriksson et al. (2008) framework's fundamental limitations. Schmitz et al. (2020) then noted and verified empirically in an _Euclid_-like scenario that the propagation is based on second-order moments of the PSF, which do not accurately describe the shape of a space-based PSF. A perfect second-order moment estimation of the PSF would have a zero shear bias contribution in the formalism described. However, the higher moments error (HME) of the PSF will impact the shear biases and is not considered in the framework proposed by Paulin-Henriksson et al. (2008). The higher the contribution of HME to the PSF, the more significant the deviations will be. A space mission like _Euclid_ will have a PSF close to the diffraction limit, meaning that its shape will be complex and not well described by a Gaussian (or by its second-order moments). As a space PSF is not well described by second-order moments, the previous requirements should be used _with caution_. The LSST collaboration, concerned with the previous issue, studied the contribution to systematic biases of the HME of the PSF model on the shear measurement (Zhang et al., 2021, 2022). Zhang et al. (2021) showed that the HME of the PSF model might be a significant source of systematics in upcoming WL analyses. Zhang et al. (2022) studied the impact of moments from the \(3^{\rm rd}\) to \(6^{\rm th}\) order on the cosmological parameter inference, concluding that the HME of PSF models like PSFEx and PIFF should be reduced for future surveys like LSST if the WL analysis is to remain unchanged. The use and adoption of automatically differentiable models (Baydin et al., 2017) could make a significant contribution to error propagation. The derivatives of the target estimators with respect to the model's parameters, or intermediate products with physical meaning, would be available. This fact allows us to consider more complex scenarios than the one seen in Equation 65, as we would not need to explicitly write the equations or their derivatives. A differentiable forward model should be enough to describe how the PSF interacts with the target task.

## 8 Conclusions

This review gives an overview of point spread function (PSF) modelling for astronomical telescopes, emphasising cosmological analyses based on weak gravitational lensing. This application sets the tightest constraints on the PSF models and has driven much of the recent progress in PSF models. The development of new instruments and telescopes seeking higher precision and accuracy requires more powerful PSF models to keep up with the reduction of other sources of errors. We differentiate two scenarios that fundamentally change how the PSF is modelled: the ground- and space-based telescopes. The main difference is the atmosphere, how it affects the observations, and how challenging it is to build an atmospheric physical model that can be exploited in a reasonable amount of time. The difficulty of handling the temporal integration of a representative atmospheric model fostered the use of purely data-driven PSF models built in the pixel space for ground-based telescopes. The stability of space-based telescopes allows for exploiting models more physically based on the wavefront.
The optics fundamentals to properly define the PSF and understand its effect on the underlying imaged object are not often introduced in PSF modelling articles. One of our goals was to solve this issue by providing a concise yet comprehensive introduction to optical principles. The provided optical background should cover most of the available PSF models and motivate the general observational model that we have proposed. This observational model can be further simplified and adapted to different use cases, including ground or space telescopes. We described several assumptions that might not always hold. After describing how the PSF affects our observations, we presented an extended list of the leading optical- and detector-level contributors to the PSF field. These contributors are the source of the PSF's spatial, spectral and temporal variations in addition to its morphology. A detailed description of the atmospheric contribution based on phase screens was presented. We then gave a brief description of the most relevant PSF models. The PSFEx model has been successful in modelling the PSF for several surveys with its robust and fast implementation. However, the next-generation telescopes set higher PSF requirements, demanding novel models to achieve such performances. Recent models are targeting upcoming telescopes, e.g., the Vera C. Rubin observatory and the _Euclid_ and Roman space telescopes. These models are continuously being developed and are pushing forward the capabilities of PSF modelling. A common bottleneck for them is the computing time required to estimate the model from the observations. It is unclear if the solution can be achieved through better software implementations that exploit parallel computing architectures or better-performing programming languages. A refactoring of the methods, allowing for simplifications that accelerate the calculations, might be required, or even a combination of both approaches. One big challenge of PSF modelling is to build models that are _fast_ yet _powerful_. Another challenge is to include complex effects and contributions that cannot be directly constrained from the observations into the PSF model. These contributions can be modelled with simulations and obtained from complementary observations or instrument characterisations. However, this complementary information will not match precisely the state of the telescope during the imaging procedure due to several reasons, e.g., changes in the telescope, measurement errors, and imperfect modelling. How best to correct this information and adapt it to real observations needs to be further studied. The validation of PSF models from real observations is a challenging subject that requires further development, as access to the ground truth PSF field is unavailable. Although some validation methods exist, they are generally not very informative or are based on second-order moments that are not well suited to describe diffraction-limited PSFs. We have presented the error propagation of a galaxy-shape measurement, where several limitations were exposed. Current error propagation methods rely on simplifying assumptions, e.g., that the PSF is well described by its quadrupole moments, which no longer hold for recent and upcoming telescopes. Further development of these methods is required to define realistic PSF model requirements and study how PSF modelling errors affect the target task.
## Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ## Author Contributions TIL coordinated the entire effort, produced the figures and wrote the text. JLS, MK and PAF provided valuable comments and feedback on the different sections of the article. ## Funding Part of this work was funded by the TITAN ERA Chair project (contract no. 101086741) within the Horizon Europe Framework Program of the European Commission. ## Acknowledgments Part of this review has been adapted from Tobias Liaudat's PhD thesis (Liaudat, 2022). We acknowledge and resume in the following list the open-source simulating and modelling software mentioned in this review: * GalSim(Rowe et al., 2015): github.com/GalSim-developers/GalSim, * webbpsf(Perrin et al., 2014): github.com/spacetelescope/webbpsf, * PhoSim(Peterson et al., 2015): bitbucket.org/phosim/phosim_release/wiki/Home, * WaveDiff(Liaudat et al., 2023): github.com/CosmoStat/wf-psf, * ShapePipe(Farrens et al., 2022): github.com/CosmoStat/shapepipe, * PIFF(Jarvis et al., 2020): github.com/rmjarvis/Piff, * MCCD(Liaudat et al., 2021a): github.com/CosmoStat/mccd, * RCA (Ngole et al., 2016; Schmitz et al., 2020): github.com/CosmoStat/rca, * PSFEx (Bertin, 2011): github.com/astromatic/psfex, * Tiny-Tim (Krist et al., 2011): github.com/spacetelescope/tinytim, * Photutils (Bradley et al., 2022): github.com/astropy/photutils, * \(\partial\)Lux (Desdoigts et al., 2023): github.com/LouisDesdoigts/dLux, * PSF weather station: github.com/LSSTDESC/psf-weather-station.
2303.04499
Non-Conventional Critical Behavior and Q-dependent Electron-Phonon Coupling Induced Phonon Softening in the CDW Superconductor LaPt2Si2
This paper reports the first experimental observation of phonons and their softening on single crystalline LaPt$_2$Si$_2$ via inelastic neutron scattering. From the temperature dependence of the phonon frequency in close proximity to the charge-density wave (CDW) $q$-vector, we obtain a CDW transition temperature of T$_{CDW}$ = 230 K and a critical exponent $\beta$ = 0.28 $\pm$ 0.03. This value is suggestive of a non-conventional critical behavior for the CDW phase transition in LaPt$_2$Si$_2$, compatible with a scenario of CDW discommensuration (DC). The DC would be caused by the existence of two CDWs in this material, propagating separately in the non equivalent (Si1-Pt2-Si1) and (Pt1-Si2-Pt1) layers respectively, with transition temperatures T$_{CDW-1}$ = 230 K and T$_{CDW-2}$ = 110 K. A strong $q$-dependence of the electron-phonon coupling has been identified as the driving mechanism for the CDW transition at T$_{CDW-1}$ = 230 K while a CDW with 3-dimensional character, and Fermi surface quasi-nesting as a driving mechanism, is suggested for the transition at T$_{CDW-2}$ = 110 K. Our results clarify some aspects of the CDW transition in LaPt$_2$Si$_2$, which have been so far misinterpreted by both theoretical predictions and experimental observations, and give direct insight into its actual temperature dependence.
Elisabetta Nocerino, Uwe Stuhr, Irene San Lorenzo, Federico Mazza, Daniel Mazzone, Johan Hellsvik, Shunsuke Hasegawa, Shinichiro Asai, Takatsugu Masuda, Arianna Minelli, Zakir Hossain, Arumugam Thamizhavel, Kim Lefmann, Yasmine Sassa, Martin Månsson
2023-03-08T10:44:27Z
http://arxiv.org/abs/2303.04499v1
Non-Conventional Critical Behavior and Q-dependent Electron-Phonon Coupling Induced Phonon Softening in the CDW Superconductor LaPt\({}_{2}\)Si\({}_{2}\) ###### Abstract This paper reports the first experimental observation of phonons and their softening on single crystalline LaPt\({}_{2}\)Si\({}_{2}\) via inelastic neutron scattering. From the temperature dependence of the phonon frequency in close proximity to the charge-density wave (CDW) \(q\)-vector, we obtain a CDW transition temperature of T\({}_{CDW}\) = 230 K and a critical exponent \(\beta\) = 0.28 \(\pm\) 0.03. This value is suggestive of a non-conventional critical behavior for the CDW phase transition in LaPt\({}_{2}\)Si\({}_{2}\), compatible with a scenario of CDW discommensuration (DC). The DC would be caused by the existence of two CDWs in this material, propagating separately in the non equivalent (Si\(-\)Pt2-Si1) and (Pt1\(-\)Si2-Pt1) layers respectively, with transition temperatures T\({}_{CDW-1}\) = 230 K and T\({}_{CDW-2}\) = 110 K. A strong \(q\)-dependence of the electron-phonon coupling has been identified as the driving mechanism for the CDW transition at T\({}_{CDW-1}\) = 230 K while a CDW with 3-dimensional character, and Fermi surface quasi-nesting as a driving mechanism, is suggested for the transition at T\({}_{CDW-2}\) = 110 K. Our results clarify some aspects of the CDW transition in LaPt\({}_{2}\)Si\({}_{2}\), which have been so far misinterpreted by both theoretical predictions and experimental observations, and give direct insight into its actual temperature dependence. ## Introduction The concept of charge-density waves (CDW) is associated with the periodic spatial modulation of electron density in metallic systems, where the symmetry of the otherwise highly uniform charge density is broken due to instabilities of the Fermi Surface [1]. In CDW materials, structural modifications of the crystal lattice, such as super lattice distortions and modifications of the crystal symmetry, usually occur as a consequence of the charge displacement to lower the energy of the system. Introduced for the first time in the 1950's [2], CDWs were more recently brought back to the headlines since evidence of incommensurate charge ordering phenomena were found in several unconventional superconductors [3, 4, 5]. For this reason, the interplay between CDW and superconductivity (SC) is believed to be a key factor in understanding the mechanism behind unconventional superconductivity, which is one of the major unresolved questions in modern condensed matter physics. LaPt\({}_{2}\)Si\({}_{2}\) is a layered Pt-based rare earth intermetallic material which exhibits strong interplay between CDW and SC [6, 7] and, therefore, represents a very interesting study case in this framework. The room temperature crystal structure of LaPt\({}_{2}\)Si\({}_{2}\) is a CaBe\({}_{2}\)Ge\({}_{2}\)-type tetragonal structure with space group \(P4/nmm\), where two alternating (Si1-Pt2-Si1) and (Pt1-Si2-Pt1) layers, separated by lanthanum atoms, are stacked along the \(c\)-axis [8] (Fig. 1). According to theoretical calculations [6, 9], the Fermi surface of LaPt\({}_{2}\)Si\({}_{2}\) is expected to be of two-dimensional nature with electron-like pockets around the M-point. These pockets are quasi-nested with a \(q\)-vector \(q_{CDW}\) close to (1/3, 0, 0). 
The quasi-nesting is due to the predominant contribution of the Pt1 atom matrix element to the calculated projected susceptibility, which implies a dominant role of the Pt1 atoms in the CDW transition and partial gapping of the Fermi surface at \(q\)-values close to the nesting vector. Phonon dispersion calculations were also performed to account for the role of electron-phonon coupling in the CDW transition [6]; phonon-softening instabilities leading to structural instabilities, mainly due to the Pt1 atoms, were predicted to occur in correspondence to \(q_{CDW}=(1/3,0,0)\). An estimation of the electron-phonon coupling constant \(\lambda_{p}\) is also provided, and found to be consistent with a moderately strong coupling (\(\lambda_{p}\) = 0.73). It was concluded that the CDW in LaPt\({}_{2}\)Si\({}_{2}\) is 2-dimensional in nature and likely originates from \(q\)-dependent electron-phonon coupling with quasi-nesting of the Fermi surface. The CDW was claimed to propagate in the Pt1 layer, within which it would co-exist with superconductivity [6]. Experimentally, indications of the occurrence of the CDW transition in this material were observed with several experimental techniques in both powder samples and single crystals [10, 11, 12, 13, 14]. From these investigations, it was established that the CDW transition temperature should be around 85 K for single crystalline LaPt\({}_{2}\)Si\({}_{2}\) or 110 K for polycrystalline LaPt\({}_{2}\)Si\({}_{2}\), in correspondence with sharp anomalies observed in magnetic susceptibility, resistivity and specific heat data, as well as in the temperature dependence of the Knight shift and NMR relaxation rate extracted from \({}^{193}\)Pt-NMR measurements. The appearance of super-lattice satellite Bragg reflections in the diffraction patterns of both single crystal and powder LaPt\({}_{2}\)Si\({}_{2}\)[12, 13] with propagation vectors \(q\prime 1\) = (0.36, 0, 0) and \(q\prime\prime 1\) = (0, 0.36, 0), provided additional indication of the occurrence of a CDW state. In single crystals, the satellites appear around 175 K, have their maximum intensity at 85 K and disappear below 80 K. For this reason, the CDW transition temperature was associated with the temperature at which the satellites have their maximum intensity (i.e., \(T_{CDW}\) = 85 K). Later on, a second set of satellites appearing around 85 K was identified, with a periodicity \(q\prime 2\) = (0.18, 0.18, 0.5), \(q\prime\prime 2\) = (0.18, -0.18, 0.5) that can be expressed as a linear combination of the \(q1\) satellites' propagation vectors: \[q\prime 2=\frac{q\prime 1+q\prime\prime 1+(001)}{2}, \tag{1}\] \[q\prime\prime 2=\frac{q\prime 1-q\prime\prime 1+(001)}{2}. \tag{2}\] This finding seems to indicate that multiple CDW transitions take place in LaPt\({}_{2}\)Si\({}_{2}\) and cast some doubts on the previous results, as well as on the actual temperature dependence of the CDW transition [15]. Indeed, the reported T\({}_{CDW}\) = 85 K seems to be associated with the \(q2\) satellites, while the \(q1\) are probably related to a higher T\({}_{CDW}\) (the authors of reference [15] suggest T\({}_{CDW}\)(\(q2\)) = 85 K and T\({}_{CDW}\)(\(q1\)) = 175 K).

Figure 1: Brillouin zone of the tetragonal \(P4/nmm\) symmetry group, with the related symmetry points and axes, along with the room temperature unit cell of LaPt\({}_{2}\)Si\({}_{2}\). The direct and reciprocal cells are depicted with the same orientation of the axes. The room temperature unit cell oriented along the \(a\)-axis is also shown for clarity of display of the Pt-Si layers.

Our very recent high-resolution synchrotron XRD study, carried out within our collaboration, clarified the complex temperature dependent structural evolution of LaPt\({}_{2}\)Si\({}_{2}\)[8], which was found to undergo a series of structural transitions. Namely, on cooling, tetragonal \(P4/nmm\rightarrow\) incommensurate tetragonal \(\rightarrow\) orthorhombic \(Pmmn\rightarrow\) tetragonal \(P4/nmm\). The onset of the charge ordering that leads to the CDW transition was found to be well above room temperature and the first CDW transition was identified at T1 = 230 K, in correspondence to the formation of the \(q1\) satellites, while a second CDW transition was identified at T2 = 110 K, in correspondence to the formation of the higher order \(q2\) satellites [8]. The T1 transition was not considered earlier because the previous results were determined in a too narrow temperature range or with in-house experimental equipment, which did not have the necessary resolution to clearly identify the weak anomalies in their signals at T1. To unambiguously determine whether the observed phenomena are really due to a CDW transition, and what is its actual temperature dependence, a measurement of the phonon dispersion curves in LaPt\({}_{2}\)Si\({}_{2}\) is needed. This makes it possible to verify the presence of a phonon softening, which is predicted theoretically to occur above T\({}_{CDW}\) to assist the CDW instability. In this work we present a direct observation of the phonon softening and T\({}_{CDW}\) determination in LaPt\({}_{2}\)Si\({}_{2}\), measured with inelastic neutron scattering (INS).

## Results

The present study was conducted on two samples, a large single crystal with mass \(\approx\) 4g and a smaller crystal with mass \(\approx\) 1g. Due to its large mass, the big sample allowed the collection of INS scans with lower counting time with respect to the small sample, resulting in faster measurements with a better signal to noise ratio, and a consequent outline of the phonon dispersion relation with higher density of points. However, a twinning of about 8\({}^{\circ}\) in the large sample, with inequivalent contributions from the different twins, caused a systematic shift of the momentum transfer axis for the different energy transfer scans. The shift was estimated to be about 0.05 r.l.u. along \(qk\), from calculations of the center of mass of the twin system and by direct comparison with the small, non-twinned, sample. The behavior of the two crystals is in very good agreement modulo a 0.05 r.l.u. shift of the \(qk\) axis; therefore, from now on we will refer to the measurements performed on the small non-twinned sample, while the plots collected for the large twinned sample (displayed in the Methods section) will be used as a qualitative term of comparison to facilitate the observation of the critical behavior in LaPt\({}_{2}\)Si\({}_{2}\).

Figure 2: a) Temperature dependence of the intensity of the [2 0 0] Bragg reflection. b) Temperature dependence of the \(q1\) satellite [2 -0.36 0]. c) Temperature dependence of the intensity of the \(q1\) satellite [2 -0.36 0]. d) Temperature dependence of the integrated intensity of the \(q1\) satellite [2 -0.36 0].
It should be noted that the twinning in the large crystal could not be detected with surface characterization methods (i.e. X-ray Laue diffraction). A preliminary inspection of the zero energy transfer region of the spectrum (i.e., the elastic line), for alignment purposes on the Bragg reflection (2 0 0), gave us the opportunity to follow the temperature dependence of such peak, as well as of the \(q1\) satellite (2 -0.36 0). Since, unlike X-rays, neutrons scatter with the atomic nuclei rather than with the surrounding electronic clouds, this measurement allows to investigate the temperature evolution of the lattice distortions without the charge ordering contribution. The outcome of this investigation can be therefore compared with the structural characterization suggested in reference [8], to discriminate the effects of charge ordering and structural distortions on the diffraction data. Figure 2(a) displays the temperature dependent (2 0 0) structural Bragg peak intensity between 75 K and 200 K, whose behavior is in very good agreement with the findings of ref. [8]. In fact, on cooling, a smooth increase of the peak intensity is observed in correspondence to the incommensurate distortion in the \(ab\)-plane. Then two clear first order modulations of the peak intensity occur, one at \(\sim\) 115 K, in proximity of the T2 = 110 K structural transition, and one at \(\sim\) 90 K, in proximity of the strong fluctuations in the \(b-a\) difference, that precede the structural transition at T3 = 60 K (here out of the investigated T-range) [8]. Figure 2(b-d) show the temperature dependence of the intensity and of the integrated intensity of the (2 -0.36 0) satellite. Also in this case, these parameters follow the trend outlined in reference [8]. Here it can be seen that the intensity of the \(q1\) satellite is completely suppressed at 250 K, suggesting that the diffuse scattering observed in the XRD above this temperature is purely due to charge ordering. Concerning the inelastic part of the spectrum, by following the transverse acoustic phonon branch of the (2 0 0) reflection Figure 3: (a-c) TAS-INS energy scans with model fits, taken at \(qk\) = 0.35 for different temperatures showing the collapse of the phonon mode in the beginning of the critical region. (d-e) TAS-INS energy scans with model fits, taken at different \(q\) points showing the \(q\)-dependence of the soft phonon mode at 200 K. (f) Temperature dependence of the phonon energy at qk = 0.35. The solid blue line is a fit to the power law \(E_{tran}\propto(\frac{T}{T_{c}}-1)^{\beta}\). along the \(\Gamma-X\) direction (Fig. 1), the phonon dispersion relation for different temperature points was constructed with constant \(q\) and constant energy transfer scans. In order to follow the dynamical behavior of the phonon frequency as a function of the wavevector \(qk\) for different temperatures, the inelastic part of the \(q\) and energy scans were fitted to Lorentzian lineshapes, which are conventionally used to approximate phonon lineshapes to damped harmonic oscillators [16], while the elastic part was modeled with Gaussian lineshapes. Figure 3 (a-c) display the temperature dependence of the soft phonon mode at the \(q\) point (2 0.35 0), in close proximity to \(q_{CDW}\) = 0.36 (r.l.u.). Figure 3 (d-e) shows the \(q\)-dependence of the soft phonon mode below the CDW transition at \(T=200\) K, outside and inside the critically damped region. 
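A minimal sketch of such a constant-\(q\) fit (inelastic Lorentzian plus elastic Gaussian) is given below; the model function, the synthetic scan and the starting values are illustrative and do not reproduce the actual data-reduction pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def scan_model(E, a_el, sig_el, a_ph, E0, gamma, bkg):
    # Elastic line: Gaussian centred at zero energy transfer.
    elastic = a_el * np.exp(-E ** 2 / (2.0 * sig_el ** 2))
    # Inelastic part: Lorentzian, approximating a damped harmonic oscillator.
    phonon = a_ph * gamma ** 2 / ((E - E0) ** 2 + gamma ** 2)
    return elastic + phonon + bkg

# Illustrative constant-q energy scan: energy transfer (meV) and intensity.
E = np.linspace(-2.0, 12.0, 60)
I = scan_model(E, 200.0, 0.4, 30.0, 6.0, 1.5, 5.0) \
    + np.random.normal(0.0, 2.0, E.size)

p0 = [150.0, 0.5, 20.0, 5.0, 1.0, 0.0]         # starting guesses
popt, pcov = curve_fit(scan_model, E, I, p0=p0)
E_phonon, gamma = popt[3], popt[4]             # phonon energy and half-width
```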
The phonon is well shaped as a Lorentzian peak outside the critical region for \(qk\) = 0.2 (r.l.u.). At \(q_{CDW}\) the phonon appears very broad already at \(T=450\) K [Fig. 3(a)] and, at \(T=200\) K additional broadening and shift in frequency result in a collapse of the mode on the elastic line [Fig. 3(c)]. The clear dramatic change in the phonon position and broadening made it unnecessary for us to deconvolute the instrumental resolution function from the Lorentzian fit function, since the phonon linewidths were much larger than the instrumental resolution at EIGER (\(<1\) meV in the energy transfer region of interest [17]). Additional temperature points were collected to observe the temperature dependence of the value of the phonon frequency at the \(q\) point (2 0.35 0) [Fig. 3(f)], which follows a power law of the type \(E\propto(\frac{T}{T_{c}}-1)^{\beta}\). Fitting the temperature dependence of the phonon frequency to the power law, the critical exponent was found to be \(\beta=0.29\pm 0.21\), which implies critical behavior not well described by a mean-field theory approach, foreseeing an exponent \(\beta=\frac{1}{2}\). This value is consistent with the value \(\beta=0.28\pm 0.03\) extracted from the data on the twinned sample in \(qk\) = 0.4 (r.l.u.) [see Fig. 7(f)], which could be estimated with higher confidence due to the larger number of experimental points in proximity of the phase transition. The fit also provides the transition temperature to the CDW state, \(T_{CDW}\) = (225 \(\pm\) 3) K, while the value estimated from the twinned sample is \(T_{CDW}\) = (237 \(\pm\) 4) K. These temperatures are reasonably close to each other and they are both in very good agreement with the transition temperature \(T_{1}\) = 230 K suggested in reference [8], as well as with the temperature dependence of the (2 -0.36 0) satellite displayed in Fig. 2. Figure 4 displays the phonon dispersion curves constructed through the Lorentzian fitting for different temperature points. The experimental data are overlapped with the phonon dispersion previously calculated [6], showing a very good agreement with the theoretical curves. Indeed, the measured onset of the softening is aligned with its counterpart in the calculated soft phonon mode. The \(q\)-dependent damping of the phonon mode is strongly pronounced already at 450 K, with a valley in the value of the phonon frequency around the \(q\) point corresponding to \(q_{CDW}\) (Fig. 4). The extent of the valley can be better appreciated looking at the phonon dispersion curves for the twinned sample (see Fig. 7(g) in the Methods section). Here, the dispersion curve at T = 470 K shows a pronounced anomaly in a broad \(q\) region, from qk = 0.3 (r.l.u.) to qk = 0.6 (r.l.u.). Higher temperatures were not accessible with our setup, however the anomaly in the phonon dispersion observed at T = 470 K implies that the onset of the CDW transition in this material occurs well above room temperature, confirming the indications of the synchrotron XRD results [8]. On cooling from 450 K, the anomaly is more and more pronounced until the collapse of the phonon. Below 200 K the value for the phonon frequency in the critical region increases slowly on cooling (see. Fig. 7 f in the Methods section). In \(q\) regions above \(q>0.4\) (r.l.u.) the phonon frequency increases with increasing \(q\), recapturing the regular transverse phonon profile. 
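Returning to the temperature dependence at \(qk\) = 0.35, a sketch of the power-law fit described above could look as follows; the data arrays are placeholders and `scipy.optimize.curve_fit` is only one possible choice of fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def soft_mode(T, A, Tc, beta):
    # E(T) = A * (T/Tc - 1)**beta; the clip keeps the argument positive
    # while the optimiser explores trial values of Tc.
    x = np.clip(T / Tc - 1.0, 1e-9, None)
    return A * x ** beta

# Placeholder data: temperatures (K) and fitted phonon energies (meV)
# at the q point close to q_CDW.
T = np.array([240.0, 260.0, 280.0, 300.0, 350.0, 400.0, 450.0])
E = np.array([1.1, 1.6, 2.0, 2.3, 3.0, 3.5, 3.9])

p0 = [4.0, 230.0, 0.3]                    # starting guesses for A, Tc, beta
popt, pcov = curve_fit(soft_mode, T, E, p0=p0)
A_fit, Tc_fit, beta_fit = popt
beta_err = np.sqrt(np.diag(pcov))[2]
print(f"T_CDW = {Tc_fit:.0f} K, beta = {beta_fit:.2f} +/- {beta_err:.2f}")
```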
The shape of the phonon, however, is not completely recovered and a very broad lineshape persists also outside of the critical region, together with a dramatic intensity drop (Fig. 5). One of the possible mechanisms that might justify such a damping of the phonon could be anharmonicity, as a direct consequence of the structural instability induced by the CDW transition. With the observed renormalization of the phonon dispersion, over a wide range of wavevectors, and the broad critically damped region, which extends for an interval of wavevectors \(\Delta\)qk = 0.075 (r.l.u.), LaPt\({}_{2}\)Si\({}_{2}\) exhibits a behavior that cannot be explained within the realm of the Peierls' picture, which foresees a sharp cusp-like dip in the phonon dispersion (Kohn anomaly) [18] in correspondence to the Fermi surface nesting (FSN) vector. A similar phenomenology, with the broad Kohn-like anomaly in the phonon dispersion, occurs in the prototypical 2-dimensional CDW material \(2H\)-NbSe\({}_{2}\)[19]. Here, the phonon softening instability was motivated uniquely by a strong \(q\)-dependence of the electron phonon coupling, since no indication of FSN was found. Indeed, looking at the temperature dependence of the resistivity in LaPt\({}_{2}\)Si\({}_{2}\)[8] and \(2H\)-NbSe\({}_{2}\)[20], no sharp metal to insulator transition (typical of the FSN-driven CDW) can be observed at their respective CDW transition temperatures (namely 230 K and 33.5 K respectively). Their resistivity only shows a deviation from linearity, with a downwards bending, which can be explained with small CDW-induced anisotropic energy gaps which hardly affect the transport properties in these materials [21]. However, the two systems present two major differences between their dispersion relations. The first is that \(2H\)-NbSe\({}_{2}\) does not present anharmonicity and the shape of the phonon is recovered at higher \(q\) after the anomaly. The second difference is that, unlike \(2H\)-NbSe\({}_{2}\), in LaPt\({}_{2}\)Si\({}_{2}\) an abrupt increase in the resistivity and a sharp drop in the magnetic susceptibility occur at lower temperatures, in correspondence to the appearance of the \(q2\) satellites and the first order structural transition incommensurate tetragonal \(\rightarrow\) orthorhombic \(Pmmn\) at T2 = 110 K [8]. These sharp anomalies are evocative of a Peierls' transition; however, they do not induce a full metal to insulator transition, because the system is still metallic below the transition. Therefore, they are compatible with partial anisotropic gaps opening at the Fermi surface, with a phenomenology similar to the quasi-1-dimensional system NbSe\({}_{3}\)[22, 23], in which two anomalies in the resistivity temperature dependence were observed. Here, the anomalies were explained with partial gaps opening at the Fermi surface, which were identified with an imperfect Fermi surface nesting. Nevertheless, unlike NbSe\({}_{3}\), in which the transitions in the electronic states are of second order, in LaPt\({}_{2}\)Si\({}_{2}\) such transitions exhibit a hysteresis, which is a signature of a first order transition. Since in regular 1D and 2D systems the CDW transition is usually of second order[24], in reference[8] it is suggested that the abrupt changes in the resistivity and magnetic susceptibility in LaPt\({}_{2}\)Si\({}_{2}\) are associated with the structural transition-induced changes in its electronic states.
A plausible origin for the transition at T2 = 110 K with propagation vectors \(q^{\prime}_{2}\) = (0.18, 0.18, 0.5), \(q^{\prime\prime}_{2}\) = (0.18, -0.18, 0.5) is that a second CDW is established in LaPt\({}_{2}\)Si\({}_{2}\), which has a different nature with respect to the one found at T\({}_{CDW}\) = 230 K. Indeed, the first order transitions in the electronic states, accompanied by a first order restructuring of the atomic lattice with long-range ordering in the out-of-plane \(c-\)direction and strong inter-planar coupling [8], might be consistent with a CDW transition with a 3-dimensional character and a quasi-nesting feature of the Fermi surface, which is indeed foreseen in the theoretical work of Kim et al. [6]. 3-dimensional CDW systems similar to LaPt\({}_{2}\)Si\({}_{2}\), with 2D planar structures containing 5\(d\) transition metal ions with partially filled electronic levels, such as IrTe\({}_{2}\) [25], exhibit an analogous behavior of their transport properties across the CDW transition. Such systems belong to a class of structurally quasi-1D and quasi-2D materials, in which strong inter-chain and inter-planar couplings as well as spin-orbit coupling are believed to be related to the CDW transition [26]. For this reason they are called "strong coupling CDW" materials or "3D-CDW" materials. In reference [8] LaPt\({}_{2}\)Si\({}_{2}\) was associated with the 3D-CDW compound 2H-TaSe\({}_{2}\), due to the striking similarity between the behaviors of their structural evolution across the CDW phase transition [27]. Since discommensuration of the CDW was directly observed in 2H-TaSe\({}_{2}\) [28], by virtue of their similarities it was suggested that the CDW in LaPt\({}_{2}\)Si\({}_{2}\) also undergoes discommensuration, with large commensurate domains separated by narrow incommensurate domain walls [29]. Remarkably, the determination of the critical exponent in 2H-TaSe\({}_{2}\) [30] provided a value \(\beta=\frac{1}{3}\), which implies a non-conventional critical behavior for the CDW transition in 2H-TaSe\({}_{2}\) and is close, within the error bars, to the critical exponent estimated in this work for LaPt\({}_{2}\)Si\({}_{2}\), \(\beta=0.28\pm 0.03\). In qualitative agreement with 2H-TaSe\({}_{2}\), the non-conventional critical behavior of the CDW in LaPt\({}_{2}\)Si\({}_{2}\) might be motivated by discommensuration, thereby endorsing the conjecture of reference [8]. In LaPt\({}_{2}\)Si\({}_{2}\) the two non-equivalent Pt1 and Pt2 sites give different contributions to the electronic density of states at the Fermi level, according to first-principles calculations [6, 9]. Additionally, NMR measurements, acquired in a temperature range below 200 K, show that the Pt1 layer is solely responsible for the occurrence of the CDW state in LaPt\({}_{2}\)Si\({}_{2}\) [14]. In this regard, it is reasonable to believe that the proposed two CDW transitions at T\({}_{CDW-1}\) = 230 K and T\({}_{CDW-2}\) = 110 K have separate origins in the respective Pt planes, as also conjectured in reference [8].

Figure 4: TAS-INS phonon dispersion curves for different temperature points. The scatter plot (solid symbols) shows the experimental data, the grey lines are the lower energy part of the phonon dispersion calculated in reference [6] and the magenta line is the calculated soft mode.

In brief, based on the arguments presented here, in agreement with the published theoretical calculations [6] as well as with the experimental evidence [8],
and in qualitative agreement with the phenomenology of analogous systems [19, 20, 25, 27], we suggest that the CDW formation in LaPt\({}_{2}\)Si\({}_{2}\) is driven by two distinct mechanisms: 1. a \(q\)-dependent electron-phonon coupling driven CDW transition, with a 2D character, which affects the Pt2 plane with transition temperature T\({}_{CDW-1}\) = 230 K; 2. a Fermi surface quasi-nesting driven CDW transition, with a 3D character, which affects the Pt1 layer with T\({}_{CDW-2}\) = 110 K. This scenario foresees interaction between the two CDW mechanisms and is consistent with the suggested discommensuration scenario. The softening of the phonon dispersion in LaPt\({}_{2}\)Si\({}_{2}\) can be attributed to a strong wavevector dependence of the electron-phonon coupling. As a side remark, it is interesting to notice that a \(q\)-dependent overdamping of the phonon is already in place at 450 K. Figure 5 displays the ratio between the linewidth of the phonon \(\Gamma\), extracted from the Lorentzian fit, and the phonon frequency as a function of the wavevector for T = 450 K. From the plot it is evident that the overdamping condition \(\frac{\Gamma}{\omega}>1\) (as described in ref. [31]) is satisfied in a broad range of wavevectors already at this high temperature. Since the electron-phonon coupling is believed to be the only mechanism involved in the first CDW transition at T\({}_{CDW-1}\) = 230 K observed through the phonon softening, it can also be considered the sole mechanism responsible for such a dramatic enhancement of the phonon linewidth. Such a clear influence of the electron-phonon coupling in LaPt\({}_{2}\)Si\({}_{2}\) at a temperature as high as 450 K might have significant implications for the mechanism of superconductivity in LaPt\({}_{2}\)Si\({}_{2}\), which is currently debated [32, 33]. Superconductivity in this material is not the subject of the current investigation; nevertheless, doping dependent studies aimed at the suppression of the CDW transition might clarify this aspect. An alternative explanation for the origin of the phonon broadening at high temperature might be the strong interaction between the phonons and the CDW. Indeed, phonon scattering due to charge fluctuations, which are inevitably present as precursors of the CDW, might be in place already at T = 450 K. Since these fluctuations disturb the periodicity of the lattice, they could act as scattering centers. This might be responsible for a very short lifetime of the phonons, evident from the observed broad linewidths [34]. This interpretation would also be consistent with the strong CDW-induced structural instability of the LaPt\({}_{2}\)Si\({}_{2}\) lattice [8].

Figure 5: (a-c) \(q\)-dependence of the soft phonon mode at \(T=450\) K well outside the critical region. (d) \(q\)-dependence of the damping ratio for the phonon mode at \(T=450\) K. The red dashed line is a guide to the eye and the black dashed horizontal line marks the overdamping condition threshold. Data taken by TAS-INS.

An overview of the dispersion relation at 220 K and 85 K for the \(q\) point (0 -2 0), provided by the TOF-INS HRC measurements, is shown in figure 6. The dashed lines represent an attempt to determine the dispersion curves. However, the evident broadening and weak intensity of all the phonons in this material make the reliability of this approach questionable. In the figure we can see that the phonon dispersion profile appears very blurred, with overlapping of the acoustic modes and a relatively low intensity.
Such a feature of the phonon spectra evidences the anharmonic character of the vibrational modes in this material [35]. Phonon anharmonicity was found to be related to ultralow thermal conductivity in several thermoelectric materials [36, 37, 38]. In light of this fact, the apparent phonon anharmonicity observed in the measured dispersion of single crystalline LaPt\({}_{2}\)Si\({}_{2}\) is consistent with the value of thermal conductivity measured on a polycrystalline sample of LaPt\({}_{2}\)Si\({}_{2}\), \(\kappa\) = 2.7 \(\frac{W}{mK}\), with an estimated lattice-only thermal conductivity \(\kappa_{L}\approx 0.1\frac{W}{mK}\) at room temperature [39]. This value is remarkably low compared to the values of thermal conductivity commonly found in other electrically conducting materials (e.g. stainless steel has \(\kappa\) = 15 \(\frac{W}{mK}\)). Due to its effect on \(\kappa_{L}\), phonon anharmonicity is considered to be a key factor in increasing the efficiency of thermoelectric materials, which are known to be extremely promising for efficient clean energy conversion [40]. Therefore, due to its thermal transport properties and vibrational landscape, LaPt\({}_{2}\)Si\({}_{2}\) also seems to constitute an interesting case study for the development of thermoelectric devices. ## Discussion In this work we have observed the predicted phonon softening in the CDW superconductor LaPt\({}_{2}\)Si\({}_{2}\). Following its temperature dependence, we could determine T\({}_{CDW}\) = 230 K. Indeed, the power law fit to the temperature dependence of the phonon frequency at the reciprocal lattice point \(q\) = (2 0.35 0), in close proximity to \(q_{CDW}\), confirmed the findings of reference [8], placing the CDW transition temperature at T\({}_{CDW}\) = 230 K. From the same power law, the critical exponent of the phase transition was extracted as \(\beta\) = 0.28, suggesting an unconventional critical behavior for the CDW in LaPt\({}_{2}\)Si\({}_{2}\) and endorsing the conjecture that this material manifests CDW discommensuration.

Figure 6: Overview of the phonon modes for the Bragg point (0 -2 0). (a) Longitudinal phonon mode at \(T=85\) K. (b) Transverse phonon mode at \(T=85\) K. (c) Longitudinal phonon mode at \(T=220\) K. (d) Transverse phonon mode at \(T=220\) K. Data taken by ToF-INS.

We propose a strong \(q\)-dependence of the electron-phonon coupling as the driving mechanism for the CDW transition at T\({}_{CDW-1}\) = 230 K in LaPt\({}_{2}\)Si\({}_{2}\) and conjecture the possibility of a second CDW transition at T\({}_{CDW-2}\) = 110 K, related to the first one, with a 3-dimensional nature and Fermi surface quasi-nesting as a driving mechanism. The two CDWs should propagate separately in the non-equivalent Pt2 and Pt1 layers, respectively. While a strong hint of the existence of the first mechanism is found in this work, additional studies would be necessary to prove the second mechanism as well: namely, angle resolved photoemission spectroscopy (ARPES), for a direct observation of the Fermi contour and electronic band structure, to confirm the Fermi surface nesting and probe the spin-orbit coupling, as well as additional neutron spectroscopy studies, to identify any phonon softening associated with the second transition. ## Methods LaPt\({}_{2}\)Si\({}_{2}\) single crystals were prepared using arc melting of high purity La, Pt and Si at TIFR, Mumbai.
The phonon spectra were collected at the triple-axis inelastic neutron scattering (TAS-INS) instrument EIGER at the Paul Scherrer Institut (Switzerland) [17] in the temperature range from 100 K to 470 K with a closed cycle cryo-furnace. The experiment at EIGER was carried out using a double focusing pyrolytic graphite monochromator and a horizontally focusing analyzer at a fixed final energy E\({}_{f}\) = 14.68 meV. The room temperature unit cell of LaPt\({}_{2}\)Si\({}_{2}\), refined from synchrotron x-ray diffraction data, was used for the mapping of the reciprocal space and for defining the momentum transfer \(\mathbf{Q}=h\mathbf{a}^{*}+k\mathbf{b}^{*}+l\mathbf{c}^{*}\), which is expressed in reciprocal lattice units (r.l.u.) as (qh, qk, ql). The crystal was aligned with the _hk_0-plane in the scattering plane. A low temperature overview of the phonon spectra was collected at the time-of-flight spectrometer (TOF-INS) HRC at J-PARC (Japan) at the temperatures 3 K (not shown here), 85 K and 220 K with a Gifford-McMahon (GM) bottom loading cryostat. The measurements at EIGER were performed in inverse geometry while those at HRC were performed in direct geometry. All images involving the crystal structure were made with the VESTA software [41], the TOF-INS data reduction was carried out with the software Hana, the \(q\)-E slices in the TOF-INS data were produced with the software Dave [42], and the data plots were produced with the software IgorPro [43]. Here we also show the data collected on the twinned sample. The energy scans at different \(q\) points can be compared directly with the non-twinned sample, modulo a shift of 0.05 (r.l.u.). The higher density of points in the dispersion relation and in the power law facilitates the observation of the critical behavior in LaPt\({}_{2}\)Si\({}_{2}\). Also in this case, the inelastic part of the \(q\) and energy scans was fitted to Lorentzian lineshapes while the elastic part was modeled with Gaussian lineshapes [Fig. 7(a-c)]. Figure 7(d-e) shows the raw energy scans normalized to the monitor counts at different \(q\) points across the critical region. From these plots it is possible to follow both the \(T\)-dependence [Fig. 7(f)] and the \(q\)-dependence of the soft phonon mode at the highest accessible temperature (470 K) and below the CDW transition (210 K) [Fig. 7(g)]. The mode undergoes a remarkable \(q\)-dependent damping already at 470 K, with a minimum in the value of the phonon frequency at qk = 0.45 (r.l.u.). At \(T=210\) K the phonon shows a \(q\)-dependent damping and shift in frequency up to a partial collapse onto the elastic line. The fit of the temperature dependence of the phonon frequency to the power law \(E_{trans}\propto(\frac{T}{T_{c}}-1)^{\beta}\) [Fig. 7(f)] provides a critical exponent equal to \(\beta=0.28\pm 0.03\), which implies a phase transition with a non-conventional critical behavior [30]. This fit also provides a transition temperature to the CDW state equal to \(T_{CDW}\) = (237 \(\pm\) 4) K.
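As an illustration of the lineshape analysis used throughout (a Lorentzian inelastic peak on top of a Gaussian elastic line), here is a minimal sketch of a combined fit to a single constant-\(q\) energy scan; the peak parameters and the synthetic counts are hypothetical stand-ins for the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, a, mu, sigma):
    return a * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

def lorentzian(E, a, E0, gamma):
    # gamma is the half width at half maximum (HWHM)
    return a * gamma**2 / ((E - E0) ** 2 + gamma**2)

def model(E, a_el, sigma_el, a_ph, E0, gamma, bkg):
    # Elastic line centered at E = 0 plus one phonon peak plus flat background.
    return gaussian(E, a_el, 0.0, sigma_el) + lorentzian(E, a_ph, E0, gamma) + bkg

# Hypothetical energy transfers (meV) and counts for one constant-q scan.
E = np.linspace(-2.0, 8.0, 81)
rng = np.random.default_rng(0)
counts = model(E, 900.0, 0.4, 120.0, 3.2, 0.8, 10.0) + rng.normal(0.0, 5.0, E.size)

p0 = [800.0, 0.5, 100.0, 3.0, 1.0, 5.0]
popt, _ = curve_fit(model, E, counts, p0=p0)
print("phonon energy E0    = %.2f meV" % popt[3])
print("phonon HWHM   gamma = %.2f meV" % popt[4])
```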
2305.14504
Demonstration of quantum-digital payments
Digital payments have replaced physical banknotes in many aspects of our daily lives. Similarly to banknotes, they should be easy to use, unique, tamper-resistant and untraceable, but additionally withstand digital attackers and data breaches. Current technology substitutes customers' sensitive data by randomized tokens, and secures the payment's uniqueness with a cryptographic function, called a cryptogram. However, computationally powerful attacks violate the security of these functions. Quantum technology comes with the potential to protect even against infinite computational power. Here, we show how quantum light can secure daily digital payments by generating inherently unforgeable quantum cryptograms. We implement the scheme over an urban optical fiber link, and show its robustness to noise and loss-dependent attacks. Unlike previously proposed protocols, our solution does not depend on long-term quantum storage or trusted agents and authenticated channels. It is practical with near-term technology and may herald an era of quantum-enabled security.
Peter Schiansky, Julia Kalb, Esther Sztatecsny, Marie-Christine Roehsner, Tobias Guggemos, Alessandro Trenti, Mathieu Bozzio, Philip Walther
2023-05-23T20:20:14Z
http://arxiv.org/abs/2305.14504v2
# Demonstration of quantum-digital payments ###### Abstract Digital contactless payments have replaced physical banknotes in many aspects of our daily lives. Similarly to banknotes, they are easy to use, unique, tamper-resistant and untraceable, but additionally have to withstand attackers and data breaches in the digital world. Current technology substitutes customers' sensitive data by randomized tokens, and secures the uniqueness of each digital purchase with a cryptographic function, called a cryptogram. However, computationally powerful attacks violate the security of these functions. Quantum technology, on the other hand, has the unique potential to guarantee payment protection even in the presence of infinite computational power. Here, we show how quantum light can secure daily digital payments in a practical manner by generating inherently unforgeable quantum-cryptograms. We implement the full scheme over an urban optical fiber link, and show its robustness to noise and loss-dependent attacks. Unlike previously proposed quantum-security protocols, our solution does not depend on challenging long-term quantum storage or a network of trusted agents and authenticated channels. The envisioned scenario is practical with near-term technology and has the potential to herald a new era of real-world, quantum-enabled security. + Footnote †: These two authors contributed equally. ing System (GPS), which opens the door to undesired spoofing-type attacks [33]. In this work, we show how quantum light can provide practical security advantages over classical methods in everyday digital payments. As shown in FIG. 1, we generate and verify i.t.-secure quantum cryptograms, in such a way that the unforgeability and user privacy properties from previous experimental works holds [32], but all intermediate channels, networks and parties are untrusted, thus significantly loosening the security assumptions. Only one authenticated communication (between the client and their payment provider) has to take place at an arbitrary prior point in time. The concealment of the customers' sensitive information is guaranteed by an i.t.-secure function, and the commitment to the purchase is guaranteed by the laws of quantum mechanics. Additionally, no cross-communication is required to validate the transaction in the case of multiple verifier branches. Our implementation is performed over a 641m urban fiber link, and can withstand the full spectrum of noise and loss-dependent attacks, including those exploiting reporting strategies [34]. Digital payments.We first describe the main security concepts of today's online and contactless purchases [15; 16] (actual implementation may vary). Following FIG. 2, each Client initially sets up an account with a Trusted Token Provider (TTP) via a secure communication channel. The TTP is usually the Client's bank, credit card provider, or a trusted external company. Through this initial step, the Client is assigned a unique identification token \(C\), which is stored securely on both the Client's and TTP's devices. The Client's stored data can be e.g. an electronic wallet or a virtual credit card stored on a smartphone, watch, etc. When the Client wishes to purchase goods or services from a given Merchant \(M_{i}\), it has to be ensured that malicious parties, including untrusted Merchants, cannot spend in the Client's name at another place or time. 
That is why the Client receives a one-time payment token \(P\) from the Merchant or TTP, which is used to compute a _cryptogram_, an output of a function of their secret token \(C\), the Merchant's public ID \(M_{i}\), and the one-time payment token \(P\). This cryptogram, which we call \(\kappa\left(C,M_{i},P\right)\), is communicated to the Merchant, who then sends it to the TTP for verification. The TTP can verify the signature and uniqueness of the cryptogram, since they have knowledge of all three inputs \(C\), \(M_{i}\) and \(P\). In real-world applications, the cryptogram is the output of a cryptographic hash- or encryption function [16; 35] that is computationally secure. However, this would allow a malicious party with sufficient computational power to run through all input combinations of \(C\), \(P\) and \(M_{i}\) until they recover the one combination that matches the original cryptogram. In that case, the Client's ID and payment data are completely compromised. **Quantum advantage.** We propose a scheme that is similar to classical digital payments, but replaces the one-time payment token \(P\) by a sequence \(\left|P\right\rangle\) of quantum states. That is to say, \(\kappa\left(C,M_{i},P\right)\) becomes \(\kappa\left(C,M_{i},\left|P\right\rangle\right)\) and steps 2-5 from FIG. 2 are modified as follows: 2. The TTP generates a random bitstring \(b\) and a random conjugate basis-string \(\mathcal{B}\) of length \(\lambda\). Each bit \(b_{j}\) is encoded onto a quantum state prepared in \(\mathcal{B}_{j}\), where \(j\in\{1;...;\lambda\}\). This constitutes the classical description \((b,\mathcal{B})\) of the quantum token \(\left|P\right\rangle\), which the TTP stores under the Client's ID.1 The length \(\lambda\) depends on the tolerated success probability of an attack and the number of available merchants. Figure 1: **Simplified representation of quantum-digital payments.** As in classical payments, we consider three parties: a Client, a Merchant and a Bank/Creditcard institute. In contrast to [32], we do not assume any quantum or classical communication channel to be trusted (i.e. CH 1, CH 2 and CH 3 are insecure), except an initial prior step between the Bank and Client for an account creation. All parties involved apart from the Bank can also act maliciously. During a payment, the Bank sends a set of quantum states to the Client’s device (e.g. phone, computer, etc.), who measures them and transforms them into a quantum-secured payment token – _cryptogram_ – which we display here as a one-time credit card. The Client uses this classical token for paying at the Merchant, who then contacts the Bank for payment verification. If the payment is accepted, the bank transfers the money from the Client’s account to the Merchant’s. 3. Upon receiving \(\left|P\right\rangle\), the Client calculates a bitstring \(m_{i}=\mathit{MAC}(C,M_{i})\), which is the output of an i.t.-secure Message Authentication Code (MAC) [36; 37; 38; 39; 40] that takes the secret token \(C\) and the chosen Merchant's public ID \(M_{i}\) as input (see Methods). The Client interprets \(m_{i}\) as a basis-string and privately measures the whole sequence \(\left|P\right\rangle\) according to \(m_{i}\). The resulting string of measurement outcomes \(\kappa_{i}\stackrel{{ m_{i}}}{{\longleftarrow}}\left|P\right\rangle\) constitutes the cryptogram. 4. The Client sends \(\kappa_{i}\) to the Merchant, who forwards this together with its \(M_{i}\) as \(\left\{\kappa_{i},M_{i}\right\}\) to the TTP for verification. 5. 
To authorize the purchase, the TTP looks up \(C\) and \((b,\mathcal{B})\), and calculates \(m_{i}=\mathit{MAC}(C,M_{i})\) for the Client's ID. The TTP accepts the transaction if and only if \((\kappa_{i})_{j}=b_{j}\) when \((m_{i})_{j}=\mathcal{B}_{j}\). The TTP rejects otherwise. Just like QKD provides i.t.-security for key exchanges such as Diffie-Hellman [41], our scheme provides i.t.-security for the one-time property of cryptograms: while the concealment of \(C\) is guaranteed by the i.t.-secure MAC, the commitment to \(M_{i}\) is ensured by the irreversible nature of quantum measurements (see Methods). Notably, our quantum commitment is not limited by the impossibility theorem of quantum bit commitment [42; 43], in which one of the two parties can delay their quantum measurements in time. This is because in our protocol one of the interacting parties is assumed to be honest (the TTP). We note that our implementation contrasts with those of QKD schemes in two ways. First, the choice of measurement basis is deterministic as opposed to random. This effectively commits the purchase to a given Client token and Merchant. Second, the measurement bases are never publicly revealed, which has the interesting benefit of hiding the Merchant that was chosen by the Client until verification is required [32]. **Loss-dependent security.** Although the security of commitment is guaranteed by the laws of quantum mechanics in theory, certain considerations have to be taken into account in a practical setting: Due to imperfections of real devices (inaccurate state preparation, lossy quantum channels, non-unit detection efficiency), some quantum states will divert from their ideal classical descriptions, or get lost along the way. In fact, some bits in step 5 will be unequal, although measured in the same basis (i.e. \((\kappa_{i})_{j}\neq b_{j}\) when \((m_{i})_{j}=\mathcal{B}_{j}\)) and the protocol would abort even though it was followed honestly. This is why we have to allow for errors and losses during the verification procedure. In turn, a malicious party can exploit this new allowance to circumvent the commitment or double-spend the cryptogram. As an example, assume that the TTP tolerates as many as 50% losses. A malicious Client could then measure half of the quantum token \(\left|P\right\rangle\) in the basis for \(M_{0}\) and the other half in the basis for \(M_{1}\), effectively creating two successfully committed tokens. While double-spending is certainly possible with a loss-rate as high as 50%, we use semidefinite programming to identify combinations of error- and loss-rates for which an attack can still be detected. Intuitively, the derivation involves searching for the cheating strategy that minimizes the malicious party's introduction of excess errors and losses in the protocol (see Methods). We note that, to the best of our knowledge, such powerful loss-dependent attacks were not considered in previous quantum token implementations [24; 32]. **Experimental demonstration.** We implement our quantum-digital payment scheme over the deployed optical fiber link depicted in FIG. 3. The TTP employs a spontaneous parametric down conversion (SPDC) source to create a pair of polarization-entangled photons in the state \(\left|\Psi^{-}\right\rangle=\left(\left|HV\right\rangle-\left|VH\right\rangle \right)/\sqrt{2}\). 
The TTP keeps one of these photons and employs a 50/50 beamsplitter to probabilistically direct it to one of two polarization projection stages, measuring its polarization in either the linear H/V or diagonal D/A basis. This creates the random classical description (\(b,\mathcal{B}\)) and remotely imprints the payment token \(\ket{P}\) onto the second photon. The payment token is sent to the Client, located in another building, through a 641m optical fiber link. Using a half-wave plate, the Client commits to exactly one Merchant from the set \(\{M_{0},M_{1}\}\) by measuring either in the H/V basis for \(m_{0}=MAC(C,M_{0})\) or in the D/A basis for \(m_{1}=MAC(C,M_{1})\). In this way, the Client retrieves the classical cryptogram \(\kappa\left(C,M_{i},\ket{P}\right)\), and forwards it to the Merchant, who is, for convenience, located in the same laboratory. Note that in the case of more than two merchants, the token is split into several sub-tokens that are each measured either in H/V or D/A. We discuss how to adapt the token length in the following section. At any later time, the Merchant transmits the _cryptogram_ received by the Client back to the TTP, using a classical channel that links the two buildings. The TTP finally checks the compatibility of \((b,\mathcal{B})\) with \(M_{i}\), \(C\) and \(\kappa_{i}\), and accepts or rejects the requested transaction.

Figure 2: **Classical digital payments.** _Step_ 0: The Client sets up an account at the Trusted Token Provider (TTP), providing their secret ID and sensitive credit card information through an authenticated and encrypted channel. _Step_ 1: The Client authenticates with the TTP, and requests a cardholder token \(C\), which the TTP sends through a secure channel. _Step_ 2: The TTP randomly generates a one-time token \(P\) and sends it to the Client through a secure channel. _Step_ 3: The Client's device uses the stored secret token \(C\), the public merchant ID \(M_{i}\), and the payment token \(P\) to compute a cryptogram \(\kappa\left(C,M_{i},P\right)\). _Step_ 4: The Client spends the cryptogram at the chosen Merchant. _Step_ 5: The Merchant verifies the cryptogram with the TTP, and accepts or rejects the transaction.

**Results.** We repeatedly execute the experiment for both commitments in H/V and D/A. The average measured error rate is 1.45 \(\pm\) 0.01% for H/V and 3.28 \(\pm\) 0.01% for D/A. The overall losses, combining the deployed fiber link and the Client's setup (including detection efficiency), are estimated at 22.40 \(\pm\) 1.50%, while the multiphoton emission probability, measured through a correlation measurement, is 6.76 \(\pm\) 0.12%. The details of these values are presented in the Supplementary Information. With a maximum honest error rate of \(e_{h}=3.28\pm 0.01\%\) (D/A) and losses of \(l_{h}=22.40\pm 1.50\%\), we lie within the calculated secure region as depicted in FIG. 4.a. In fact, according to our semidefinite programs (see Methods), a cheating party will introduce errors larger than \(e_{d}=3.79\pm 0.22\%\) when double-spending with the same amount of claimed losses. With \(e_{h}<e_{d}\) and \(l_{h}<l_{d}\) by two standard deviations, we therefore demonstrate that a TTP can allow for honest experimental imperfections while ensuring protection against malicious parties.
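For readers who want to see the verification logic of steps 2-5 spelled out, the following is a classical Monte-Carlo sketch of the prepare/measure/verify sequence. It is not the optical implementation: the error and loss rates are assumed values, and the HMAC call only stands in for the information-theoretically secure MAC used in the actual protocol.

```python
import hashlib
import hmac
import random

def mac_bases(client_token: bytes, merchant_id: bytes, length: int) -> list:
    """Toy stand-in for the basis-choosing MAC (0 -> H/V, 1 -> D/A).
    The real protocol uses an i.t.-secure MAC; HMAC is illustrative only."""
    digest = hmac.new(client_token, merchant_id, hashlib.sha256).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    return [int(bits[j % len(bits)]) for j in range(length)]

LENGTH = 10_000       # number of quantum states in the token (illustrative)
ERROR_RATE = 0.03     # assumed intrinsic error rate
LOSS_RATE = 0.22      # assumed channel + detection losses

# Step 2: the TTP draws random bits b and random conjugate bases B.
b = [random.randint(0, 1) for _ in range(LENGTH)]
B = [random.randint(0, 1) for _ in range(LENGTH)]

# Step 3: the Client measures every state in the MAC-derived basis m_i.
C, M_i = b"client-secret-token", b"merchant-0"
m_i = mac_bases(C, M_i, LENGTH)
kappa = []
for j in range(LENGTH):
    if random.random() < LOSS_RATE:
        kappa.append(None)                                    # photon lost
    elif m_i[j] == B[j]:
        kappa.append(b[j] ^ (random.random() < ERROR_RATE))   # matching basis
    else:
        kappa.append(random.randint(0, 1))                    # conjugate basis

# Step 5: the TTP checks only the positions where the bases match.
checked = errors = losses = 0
for j in range(LENGTH):
    if m_i[j] != B[j]:
        continue
    checked += 1
    if kappa[j] is None:
        losses += 1
    elif kappa[j] != b[j]:
        errors += 1
print(f"checked positions  : {checked}")
print(f"observed loss rate : {losses / checked:.3f}")
print(f"observed error rate: {errors / (checked - losses):.3f}")
```

The TTP would accept the transaction only if the observed error and loss rates stay below its tolerated thresholds.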
The i.t.-secure implementation of our protocol depends only on statistical fluctuations arising from the finite number of generated quantum states: a malicious party may indeed successfully cheat by introducing fewer losses or errors than the expected asymptotic values displayed in FIG. 4.a. We use the Chernoff bounds from FIG. 4.b to estimate the _dishonest success probability_ \(p_{d}\) associated with the number \(N\) of quantum states required to verify one bit of the cryptogram. We also determine the probability \(p_{h}\) that the protocol does not abort when followed honestly, which tends to \(p_{h}\sim 1\) as \(N\) is increased. The overall protocol security depends on \(p_{d}\) and on the probability \(p_{t}\) of forging the output tag of the MAC. In an i.t.-secure MAC, \(p_{t}=1/\sqrt{|C|}\) [39]. Since \(p_{d}\) and \(p_{t}\) should be of the same order of magnitude, we choose \(p_{d}=|M|/|C|\) and a MAC with output length \(|M|=\sqrt{|C|}\). The total length of the quantum token is finally given by \(\lambda=N\cdot\sqrt{|C|}\).

Figure 3: **Experimental quantum-digital payments.** **a)** The Trusted Token Provider (TTP) creates entangled photon pairs using a Spontaneous Parametric Down Conversion (SPDC) source. One photon's polarization is randomly measured by the TTP in either the linear or diagonal basis, creating the classical description \((b,\mathcal{B})\), which remotely prepares the quantum token \(\ket{P}\) on the second photon. The latter is sent to the Client through a 641m long optical fiber link, who measures its polarization in a basis \(m_{i}=MAC(C,M_{i})\) specified by a Message Authentication Code (MAC) of the Merchant's ID \(M_{i}\) and the Client's private token \(C\), and thereby obtains the _cryptogram_ \(\kappa_{i}\xleftarrow{m_{i}}\ket{P}\). Classical communication between the TTP, Client and Merchant is used to verify the compatibility of \(\kappa\), \(M_{i}\) and \(C\) with \((b,\mathcal{B})\). Red (blue) lines indicate quantum (classical) channels. The arrow numbering indicates the steps from FIG. 2. **b)** Satellite image of the two buildings housing the TTP, Client and Merchant. A 641 m optical fiber link connects the parties.

**Discussion.** We propose and demonstrate a new form of quantum payment that guarantees the one-time nature of purchases with i.t.-security. By increasing the length of the quantum token, the cheating probability becomes arbitrarily low in the presence of experimental imperfections such as noise and losses. The implementation does not require any challenging technology on the Client's side, besides single-photon detection. While typical contactless payment delays are of the order of seconds, our quantum communication and verification provide i.t.-security within a few tens of minutes. These limitations are, however, only technological: quantum communication rates can be improved by using brighter quantum sources, while the verification delay originates from the correction of time-tagging drifts between the two buildings (see Methods). Indeed, brighter sources of entangled photon pairs have already been demonstrated, which could decrease the quantum token transmission time to under a second [44]. We finally note that practical digital payment schemes must allow for rejected payments without compromising the Client's sensitive data. In our scheme, the adversary can compromise the payment token \(|P\rangle\) sent over the quantum channel or the cryptogram sent over the classical channel.
In both cases, quantum mechanics will ensure that the TTP recognizes the malformed cryptogram and rejects the payment with arbitrarily high probability. The transaction may then be restarted. However, an i.t.-secure MAC must not re-use the key \(C\) (see Methods), which is why we propose the use of an \(n\)-time-secure MAC to overcome this obstacle. This allows re-using \(C\) as an input for the following payments, which imposes a finite, arbitrary bound on the number of purchases [37, 38]. When \(C\) is consumed, a new \(C\), shared between Client and TTP, must then be generated. Remarkably, we can amend our protocol such that the number of purchases is not bounded by the MAC function: the Client simply encrypts a new random key \(R\) with \(C\), and forwards the result along with the cryptogram to the TTP.

Figure 4: **Security for experimental quantum cryptograms.** **a)** The semidefinite programming framework extracts a secure region of operation (turquoise) as a function of errors and losses. Our honest experimental performance (\(e_{h}=0.0328\pm 0.0001;l_{h}=0.2239\pm 0.015\)) is indicated by the blue dot, and lies within the secure region. **b)** The dishonest success probability \(p_{d}\) (green, upper bound) and honest success probability \(p_{h}\) (red, lower bound) are displayed as a function of the number of quantum states \(N\) required to verify one bit of the cryptogram. These are derived using a Chernoff bound argument (see Supplementary Information) [45]. As an example, an experimental token containing \(\lambda=N=4.2\cdot 10^{6}\) quantum states (vertical blue dashed line) achieves an honest success probability very close to \(p_{h}\sim 1\) and a dishonest success probability \(p_{d}=5.9\cdot 10^{-45}\).

**Methods** **Cryptogram.** A cryptogram is a cryptographic function that secures tokenized payments (e.g. online, contactless, and in-app payments) against double-spending [15, 16]. The actual cryptographic mechanism varies per payment network, but a typical procedure is _challenge-response_. Here, the Client is not only in possession of a payment token, but also shares a secret key with the TTP [35]. During the payment, the Merchant generates a pseudo-random value (called a _nonce_), and sends it to the Client who encrypts it with this key (typically, symmetric encryption with \(\geq 128\) bit key strength is used). The resulting _cryptogram_ is sent alongside the payment metadata (e.g. merchant ID, amount, etc.) to the Merchant, who forwards it to the TTP. As the TTP is in possession of the key, they are able to decrypt and prove the correctness of the nonce for the given payment at the Merchant. Spending the token for another transaction is impossible under the assumptions of computationally secure encryption. **i.t.-secure MAC.** A _Message Authentication Code (MAC)_ is a function \(f(H,k,m)\mapsto y\). Based on a pseudo-randomized function \(H\) - typically a hash function -, it
This is similar to the probability of finding the decryption key for a given one-time pad. Different such schemes exist, in which a key \(k\) can either only be used once [36, 39], a finite amount of times [37, 38], or outputted tag length is variable [40]. **Semidefinite programming.** Our quantum-cryptographic security proof involves optimizing over semidefinite positive objects to find an adversary's optimal cheating strategy. Semidefinite programming provides a suitable framework for this, as it allows to optimize over semidefinite positive variables, given linear constraints [46]. Most of the time, these variables are density matrices, measurement operators, or more general completely-positive trace-preserving maps [47]. Semidefinite programs present an elegant dual structure, which associates a dual maximization problem to each primal minimization problem. The optimal value of the primal problem then upper bounds the optimal value of the dual problem, allowing to prove tight bounds on the adversarial cheating probability (see [26] for instance). **Optimal cheating strategy.** Using semidefinite programs, we search for the optimal completely-positive trace-preserving quantum map which minimizes the introduction of noise and losses for an adversary attempting to double-spend the cryptogram. The security analysis takes into account multiphoton emission, and assumes the absence of coherence between photon number states. The latter is justified by the fact that SPDC produces states of the form \(\sum_{n=0}^{\infty}c_{n}\ket{n}_{1}\ket{n}_{2}\) in the \(\{\ket{n}\}\) photon number basis [48], which leaves the individual subsystems in states of the form \(\sum_{n=0}^{\infty}c_{n}\ket{n}\bra{n}\). The resulting cheating strategy is fairly intuitive when considering two extreme cases: when the tolerated error rate is zero, the malicious party splits the quantum token into two equal parts, and measures each half in a different basis. This leads to two tokens that are committed to different merchants with zero error, but with 50% losses on each. On the other hand, when the tolerated losses are zero, the malicious party measures all states in a basis that is rotated by \(22.5^{\circ}\) with respect to the H/V basis. Such a measurement will identify the correct encoded bit with a probability of \(\sim 85.4\%\). The actual optimal cheating strategy corresponding to our experimental parameters is a combination of these two extreme strategies. **State generation.** An SPDC process in a periodically-poled KTP crystal is pumped with a continuous-wave 515nm laser, yielding a pair of polarization-entangled and color-entangled photons. One photon is emitted at around 1500nm, while its orthogonally polarized counterpart is emitted around 785nm. Experimental demonstrations using a similar entanglement design were demonstrated in [49, 50]. Since the spectral bandwidths of the two SPDC processes are not equal, a tunable EXFO bandpass filter is inserted into the 1500nm arm to equalize them and enhance the entanglement visibility. In order to render the two photons temporally indistinguishable, an unpoled KTP crystal of half the length of the ppKTP crystal, with axes rotated by \(90^{\circ}\) with respect to the ppKTP axes, is inserted. 
**Single-photon detection.** After the optical fiber link, the 1500nm photons are detected with PhotonSpot superconducting nanowire single-photon detectors, with efficiencies around 93% (see Supplementary Information for details), while the 785nm photons are locally detected in the TTP's laboratory using Roithner avalanche single-photon detectors, with efficiencies around 50%. A set of paddles, inserted before the polarization measurement, is used to compensate for polarization drifts over the fiber link. **Data post-processing.** The TTP's and Client's single-photon detectors are connected to two different TimeTagging Modules (TTM). In order to recover coincidences between the two buildings, careful synchronisation of the two modules is required: first, the internal clocks of the respective modules bear an offset with respect to one another, due to the photon travel time through the optical fiber link. Second, the cycles of the internal clocks of the two TTMs drift with slightly different rates, resulting in an offset drift over time. Finally, there is an electronic delay due to different detector response times, and the TTMs only record time tags relative to the time they were activated. All these factors were corrected with our post-processing code. **Heralded second order correlation function measurement.** To measure the heralded second order correlation function \(g_{h}^{(2)}(\tau)\), the 1500 nm (telecom) photons created by our SPDC source are sent directly to an InGaAs detector (idler detector labelled D\({}_{\text{i}}\)), while the 785 nm photons are routed to a 50/50 fiber beamsplitter, with both outputs connected to one detector each (labelled \(D_{1}\) and \(D_{2}\)). \(g_{h}^{(2)}(\tau)\) can be written as [51] \[g_{h}^{(2)}(\tau)=\frac{N_{i}\cdot N_{i12}(\tau)}{N_{i1}(0)\cdot N_{i2}(\tau)},\] where \(N_{i}\) is the total number of events detected in the telecom detector during the measurement integration time; \(N_{i1}(0)\) are the 2-fold coincidence events between the telecom detector and D1 at 0 delay; \(N_{i2}(\tau)\) are the 2-fold coincidence events between the telecom detector and D2 at delay \(\tau\); and \(N_{i12}(\tau)\) are the 3-fold coincidences between all 3 detectors with delay \(\tau=0\) between the telecom detector and D1, and delay \(\tau\) to D2. Pumping the SPDC source with \(35\,\mathrm{mW}\), data was acquired for about \(60\,\mathrm{min}\). \(g_{h}^{(2)}(\tau)\), with coincidence time windows of \(0.33\,\mathrm{ns}\), \(0.99\,\mathrm{ns}\), \(1.98\,\mathrm{ns}\), \(2.96\,\mathrm{ns}\), is shown in FIG. 5. A source dominated by single photons has \(g_{h}^{(2)}(0)<0.5\), with \(g_{h}^{(2)}(0)=0\) for a true single-photon source. From our measurements with a coincidence window of \(2.96\,\mathrm{ns}\), which is close to the combined jitter of the SPADs and coincidence logic and therefore the most meaningful value, we determined \(g_{h}^{(2)}(0)=0.03010(14)\), proving that SPDC multi-pair emission processes are negligible for the chosen experimental settings. ## Author contributions M.B. and P.W. conceived the project. J.K., E.S., T.G. and M.B. derived the security analysis. P.S., J.K., E.S., M-C.R., A.T. and M.B. performed the experiment and analysed the experimental data. P.S., J.K. and M-C.R. wrote the code required to run the experiment and process the experimental data. T.G. researched and explained the relevant classical digital payment schemes. E.S, M.B. and T.G. designed the final protocol. P.S., J.K., E.S., T.G. and M.B.
wrote the manuscript, with inputs from M-C.R., A.T. and P.W. ## Competing interests P. W., M. B., T. G., E. S., and P. S. are named as inventors on a patent application for a quantum payment token scheme by the University of Vienna (application no. EP21172766.4; status, pending). The other authors declare no competing interests. ### Acknowledgements This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 899368 (EPIQUS) and through Quantum-Flagship Project UNIQORN under grant agreement 820474; by the Austrian Science Fund (FWF) through [F7113] (BeyondC), and [FG5] (Research Group 5); by the AFOSR via FA9550-21-1-0355 (QTRUST); from the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association. A.T. acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 801110 and the Austrian Federal Ministry of Education, Science and Research (BMBWF). For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2304.03351
Entity Graphs for Exploring Online Discourse
Vast amounts of human communication occurs online. These digital traces of natural human communication along with recent advances in natural language processing technology provide for computational analysis of these discussions. In the study of social networks the typical perspective is to view users as nodes and concepts as flowing through and among the user-nodes within the social network. In the present work we take the opposite perspective: we extract and organize massive amounts of group discussion into a concept space we call an entity graph where concepts and entities are static and human communicators move about the concept space via their conversations. Framed by this perspective we performed several experiments and comparative analysis on large volumes of online discourse from Reddit. In quantitative experiments, we found that discourse was difficult to predict, especially as the conversation carried on. We also developed an interactive tool to visually inspect conversation trails over the entity graph; although they were difficult to predict, we found that conversations, in general, tended to diverge to a vast swath of topics initially, but then tended to converge to simple and popular concepts as the conversation progressed. An application of the spreading activation function from the field of cognitive psychology also provided compelling visual narratives from the data.
Nicholas Botzer, Tim Weninger
2023-04-06T19:57:57Z
http://arxiv.org/abs/2304.03351v1
# Entity Graphs for Exploring Online Discourse ###### Abstract Vast amounts of human communication occurs online. These digital traces of natural human communication along with recent advances in natural language processing technology provide for computational analysis of these discussions. In the study of social networks the typical perspective is to view users as nodes and concepts as flowing through and among the user-nodes within the social network. In the present work we take the opposite perspective: we extract and organize massive amounts of group discussion into a concept space we call an _entity graph_ where concepts and entities are static and human communicators move about the concept space via their conversations. Framed by this perspective we performed several experiments and comparative analysis on large volumes of online discourse from Reddit. In quantitative experiments, we found that discourse was difficult to predict, especially as the conversation carried on. We also developed an interactive tool to visually inspect _conversation trails_ over the entity graph; although they were difficult to predict, we found that conversations, in general, tended to diverge to a vast swath of topics initially, but then tended to converge to simple and popular concepts as the conversation progressed. An application of the spreading activation function from the field of cognitive psychology also provided compelling visual narratives from the data. ## 1 Introduction In any conversation, members continuously track the topics and concepts that are being discussed. The colloquialism "train-of-thought" is often used to describe the path that a discussion takes, where a conversation may "de-rail," or "come-full-circle," etc. An interesting untapped perspective of these ideas exists within the realm of the Web and Social Media, where a train-of-thought could be analogous to a trail over a graph of concepts. With this perspective, an individual's ideas as expressed through language can be mapped to explicit entities or concepts, and, therefore, a single argument or train-of-thought can be treated as a path over the graph of concepts. Within a group discussion, the entities, concepts, arguments, and stories can be expressed as a set of distinct paths over a shared concept space, what we call an _entity graph_. Scholars have long studied discourse and the flow of narrative in group conversations, especially in relation to debates around social media [32] and intelligence [29]. The study of language and discourse is rooted in psychology [7] and consciousness [6]. Indeed, the linguist Wallace Chafe considered "...conversation as a way separate minds are connected into networks of other minds." [7] Looking at online conversations from this angle, a natural hypothesis arises: If we think of group discussion as a graph of interconnected ideas, then can we learn patterns that are descriptive and predictive of the discussion? Fortunately, recent developments in natural language processing, graph mining, and the analysis of discourse now permit the algorithmic modelling of human discussion in interesting ways by piecing them together. This is a broad goal, but in the present work we provide a first step towards graph mining over human discourse. Another outcome of the digital age is that much of human discourse has shifted to online social systems. Interpersonal communication is now observable at a massive scale. 
Digital traces of emails, chat rooms, Twitter or other threaded conversations that approximate in-person communication are commonly available. A newer form of digital group discussion can be seen in the dynamics of Internet fora where individuals (usually strangers) discuss and debate a myriad of issues. Technology that can parse and extract information from these conversations currently exists and operates with reasonable accuracy. From this large body of work, the study of _entity linking_ has emerged as a way to ground conversational statements to well-defined entities, such as those that constitute knowledge bases and knowledge graphs [41]. Wikification, _i.e._, where entities in prose are linked to Wikipedia-entries as if it were written for Wikipedia, is one example of entity linking [9]. The Information Cartography project is another example that uses these NLP-tools to create visualizations that help users understand how related news stories are connected in a simple, yet meaningful manner [40, 39, 25]. But because entity linking techniques have been typically trained from Wikipedia or long-form Web-text, they have a difficult time accurately processing conversational narratives, especially from social media [13]. Fortunately, recent progress in _Social-NLP_ has made considerable strides in recent years [35], providing the ability to extract grounded information from informal, threaded online discourse [27]. Taking this perspective, the present work studies and explores the flow of entities in online discourse through the lens of _entity graphs_. We focus our attention on discussion threads from Reddit, but these techniques should generalize to online discussions on similar platforms so long as the entity linking system can accurately link the text to the correct entities. The threaded conversations provide a clear indication of the reply-pattern, which allows us to chart and visualize conversation-paths over entities. To be clear, this perspective is the opposite of the conventional social networks approach, where information and ideas traverse over user-nodes; on the contrary, we consider discourse to be humans traversing over a graph
These campaigns often operate by seeding conversations in order to exploit conversation patterns and incite a particular group. Another motivation for our proposed methodology is humans attraction towards homophily and the large number of echo chambers that have been created online [10; 17]. Prior works [17] looking at echo chambers in political discourse rely on this notion of the ideas spreading between user-nodes. Other works looking at morality [4] also follow this notion of how moral text spreads throughout a user network. We stress here that our entity graph will allow for a flipped perspective of having users move across the graph of entities in various types of conversations. This position allows for a different form of analysis into how different groups or communities think as a whole. Our way of thinking is illustrated in Fig. 1, which shows a subset of path traversals, which we describe in detail later, from thousands of conversations in /r/politics and /r/conservative that start from the entity Joe_Biden. As a brief preview, we find that conversations starting with Joe_Biden tend to lead towards United_States in conversations from the /r/conservative subreddit (indicated by a red edge), but commonly lead towards mentions of the Republican_Party in conversations from /r/politics (indicated by blue-purple edge). From there the conversations move onward to various other entities and topics that are cropped from Fig 1 to maintain clarity. In the present work, we describe how to create entity graphs and use them to answer questions about the nature of online, threaded discourse. Specifically, we ask three research questions: 1. How predictable is online discourse? Can we accurately determine where a conversation will lead? 2. What do entity graphs of Reddit look like? In general, does online discourse tend to splinter, narrow, or coalesce? Do conversations tend to deviate or stay on topic? 3. Can cognitive-psychological theories on spreading activation be applied to further illuminate and compare online discourse? We find that entity graphs provide a detailed yet holistic illustration of online discourse in aggregate that allow us to address our proposed research questions. Conversations have an enormous, visually random, empirical possibility space, but attention tends to coalesce towards a handful of common topics as the depth increases. Prediction is difficult, and gets more difficult the longer a conversation goes on. Finally, we show that entity graphs present a particularly compelling tool by which to perform comparative analysis. For example, we find, especially in recent years, that conservatives and liberals both tend to focus their conversations on the out-group - a notion known as _affective polarization_[22]. We also find that users also tend to stick to the enforced topics of a subreddit as shown by how r/news tends towards entities from the United States and r/worldnews tends towards non-US topics. ## 2 Methodology ### Online Discourse Dataset Of all the possible choices from which to collect online discourse, we find that Reddit provides exactly the kind of data that can be used for this task. It is freely and abundantly available [1], and it has a large number of users and a variety of topics. Reddit has become a central source of data for many different works [30]. For example, recent studies on the linguistic analysis of Schizophrenia [48], hate speech [8], misogyny [15], and detecting depression related posts [42] all make substantial use of Reddit data. 
The threading system that is built into Reddit comment-pages is important for our analysis. Each comment thread begins with a high level topic (the post title), which is often viewed as the start of a conversation around a specific topic. Users often respond to the post with their own comments. These can be viewed as direct responses to the initial post, and then each of these comments can have replies. This threading system generates a large tree structure where the root is the post title. Of course, such a threading system is only one possible realization of digital discussion, but this system provides the ability to understand how conversations move as users respond to each other in turn. Twitter, Facebook, and Youtube also have discussion sections, but it is very difficult to untangle who is replying to whom in these (mostly) unthreaded systems. Reddit contains a large variety of subreddits, which are small communities focused on a specific topic. We limit our analysis to only a small number of them, but for each selection we obtain their complete comment history from January 2017 to June 2021. In total we selected five subreddits: /r/news, /r/worldnews, /r/Conservative, /r/Coronavirus and /r/politics. We selected these subreddits because they are large and attract a lot of discussion related to current events, albeit with their own perspectives and guidelines. These subreddits also contain a large number of entities, which we plan to extract and analyze. Like most social sites, Reddit post-engagement follows the 90-9-1 rule of Internet engagement. Simply put, most users don't post or comment, and most posts receive almost no attention [30]. Because of this we limit our data to include only those threads that are in the top 20% in terms of number of comments per post. Doing so ensures that we mostly collect larger discussion threads that have an established back and forth. We also ignore posts from the well-known bot accounts (e.g., AutoMod, LocationBot) to ensure we get actual user posts in the conversation. ### Entity Linking We use entity linking tools to extract the entities from each post title and comment in the dataset (cf. [41]). Entity linking tools seek to determine parts of free-form text that represent an entity (a mention) and then map that mention to the appropriate entity-listing in a knowledge base (disambiguation), such as Wikipedia.

Figure 2: (Left) Example comment thread with the post title as the root, two immediate child comments, one of which has two additional child comments. Entity mentions are highlighted in yellow. (Right) The resulting entity tree where each comment is replaced by their entity set. Note the case where the mention-text Trump in the comment thread is represented by the standardized entity-label Donald_Trump in the entity tree.

Existing models and algorithms rely heavily on character matching between the mention-text and the entity label, but more-recent models have employed deep representation learning to make this task more robust [38]. An example of entity linking on a comment thread is illustrated in Fig. 2. Each comment thread \(T\) contains a post \(R\) which serves as the root of the tree \(c_{r}\in T\) and comments \(c_{x}\in T\), where subscripts \(r\) and \(x\) serve to index the post title and a specific comment. Each comment can reply to the root \(c\to r\) or to another comment \(c_{x}\to c_{y}\), thereby determining a comment's depth \(\ell\in[0,\dots,L]\).
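To make the threading formalism concrete, below is a minimal sketch that computes each comment's depth from its parent pointer and attaches a per-comment set of linked entities. The comments, the reply structure, and the entity annotations are invented for illustration; a real pipeline would obtain the entity sets from an entity linker such as the one discussed here.

```python
from collections import defaultdict

# Hypothetical thread: each comment has an id, the id it replies to
# (None for the post title, i.e. the root), and its linked entities.
comments = [
    {"id": "root", "parent": None,   "entities": {"Joe_Biden"}},
    {"id": "c1",   "parent": "root", "entities": {"United_States"}},
    {"id": "c2",   "parent": "root", "entities": {"Republican_Party"}},
    {"id": "c3",   "parent": "c2",   "entities": {"Donald_Trump", "Republican_Party"}},
]

by_id = {c["id"]: c for c in comments}

def depth(comment_id: str) -> int:
    """Depth of a comment: 0 for the post title, parent depth + 1 otherwise."""
    node = by_id[comment_id]
    return 0 if node["parent"] is None else depth(node["parent"]) + 1

# Child lists give the tree structure used to extract conversation paths.
children = defaultdict(list)
for c in comments:
    if c["parent"] is not None:
        children[c["parent"]].append(c["id"])

for c in comments:
    print(c["id"], "depth", depth(c["id"]), "entities", sorted(c["entities"]))
```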
Comments and post titles may or may not contain one or more entities \(S(c)\). These entity sets are likewise threaded, such that \(S(c_{x})\to S(c_{y})\) means that the entities in \(c_{x}\) were responded to with the entities in \(c_{y}\), _i.e._, \(c_{x}\) is the parent of \(c_{y}\). With this formalism, the entity linking task transforms a comment thread into an _entity tree_ as seen in Fig. 2.

Specifically, we utilize the End-to-End (E2E) neural model created by Kolitsas et al. [27] to perform entity linking on our selected subreddits. Previous work has shown that entity linking on Reddit can be quite challenging due to the wide variety of mentions used [3]. The E2E model we use has been shown to have a high level of precision on Reddit but lower recall [3]. We find using this model appropriate as we want to ensure that the entities we find are correct and reliable, but acknowledge that it may miss a portion of the less well-known entities, as well as missing any new entities that arise from entity drift. The choice of this entity linker also influenced our decision to analyze these particular subreddits, as the linker performs better on them. We also experimented with the popular REL entity linker [43]. Although it did retrieve many more entities from the comments, we found a large number of the entities to be incorrect. Using the E2E model we extract entities from each post title and comment individually and construct the entity tree as illustrated in Fig. 2. Table 1 shows a breakdown of the post, comment, and entity statistics for each subreddit considered in the present work.

\begin{table} \begin{tabular}{l r r r r} \hline \hline & **\# Posts** & **\# Comments** & **Total Entities** & **Unique Entities** \\ \hline /r/news & 7,299 & 106,428 & 240,009 & 10,573 \\ /r/worldnews & 16,056 & 263,227 & 692,735 & 12,840 \\ /r/politics & 15,596 & 326,958 & 756,576 & 11,908 \\ /r/Conservative & 3,093 & 41,439 & 100,756 & 4,308 \\ /r/Coronavirus & 18,469 & 252,303 & 509,632 & 10,246 \\ \end{tabular} \end{table} Table 1: Reddit discourse dataset. Top 20% of posts in terms of number of comments from five subreddits between January 2017 and June 2021.

### Entity Graph

Given an entity tree, our next task is to construct a model that can be used to make predictions about the shape and future of the conversation, but can also be used as a visual, exploratory tool. Although entity trees may provide a good picture for a single conversation, we want to investigate patterns in a broader manner. To do this we consider conversations coming from a large number of entity trees in aggregate. This model takes the form of a weighted directed graph \(G=(V,E,w)\) where each vertex \(v\in V\) is a tuple of an entity set \(S(c)\) and its associated depth \(\ell\) in the comment tree, \(v=(S(c),\ell)\). Each directed edge in the graph \(e\in E\) connects two vertices \(e=(v_{1},v_{2})\) such that the depth \(\ell\) of \(v_{1}\) must be one less than the depth of \(v_{2}\). Each edge in the graph \(e\in E\) also carries a weight \(w:E\rightarrow\mathbb{R}\) that represents the frequency of the transition from one entity set to another. This directed graph captures not only the specific concepts and ideas mentioned within the discourse, but also the conversational flow over those concepts. Continuing the example from above, Fig. 3 shows three individual paths \(P\) representing the entity tree from Fig. 2(b).
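Before moving on, the entity-tree step described above can be sketched as follows. The `link_entities` function is a toy stand-in for the E2E neural linker; any linker that returns standardized entity labels such as Donald_Trump could be substituted.

```python
# Toy gazetteer used only to keep the example self-contained; the paper uses the
# E2E neural entity linker to detect mentions and disambiguate them to Wikipedia labels.
TOY_GAZETTEER = {"trump": "Donald_Trump", "biden": "Joe_Biden", "obama": "Barack_Obama"}

def link_entities(text):
    """Stand-in linker: map mention tokens to standardized entity labels."""
    return {TOY_GAZETTEER[tok] for tok in text.lower().split() if tok in TOY_GAZETTEER}

def build_entity_tree(post_id, post_title, comments):
    """Replace the post title and every comment by its entity set S(c)."""
    entity_sets = {post_id: frozenset(link_entities(post_title))}
    for c in comments:
        entity_sets[c["id"]] = frozenset(link_entities(c["body"]))
    return entity_sets  # the threaded structure itself is unchanged
```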
Each entity set moves from one depth to the next, representing the progression of the discussion. During the construction of the entity paths, we remove comments that do not have any replies. Short paths, those with a length less than three, do not offer much information in terms of how the conversation will progress, because the conversation empirically did not progress. It may be useful to analyze why some topics resulted in no follow-on correspondence, but we leave this as a matter for future work.

Figure 3: Paths extracted from the entity tree in Fig. 2(b) represented by directed edges over entity sets.

Because we wish to explore online discourse in aggregate, this is the point where we aggregate across many comment threads \(T\in\mathcal{T}\), where \(\mathcal{T}\) represents an entire subreddit or an intentional mixture of subreddits depending on the task. We extract all of the conversation paths from our comment threads \(\mathcal{T}\) to obtain a group of conversation paths \(\mathcal{P}\). To generate our graph we iterate over our group of paths \(\mathcal{P}\) and aggregate them together to construct our entity graph. For every instance of an entity set transition in a conversation path we increment the weight \(w\) of its respective edge in our entity graph. One key aspect of this is that we count each transition only once per comment thread \(T\). This ensures that entity transitions do not get overcounted by virtue of the thread being larger and containing more conversation paths overall.

One of the limitations of the current graph structure is that the graph does not capture conversation similarities if some of the entities overlap between two different vertices. For instance, another entity tree may result in having an entity set \(S(c_{r})\) that contains a subset of the entities in a given vertex. This new entity set may have a similar conversational flow but will not be captured in our current entity graph because the model does not allow for any entity overlap. To help alleviate this issue we borrow from the notion of a hypergraph and perform a star-expansion on our graph \(G\) [47]. A hypergraph is defined as \(\mathcal{H}=(X,E)\) where \(X\) is the set of vertices and \(E\) is a set of non-empty subsets of \(X\) called hyperedges. The star expansion process turns a hypergraph into a simple, bipartite graph. It works by generating a new vertex for each hyperedge present in the hypergraph and then connecting each original vertex to the hyperedge-vertices that contain it. This generates a new graph \(G(V,E)\) from \(\mathcal{H}\) by introducing a new vertex and edge for each hyperedge such that \(V=X\cup E\). While our model is a graph, we can treat each entity set \(S(c)\) as a hyperedge in our case to perform this star expansion. This gives us new vertices to represent each individual entity and allows us to capture transitions from one entity set to another if they share a subset of entities. An example of the resulting graph after performing a star-expansion can be seen in Fig. 4. This helps to provide valid transition paths that would otherwise not exist without the star expansion. When the star expansion operation is performed, the edge weight between a new individual entity vertex and its respective entity set is set to the number of times that entity set occurred at a given depth \(\ell\).
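The path extraction and aggregation just described can be sketched as follows. This is a simplified reading of the construction (not the released implementation): vertices are `(entity set, depth)` tuples, and each transition is counted at most once per thread. The star expansion would additionally introduce one vertex per individual entity, linked to every entity-set vertex at the same depth that contains it.

```python
from collections import Counter

def extract_paths(entity_sets, children, depths, root, min_len=3):
    """Enumerate root-to-leaf conversation paths of (entity set, depth) vertices."""
    paths, stack = [], [(root, [(entity_sets[root], depths[root])])]
    while stack:
        node, path = stack.pop()
        kids = children.get(node, [])
        if not kids:
            if len(path) >= min_len:          # drop short paths with no back-and-forth
                paths.append(tuple(path))
            continue
        for kid in kids:
            stack.append((kid, path + [(entity_sets[kid], depths[kid])]))
    return paths

def build_entity_graph(paths_per_thread):
    """Weighted edges over (entity set, depth) vertices, counted once per thread."""
    weights = Counter()
    for thread_paths in paths_per_thread:      # one list of paths per comment thread
        transitions = set()
        for path in thread_paths:
            transitions.update(zip(path, path[1:]))
        weights.update(transitions)            # each transition counted once per thread
    return weights                             # {(v1, v2): weight}
```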
Although the star expansion process will generate a much larger graph due to the large number of vertices, it proves to be useful for prediction and for aligning entity set vertices in a visual space. This graph-model therefore represents the entities, their frequent combinations, and the paths frequently used in their invocation over a set of threaded conversations.

## 3 Conversation Prediction

Having generated these entity graphs we turn our attention to the three research questions. RQ1 first asks if these entity graphs can be used to predict where a conversation may lead. Clearly this is a difficult task, but recent advances in deep learning and language models have led to major improvements and interest in conversational AI [34], which has further led to the development of a number of models that utilize entities and knowledge graphs [45] from various sources including Reddit [46]. The main motivation of these tools is to use the topological structure of the knowledge graphs (entities and their relationships) to improve a conversational agent's ability to more-naturally select the next entity in the conversation. The typical methodology in related machine learning papers seeks to predict the next entity in some conversation [31]. In these cases, a dataset of paths through a knowledge graph is constructed from actual human conversations as well as one or more AI models. Then a human annotator picks the entity that they feel is most natural [31, 23].

Figure 4: Entity graph constructed from a star-expansion of the entity tree in Fig. 2(b) and the conversation paths in Fig. 3(c). This model represents the entities, their frequent combinations, and the paths frequently used in their invocation.

Our methodology varies from these as we are not focused on building a machine learning model that predicts these entities precisely. Our goal is to demonstrate broader patterns of people conversing over and through the topics. To this end, we do not evaluate with a standard machine learning paradigm aiming to optimize for metrics such as accuracy, precision, recall, etc. To demonstrate that our entity graph captures broad patterns that can be further explored we perform two tasks: (1) the generalization task and (2) a similarity prediction task. Each task uses 5-fold cross validation where we split the entity graph into 80/20 splits for \(\mathcal{H}_{\text{train}}\) and \(\mathcal{H}_{\text{test}}\) respectively. We perform this cross validation in a disjoint manner with the Reddit threads that we have extracted. This creates 5 different entity graphs, one for each split, and validates the model's generalization to unseen Reddit threads. Although this disjoint split ensures the threads are separate, we do not consider the temporal aspect of these threads.

The first task, generalization, gets at the heart of our broader question on the predictability of conversation paths. In this task we simply calculate the fraction of entity sets at each level in \(\mathcal{H}_{\text{test}}\) that also appear at the same level in \(\mathcal{H}_{\text{train}}\) of our entity graph. Formally, we measure generalization as \(1-\frac{\|S_{\ell}\in\mathcal{H}_{\text{test}}\setminus S_{\ell}\in\mathcal{H}_{\text{train}}\|}{\|S_{\ell}\in\mathcal{H}_{\text{test}}\|}\) for each \(\ell\). In simple terms, generalization tells us, given an unseen conversation comment, if the model can make a prediction from the given comment by matching at least one entity in our entity graph model.
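Minimal sketches of the two evaluation helpers are shown below: the per-depth generalization score defined by the formula above (here in its exact-set-match form; the star expansion additionally credits partial entity overlap), and the empirical next-step prediction used by the similarity task described next, whose output is then compared against the true entity set.

```python
def generalization_by_depth(train_vertices, test_vertices):
    """Fraction of test entity sets at each depth that also occur at that depth in training.

    Both arguments are iterables of (entity set, depth) vertices; this is the
    exact-match variant of the score described in the text.
    """
    train_by_depth, test_by_depth = {}, {}
    for eset, depth in train_vertices:
        train_by_depth.setdefault(depth, set()).add(eset)
    for eset, depth in test_vertices:
        test_by_depth.setdefault(depth, set()).add(eset)
    scores = {}
    for depth, test_sets in test_by_depth.items():
        seen = train_by_depth.get(depth, set())
        missing = sum(1 for s in test_sets if s not in seen)
        scores[depth] = 1.0 - missing / len(test_sets)
    return scores

def predict_next(weights, current_vertex):
    """Most probable next (entity set, depth) vertex under the empirical edge weights."""
    candidates = {v2: w for (v1, v2), w in weights.items() if v1 == current_vertex}
    return max(candidates, key=candidates.get) if candidates else None
```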
This task therefore validates how well the model captures general conversation patterns by matching at the entity level. Results of this analysis are shown in Fig. 5, where color and shape combinations indicate the subreddit and \(\ell\) is represented along the x-axis. Error bars represent the 95% confidence interval of the mean across the 5 folds.

Figure 5: Percent of the predictions made on the testing set that, on average, exist in the training set over the 5 folds. Higher is better.

We find that the entity graph captures much more of the information early in conversations. As the depth increases to three and beyond, we note a sharp drop in the overlap between the test and training sets. The widening confidence interval also indicates that the amount of information varies based on the test set. From these results, we conclude that analyzing the flow of an unseen conversation early on is reasonable, but findings from deeper in the conversation may be difficult because key entities may be missing from the entity graph.

The second task, similarity prediction, looks to measure the similarity between a predicted entity set and the actual entity set. This methodology uses the entity embeddings from the E2E entity linking model to represent the entities in the vector space. For each root in \(\mathcal{H}_{\text{test}}\) we find its matching root in \(\mathcal{H}_{\text{train}}\); if a match does not exist, we discard and start again. Then we make the Markovian assumption and perform probabilistic prediction for each path in the training set via \(Pr(S_{\ell+1}(c_{y})|S_{\ell}(c_{x}))\), _i.e._, the empirical probability of a conversation moving to \(S_{\ell+1}(c_{y})\) given the conversation is currently at \(S_{\ell}(c_{x})\) in \(\mathcal{H}_{\text{train}}\). The probability for each transition is based on the edge weights that we captured during the graph construction step. As determined in the previous experiment, entity sets are increasingly unlikely to match exactly as the depth increases; so rather than a 0/1 loss, we measure the Word Mover's Distance (WMD) between the predicted entities and the actual entities [28]. Results for this comparison are shown in Fig. 6 for three of the larger subreddits. We again find that as the depth of the conversation increases, the distance between our predicted tree and the ground truth entities rises. These results indicate that as a conversation continues, the variety of topics discussed tends to increase. Therefore, predictions are unlikely to align very well with those of the true conversation. This is most clearly seen in the /r/politics plot in Fig. 6, where we note a sharp increase in the later parts of the conversation. If the variety of topics was consistent, then we would expect the WMD to stay relatively flat throughout the conversation depth.

## 4 Conversation Traversals

Next, we investigate RQ2 through a visualization of the entity graph. Recall that the entity graph contains entity sets over the depths of the conversation. Specifically, we seek to understand what conversations on Reddit look like. Do they splinter, narrow, or behave in some other way? We call the set of visual paths _conversation traversals_ because they indicate how users traverse the entity graph. We generate these visual conversation traversals using a slightly modified force-directed layout [16].
Graph layout algorithms operate like graph embedding algorithms such as LINE and node2vec, but rather than embedding graphs into a high-dimensional space, visual graph layout tools embed nodes and edges into a 2D space. In our setting we place some restrictions on the algorithm in order to force topics to coalesce into a visually meaningful and standardized space. Specifically, we fix the position of each vertex in our graph on the x-axis according to \(\ell\). As in Fig. 4, individual entity vertices always occur to the left of entity set vertices, making the visualization illustrate how conversations flow from start to finish in a left-to-right fashion. This restriction forces the embedding algorithm to adjust the position only on the y-coordinate, and this is necessary to allow the individual-entity-to-entity-set edges from the star-expansion to pull entity set vertices close together if and only if they share many common entities. Loosely connected or disconnected entities will therefore not be pulled together. As a result, the y-axis tends to cluster entities and entity-sets together in a semantically meaningful way.

Figure 6: Box plot of Word Mover's Distance (WMD) as a function of the conversation depth \(\ell\). Lower is better. Box plots represent WMD-error of entity representations predicted by the narrative hypergraph over all entities, over all depths, over five folds.

Embedding algorithms are typically parameterized with a learning rate parameter that determines how much change can happen to the learned representation at each iteration. Because we want entities to be consistent horizontally, we modify the learning rate function to increasingly dampen embedding updates over 100 iterations per depth. For example, given an entity graph of depth \(L=10\), we would expect 1,000 iterations total. We initially allow all entities and entity sets to update according to the default learning rate, but as the iterations increase to 100 the learning rate of the entities and entity sets at \(\ell=1\) will slowly dampen and eventually lock into place at iteration 100. When these entities and entity sets lock we also lock those same entities and entity sets at all other depths. This ensures that each of these entities and entity sets will be drawn as a horizontal line at the given y position. Then, from iterations 100-200, the learning rate of the entities and entity sets at \(\ell=2\) will slowly dampen and eventually lock into place at iteration 200. Meanwhile the entities and entity sets at deeper levels will continue to be refined. In this way, the semantically meaningful y-coordinates tend to propagate from left to right as the node embedding algorithm iterates.

One complication is that the sheer number of entities and the conversation paths over the entities is too large to be meaningful to an observer. So we do not draw the entity-nodes generated by the star-expansion and instead opt to rewire entity sets based on the possible paths through the individual entity nodes. We also tune the edge opacity based on the edge weights. We draw the resulting graph with D3 to provide an interactive visualization [2]. Conversation traversals of the entity graph generated from /r/news are illustrated in Fig. 7. This illustration is cropped to remove the four deepest vertical axes (on the right) and is also cropped to show the middle half of the illustration. A zoomed-in version highlights some interesting entity sets present in the /r/news conversation.
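The depth-by-depth dampening schedule above can be sketched as follows. This is a toy, attraction-only reading of the modified force-directed layout (the real implementation also handles the star-expansion vertices, repulsive forces, and the cross-depth locking of identical entities); `iters_per_depth` and the base learning rate are illustrative.

```python
import random

def depth_locked_layout(vertices, weights, iters_per_depth=100, base_lr=0.05):
    """Embed (entity set, depth) vertices: x is pinned to depth, only y is learned.

    Updates at the depth currently being locked are progressively damped and then
    frozen, so semantically meaningful y-positions propagate left to right.
    """
    vertices = list(vertices)
    max_depth = max(depth for _, depth in vertices)
    wmax = max(weights.values())
    y = {v: random.uniform(-1.0, 1.0) for v in vertices}
    locked = set()
    for it in range(iters_per_depth * (max_depth + 1)):
        locking_depth = it // iters_per_depth            # depth whose window is active
        damp = 1.0 - (it % iters_per_depth) / iters_per_depth
        for (v1, v2), w in weights.items():
            for src, dst in ((v1, v2), (v2, v1)):
                if src in locked:
                    continue
                lr = base_lr * (damp if src[1] == locking_depth else 1.0)
                y[src] += lr * (w / wmax) * (y[dst] - y[src])  # pull connected vertices together
        if (it + 1) % iters_per_depth == 0:
            locked |= {v for v in vertices if v[1] <= locking_depth}
    return {v: (v[1], y[v]) for v in vertices}           # (x, y) positions, x = depth
```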
Recall that the entity sets are consistent horizontally, so the red circles on the left and the right of the inset plot both indicate the entity set with Donald_Trump; likewise the blue circles on the left and the right of the inset both represent Barack_Obama. Edges moving visually left to right indicate topical paths found in online discourse. In the /r/news subreddit, which tracks only US news, Donald_Trump and Barack_Obama are frequent visits, but so too are national entities like United_States (not highlighted), Iraq, and others. It is difficult to see from this illustration, but the expanded interactive visualization shows a common coalescing pattern where large sets of entities and unique combinations of ideas typically coalesce into more simple singleton entities like Barack_Obama or United_States.

Figure 7: Entity graph showing the visual conversation traversals from /r/news. This illustration shows the paths of conversations over entity sets. The x-axis represents the depth of the conversation; entity sets are clustered into a semantically meaningful space along the y-axis. Inset graph highlights five example entity sets and their connecting conversation paths. Node colors represent equivalent entity sets. In this example we highlight how entity sets are placed in meaningful semantic positions in relation to one another.

### Spreading Activation

Next, we adapt the illustration of conversation traversals to begin to answer RQ3. Specifically, we are interested in whether differences in starting points, at the roots of the comment tree, have any impact on the eventual shape of the conversation. For example, given a conversation starting with Donald_Trump, how will the conversation take shape for liberals, and how might that conversation be different among conservatives? This kind of analysis provides endless possibilities in the analysis of how different groups of people think and articulate ideas about a given topic.

To help answer this question, we employ tools from the study of _spreading activation_ [11]. Spreading activation is a concept from cognitive psychology that has been used to model how ideas spread and propagate in the brain from an initial source. A popular use for spreading activation has been on semantic networks to find the relatedness between different concepts. Formally, spreading activation works by specifying two parameters: (1) a firing threshold \(F\in[0,1]\) and (2) a decay factor \(D\in[0,1]\). The vertex/entity set selected by a user will be given an initial activation \(A_{i}\) of 1. This is then propagated to each connected vertex as \(A_{i}\times w_{j}\times D\), where \(w_{j}\) is the weight of each edge connection to the corresponding vertex. Each vertex will then acquire its own activation value \(A_{i}\) based on the total amount of signal received from all incoming edges.

Figure 8: Entity graph example of spreading activation on /r/news when Barack_Obama is selected as the starting entity. The x-axis represents the (threaded) depth at which each entity was mentioned within conversations rooted at Barack_Obama. The y-axis represents the semantic space of each entity, _i.e._, similar entities are closer than dissimilar entities on the y-axis. Node colors represent equivalent entity sets. In this example, we observe that conversations starting from Barack_Obama tend to center around the United_States, political figures such as Donald_Trump, and discussion around whether his religion is Islam.
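A minimal sketch of this activation procedure, including the firing-threshold and fire-once conventions described next, might look as follows (edge weights are assumed to be normalized to [0, 1]):

```python
def spreading_activation(weights, start_vertex, firing_threshold=0.1, decay=0.5):
    """Propagate activation from a user-selected (entity set, depth) vertex.

    The start vertex gets activation 1.0; a firing vertex sends A * w * decay along each
    outgoing edge, each vertex accumulates incoming signal, and a vertex fires at most
    once, when its accumulated activation exceeds the firing threshold.
    """
    activation = {start_vertex: 1.0}
    fired, frontier = set(), [start_vertex]
    while frontier:
        vertex = frontier.pop()
        if vertex in fired or activation.get(vertex, 0.0) < firing_threshold:
            continue
        fired.add(vertex)
        for (src, dst), w in weights.items():
            if src != vertex:
                continue
            activation[dst] = activation.get(dst, 0.0) + activation[vertex] * w * decay
            if dst not in fired:
                frontier.append(dst)
    return activation  # vertex -> accumulated activation, used to prune and scale nodes
```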
If a vertex has acquired enough activation to exceed the firing threshold \(F\), it too will fire, further propagating forward through the graph. In the common setting, vertices are only allowed to fire once, and the spreading will end once there are no more vertices to activate. In our work we use spreading activation as a method for a user to select a starting topic/entity set within the illustration of conversation traversals. The spreading activation function will then propagate the activation of entities along the conversation paths to highlight those that are most likely to activate from a given starting point. Because we permit the entity graph to be constructed (and labeled) from multiple subreddits, we can also use the spreading activation function to compare and contrast how users from different subreddits activate in response to a topic. After spreading activation has been calculated, our interactive visualization tool removes all vertices and links that are not part of the activated portion of the graph. All of the vertices involved in spreading activation will have their size scaled based on how much activation they received. An example of this is cropped and illustrated in Fig. 8, which shows how spreading activation occurs when the entity set Barack_Obama is activated within /r/news. Here we see that conversations starting with (only) Barack_Obama tend to move towards discussions about the United_States. We also note that the Islam entity is semantically far away from Barack_Obama and Donald_Trump, as indicated by its placement on the y-axis. The results from using spreading activation allow for a much more granular investigation of conversational flow. These granular levels of conversational flow demonstrate that an individual can search for patterns related to influence campaigns, echo chambers and other social media maladies across a number of topics.

## 5 Comparative Analysis

The visual conversation traversals appear to be helpful for investigating trends within a group. But our final goal is to use these to compare and contrast how different groups move through the conversation space. Our first attempt at this was to overlay separate plots and attempt to compare the trends. This would be challenging though because it would fail to capture the magnitude of any differences between the groups for various entity set transitions. Our second attempt, instead, modified the entity graph creation process to take in data from two different subreddits. By using both communities we can capture how often an entity transition occurs in each subreddit and use color gradients to indicate the relative strength of each transition probability based on the edge weight we find in each subreddit. This visually shows if correlations occur between subreddits. In the present work, we examined three different scenarios among the subreddits in our dataset.

**Scenario 1: liberals and conservatives.** Determining how motivated groups communicate about and respond to various topics is of enormous importance in modern communication studies. For example, communication specialists and political scientists are interested in understanding how users respond to coordinated influence campaigns that flood social media channels with the same message [33]. Repetition is key for the idea to stick, and we would expect then that these forms of messaging would begin to appear in the entity graphs and possibly be visually indicated in the conversation traversals.
Although a full analysis of this difficult topic is not within the purview of the current work, we do perform a comparative analysis of /r/Conservative and /r/politics as proxies for comparing conservative and liberal groups, respectively. We pay particular attention to determining the topics and entities that each group tends to go towards later (deeper) in the conversation. Such a comparative analysis may be key to understanding how coordinated influence campaigns orient the conversation of certain groups or derail them. The comparative illustration using spreading activation was used at the beginning of the paper in Fig. 1 and is not re-illustrated in this section. The illustration yields some interesting findings. While one might expect /r/Conservative to discuss members or individuals related to the Republican Party, we instead find that conversations tend to migrate toward mentions of liberal politicians (_e.g._, Joe_Biden), indicated by red lines in Fig. 1. The reverse holds true as well: mentions of Joe_Biden lead towards mentions of the Republican_Party by the liberal group, as indicated by the blue line connecting the two. A brief inspection of the underlying comments reveals that users in each subreddit tend to talk in a negative manner towards the other party's politicians. This is a clear example of affective polarization [22] being captured by our visualization tool. Affective polarization is where individuals organize around principles of dislike and distrust towards the out-group (the other political party) even more so than trust in their in-group. Another finding we observe is the more pronounced usage of the United_States by conservatives than liberals. This observation could be explained by the finding that conservatives show a much larger degree of overt patriotism than liberal individuals [20], which has more recently led to a renewed interest in populism and nationalism [12].

**Scenario 2: US news and Worldnews.** In our second scenario, we compare the conversations from /r/news (red) and /r/worldnews (blue), which are geared towards US-only news and non-US news respectively. The comparison between these subreddits reveals unsurprising findings. A much larger portion of the entity sets come from /r/worldnews as they discuss a much broader range of topics. Many of the entity transitions that are dominated by /r/worldnews come from discussions of other countries, events, and people outside of the United States. The aspects that are shown to come primarily from /r/news are topics surrounding the United States, China, and major political figures from the United States. An example of this can be seen in Fig. 9, which illustrates spreading activation starting from White_House. Here, the dominating red lines, which reflect transitions from within conversations on /r/news, converge to United_States, even after topics like Russia or Islam are discussed. An interesting side note is that many of the unlabeled entities entering the conversation via blue lines (/r/worldnews) in \(\ell=5\) and \(\ell=6\) represent other countries such as Canada, Japan, Mexico, and Germany. The findings from this comparative analysis do not show any extremely interesting results, but they do show that the entity graph is able to capture the patterns one would assume to find from comparing these two subreddits of interest.

**Scenario 3: COVID and Vaccines.** Our final analysis focuses on comparing a single subreddit, /r/Coronavirus, but during two different time periods.
There is a large amount of work that has been done analyzing COVID online, looking at partisanship [36], user reaction to misinformation [24], and differences in geographic concerns [19]. The first segment (highlighted in red) comes from the period of January through June in 2020, which was during the emergence of the novel Coronavirus. Although the /r/Coronavirus subreddit had existed for many years prior, it became extremely active during this time. The second segment was from the following year, January - June 2021. This time period corresponded to the development, approval and early adoption of vaccines. Our analysis of this visualization yielded some interesting findings related to the coronavirus pandemic that we illustrate in Fig. 10. If we begin spreading activation from the perspective of United_States, we find that most of the discussion leads to China and Italy in 2020, which appears reasonable because of China and Italy's early struggles with virus outbreaks. In comparison, the 2021 data appeared more likely to mention Sweden, India, and Germany, which had severe outbreaks during those months. Our findings from spreading activation allow us to capture the shifting changes in countries of interest from 2020 to 2021 as the pandemic progressed.

## 6 Discussion

In the current work we presented a new perspective by which to view and think about online discourse. Rather than taking the traditional social networks view where information flows over the human participants, our view is to consider human conversations as stepping over a graph of concepts and entities. We call these discourse maps _entity graphs_ and we show that they present a fundamentally different view of online human communication.

Figure 9: Illustration of an entity graph created from threaded conversations from /r/news (red edges) and r/worldnews (blue edges). The x-axis represents the (threaded) depth at which each entity set was mentioned within conversations rooted at White_House. The y-axis represents the semantic space of each entity, _i.e._, similar entities are closer than dissimilar entity sets on the y-axis. Node colors represent equivalent entity sets. Conversations in /r/news tend to coalesce to United_States, while conversations in /r/worldnews tend to scatter into various other countries (unlabeled black nodes connected by thin blue lines).

Taking this perspective we set out to answer three research questions about (1) discourse prediction, (2) illustration, and (3) behavior comparisons between groups. We found that discourse remains difficult to predict, and this prediction gets harder the deeper into the conversation we attempt predictions. We demonstrate that the visual conversation traversals provide a view of group discourse, and we find that online discourse tends to coalesce into narrow, simple topics as the conversation deepens - although those topics could be wildly different from the starting topic. Finally, we show that the spreading activation function is able to focus the visualization to provide a comparative analysis of competing group dynamics.

Figure 10: Comparison between the first 6 months of /r/Coronavirus from 2020 to 2021. Illustration of an entity graph created from threaded conversations from /r/Coronavirus in Jan–June of 2020 (red edges) and from Jan–June of 2021 (blue edges). The x-axis represents the (threaded) depth at which each entity set was mentioned within conversations rooted at United_States.
The y-axis represents the semantic space of each entity set, _i.e._, similar entity sets are closer than dissimilar entity sets on the y-axis. Node colors represent equivalent entity sets. Conversations tended to focus on China and Italy early in the pandemic, but turned towards a broader topic space later in the pandemic.

### Limitations

While the work in its current state is helpful for better understanding conversations, it is not without its limitations. Foremost, in the present work we only considered conversations on Reddit. Another limitation is that the entity linking method we chose is geared towards high precision at the cost of low recall. This means that we can be confident that the entities extracted in the conversations are mostly correct, but we have missed some portion of entities. The recall limitation does inhibit the total number of entities we were able to collect; a better system would provide for better insights in our downstream analysis. This issue can also be highlighted with the long-tail distribution of entities and the challenges this poses to current methods [21]. An entity linking model that focuses on recall may still result in useful graphs as prior works have found that many of the entities are considered "close enough" even when they are not a perfect match to ground truth data [14]. Using a different entity linking model could lead to different patterns extracted from our method. A model that optimizes for higher recall could create a much larger entity graph, though it would likely contain a fair amount of noise due to the precision-recall trade-off. Another limitation inherent to the present work is the consideration of conversations as threaded trees. This is an imperfect representation of natural, in-person conversation, and still different from unthreaded conversations like those found on Twitter and Facebook, which may require a vastly different entity graph construction method. Finally, the interactive visualization tool is limited in its ability to process enormous amounts of conversation data because of its reliance on JavaScript libraries and interactive browser rendering.

### Future Work

These limitations leave open avenues for further exploration in future work. Our immediate goals are to use the entity graphs to better understand how narratives are crafted and shaped across communities. Improvements in the entity linking process and the addition of concept vertices, pronoun anaphora resolution, threaded information extraction and other advances in SocialNLP will serve to improve the technology substantially. We also plan to ingest other threaded conversational domains such as Hackernews, 4chan, and even anonymized email data. Extensions of this work could also include capturing more information between entity transitions, such as the sentiment overlaid on a given entity or group of entities. This extra information could allow us to create entity graphs that not only show the transition but also how various groups speak and feel about those specific entities.

## 7 Acknowledgements

The authors would like to thank Yifan Ding and Justus Hibshman for their feedback on the paper. This work is supported in part by the Defense Advanced Research Projects Agency (DARPA) and Army Research Office (ARO) under Contract No. W911NF-21-C-0002.
2306.12425
Cohomologies of pre-LieDer pairs and applications
In this paper, we use the higher derived bracket to give the controlling algebra of pre-LieDer pairs. We give the cohomology of pre-LieDer pairs by using the twist $L_\infty$-algebra of this controlling algebra. In particular, we define the cohomology of regular pre-LieDer pairs. We study infinitesimal deformations of pre-LieDer pairs, which are characterized by the second cohomology group of pre-LieDer pairs. We also define the cohomology of regular pre-LieDer pairs with coefficients in an arbitrary representation and use the second cohomology group to classify abelian extensions of regular pre-LieDer pairs.
Shanshan Liu, Liangyun Chen
2023-03-08T07:05:46Z
http://arxiv.org/abs/2306.12425v1
# Cohomologies of pre-LieDer pairs and applications ###### Abstract. In this paper, we use the higher derived bracket to give the controlling algebra of pre-LieDer pairs. We give the cohomology of pre-LieDer pairs by using the twist \(L_{\infty}\)-algebra of this controlling algebra. In particular, we define the cohomology of regular pre-LieDer pairs. We study infinitesimal deformations of pre-LieDer pairs, which are characterized by the second cohomology group of pre-LieDer pairs. We also define the cohomology of regular pre-LieDer pairs with coefficients in an arbitrary representation and use the second cohomology group to classify abelian extensions of regular pre-LieDer pairs.

Key words and phrases: pre-LieDer pair, \(L_{\infty}\)-algebra, \(V\)-data, cohomology, deformation, extension. **MSC 2020**: 17A36, 17A40, 17B10, 17B40, 17B60, 17B63, 17D25

###### Contents

* 1 Introduction
* 2 Cohomologies of pre-Lie algebras with representations
* 3 Maurer-Cartan characterizations and deformations of pre-LieDer pairs
* 3.1 \(L_{\infty}\)-algebras and higher derived brackets
* 3.2 The \(L_{\infty}\)-algebra that controls deformations of pre-LieDer pairs
* 4 Cohomologies and infinitesimal deformations of pre-LieDer pairs
* 4.1 Cohomologies of pre-LieDer pairs
* 4.2 Infinitesimal deformations of pre-LieDer pairs
* 4.3 Cohomologies of regular pre-LieDer pairs
* 5 Abelian extensions of regular pre-LieDer pairs

## 1. Introduction

Derivations play an important role in the study of various algebraic structures. Derivations can be used to construct higher derived brackets and homotopy Lie algebras [35], deformation formulas [10] and differential Galois theory [30]. They are also important in control theory [1, 2] and functional analysis [6, 7]. Moreover, they also play a fundamental role in the study of gauge theories in quantum field theory via the BV-formalism [8]. In [14, 29], the authors studied the operad of associative algebras with derivation. Recently, in [33] the authors study the cohomology, extensions and deformations of Lie algebras with derivations. Using similar ideas, the authors study associative algebras with derivations [12] and Leibniz algebras with derivations [13].

The notion of a pre-Lie algebra (also called left-symmetric algebra, quasi-associative algebra, Vinberg algebra and so on) was introduced independently by M. Gerstenhaber in deformation theory of rings and algebras [20]. Pre-Lie algebras arose from the study of affine manifolds and affine structures on Lie groups [23] and homogeneous convex cones [36]. Their defining identity is weaker than associativity. This algebraic structure describes some properties of the space of cochains in Hochschild cohomology of an associative algebra, of rooted trees, and of vector fields on affine spaces.

## 2. Cohomologies of pre-Lie algebras with representations

**Definition 2.1**.: _A_ **pre-Lie algebra** _is a vector space \(\mathfrak{g}\) together with a bilinear product \(\cdot:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{g}\) such that for all \(x,y,z\in\mathfrak{g}\),_ \[(x\cdot y)\cdot z-x\cdot(y\cdot z)=(y\cdot x)\cdot z-y\cdot(x\cdot z).\] _The commutator \([x,y]_{C}=x\cdot y-y\cdot x\) defines a Lie algebra structure on \(\mathfrak{g}\), which is called the sub-adjacent Lie algebra of \((\mathfrak{g},\cdot)\) and denoted by \(\mathfrak{g}^{C}\)._
**Definition 2.2**.: ([4]) _A_ **representation** _of a pre-Lie algebra \((\mathfrak{g},\cdot)\) on a vector space \(V\) consists of a pair \((\rho,\mu)\), where \(\rho:\mathfrak{g}\longrightarrow\mathfrak{gl}(V)\) is a representation of the sub-adjacent Lie algebra \(\mathfrak{g}^{C}\) on \(V\), and \(\mu:\mathfrak{g}\longrightarrow\mathfrak{gl}(V)\) is a linear map, such that for all \(x,y\in\mathfrak{g}\):_ \[\mu(y)\circ\mu(x)-\mu(x\cdot y)=\mu(y)\circ\rho(x)-\rho(x)\circ\mu(y).\]

We denote a representation of a pre-Lie algebra \((\mathfrak{g},\cdot)\) by \((V;\rho,\mu)\). Furthermore, let \(L,R:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{g})\) be the linear maps defined by \(L_{x}y=x\cdot y\) and \(R_{x}y=y\cdot x\). Then \((\mathfrak{g};L,R)\) is also a representation, which is called the regular representation.

**Definition 2.3**.: _Let \((V;\rho,\mu)\) be a representation of a pre-Lie algebra \((\mathfrak{g},\cdot)\) and let \(D:\mathfrak{g}\longrightarrow V\) be a linear map such that for all \(x,y\in\mathfrak{g}\)_ \[D(x\cdot y)=\rho(x)D(y)+\mu(y)D(x).\] _Then \(D\) is a_ **derivation** _of the pre-Lie algebra \((\mathfrak{g},\cdot)\) with respect to the representation \((V;\rho,\mu)\). A pre-Lie algebra \((\mathfrak{g},\cdot)\) together with such a \(D\) is called a_ **pre-LieDer pair**_, which is denoted by \((\mathfrak{g},D,\rho,\mu)\). In particular, if \((\mathfrak{g};L,R)\) is the regular representation, then \((\mathfrak{g},D,L,R)\) is called a regular pre-LieDer pair and is simply denoted by \((\mathfrak{g},D)\)._

**Definition 2.4**.: _Let \((\mathfrak{g},D,\rho,\mu)\) and \((\mathfrak{g}^{\prime},D^{\prime},\rho^{\prime},\mu^{\prime})\) be two pre-LieDer pairs. A_ **morphism** _\((f_{\mathfrak{g}},f_{V})\) from \((\mathfrak{g},D,\rho,\mu)\) to \((\mathfrak{g}^{\prime},D^{\prime},\rho^{\prime},\mu^{\prime})\) consists of a pre-Lie algebra morphism \(f_{\mathfrak{g}}:\mathfrak{g}\longrightarrow\mathfrak{g}^{\prime}\) and a linear map \(f_{V}:V\longrightarrow V^{\prime}\) such that for all \(x\in\mathfrak{g}\)_ \[f_{V}\circ\rho(x) = \rho^{\prime}(f_{\mathfrak{g}}(x))\circ f_{V}, \tag{1}\] \[f_{V}\circ\mu(x) = \mu^{\prime}(f_{\mathfrak{g}}(x))\circ f_{V}, \tag{2}\] \[f_{V}\circ D = D^{\prime}\circ f_{\mathfrak{g}}. \tag{3}\]

Let \((V;\rho,\mu)\) be a representation of a pre-Lie algebra \((\mathfrak{g},\cdot)\).
The set of \(n\)-cochains is given by \[C^{n}(\mathfrak{g};V)=\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes \mathfrak{g},V),\quad\forall n\geq 1.\] For all \(f\in C^{n}(\mathfrak{g};V),\ x_{1},\ldots,x_{n+1}\in\mathfrak{g}\), define the coboundary operator \(\mathrm{d}:C^{n}(\mathfrak{g};V)\longrightarrow C^{n+1}(\mathfrak{g};V)\) by \[(\mathrm{d}f)(x_{1},\ldots,x_{n+1}) = \sum_{i=1}^{n}(-1)^{i+1}\rho(x_{i})f(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n+1})\] \[+\sum_{i=1}^{n}(-1)^{i+1}\mu(x_{n+1})f(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n},x_{i})\] \[-\sum_{i=1}^{n}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}},\ldots,x_{n},x_{i}\cdot x_{n+1})\] \[+\sum_{1\leq i<j\leq n}(-1)^{i+j}f([x_{i},x_{j}]_{C},x_{1}, \ldots,\hat{x_{i}},\ldots,\hat{x_{j}},\ldots,x_{n+1}).\]

**Theorem 2.5**.: ([15]) _With the above notation, \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g};V),\mathrm{d})\) is a cochain complex._

**Definition 2.6**.: ([15]) _The cohomology of the cochain complex \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g};V),\mathrm{d})\) is called the cohomology of the pre-Lie algebra \((\mathfrak{g},\cdot_{\mathfrak{g}})\) with coefficients in the representation \((V;\rho,\mu)\). The corresponding cohomology group is denoted by \(H^{n}(\mathfrak{g};V)\)._

**Remark 2.7**.: _We use \(\mathrm{d}_{\mathrm{reg}}\) to denote the coboundary operator of \((\mathfrak{g},\cdot)\) with coefficients in the regular representation._

A permutation \(\sigma\in\mathbb{S}_{n}\) is called an \((i,n-i)\)-unshuffle if \(\sigma(1)<\cdots<\sigma(i)\) and \(\sigma(i+1)<\cdots<\sigma(n)\). If \(i=0\) or \(i=n\), we assume \(\sigma=\mathrm{Id}\). The set of all \((i,n-i)\)-unshuffles will be denoted by \(\mathbb{S}_{(i,n-i)}\). The notion of an \((i_{1},\ldots,i_{k})\)-unshuffle and the set \(\mathbb{S}_{(i_{1},\ldots,i_{k})}\) are defined similarly.

Let \(\mathfrak{g}\) be a vector space. We consider the graded vector space \(C^{*}(\mathfrak{g};\mathfrak{g})=\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g}; \mathfrak{g})=\oplus_{n=1}^{+\infty}\mathrm{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\). It was shown in [11, 31, 37] that \(C^{*}(\mathfrak{g};\mathfrak{g})\) equipped with the Matsushima-Nijenhuis bracket \[[P,Q]^{MN}=P\circ Q-(-1)^{pq}Q\circ P,\quad\forall P\in C^{p+1}(\mathfrak{g}; \mathfrak{g}),Q\in C^{q+1}(\mathfrak{g};\mathfrak{g})\] is a graded Lie algebra, where \(P\circ Q\in C^{p+q+1}(\mathfrak{g};\mathfrak{g})\) is defined by \[P\circ Q(x_{1},\ldots,x_{p+q+1})\] \[= \sum_{\sigma\in\mathbb{S}_{(q,1,p-1)}}\mathrm{sgn}(\sigma)P(Q(x_{ \sigma(1)},\ldots,x_{\sigma(q)},x_{\sigma(q+1)}),x_{\sigma(q+2)},\ldots,x_{ \sigma(p+q)},x_{p+q+1})\] \[+(-1)^{pq}\sum_{\sigma\in\mathbb{S}_{(p,q)}}\mathrm{sgn}(\sigma)P(x _{\sigma(1)},\ldots,x_{\sigma(p)},Q(x_{\sigma(p+1)},\ldots,x_{\sigma(p+q)},x_ {p+q+1})).\] In particular, \(\pi\in\mathrm{Hom}(\otimes^{2}\mathfrak{g},\mathfrak{g})\) defines a pre-Lie algebra if and only if \([\pi,\pi]^{MN}=0\). If \(\pi\) is a pre-Lie algebra structure, then \(d_{\pi}:=[\pi,\cdot]^{MN}\) is a graded derivation of the graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN})\) satisfying \(d_{\pi}\circ d_{\pi}=0\), so that \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},d_{\pi})\) becomes a differential graded Lie algebra.

Let \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) be vector spaces; elements of \(\mathfrak{g}_{1}\) will be denoted by \(x,y,x_{i}\) and elements of \(\mathfrak{g}_{2}\) by \(u,v,v_{i}\).
Let \(c:\wedge^{n-1}\mathfrak{g}_{1}\otimes\mathfrak{g}_{1}\longrightarrow\mathfrak{ g}_{2}\) be a linear map. We can construct a linear map \(\hat{c}\in C^{n}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus \mathfrak{g}_{2})\) by \[\hat{c}(x_{1}+v_{1},\ldots,x_{n}+v_{n}):=c(x_{1},\ldots,x_{n}).\] In general, for a given linear map \(f:\wedge^{k-1}\mathfrak{g}_{1}\otimes\wedge^{l}\mathfrak{g}_{2}\otimes \mathfrak{g}_{1}\longrightarrow\mathfrak{g}_{j}\) for \(j\in\{1,2\}\), we define a linear map \(\hat{f}\in C^{k+l}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) by \[\hat{f}(x_{1}+v_{1},\ldots,x_{k+l}+v_{k+l})=\sum_{\sigma\in\mathbb{S}(k-1,l)} \mathrm{sgn}(\sigma)f(x_{\sigma(1)},\ldots,x_{\sigma(k-l)},v_{\sigma(k)}, \ldots,v_{\sigma(k+l-1)},x_{k+l}).\] Similarly, for \(f:\wedge^{k}\mathfrak{g}_{1}\otimes\wedge^{l-1}\mathfrak{g}_{2}\otimes \mathfrak{g}_{2}\longrightarrow\mathfrak{g}_{j}\) for \(j\in\{1,2\}\), we define a linear map \(\hat{f}\in C^{k+l}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) by \[\hat{f}(x_{1}+v_{1},\ldots,x_{k+l}+v_{k+l})=\sum_{\sigma\in\mathbb{S}(k,l-1)} \mathrm{sgn}(\sigma)f(x_{\sigma(1)},\ldots,x_{\sigma(k)},v_{\sigma(k+1)}, \ldots,v_{\sigma(k+l-1)},v_{k+l}).\] We call the linear map \(\hat{f}\) a **horizontal lift** of \(f\), or simply a lift. We define \(\mathcal{G}^{k,l}=\wedge^{k-1}\mathfrak{g}_{1}\otimes\wedge^{l}\mathfrak{g}_ {2}\otimes\mathfrak{g}_{1}+\wedge^{k}\mathfrak{g}_{1}\otimes\wedge^{l-1} \mathfrak{g}_{2}\otimes\mathfrak{g}_{2}\). The vector space \(\wedge^{n-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2})\otimes(\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) is isomorphic to the direct sum of \(\mathcal{G}^{k,l},k+l=n\). **Definition 2.8**.: ([27]) _A linear map \(f\in\mathrm{Hom}(\wedge^{n-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2})\otimes( \mathfrak{g}_{1}\oplus\mathfrak{g}_{2}),\mathfrak{g}_{1}\oplus\mathfrak{g}_{2})\) has a_ **bidegree \(k|l\)** _if the following four conditions hold:_ * \(k+l+1=n\)_;_ * _If_ \(X\) _is an element in_ \(\mathcal{G}^{k+1,l}\)_, then_ \(f(X)\in\mathfrak{g}_{1}\)_;_ * _If_ \(X\) _is an element in_ \(\mathcal{G}^{k,l+1}\)_, then_ \(f(X)\in\mathfrak{g}_{2}\)_;_ * _All the other case,_ \(f(X)=0\)_._ _We denote a linear map \(f\) with bidegree \(k|l\) by \(\|f\|=k|l\)._ We call a linear map \(f\)**homogeneous** if \(f\) has a bidegree. We denote the set of homogeneous linear maps of bidegree \(k|l\) by \(C^{k|l}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus\mathfrak{g} _{2})\). We have \(k+l\geq 0,k,l\geq-1\) because \(n\geq 1\) and \(k+1,l+1\geq 0\). In our later study, the subspaces \(C^{k|0}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus\mathfrak{ g}_{2})\) and \(C^{l-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus\mathfrak{ g}_{2})\) will be frequently used. 
By the above lift, we have the following isomorphisms: \[C^{k|0}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2}) \cong \operatorname{Hom}(\wedge^{k}\mathfrak{g}_{1}\otimes\mathfrak{g} _{1},\mathfrak{g}_{1})\oplus\operatorname{Hom}(\wedge^{k}\mathfrak{g}_{1} \otimes\mathfrak{g}_{2},\mathfrak{g}_{2})\oplus\operatorname{Hom}(\wedge^{k-1 }\mathfrak{g}_{1}\otimes\mathfrak{g}_{2}\otimes\mathfrak{g}_{1},\mathfrak{g} _{2});\] \[C^{l-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2}) \cong \operatorname{Hom}(\wedge^{l-1}\mathfrak{g}_{1}\otimes\mathfrak{ g}_{1},\mathfrak{g}_{2}).\] **Lemma 2.9**.: ([27]) _If \(\|f\|=k_{f}l_{f}\) and \(\|g\|=k_{g}l_{g}\), then \([f,g]^{MN}\) has the bidegree \(k_{f}+k_{g}|l_{f}+l_{g}\)._ **Lemma 2.10**.: ([27]) _If \(\|f\|=-1|k\) (resp. \(k|-1\)) and \(\|g\|=-1|l\) (resp. \(l\|-1\)), then \([f,g]^{MN}=0\)._ By Lemma 2.9 and Lemma 2.10, we obtain that \((\oplus_{k=0}^{+\infty}C^{k|0}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}, \mathfrak{g}_{1}\oplus\mathfrak{g}_{2}),[\cdot,\cdot]^{MN})\) is a graded Lie subalgebra of the graded Lie algebra \((C^{*}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus \mathfrak{g}_{2}),[\cdot,\cdot]^{MN})\) and \((\oplus_{l=0}^{+\infty}C^{l-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}, \mathfrak{g}_{1}\oplus\mathfrak{g}_{2}),[\cdot,\cdot]^{MN})\) is an abelian graded Lie subalgebra of the graded Lie algebra \((C^{*}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus \mathfrak{g}_{2}),[\cdot,\cdot]^{MN})\). **Proposition 2.11**.: ([28]) _Let \(\mathfrak{g}\) and \(V\) be two vector spaces. Then \(\pi+\rho+\mu\in C^{1|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V)\) is a Maurer-Cartan element of the graded Lie algebra \((\oplus_{k=0}^{+\infty}C^{k|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V),[ \cdot,\cdot]^{MN})\) if and only if \((V;\rho,\mu)\) is a representation of the pre-Lie algebra \((\mathfrak{g},\pi)\)._ Let \((V;\rho,\mu)\) be a representation of a pre-Lie algebra \((\mathfrak{g},\pi)\). By Proposition 2.11, \(\pi+\rho+\mu\) is a Maurer-Cartan element of the the graded Lie algebra \((\oplus_{k=0}^{+\infty}C^{k|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V),[ \cdot,\cdot]^{MN})\). It follows from the graded Jacobi identity that \(\mathrm{d}_{\pi+\rho+\mu}:=[\pi+\rho+\mu,\cdot]^{MN}\) is a graded derivation of the graded Lie algebra \((\oplus_{k=0}^{+\infty}C^{k|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V),[ \cdot,\cdot]^{MN})\) satisfying \(\mathrm{d}_{\pi+\rho+\mu}^{2}=0\). Thus, we have **Theorem 2.12**.: _Let \((V;\rho,\mu)\) be a representation of a pre-Lie algebra \((\mathfrak{g},\pi)\). Then \((\oplus_{k=0}^{+\infty}C^{k|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V),[ \cdot,\cdot]^{MN},\mathrm{d}_{\pi+\rho+\mu})\) is a differential graded Lie algebra._ _Furthermore, \((V;\rho+\rho^{\prime},\mu+\mu^{\prime})\) is representation of a pre-Lie algebra \((g,\pi+\pi^{\prime})\) for \(\pi^{\prime}\in\operatorname{Hom}(\mathfrak{g}\otimes\mathfrak{g},\mathfrak{ g})\), \(\rho^{\prime}\in\operatorname{Hom}(\mathfrak{g}\otimes V,V)\) and \(\mu^{\prime}\in\operatorname{Hom}(V\otimes\mathfrak{g},\mathfrak{g})\) if and only if \(\pi^{\prime}+\rho^{\prime}+\mu^{\prime}\) is a Maurer-Cartan element of the differential graded Lie algebra \((\oplus_{k=0}^{+\infty}C^{k|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V),[ \cdot,\cdot]^{MN},\mathrm{d}_{\pi+\rho+\mu})\)._ Let \((V;\rho,\mu)\) be a representation of a pre-Lie algebra \((\mathfrak{g},\pi)\). 
Define the set of \(0\)-cochains \(C^{0}(\mathfrak{g},\pi,\rho,\mu)\) to be \(0\). For \(n\geq 1\), we define the set of \(n\)-cochains \(C^{n}(\mathfrak{g},\pi,\rho,\mu)\) by \[C^{n}(\mathfrak{g},\pi,\rho,\mu):=\operatorname{Hom}(\wedge^{n-1}\mathfrak{g} \otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-1} \mathfrak{g}\otimes V,V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g} \otimes V\otimes\mathfrak{g},V).\] Define the coboundary operator \(\partial:C^{n}(\mathfrak{g},\pi,\rho,\mu)\longrightarrow C^{n+1}(\mathfrak{g}, \pi,\rho,\mu)\) by \[\partial f:=(-1)^{n-1}\mathrm{d}_{\pi+\rho+\mu}f=(-1)^{n-1}[\pi+\rho+\mu,f]^{ MN}.\] Since \(\mathrm{d}_{\pi+\rho+\mu}^{2}=0\), we obtain that \(\partial^{2}=0\). Thus, \((\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g},\pi,\rho,\mu),\partial)\) is a cochain complex. **Definition 2.13**.: _The cohomology of the cochain complex \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g},\pi,\rho,\mu),\partial)\) is called the cohomology of the pre-Lie algebra \((\mathfrak{g},\pi)\) with the representation \((V;\rho,\mu)\). The corresponding cohomology group is denoted by \(H^{n}(\mathfrak{g},\pi,\rho,\mu)\)._ For all \(f=(f_{\mathfrak{g}},f_{\rho},f_{\mu})\in C^{n}(\mathfrak{g},\pi,\rho,\mu)\), where \(f_{\mathfrak{g}}\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g}, \mathfrak{g})\), \(f_{\rho}\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V,V)\) and \(f_{\mu}\in\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes V\otimes \mathfrak{g},V)\). Then have \[\partial f=((\partial f)_{\mathfrak{g}},(\partial f)_{\rho},(\partial f)_{\mu}).\] By straightforward computation, we obtain that \[(\partial f)_{\mathfrak{g}} = (-1)^{n-1}[\pi,f_{\mathfrak{g}}]^{MN}=\mathrm{d}_{\mathrm{reg}}f_{ \mathfrak{g}}, \tag{4}\] \[(\partial f)_{\rho} = (-1)^{n-1}([\rho,f_{\mathfrak{g}}]^{MN}+[\pi+\rho,f_{\rho}]^{MN}) \tag{6}\] \[(\partial f)_{\mu} = (-1)^{n-1}([\mu,f_{\mathfrak{g}}]^{MN}+[\mu,f_{\rho}]^{MN}+[\pi+\rho +\mu,f_{\mu}]^{MN}). \tag{5}\] More precisely, for all \(x_{1},\ldots,x_{n}\in\mathfrak{g},u\in V\), we have \[(\partial f)_{\rho}(x_{1},\ldots,x_{n},u) = \sum_{i=1}^{n}(-1)^{i+1}\rho(f_{\mathfrak{g}}(x_{1},\ldots,\hat{ x_{i}},\ldots,x_{n},x_{i}))u\] \[+\sum_{i=1}^{n}(-1)^{i+1}\rho(x_{i})f_{\rho}(x_{1},\ldots,\hat{ x_{i}},\ldots,x_{n},u)\] \[-\sum_{i=1}^{n}(-1)^{i+1}f_{\rho}(x_{1},\ldots,\hat{x_{i}},\ldots,x_{n},\rho(x_{i})u)\] \[+\sum_{1\leq i<j\leq n}(-1)^{i+j}f_{\rho}([x_{i},x_{j}]_{C},x_{1},\ldots,\hat{x_{i}},\ldots,\hat{x_{j}},\ldots,x_{n},u).\] and \[(\partial f)_{\mu}(x_{1},\ldots,x_{n-1},u,x_{n})\] \[= (-1)^{n-1}(\mu(f_{\mathfrak{g}}(x_{1},\ldots,x_{n}))u+\mu(x_{n}) f_{\rho}(x_{1},\ldots,x_{n-1},u)-f_{\rho}(x_{1},\ldots,x_{n-1},\mu(x_{n})u))\] \[-\sum_{i=1}^{n-1}(-1)^{i+1}f_{\mu}(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n-1},u,x_{i}\cdot x_{n})\] \[+\sum_{i=1}^{n-1}(-1)^{i+1}\rho(x_{i})f_{\mu}(x_{1},\ldots,\hat{ x_{i}},\ldots,x_{n-1},u,x_{n})\] \[-\sum_{i=1}^{n-1}(-1)^{i+1}f_{\mu}(x_{1},\ldots,\hat{x_{i}}\ldots,x_{n-1},\rho(x_{i})u,x_{n})\] \[+\sum_{i=1}^{n-1}(-1)^{i+1}\mu(x_{n})f_{\mu}(x_{1},\ldots,\hat{ x_{i}},\ldots,x_{n-1},u,x_{i})\] \[+\sum_{i=1}^{n-1}(-1)^{i+1}f_{\mu}(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n-1},\mu(x_{i})u,x_{n})\] \[+\sum_{1\leq i<j\leq n-1}(-1)^{i+j}f_{\mu}([x_{i},x_{j}]_{C},x_{1},\ldots,\hat{x_{i}},\ldots,\hat{x_{j}},\ldots,x_{n-1},u,x_{n}).\] ## 3. Maurer-Cartan characterizations and deformations of pre-LieDer pairs In this section, first, we recall \(L_{\infty}\)-algebras and higher derived brackets. 
Then, we use higher derived brackets to construct an \(L_{\infty}\)-algebra, whose Maurer-Cartan elements are pre-LieDer pairs. Finally, we construct a twist \(L_{\infty}\)-algebra by this Maurer-Cartan element, which controls deformations of pre-LieDer pairs. ### \(L_{\infty}\)-algebras and higher derived brackets Let \(\mathfrak{g}=\oplus_{k\in\mathbb{Z}}\mathfrak{g}^{k}\) be a \(\mathbb{Z}\)-graded vector space. The desuspension operator \(s^{-1}\) changes the grading of \(\mathfrak{g}\) according to the rule \((s^{-1}\mathfrak{g})^{i}:=\mathfrak{g}^{i+1}\). The degree \(-1\) map \(s^{-1}:\mathfrak{g}\longrightarrow s^{-1}\mathfrak{g}\) is defined by sending \(x\in\mathfrak{g}\) to its copy \(s^{-1}x\in s^{-1}\mathfrak{g}\). The notion of an \(L_{\infty}\)-algebra was introduced by Stasheff in [32]. See [25, 26] for more details. **Definition 3.1**.: _An \(L_{\infty}\)-**algebra** is a \(\mathbb{Z}\)-graded vector space \(\mathfrak{g}=\oplus_{k\in\mathbb{Z}}\mathfrak{g}^{k}\) equipped with a collection \((k\geq 1)\) of linear maps \(l_{k}:\mathfrak{g}^{k}\mathfrak{g}\ \to\ \mathfrak{g}\) of degree \(1\) with the property that, for any homogeneous elements \(x_{1},\cdots,x_{n}\in\mathfrak{g}\), we have_ 1. (graded symmetry) _for every_ \(\sigma\in S_{n}\)_,_ \[l_{n}(x_{\sigma(1)},\cdots,x_{\sigma(n)})=\varepsilon(\sigma)l_{n}(x_{1}, \cdots,x_{n}),\] 2. (generalized Jacobi identity) _for all_ \(n\geq 1\)_,_ \[\sum_{i=1}^{n}\sum_{\sigma\in S(i,n-i)}\varepsilon(\sigma)l_{n-i+1}(l_{i}(x_{ \sigma(1)},\cdots,x_{\sigma(i)}),x_{\sigma(i+1)},\cdots,x_{\sigma(n)})=0,\] _where \(\varepsilon(\sigma)=\varepsilon(\sigma;x_{1},\cdots,x_{n})\) is the Koszul sign for a permutation \(\sigma\in S_{n}\) and \(x_{1},\cdots,x_{n}\in\mathfrak{g}\)._ We denote an \(L_{\infty}\)-algebra by \((\mathfrak{g},\{l_{k}\}_{k=1}^{+\infty})\). **Definition 3.2**.: \(A\) **Maurer-Cartan element** _of an \(L_{\infty}\)-algebra \((\mathfrak{g},\{l_{k}\}_{k=1}^{+\infty})\) is an element \(\alpha\in\mathfrak{g}^{0}\) satisfying_ \[\sum_{k=1}^{+\infty}\frac{1}{k!}l_{k}(\alpha,\cdots,\alpha)=0. \tag{7}\] **Remark 3.3**.: _In general, the Maurer-Cartan equation (7) makes sense when the \(L_{\infty}\)-algebra is a filtered \(L_{\infty}\)-algebra [17]. In the following, the \(L_{\infty}\)-algebra under consideration satisfies \(l_{k}=0\) for \(k\) sufficiently big, so the Maurer-Cartan equation make sense._ Let \(\alpha\) be a Maurer-Cartan element of a \(L_{\infty}\)-algebra \((\mathfrak{g},\{l_{k}\}_{k=1}^{+\infty})\). Define \(l_{k}^{\alpha}:\mathfrak{g}\ \to\ \mathfrak{g}\ (k\geq 1)\) by \[l_{k}^{\alpha}(x_{1},\cdots,x_{k})=\sum_{n=0}^{+\infty}\frac{1}{n!}l_{k+n}( \underbrace{\alpha,\cdots,\alpha}_{n},x_{1},\cdots,x_{k}).\] **Theorem 3.4**.: ([16, 19]) _With the above notation, \((\mathfrak{g},\{l_{k}^{\alpha}\}_{k=1}^{+\infty})\) is an \(L_{\infty}\)-algebra which is called the twisted \(L_{\infty}\)-algebra by \(\alpha\)._ **Theorem 3.5**.: ([19]) _Let \((\mathfrak{g},\{l_{k}^{\alpha}\}_{k=1}^{+\infty})\) be a twist \(L_{\infty}\)-algebra by \(\alpha\). 
Then \(\alpha+\alpha^{\prime}\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((\mathfrak{g},\{l_{k}\}_{k=1}^{+\infty})\) if and only if \(\alpha^{\prime}\) is a Maurer-Cartan element of the twisted \(L_{\infty}\)-algebra \((\mathfrak{g},\{l_{k}^{\alpha}\}_{k=1}^{+\infty})\)._

**Definition 3.6**.: ([34]) A \(V\)**-data** consists of a quadruple \((L,\mathfrak{h},P,\Delta)\), where * \((L,[\cdot,\cdot])\) is a graded Lie algebra, * \(\mathfrak{h}\) is an abelian graded Lie subalgebra of \((L,[\cdot,\cdot])\), * \(P:L\ \to\ L\) is a projection, that is \(P\circ P=P\), whose image is \(\mathfrak{h}\) and whose kernel is a graded Lie subalgebra of \((L,[\cdot,\cdot])\), * \(\Delta\) is an element in \(\ker(P)^{1}\) such that \([\Delta,\Delta]=0\).

**Theorem 3.7**.: ([34, 18]) Let \((L,\mathfrak{h},P,\Delta)\) be a \(V\)-data. Then the graded vector space \(s^{-1}L\oplus\mathfrak{h}\) is an \(L_{\infty}\)-algebra, where \[l_{1}(s^{-1}f,\theta) = (-s^{-1}[\Delta,f],P(f+[\Delta,\theta])),\] \[l_{2}(s^{-1}f,s^{-1}g) = (-1)^{|f|}s^{-1}[f,g],\] \[l_{k}(s^{-1}f,\theta_{1},\ldots,\theta_{k-1}) = P([\cdots[[f,\theta_{1}],\theta_{2}],\cdots,\theta_{k-1}]),\quad k\geq 2,\] \[l_{k}(\theta_{1},\ldots,\theta_{k-1},\theta_{k}) = P([\cdots[[\Delta,\theta_{1}],\theta_{2}],\cdots,\theta_{k}]),\quad k\geq 2.\] Here \(\theta,\theta_{1},\ldots,\theta_{k}\) are homogeneous elements of \(\mathfrak{h}\) and \(f,g\) are homogeneous elements of \(L\). All the other \(L_{\infty}\)-algebra products that are not obtained from the ones written above by permutations of arguments will vanish. Moreover, if \(L^{\prime}\) is a graded Lie subalgebra of \(L\) that satisfies \([\Delta,L^{\prime}]\subset L^{\prime}\), then \(s^{-1}L^{\prime}\oplus\mathfrak{h}\) is an \(L_{\infty}\)-subalgebra of the above \(L_{\infty}\)-algebra \((s^{-1}L\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\).

### The \(L_{\infty}\)-algebra that controls deformations of pre-LieDer pairs

**Proposition 3.8**.: _We have a \(V\)-data \((L,\mathfrak{h},P,\Delta)\) as follows:_ * _the graded Lie algebra_ \((L,[\cdot,\cdot])\) _is given by_ \((\oplus_{n=1}^{+\infty}\mathrm{Hom}(\wedge^{n-1}(\mathfrak{g}\oplus V)\otimes(\mathfrak{g}\oplus V),\mathfrak{g}\oplus V),[\cdot,\cdot]^{MN})\)_;_ * _the abelian graded Lie subalgebra_ \(\mathfrak{h}\) _is given by_ \[\mathfrak{h}=\oplus_{n=0}^{+\infty}C^{n|-1}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V)=\oplus_{n=0}^{+\infty}\mathrm{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},V);\] * \(P:L\longrightarrow L\) _is the projection onto the subspace_ \(\mathfrak{h}\)_;_ * \(\Delta=0\)_._ _Consequently, we obtain an \(L_{\infty}\)-algebra \((s^{-1}L\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\), where the \(l_{k}\) are given by_ \[l_{1}(s^{-1}f,\theta) = P(f),\] \[l_{2}(s^{-1}f,s^{-1}g) = (-1)^{|f|}s^{-1}[f,g]^{MN},\] \[l_{k}(s^{-1}f,\theta_{1},\ldots,\theta_{k-1}) = P([\cdots[[f,\theta_{1}]^{MN},\theta_{2}]^{MN},\cdots,\theta_{k-1}]^{MN}),\quad k\geq 2,\] _for homogeneous elements \(\theta,\theta_{1},\ldots,\theta_{k-1}\in\mathfrak{h}\), \(f,g\in L\), and all the other possible combinations vanish._ Proof.: Obviously, \(\mathfrak{h}\) is an abelian graded Lie subalgebra of \((L,[\cdot,\cdot])\) and \(\mathrm{Ker}(P)\) is a graded Lie subalgebra of \((L,[\cdot,\cdot])\). Thus, \((L,\mathfrak{h},P,\Delta)\) is a \(V\)-data. By Theorem 3.7, we obtain an \(L_{\infty}\)-algebra \((s^{-1}L\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\).
We set \(L^{\prime}\) by \[L^{\prime}=\oplus_{n=0}^{+\infty}C^{n|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V)=\oplus_{n=0}^{+\infty}(\mathrm{Hom}(\wedge^{n}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\mathrm{Hom}(\wedge^{n}\mathfrak{g}\otimes V,V)\oplus\mathrm{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V\otimes\mathfrak{g},V)).\] Now, we give the main result in this subsection.

**Theorem 3.9**.: _With the above notation, \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\) is an \(L_{\infty}\)-algebra, where \(l_{k}\) are given by_ \[l_{2}(s^{-1}f,s^{-1}g) = (-1)^{|f|}s^{-1}[f,g]^{MN},\] \[l_{2}(s^{-1}f,\theta) = P([f,\theta]^{MN}),\] \[l_{k}(s^{-1}f,\theta_{1},\ldots,\theta_{k-1}) = 0,\quad k\geq 3,\] _for homogeneous elements \(\theta,\theta_{1},\ldots,\theta_{k-1}\in\mathfrak{h}\), \(f,g\in L^{\prime}\) and all the other possible combinations vanish._ _Moreover, for all \(\pi\in\mathrm{Hom}(\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\), \(\rho\in\mathrm{Hom}(\mathfrak{g}\otimes V,V)\), \(\mu\in\mathrm{Hom}(V\otimes\mathfrak{g},V)\) and \(D\in\mathrm{Hom}(\mathfrak{g},V)\), \((s^{-1}(\pi+\rho+\mu),D)\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\) if and only if \((\mathfrak{g},D,\rho,\mu)\) is a pre-LieDer pair._ Proof.: Obviously, \(L^{\prime}\) is a graded Lie subalgebra. Since \(\Delta=0\), we have \([\Delta,L^{\prime}]=0\). Thus, \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\) is an \(L_{\infty}\)-algebra. It is straightforward to deduce that \[[\pi+\rho+\mu,D]^{MN}\in\mathrm{Hom}(\mathfrak{g}\otimes\mathfrak{g},V),\quad[[\pi+\rho+\mu,D]^{MN},D]^{MN}=0.\] Then, we have \[\frac{1}{2}l_{2}((s^{-1}(\pi+\rho+\mu),D),(s^{-1}(\pi+\rho+\mu),D))\] \[= \frac{1}{2}l_{2}(s^{-1}(\pi+\rho+\mu),s^{-1}(\pi+\rho+\mu))+l_{2}(s^{-1}(\pi+\rho+\mu),D)\] \[= (-\frac{1}{2}s^{-1}[\pi+\rho+\mu,\pi+\rho+\mu]^{MN},P[\pi+\rho+\mu,D]^{MN})\] \[= (-\frac{1}{2}s^{-1}[\pi+\rho+\mu,\pi+\rho+\mu]^{MN},[\pi+\rho+\mu,D]^{MN}).\] Thus, \((s^{-1}(\pi+\rho+\mu),D)\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\) if and only if \[[\pi+\rho+\mu,\pi+\rho+\mu]^{MN}=0,\quad[\pi+\rho+\mu,D]^{MN}=0.\] By Proposition 2.11, \([\pi+\rho+\mu,\pi+\rho+\mu]^{MN}=0\) if and only if \((V;\rho,\mu)\) is a representation of the pre-Lie algebra \((\mathfrak{g},\pi)\). For all \(x,y\in\mathfrak{g}\), we have \[0 = [\pi+\rho+\mu,D]^{MN}(x,y)\] \[= \rho(x,D(y))+\mu(D(x),y)-D(\pi(x,y))\] \[= \rho(x)D(y)+\mu(y)D(x)-D(x\cdot y),\] which implies that \([\pi+\rho+\mu,D]^{MN}=0\) if and only if \(D\) is a derivation. Thus, \((s^{-1}(\pi+\rho+\mu),D)\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\) if and only if \((\mathfrak{g},D,\rho,\mu)\) is a pre-LieDer pair. This finishes the proof.

**Theorem 3.10**.: _Let \((\mathfrak{g},D,\rho,\mu)\) be a pre-LieDer pair.
Then \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}^{(s^{-1}(\pi+\rho+\mu),D)}\}_{k=1}^{+\infty})\) is an \(L_{\infty}\)-algebra, where \(l_{k}^{(s^{-1}(\pi+\rho+\mu),D)}\) are given by_ \[l_{1}^{(s^{-1}(\pi+\rho+\mu),D)}(s^{-1}f,\theta) = l_{2}((s^{-1}(\pi+\rho+\mu),D),(s^{-1}f,\theta)),\] \[l_{2}^{(s^{-1}(\pi+\rho+\mu),D)}((s^{-1}f_{1},\theta_{1}),(s^{-1}f_{2},\theta_{2})) = l_{2}((s^{-1}f_{1},\theta_{1}),(s^{-1}f_{2},\theta_{2})),\] \[l_{k}^{(s^{-1}(\pi+\rho+\mu),D)}((s^{-1}f_{1},\theta_{1}),\ldots,(s^{-1}f_{k},\theta_{k})) = 0,\quad k\geq 3.\] _Furthermore, for all \(\pi^{\prime}\in\operatorname{Hom}(\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\), \(\rho^{\prime}\in\operatorname{Hom}(\mathfrak{g}\otimes V,V)\), \(\mu^{\prime}\in\operatorname{Hom}(V\otimes\mathfrak{g},V)\) and \(D^{\prime}\in\operatorname{Hom}(\mathfrak{g},V)\), \((\mathfrak{g},\pi+\pi^{\prime},D+D^{\prime},\rho+\rho^{\prime},\mu+\mu^{\prime})\) is a pre-LieDer pair if and only if \((s^{-1}(\pi^{\prime}+\rho^{\prime}+\mu^{\prime}),D^{\prime})\) is a Maurer-Cartan element of the twisted \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}^{(s^{-1}(\pi+\rho+\mu),D)}\}_{k=1}^{+\infty})\)._ Proof.: By Theorem 3.9, \((\mathfrak{g},\pi+\pi^{\prime},D+D^{\prime},\rho+\rho^{\prime},\mu+\mu^{\prime})\) is a pre-LieDer pair if and only if \((s^{-1}(\pi+\pi^{\prime}+\rho+\rho^{\prime}+\mu+\mu^{\prime}),D+D^{\prime})\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\). By Theorem 3.5, \((s^{-1}(\pi+\pi^{\prime}+\rho+\rho^{\prime}+\mu+\mu^{\prime}),D+D^{\prime})\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}\}_{k=1}^{+\infty})\) if and only if \((s^{-1}(\pi^{\prime}+\rho^{\prime}+\mu^{\prime}),D^{\prime})\) is a Maurer-Cartan element of the twisted \(L_{\infty}\)-algebra \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}^{(s^{-1}(\pi+\rho+\mu),D)}\}_{k=1}^{+\infty})\).

## 4. Cohomologies and infinitesimal deformations of pre-LieDer pairs

In this section, first, we give the cohomology of pre-LieDer pairs. Then, we study infinitesimal deformations of pre-LieDer pairs by using this cohomology. We show that equivalent infinitesimal deformations are in the same second cohomology group. Finally, we use this cohomology to give the cohomology of regular pre-LieDer pairs. In this section, we will also denote the pre-Lie multiplication \(\cdot\) by \(\pi\).

### Cohomologies of pre-LieDer pairs

Let \((\mathfrak{g},\pi,D,\rho,\mu)\) be a pre-LieDer pair. Define the set of \(0\)-cochains \(C^{0}(\mathfrak{g},\pi,\rho,\mu,D)\) to be \(0\) and define the set of \(1\)-cochains \(C^{1}(\mathfrak{g},\pi,\rho,\mu,D)\) to be \(\operatorname{Hom}(\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(V,V)\).
For \(n\geq 2\), define the set of \(n\)-cochains \(C^{n}(\mathfrak{g},\pi,\rho,\mu,D)\) by \[C^{n}(\mathfrak{g},\pi,\rho,\mu,D):=C^{n}(\mathfrak{g},\pi,\rho,\mu)\oplus C^{n-1}(\mathfrak{g};V)\] \[= \operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V,V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes V\otimes\mathfrak{g},V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},V).\] Define the coboundary operator \(\mathcal{D}:C^{n}(\mathfrak{g},\pi,\rho,\mu,D)\longrightarrow C^{n+1}(\mathfrak{g},\pi,\rho,\mu,D)\) by \[\mathcal{D}(f,\theta)=(-1)^{n-2}l_{1}^{(s^{-1}(\pi+\rho+\mu),D)}(s^{-1}f,\theta), \tag{8}\] where \(f\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V,V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes V\otimes\mathfrak{g},V)\) and \(\theta\in\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},V)\).

**Theorem 4.1**.: \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g},\pi,\rho,\mu,D),\mathcal{D})\) _is a cochain complex, i.e. \(\mathcal{D}\circ\mathcal{D}=0\)._

Proof.: Since \((s^{-1}L^{\prime}\oplus\mathfrak{h},\{l_{k}^{(s^{-1}(\pi+\rho+\mu),D)}\}_{k=1}^{+\infty})\) is an \(L_{\infty}\)-algebra, we have \(l_{1}^{(s^{-1}(\pi+\rho+\mu),D)}\circ l_{1}^{(s^{-1}(\pi+\rho+\mu),D)}=0\), which implies that \(\mathcal{D}\circ\mathcal{D}=0\).

**Definition 4.2**.: _The cohomology of the cochain complex \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g},\pi,\rho,\mu,D),\mathcal{D})\) is called the cohomology of the pre-LieDer pair \((\mathfrak{g},D,\rho,\mu)\). The corresponding cohomology group is denoted by \(H^{n}(\mathfrak{g},\pi,\rho,\mu,D)\)._

In the sequel, we give the explicit formula of the operator \(\mathcal{D}\).
For all \(f\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V,V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes V\otimes\mathfrak{g},V)\) and \(\theta\in\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},V)\), we have \[\|[\pi+\rho+\mu,f]^{MN}\|=n|0,\quad\|[\pi+\rho+\mu,\theta]^{MN}\|=n|-1,\quad\|[f,D]^{MN}\|=n|-1.\] Thus, by (8), we have \[\mathcal{D}(f,\theta) = (-1)^{n-2}l_{1}^{(s^{-1}(\pi+\rho+\mu),D)}(s^{-1}f,\theta)\] \[= (-1)^{n-2}l_{2}((s^{-1}(\pi+\rho+\mu),D),(s^{-1}f,\theta))\] \[= (-1)^{n-2}(l_{2}(s^{-1}(\pi+\rho+\mu),s^{-1}f)+l_{2}(s^{-1}(\pi+\rho+\mu),\theta)+l_{2}(D,s^{-1}f))\] \[= (-1)^{n-2}(-s^{-1}[\pi+\rho+\mu,f]^{MN},P[\pi+\rho+\mu,\theta]^{MN}+P[f,D]^{MN})\] \[= (-1)^{n-2}(-s^{-1}[\pi+\rho+\mu,f]^{MN},[\pi+\rho+\mu,\theta]^{MN}+[f,D]^{MN})\] \[= (\partial f,\mathrm{d}\theta+\delta f).\]

**Lemma 4.3**.: _With the above notation, for all \(f=(f_{\mathfrak{g}},f_{\rho},f_{\mu})\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V,V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes V\otimes\mathfrak{g},V)\), the map \(\delta:\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes V,V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes V\otimes\mathfrak{g},V)\longrightarrow\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},V)\) is given by_ \[(\delta f)(x_{1},\ldots,x_{n-1},x_{n}) = \sum_{i=1}^{n-1}(-1)^{i+1}f_{\mu}(x_{1},\ldots,\hat{x}_{i},\ldots,x_{n-1},D(x_{i}),x_{n})\] \[+(-1)^{n-2}(f_{\rho}(x_{1},\ldots,x_{n-1},D(x_{n}))-D(f_{\mathfrak{g}}(x_{1},\ldots,x_{n}))).\]

Proof.: For all \(x_{1},\ldots,x_{n}\in\mathfrak{g}\), we have \[(\delta f)(x_{1},\ldots,x_{n})\] \[= (-1)^{n-2}[f,D]^{MN}(x_{1},\ldots,x_{n})\] \[= (-1)^{n-2}(\sum_{i=1}^{n-1}(-1)^{i+1}f_{\mu}(D(x_{i}),x_{1},\ldots,\hat{x}_{i},\ldots,x_{n})+f_{\rho}(x_{1},\ldots,x_{n-1},D(x_{n}))-D(f_{\mathfrak{g}}(x_{1},\ldots,x_{n})))\] \[= \sum_{i=1}^{n-1}(-1)^{i+1}f_{\mu}(x_{1},\ldots,\hat{x}_{i},\ldots,x_{n-1},D(x_{i}),x_{n})+(-1)^{n-2}(f_{\rho}(x_{1},\ldots,x_{n-1},D(x_{n}))-D(f_{\mathfrak{g}}(x_{1},\ldots,x_{n}))).\] The proof is finished.

The following diagram illustrates the above operators:

At the end of this subsection, we give the relation between various cohomology groups.
**Theorem 4.4**.: _There is a short exact sequence of cochain complexes:_ \[0\longrightarrow(\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g};V),\mathrm{d})\stackrel{{\iota}}{{\longrightarrow}}(\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g},\pi,\rho,\mu,D),\mathcal{D})\stackrel{{ p}}{{\longrightarrow}}(\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g},\pi,\rho,\mu),\partial)\longrightarrow 0,\] _where \(\iota(\theta)=(0,\theta)\) and \(p(f,\theta)=f\) for all \(f\in C^{n}(\mathfrak{g},\pi,\rho,\mu)\) and \(\theta\in C^{n-1}(\mathfrak{g};V)\)._ _Consequently, there is a long exact sequence of the cohomology groups:_ \[\cdots\longrightarrow H^{n}(\mathfrak{g};V)\stackrel{{ H^{n}(\iota)}}{{\longrightarrow}}H^{n}(\mathfrak{g},\pi,\rho,\mu,D)\stackrel{{ H^{n}(p)}}{{\longrightarrow}}H^{n}(\mathfrak{g},\pi,\rho,\mu)\stackrel{{ c^{n}}}{{\longrightarrow}}H^{n+1}(\mathfrak{g};V)\longrightarrow\cdots,\] _where the connecting map \(c^{n}\) is defined by \(c^{n}([\alpha])=[\delta(\alpha)]\), for all \([\alpha]\in H^{n}(\mathfrak{g},\pi,\rho,\mu)\)._

Proof.: By the explicit formula of the coboundary operator \(\mathcal{D}\), we have the short exact sequence of cochain complexes, which induces a long exact sequence of cohomology groups.

### Infinitesimal deformations of pre-LieDer pairs

**Definition 4.5**.: _Let \((\mathfrak{g},\pi,D,\rho,\mu)\) be a pre-LieDer pair. If for all \(t\in\mathbb{K}\), \((\mathfrak{g},\pi_{t}=\pi+t\omega,D_{t}=D+t\hat{D},\rho_{t}=\rho+t\sigma,\mu_{t}=\mu+t\tau)\) is a pre-LieDer pair, where \(\omega:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{g}\), \(\sigma:\mathfrak{g}\otimes V\longrightarrow V\), \(\tau:V\otimes\mathfrak{g}\longrightarrow V\) and \(\hat{D}:\mathfrak{g}\longrightarrow V\) are linear maps, we say that \((\omega,\sigma,\tau,\hat{D})\) generates an_ **infinitesimal deformation** _of the pre-LieDer pair \((\mathfrak{g},\pi,D,\rho,\mu)\)._

It is direct to check that \((\omega,\sigma,\tau,\hat{D})\) generates an infinitesimal deformation of the pre-LieDer pair \((\mathfrak{g},\pi,D,\rho,\mu)\) if and only if the following equalities are satisfied: \[[\pi+\rho+\mu,\omega+\sigma+\tau]^{MN} = 0, \tag{9}\] \[[\omega+\sigma+\tau,\omega+\sigma+\tau]^{MN} = 0, \tag{10}\] \[[\pi+\rho+\mu,\hat{D}]^{MN}+[\omega+\sigma+\tau,D]^{MN} = 0, \tag{11}\] \[[\omega+\sigma+\tau,\hat{D}]^{MN} = 0. \tag{12}\] Obviously, \((\omega,\sigma,\tau)\in C^{2}(\mathfrak{g},\pi,\rho,\mu)\). Condition (9) means that \((\omega,\sigma,\tau)\) is a 2-cocycle of the pre-Lie algebra \((\mathfrak{g},\pi)\) with the representation \((V;\rho,\mu)\), that is, \(\partial(\omega,\sigma,\tau)=0\). Condition (10) means that \((V;\sigma,\tau)\) is a representation of the pre-Lie algebra \((\mathfrak{g},\omega)\). Condition (11) means that \(\mathrm{d}\hat{D}+\delta(\omega,\sigma,\tau)=0\). Condition (12) means that \((\mathfrak{g},\omega,\hat{D},\sigma,\tau)\) is a pre-LieDer pair. By (9) and (11), we have \[\mathcal{D}(\omega,\sigma,\tau,\hat{D})=(\partial(\omega,\sigma,\tau),\mathrm{d}\hat{D}+\delta(\omega,\sigma,\tau))=0.\]

**Theorem 4.6**.: _With the above notation, \((\omega,\sigma,\tau,\hat{D})\) is a \(2\)-cocycle of the pre-LieDer pair \((\mathfrak{g},D,\rho,\mu)\)._

**Definition 4.7**.: _Let \((\pi^{\prime}_{t}=\pi+t\omega^{\prime},\rho^{\prime}_{t}=\rho+t\sigma^{\prime},\mu^{\prime}_{t}=\mu+t\tau^{\prime},D^{\prime}_{t}=D+t\hat{D^{\prime}})\) and \((\pi_{t}=\pi+t\omega,\rho_{t}=\rho+t\sigma,\mu_{t}=\mu+t\tau,D_{t}=D+t\hat{D})\) be two infinitesimal deformations of the pre-LieDer pair \((\mathfrak{g},\pi,D,\rho,\mu)\).
We call them_ **equivalent** _if there exist \(N\in\mathfrak{gl}(\mathfrak{g})\) and \(S\in\mathfrak{gl}(V)\) such that \((\mathrm{Id}_{\mathfrak{g}}+tN,\mathrm{Id}_{V}+tS)\) is a homomorphism from the pre-LieDer pair \((\mathfrak{g},\pi^{\prime}_{t},D^{\prime}_{t},\rho^{\prime}_{t},\mu^{\prime}_{t})\) to the pre-LieDer pair \((\mathfrak{g},\pi_{t},D_{t},\rho_{t},\mu_{t})\), i.e. for all \(x,y\in\mathfrak{g}\), the following equalities hold:_ \[(\mathrm{Id}_{\mathfrak{g}}+tN)(x\cdot^{\prime}y) = (\mathrm{Id}_{\mathfrak{g}}+tN)(x)\cdot_{t}(\mathrm{Id}_{\mathfrak{g}}+tN)(y),\] \[(\mathrm{Id}_{V}+tS)\circ\rho^{\prime}_{t}(x) = \rho_{t}((\mathrm{Id}_{\mathfrak{g}}+tN)(x))\circ(\mathrm{Id}_{V}+tS),\] \[(\mathrm{Id}_{V}+tS)\circ\mu^{\prime}_{t}(x) = \mu_{t}((\mathrm{Id}_{\mathfrak{g}}+tN)(x))\circ(\mathrm{Id}_{V}+tS),\] \[(\mathrm{Id}_{V}+tS)\circ D^{\prime}_{t} = D_{t}\circ(\mathrm{Id}_{\mathfrak{g}}+tN).\] _An infinitesimal deformation is said to be_ **trivial** _if it is equivalent to \((\pi,\rho,\mu,D)\)._

By direct calculations, \((\pi^{\prime}_{t},\rho^{\prime}_{t},\mu^{\prime}_{t},D^{\prime}_{t})\) and \((\pi_{t},\rho_{t},\mu_{t},D_{t})\) are equivalent deformations if and only if for all \(x,y\in\mathfrak{g},u\in V\), the following equalities hold: \[\omega^{\prime}(x,y)-\omega(x,y) = N(x)\cdot y+x\cdot N(y)-N(x\cdot y), \tag{13}\] \[N(\omega^{\prime}(x,y)) = N(x)\cdot N(y)+\omega(x,N(y))+\omega(N(x),y), \tag{14}\] \[\omega(N(x),N(y)) = 0, \tag{15}\] \[\sigma^{\prime}(x)u-\sigma(x)u = \rho(x)S(u)+\rho(N(x))u-S(\rho(x)u), \tag{16}\] \[S(\sigma^{\prime}(x)u) = \rho(N(x))S(u)+\sigma(x)S(u)+\sigma(N(x))u, \tag{17}\] \[\sigma(N(x))S(u) = 0, \tag{18}\] \[\tau^{\prime}(x)u-\tau(x)u = \mu(x)S(u)+\mu(N(x))u-S(\mu(x)u), \tag{19}\] \[S(\tau^{\prime}(x)u) = \mu(N(x))S(u)+\tau(x)S(u)+\tau(N(x))u, \tag{20}\] \[\tau(N(x))S(u) = 0, \tag{21}\] \[\hat{D^{\prime}}(x)-\hat{D}(x) = D(N(x))-S(D(x)), \tag{22}\] \[S(\hat{D^{\prime}}(x)) = \hat{D}(N(x)). \tag{23}\] We summarize the above discussion into the following conclusion:

**Theorem 4.8**.: _Let \((\mathfrak{g},D,\rho,\mu)\) be a pre-LieDer pair. If two infinitesimal deformations \((\pi^{\prime}_{t}=\pi+t\omega^{\prime},\rho^{\prime}_{t}=\rho+t\sigma^{\prime},\mu^{\prime}_{t}=\mu+t\tau^{\prime},D^{\prime}_{t}=D+t\hat{D}^{\prime})\) and \((\pi_{t}=\pi+t\omega,\rho_{t}=\rho+t\sigma,\mu_{t}=\mu+t\tau,D_{t}=D+t\hat{D})\) are equivalent, then \((\omega^{\prime},\sigma^{\prime},\tau^{\prime},\hat{D^{\prime}})\) and \((\omega,\sigma,\tau,\hat{D})\) are in the same cohomology class of \(H^{2}(\mathfrak{g},\pi,\rho,\mu,D)\)._

Proof.: By Theorem 4.6, we obtain that \((\omega^{\prime},\sigma^{\prime},\tau^{\prime},\hat{D^{\prime}}),(\omega,\sigma,\tau,\hat{D})\in Z^{2}(\mathfrak{g},\pi,\rho,\mu,D)\). Obviously, \((N,S)\in C^{1}(\mathfrak{g},\pi,\rho,\mu,D)\). For all \(x,y\in\mathfrak{g},u,v\in V\), by (13), (16) and (19), we have \[((\omega^{\prime},\sigma^{\prime},\tau^{\prime})-(\omega,\sigma,\tau))((x,u),(y,v))\] \[= (N(x)\cdot y+x\cdot N(y)-N(x\cdot y),\rho(x)S(v)+\rho(N(x))v-S(\rho(x)v),\] \[\mu(y)S(u)+\mu(N(y))u-S(\mu(y)u))\] \[= \partial(N,S)((x,u),(y,v)),\] which implies that \((\omega^{\prime},\sigma^{\prime},\tau^{\prime})-(\omega,\sigma,\tau)=\partial(N,S)\). For all \(x\in\mathfrak{g}\), by (22), we have \[(\hat{D^{\prime}}-\hat{D})(x)=D(N(x))-S(D(x))=\delta(N,S)(x),\] which implies that \(\hat{D}^{\prime}-\hat{D}=\delta(N,S)\).
Thus, we have \[(\omega^{\prime},\sigma^{\prime},\tau^{\prime},\hat{D}^{\prime})-(\omega,\sigma,\tau,\hat{D})=(\partial(N,S),\delta(N,S))=\mathcal{D}(N,S).\] This finishes the proof.

### Cohomologies of regular pre-LieDer pairs

Let \((\mathfrak{g},D)\) be a regular pre-LieDer pair. Define the set of \(0\)-cochains \(C^{0}(\mathfrak{g},D)\) to be \(0\) and define the set of \(1\)-cochains \(C^{1}(\mathfrak{g},D)\) to be \(\operatorname{Hom}(\mathfrak{g},\mathfrak{g})\). For \(n\geq 2\), define the set of \(n\)-cochains \(C^{n}(\mathfrak{g},D)\) by \[C^{n}(\mathfrak{g},D):=\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g}).\] For all \(f\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\) and \(\theta\in\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\), define the embedding \(i:C^{n}(\mathfrak{g},D)\longrightarrow C^{n}(\mathfrak{g},\pi,L,R,D)\) by \[i(f,\theta)=(f,f,f,\theta).\] Denote by \(\operatorname{Im}^{n}(i)=i(C^{n}(\mathfrak{g},D))\). Then we have the following conclusion.

**Proposition 4.9**.: _With the above notation, \((\oplus_{n=0}^{+\infty}\operatorname{Im}^{n}(i),\mathcal{D})\) is a subcomplex of the cochain complex \((\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g},\pi,L,R,D),\mathcal{D})\)._

Proof.: For all \(f\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\), \(\theta\in\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\) and \((f,f,f,\theta)\in\operatorname{Im}^{n}(i)\), we have \[\mathcal{D}(f,f,f,\theta) = (\partial(f,f,f),\mathrm{d}\theta+\delta(f,f,f))\] \[= (\partial(f,f,f)_{\mathfrak{g}},\partial(f,f,f)_{\rho},\partial(f,f,f)_{\mu},\mathrm{d}_{\mathrm{reg}}\theta+\delta(f,f,f)).\] By (4), we have \(\partial(f,f,f)_{\mathfrak{g}}=\mathrm{d}_{\mathrm{reg}}f\).
For all \(x_{1},\ldots,x_{n+1}\in\mathfrak{g}\), we have \[\partial(f,f,f)_{\rho}(x_{1},\ldots,x_{n+1}) = \sum_{i=1}^{n}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}},\ldots,x_{n},x _{i})\cdot x_{n+1}\] \[+\sum_{i=1}^{n}(-1)^{i+1}x_{i}\cdot f(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n+1})\] \[-\sum_{i=1}^{n}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}}\ldots,x_{n},x _{i}\cdot x_{n+1})\] \[+\sum_{1\leq i<j\leq n}(-1)^{i+j}f([x_{i},x_{j}]_{C},x_{1},\ldots, \hat{x_{i}},\ldots,\hat{x_{j}},\ldots,x_{n+1})\] \[= \mathrm{d}_{\mathrm{reg}}f(x_{1},\ldots,x_{n+1}).\] and \[\partial(f,f,f)_{\mu}(x_{1},\ldots,x_{n+1})\] \[= (-1)^{n-1}(x_{n}\cdot f(x_{1},\ldots,x_{n-1},x_{n+1})+f(x_{1}, \ldots,x_{n})\cdot x_{n+1}-f(x_{1},\ldots,x_{n-1},x_{n}\cdot x_{n+1})\] \[-\sum_{i=1}^{n-1}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}},\ldots,x_{n},x_{i}\cdot x_{n+1})+\sum_{i=1}^{n-1}(-1)^{i+1}x_{i}\cdot f(x_{1},\ldots,\hat{ x_{i}},\ldots,x_{n+1})\] \[-\sum_{i=1}^{n-1}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}}\ldots,x_{n-1 },x_{i}\cdot x_{n},x_{n+1})+\sum_{i=1}^{n-1}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i} },\ldots,x_{n},x_{i})\cdot x_{n+1}\] \[+\sum_{i=1}^{n-1}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}},\ldots,x_{n- 1},x_{n}\cdot x_{i},x_{n+1})\] \[(X+u)\succ_{\sim}(y+v):=x\cdot y+\tilde{\rho}(x)(v)+\tilde{\mu}(y)(u), \quad\forall x,y\in\mathfrak{g},u,v\in V.\] and a linear map \(D+K:\mathfrak{g}\oplus V\longrightarrow\mathfrak{g}\oplus V\) by \[(D+K)(x+u):=D(x)+K(u),\quad\forall x\in\mathfrak{g},u\in V.\] **Proposition 5.2**.: _With the above notation, \((\mathfrak{g}\oplus V,\cdot_{\times},D+K)\) is a regular pre-LieDer pair, which is denoted by \((\mathfrak{g}\ltimes_{(\tilde{\varphi},\tilde{\mu})}V,D+K)\) and called the_ **semi-direct product** _of the regular pre-LieDer pair \((\mathfrak{g},D)\) and the representation \((V,K,\tilde{\rho},\tilde{\mu})\)._ Proof.: Since \((V;\tilde{\rho},\tilde{\mu})\) is a representation of the pre-Lie algebra \((\mathfrak{g},\cdot)\), it is obviously that \((\mathfrak{g}\oplus V,\cdot_{\times})\) is a pre-Lie algebra. For all \(x,y\in\mathfrak{g},u,v\in V\), by (24) and (25), we have \[(D+K)((x+u)\cdot_{\times}(y+v)) = D(x\cdot y)+K(\tilde{\rho}(x)v)+K(\tilde{\mu}(y)u)\] \[= D(x)\cdot y+x\cdot D(y)+\tilde{\rho}(x)K(v)+\tilde{\rho}(D(x))v +\tilde{\mu}(y)K(u)+\tilde{\mu}(D(y))u\] \[= (x+u)\cdot_{\times}(D+K)(y+v)+(D+K)(x+u)\cdot_{\times}(y+v),\] which implies that \(D+K\) is a derivation of the pre-Lie algebra \((\mathfrak{g}\oplus V,\cdot_{\times})\). Thus, \((\mathfrak{g}\oplus V,\cdot_{\times},D+K)\) is a regular pre-LieDer pair. Let \((V,K,\tilde{\rho},\tilde{\mu})\) be a representation of \((\mathfrak{g},D)\). Define the set of \(0\)-cochains \(C^{0}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\) to be \(0\) and define the set of \(1\)-cochains \(C^{1}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\) to be \(\operatorname{Hom}(\mathfrak{g},V)\). 
For \(n\geq 2\), define the set of \(n\)-cochains \(C^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\) by \[C^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu}):=\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},V)\oplus\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},V).\] For all \(f\in\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},V)\) and \(\theta\in\operatorname{Hom}(\wedge^{n-2}\mathfrak{g}\otimes\mathfrak{g},V)\), define the coboundary operator \(\mathcal{D}_{(\tilde{\rho},\tilde{\mu})}:C^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\longrightarrow C^{n+1}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\) by \[\mathcal{D}_{(\tilde{\rho},\tilde{\mu})}(f,\theta)=(\mathrm{d}_{(\tilde{\rho},\tilde{\mu})}f,\mathrm{d}_{(\tilde{\rho},\tilde{\mu})}\theta+\Omega_{(\tilde{\rho},\tilde{\mu})}f),\] where \(\mathrm{d}_{(\tilde{\rho},\tilde{\mu})}\) is the coboundary operator of the pre-Lie algebra \((\mathfrak{g},\cdot)\) with coefficients in the representation \((V,\tilde{\rho},\tilde{\mu})\) and \(\Omega_{(\tilde{\rho},\tilde{\mu})}:\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},V)\longrightarrow\operatorname{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g},V)\) is defined by \[\Omega_{(\tilde{\rho},\tilde{\mu})}(f)(x_{1},\ldots,x_{n})=(-1)^{n-2}(\sum_{i=1}^{n}f(x_{1},\ldots,x_{i-1},D(x_{i}),x_{i+1},\ldots,x_{n})-K(f(x_{1},\ldots,x_{n}))).\]

**Theorem 5.3**.: _With the above notation, \((\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu}),\mathcal{D}_{(\tilde{\rho},\tilde{\mu})})\) is a cochain complex, i.e. \(\mathcal{D}_{(\tilde{\rho},\tilde{\mu})}\circ\mathcal{D}_{(\tilde{\rho},\tilde{\mu})}=0\)._

Proof.: By Proposition 5.2, we obtain that \((\mathfrak{g}\ltimes_{(\tilde{\rho},\tilde{\mu})}V,D+K)\) is a regular pre-LieDer pair. By Theorem 4.10, \((\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g}\oplus V,D+K),\tilde{\mathcal{D}})\) is a cochain complex. It is straightforward to deduce that \((\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu}),\mathcal{D}_{(\tilde{\rho},\tilde{\mu})})\) is a subcomplex of \((\oplus_{n=0}^{+\infty}C^{n}(\mathfrak{g}\oplus V,D+K),\tilde{\mathcal{D}})\). Thus, we obtain that \(\mathcal{D}_{(\tilde{\rho},\tilde{\mu})}\circ\mathcal{D}_{(\tilde{\rho},\tilde{\mu})}=0\).

**Definition 5.4**.: _The cohomology of the cochain complex \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu}),\mathcal{D}_{(\tilde{\rho},\tilde{\mu})})\) is called the cohomology of the regular pre-LieDer pair \((\mathfrak{g},D)\) with coefficients in the representation \((V,K,\tilde{\rho},\tilde{\mu})\). The corresponding cohomology group is denoted by \(H^{n}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\)._

**Definition 5.5**.: _Let \((\mathfrak{g},\cdot,D)\) and \((V,\cdot_{V},K)\) be two regular pre-LieDer pairs. An_ **extension** _of \((\mathfrak{g},D)\) by \((V,K)\) is a short exact sequence of pre-LieDer pair morphisms:_ _where \((\hat{\mathfrak{g}},\cdot_{\hat{\mathfrak{g}}},\hat{D})\) is a regular pre-LieDer pair._ _It is called an_ **abelian extension** _if \((V,\cdot_{V})\) is an abelian pre-Lie algebra, i.e.
for all \(u,v\in V,u\cdot_{V}v=0\)._ **Definition 5.6**.: \(A\) **section** _of an extension \((\hat{\mathfrak{g}},\hat{D})\) of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\) is a linear map \(s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}\) such that \(p\circ s=\mathrm{Id}_{\mathfrak{g}}\)._ Let \((\hat{\mathfrak{g}},\hat{D})\) be an abelian extension of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\) and \(s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}\) a section. For all \(x,y\in\mathfrak{g}\), define linear maps \(\theta:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow V\) and \(\xi:\mathfrak{g}\longrightarrow V\) respectively by \[\theta(x,y) = s(x)\cdot_{\hat{\mathfrak{g}}}s(y)-s(x\cdot y),\] \[\xi(x) = \hat{D}(s(x))-s(D(x)).\] And for all \(x,y\in\mathfrak{g},u\in V\), define \(\tilde{\rho},\tilde{\mu}:\mathfrak{g}\longrightarrow\mathrm{gl}(V)\) respectively by \[\tilde{\rho}(x)(u) = s(x)\cdot_{\hat{\mathfrak{g}}}u,\] \[\tilde{\mu}(x)(u) = u\cdot_{\hat{\mathfrak{g}}}s(x).\] Obviously, \(\hat{\mathfrak{g}}\) is isomorphic to \(\mathfrak{g}\oplus V\) as vector spaces. Transfer the regular pre-LieDer pair structure on \(\hat{\mathfrak{g}}\) to that on \(\mathfrak{g}\oplus V\), we obtain a regular pre-LieDer pair \((\mathfrak{g}\oplus V,\diamond,\phi)\), where \(\diamond\) and \(\phi\) are given by \[(x+u)\diamond(y+v) = x\cdot y+\theta(x,y)+\tilde{\rho}(x)(v)+\tilde{\mu}(y)(u),\quad \forall\ x,y\in\mathfrak{g},u,v\in V,\] \[\phi(x+u) = D(x)+\xi(x)+K(u),\quad\forall\ x\in\mathfrak{g},u\in V.\] **Theorem 5.7**.: _With the above notation, \((V,K,\tilde{\rho},\tilde{\mu})\) is a representation of the regular pre-LieDer pair \((\mathfrak{g},D)\). Moreover, this representation is independent of the choice of sections._ Proof.: For all \(x,y\in\mathfrak{g}\), \(u\in V\), by the definition of a pre-Lie algebra, we have \[0 = (x\diamond y)\diamond u-x\diamond(y\diamond u)-(y\diamond x) \diamond u+y\diamond(x\diamond u)\] \[= (x\cdot y+\theta(x,y))\diamond u-x\diamond\tilde{\rho}(y)(u)-(y \cdot x+\theta(y,x))\diamond u+y\diamond\tilde{\rho}(x)(u)\] \[= \tilde{\rho}(x\cdot y)u-\tilde{\rho}(x)\tilde{\rho}(y)u-\tilde{ \rho}(y\cdot x)u+\tilde{\rho}(y)\tilde{\rho}(x)u,\] which implies that \[\tilde{\rho}([x,y]_{C})=\tilde{\rho}(x)\circ\tilde{\rho}(y)-\tilde{\rho}(y) \circ\tilde{\rho}(x). \tag{26}\] Similarly, we have \[\tilde{\mu}(y)\circ\tilde{\mu}(x)-\tilde{\mu}(x\cdot y)=\tilde{\mu}(y)\circ \tilde{\rho}(x)-\tilde{\rho}(x)\circ\tilde{\mu}(y). \tag{27}\] For all \(x\in\mathfrak{g}\), \(u\in V\), we have \[0 = \phi(x\diamond u)-\phi(x)\diamond u-x\diamond\phi(u)\] \[= \phi(\tilde{\rho}(x)u)-(D(x)+\xi(x))\diamond u-x\diamond K(u)\] \[= K(\tilde{\rho}(x)u)-\tilde{\rho}(D(x))u-\tilde{\rho}(x)K(u),\] which implies that \[K(\tilde{\rho}(x)u)=\tilde{\rho}(x)K(u)+\tilde{\rho}(D(x))u. \tag{28}\] Similarly, we have \[K(\tilde{\mu}(x)u)=\tilde{\mu}(x)K(u)+\tilde{\mu}(D(x))u. \tag{29}\] Thus, By (26), (27), (28) and (29), we obtain that \((V,K,\tilde{\rho},\tilde{\mu})\) is a representation. Let \(s^{\prime}\) be another section and \((V,K,\tilde{\rho}^{\prime},\tilde{\mu}^{\prime})\) the corresponding representation of the regular pre-LieDer pair \((\mathfrak{g},D)\). Since \(s(x)-s^{\prime}(x)\in V\), then we have \[\tilde{\rho}(x)u-\tilde{\rho}^{\prime}(x)u=(s(x)-s^{\prime}(x))\cdot_{\hat{ \mathfrak{g}}}u=0,\] which implies that \(\tilde{\rho}=\tilde{\rho}^{\prime}\). Similarly, we have \(\tilde{\mu}=\tilde{\mu}^{\prime}\). 
Thus, this representation is independent of the choice of sections. **Theorem 5.8**.: _With the above notation, \((\theta,\xi)\) is a \(2\)-cocycle of the regular pre-LieDer pair \((\mathfrak{g},D)\) with coefficients in the representation \((V,K,\tilde{\rho},\tilde{\mu})\)._ Proof.: For all \(x,y,z\in\mathfrak{g}\), by the definition of a pre-Lie algebra, we have \[0 = (x\diamond y)\diamond z-x\diamond(y\diamond z)-(y\diamond x) \diamond z+y\diamond(x\diamond z)\] \[= (x\cdot y+\theta(x,y))\diamond z-x\diamond(y\cdot z+\theta(y,z))- \big{(}\cdot x+\theta(y,x)\big{)}\diamond z+y\diamond\big{(}x\cdot z+\theta(x,z)\big{)}\] \[= \theta(x\cdot y,z)+\tilde{\mu}(z)\theta(x,y)-\theta(x,y\cdot z)- \tilde{\rho}(x)\theta(y,z)\] \[-\theta(y\cdot x,z)-\tilde{\mu}(z)\theta(y,x)+\theta(y,x\cdot z )+\tilde{\rho}(y)\theta(x,z)\] \[= -\mathrm{d}_{\langle\tilde{\rho},\tilde{\mu}\rangle}\theta(x,y)\] which implies that \(\mathrm{d}_{\langle\tilde{\rho},\tilde{\mu}\rangle}\theta=0\). For all \(x,y\in\mathfrak{g},u,v\in V\), we have \[0 = \phi((x+u)\diamond(y+v))-\phi(x+u)\diamond(y+v)-(x+u)\diamond \phi(y+v)\] \[= \phi(x\cdot y+\theta(x,y)+\tilde{\rho}(x)v+\tilde{\mu}(y)u)-(D(x )+\xi(x)+K(u))\diamond(y+v)-(x+u)\diamond(D(y)+\xi(y)+K(v))\] \[= K(\theta(x,y))+\xi(x\cdot y)-\tilde{\mu}(y)\xi(x)-\theta(D(x),y )-\tilde{\rho}(x)\xi(y)-\theta(x,D(y))\] \[= -(\mathrm{d}_{\langle\tilde{\rho},\tilde{\mu}\rangle}\xi+\Omega_{ \langle\tilde{\rho},\tilde{\mu}\rangle}\theta)(x,y),\] which implies that \(\mathrm{d}_{\langle\tilde{\rho},\tilde{\mu}\rangle}\xi+\Omega_{\langle\tilde{ \rho},\tilde{\mu}\rangle}\theta=0\). Thus, we obtain that \(\mathcal{D}_{\langle\tilde{\rho},\tilde{\mu}\rangle}(\theta,\xi)=0\), which implies that \((\theta,\xi)\) is a \(2\)-cocycle of the the regular pre-LieDer pair \((\mathfrak{g},D)\) with coefficients in the representation \((V,K,\tilde{\rho},\tilde{\mu})\). The proof is finished. **Definition 5.9**.: _Let \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{ 2}})\) be two abelian extensions of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\). They are said to be_ **isomorphic** _if there exists a regular pre-LieDer pair isomorphism \(\zeta:(\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{ \mathfrak{g}}_{1}})\longrightarrow(\hat{\mathfrak{g}}_{2},\cdot_{\hat{ \mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{2}})\), such that the following diagram is commutative:_ **Lemma 5.10**.: _Let \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{ 2}})\) be two isomorphic abelian extensions of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\). Then they are give rise to the same representation of \((\mathfrak{g},D)\)._ Proof.: Let \(s_{1}:\mathfrak{g}_{1}\longrightarrow\hat{\mathfrak{g}}_{1}\) and \(s_{2}:\mathfrak{g}_{2}\longrightarrow\hat{\mathfrak{g}}_{2}\) be two sections of \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{ 2}})\) respectively. By Theorem 5.7, we obtain that \((V,K,\tilde{\rho}_{1},\tilde{\mu}_{1})\) and \((V,K,\tilde{\rho}_{2},\tilde{\mu}_{2})\) are their representations respectively. 
Define \(s_{1}^{\prime}:\mathfrak{g}_{1}\longrightarrow\hat{\mathfrak{g}}_{1}\) by \(s_{1}^{\prime}=\zeta^{-1}\circ s_{2}\). Since \(\zeta:(\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{ \mathfrak{g}}_{1}})\longrightarrow(\hat{\mathfrak{g}}_{2},\cdot_{\hat{ \mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{2}})\) is a pre-LieDer pair isomorphism satisfying the commutative diagram in Definition 5.9, by \(p_{2}\circ\zeta=p_{1}\), we have \[p_{1}\circ s_{1}^{\prime}=p_{2}\circ\zeta\circ\zeta^{-1}\circ s_{2}=\mathrm{ Id}_{\mathfrak{g}}.\] Thus, we obtain that \(s_{1}^{\prime}\) is a section of \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{\mathfrak{g}}_{ 1}})\). For all \(x\in\mathfrak{g},u\in V\), we have \[\tilde{\rho}_{1}(x)(u)=s_{1}^{\prime}(x)\cdot_{\hat{\mathfrak{g}}_{1}}u=(\zeta^ {-1}\circ s_{2})(x)\cdot_{\hat{\mathfrak{g}}_{1}}u=\zeta^{-1}(s_{2}(x)\cdot_{ \hat{\mathfrak{g}}_{2}}u)=\tilde{\rho}_{2}(x)(u),\] which implies that \(\tilde{\rho}_{1}=\tilde{\rho}_{2}\). Similarly, we have \(\tilde{\mu}_{1}=\tilde{\mu}_{2}\). This finishes the proof. So in the sequel, we fixed a representation \((V,K,\tilde{\rho},\tilde{\mu})\) of a regular pre-LieDer pair \((\mathfrak{g},D)\) and consider abelian extensions that induce the given representation. **Theorem 5.11**.: _Abelian extensions of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\) are classified by the second cohomology group \(H^{2}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\)._ Proof.: Let \((\hat{\mathfrak{g}},\hat{D})\) be an abelian extension of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\). Choosing a section \(s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}\), by Theorem 5.8, we obtain that \((\theta,\xi)\in Z^{2}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\). Now we show that the cohomological class of \((\theta,\xi)\) does not depend on the choice of sections. In fact, let \(s\) and \(s^{\prime}\) be two different sections. Define \(\varphi:\mathfrak{g}\longrightarrow V\) by \(\varphi(x)=s(x)-s^{\prime}(x)\). Then for all \(x,y\in\mathfrak{g}\), we have \[\theta(x,y) = s(x)\cdot_{\hat{\mathfrak{g}}}s(y)-s(x\cdot y)\] \[= \left(s^{\prime}(x)+\varphi(x)\right)\cdot_{\hat{\mathfrak{g}}} \left(s^{\prime}(y)+\varphi(y)\right)-s^{\prime}(x\cdot y)-\varphi(x\cdot y)\] \[= s^{\prime}(x)\cdot_{\hat{\mathfrak{g}}}s^{\prime}(y)+\tilde{ \rho}(x)\varphi(y)+\tilde{\mu}(y)\varphi(x)-s^{\prime}(x\cdot y)-\varphi(x \cdot y)\] \[= \theta^{\prime}(x,y)+\mathrm{d}_{(\tilde{\rho},\tilde{\mu})} \varphi(x,y),\] which implies that \(\theta-\theta^{\prime}=\mathrm{d}_{(\tilde{\rho},\tilde{\mu})}\varphi\). Similarly, we have \(\xi-\xi^{\prime}=\Omega_{(\tilde{\rho},\tilde{\mu})}\varphi\). Therefore, we obtain that \((\theta-\theta^{\prime},\xi-\xi^{\prime})=(\mathrm{d}_{(\tilde{\rho},\tilde{ \mu})}\varphi,\Omega_{(\tilde{\rho},\tilde{\mu})}\varphi)=\mathcal{D}_{( \tilde{\rho},\tilde{\mu})}\varphi\), \((\theta,\xi)\) and \((\theta^{\prime},\xi^{\prime})\) are in the same cohomological class. Now we go on to prove that isomorphic abelian extensions give rise to the same element in \(H^{2}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\). 
Assume that \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{ 2}})\) are two isomorphic abelian extensions of a regular pre-LieDer pair \((\mathfrak{g},D)\) by \((V,K)\), and \(\zeta:(\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},D_{\hat{ \mathfrak{g}}_{1}})\longrightarrow(\hat{\mathfrak{g}}_{2},\cdot_{\hat{ \mathfrak{g}}_{2}},D_{\hat{\mathfrak{g}}_{2}})\) is a pre-LieDer pair isomorphism satisfying the commutative diagram in Definition 5.9. Assume that \(s_{1}:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}_{1}\) is a section of \(\hat{\mathfrak{g}}_{1}\). By \(p_{2}\circ\zeta=p_{1}\), we have \[p_{2}\circ(\zeta\circ s_{1})=p_{1}\circ s_{1}=\mathrm{Id}_{\mathfrak{g}}.\] Thus, we obtain that \(\zeta\circ s_{1}\) is a section of \(\hat{\mathfrak{g}}_{2}\). Define \(s_{2}=\zeta\circ s_{1}\). Since \(\zeta\) is an isomorphism of pre-LieDer pair and \(\zeta\)\(|_{V}{=\mathrm{Id}_{V}}\), for all \(x,y\in\mathfrak{g}\), we have \[\theta_{2}(x,y) = s_{2}(x)\cdot_{\hat{\mathfrak{g}}_{2}}s_{2}(y)-s_{2}(x\cdot y)\] \[= (\zeta\circ s_{1})(x)\cdot_{\hat{\mathfrak{g}}_{2}}(\zeta\circ s _{1})(y)-(\zeta\circ s_{1})(x\cdot y)\] \[= \zeta(s_{1}(x)\cdot_{\hat{\mathfrak{g}}_{1}}s_{1}(y)-s_{1}(x \cdot y))\] \[= \theta_{1}(x,y),\] Similarly, we have \(\xi_{1}=\xi_{2}\). Thus, isomorphic abelian extensions gives rise to the same element in \(H^{2}(\mathfrak{g},D;V,K,\tilde{\rho},\tilde{\mu})\). Conversely, given two \(2\)-cocycles \((\theta_{1},\xi_{1})\) and \((\theta_{2},\xi_{2})\), we can construct two abelian extensions \((\mathfrak{g}\oplus V,\diamond_{1},\phi_{1})\) and \((\mathfrak{g}\oplus V,\diamond_{2},\phi_{2})\). If \((\theta_{1},\xi_{1}),(\theta_{2},\xi_{2})\in H^{2}(\mathfrak{g},D;V,K,\tilde{ \rho},\tilde{\mu})\), then there exists \(\varphi:\mathfrak{g}\longrightarrow V\), such that \(\theta_{1}=\theta_{2}+\mathrm{d}_{(\tilde{\rho},\tilde{\mu})}\varphi\) and \(\xi_{1}=\xi_{2}+\Omega_{(\tilde{\rho},\tilde{\mu})}\varphi\). We define \(\zeta:\mathfrak{g}\oplus V\longrightarrow\mathfrak{g}\oplus V\) by \[\zeta(x+u)=x+u+\varphi(x),\quad\forall\ x\in\mathfrak{g},u\in V.\] For all \(x,y\in\mathfrak{g},u,v\in V\), by \(\theta_{1}=\theta_{2}+\mathrm{d}_{(\tilde{\rho},\tilde{\mu})}\varphi\), we have \[\zeta((x+u)\diamond_{1}(y+v))-\zeta(x+u)\diamond_{2}\zeta(y+v)\] \[= \zeta(x\cdot y+\theta_{1}(x,y)+\tilde{\rho}(x)(v)+\tilde{\mu}(y) (u))-\left(x+u+\varphi(x)\right)\diamond_{2}\left(y+v+\varphi(y)\right)\] \[= \theta_{1}(x,y)+\varphi(x\cdot y)-\theta_{2}(x,y)-\tilde{\rho}(x) \varphi(y)-\tilde{\mu}(y)\varphi(x)\] \[= \theta_{1}(x,y)-\theta_{2}(x,y)-\mathrm{d}_{(\tilde{\rho},\tilde{ \mu})}\varphi(x,y)\] \[= 0,\] and for all \(x\in\mathfrak{g},u\in V\), by \(\xi_{1}=\xi_{2}+\Omega_{(\tilde{\rho},\tilde{\mu})}\varphi\), we have \[\zeta\circ\phi_{1}(x+u)-\phi_{2}\circ\zeta(x+u)\] \[= \zeta\big{(}D(x)+\xi_{1}(x)+K(u)\big{)}-\phi_{2}\big{(}x+u+ \varphi(x)\big{)}\] \[= \xi_{1}(x)+\varphi(D(x))-\xi_{2}(x)-K(\varphi(x))\] \[= \xi_{1}(x)-\xi_{2}(x)-\Omega_{(\tilde{\varphi},\tilde{\mu})}\varphi(x)\] \[= 0,\] Thus, \(\zeta\) is a pre-LieDer pair isomorphism from \((\mathfrak{g}\oplus V,\diamond_{1},\phi_{1})\) to \((\mathfrak{g}\oplus V,\diamond_{2},\phi_{2})\). Moreover, it is obvious that the diagram in Definition 5.9 is commutative. This finishes the proof. **Acknowledgement:** This work is supported by NSF of Jilin Province (No. YDZJ202201ZYTS589), NNSF of China (Nos. 
12271085, 12071405) and the Fundamental Research Funds for the Central Universities.
2308.04933
You Are How You Walk: Quantifying Privacy Risks in Step Count Data
Wearable devices have gained huge popularity in today's world. These devices collect large-scale health data from their users, such as heart rate and step count data, that is privacy sensitive; however, it has not yet received the necessary attention in academia. In this paper, we perform the first systematic study on quantifying privacy risks stemming from step count data. In particular, we propose two attacks: attribute inference for gender, age and education, and temporal linkability. We demonstrate the severity of the privacy attacks by performing extensive evaluation on a real-life dataset and derive key insights. We believe our results can serve as a stepping stone for deriving a privacy-preserving ecosystem for wearable devices in the future.
Bartlomiej Surma, Tahleen Rahman, Monique Breteler, Michael Backes, Yang Zhang
2023-08-09T13:06:13Z
http://arxiv.org/abs/2308.04933v1
# You Are How You Walk: Quantifying Privacy Risks in Step Count Data

###### Abstract

Wearable devices have gained huge popularity in today's world. These devices collect large-scale health data from their users, such as heart rate and step count data, that is privacy sensitive; however, it has not yet received the necessary attention in academia. In this paper, we perform the first systematic study on quantifying privacy risks stemming from step count data. In particular, we propose two attacks: attribute inference for gender, age and education, and temporal linkability. We demonstrate the severity of the privacy attacks by performing extensive evaluation on a real-life dataset and derive key insights. We believe our results can serve as a stepping stone for deriving a privacy-preserving ecosystem for wearable devices in the future.

Footnote †: Equal contribution

## I Introduction

In the current era of the Internet of Things, extensive amounts of data are generated by users not just from internet browsing and social media but also smart devices like wearables, cars or even household appliances. While some data is shared willingly and purposely by the users themselves (e.g. social media posts), some data is shared unknowingly (e.g. web browser telemetry, tracking pixels). Yet some other data, such as biomedical or genomic data, is shared only with a trusted party for a specific purpose (e.g. health situation monitoring and improvement, activity tracking and analysis, dietary and lifestyle advice/consultation). This has led to huge interest in privacy concerns arising out of such data. Recently, acts like the General Data Protection Regulation or the California Consumer Privacy Act have imposed strict regulations for collecting, processing and sharing user data. Certain types of data are classified as personal or even sensitive personal1. However, many kinds of biomedical data (e.g. step counts, heart rate, SpO2, nutrition and sleep data) are in a gray zone: they are considered sensitive personal information only under special circumstances, when they indicate diseases or disabilities. In the case of biomedical services, the user typically does not have fine-grained control over the data shared, i.e., it is an all-or-nothing scenario. Such data is extremely sensitive, as it can expose other information about the user, such as their medical condition, ethnicity, habits, kinship or financial status. Therefore, biomedical data needs to be treated with extreme caution by the service provider. Especially in recent years, wearable devices have gained huge popularity and become inseparable parts of people's lives. Data collected from these devices is used for a plethora of purposes by multiple parties, such as private companies, health organizations and government agencies. On the other hand, this gives rise to increasing concerns over privacy of the user.

Footnote 1: [https://www.burges-salmon.com/news-and-insight/legal-updates/gdpr-personal-data-and-sensitive-personal-data/](https://www.burges-salmon.com/news-and-insight/legal-updates/gdpr-personal-data-and-sensitive-personal-data/)

In two recent works [15, 16], Jiang et al. devise a new deep neural network model to predict age and gender from pedometer data. The authors use the number of steps made each day, collected over 259 consecutive days, and are able to find different walking behaviors during weekdays, weekends and holidays.
The shortcoming of these papers is that an attacker needs to collect data over hundreds of days, which is not very practical. In this paper, we perform the first, extensive study of privacy risks arising from fine grained user's step count data collected over short time spans such as one week or even a single day. Such data is collected by various activity trackers (e.g. Edmondo, FitBit, Apple's Health), but can also be collected from the accelerometer on smartphones either by an app or an opened website. Such accelerometer data is not considered sensitive and therefore can be collected without explicit permission from the user. While concerns about privacy of walking patterns data were raised before 2, to the best of our knowledge, a systematic in-depth analysis of the privacy issues has not been performed yet. Footnote 2: [https://w3c.github.io/sensors/#user-identifying](https://w3c.github.io/sensors/#user-identifying) ### _Our Contributions_ We perform the first large-scale study of the privacy issues within fine grained user's step count data. In particular, we design various attribute inference and linkability attacks. For the attribute inference attack, we assume that the adversary has access to the step count data of the target user for example collected from a smartphone. The adversary then tries to infer certain personal attributes to which he normally would not have access. For the linkability attack, the adversary has access to target user's step count data as well as an anonymized database containing further sensitive information along with the step count data collected at a certain point in time. The adversary then tries to link the target user with the record in the database. This could have a wide variety of implications like targeted advertisements, surveillance, unfair credit score and pricing by insurance companies or banks, to name a few. We demonstrate the performance of the attacks with an extensive evaluation on a real world dataset of 1000 participants. We make the following key contributions: * In order to leverage the power of deep learning, we develop three conceptually different feature extraction methods, namely, statistical, distributions and autoencoders. * Our attribute inference attack is aimed at finding the correlation between the walking patterns of the user and three personal attributes namely gender, age and education. We find that gender and age can be inferred with a high confidence, while education does not show a strong correspondence with our data. * We also run an ensemble version of our attribute inference attack by splitting the user step data into actions based on active and non-active periods. We classify each action separately and infer the users' attribute based on all the actions of a user. * We make a comparison between all the feature selection methods and classifiers on all the attribute inference tasks. * We run three different types on linkability attacks. The first one is unsupervised and it relies on the distance (under some metric) between feature vectors of different samples. The second attack uses a pairwise vector similarity metric between features of different samples to fit a traditional machine learning classifier, namely, random forest. The third one uses the One Shot Learning method with Siamese networks. ### _Organization_ The paper is organized as follows: in Section II, we present some related work. We introduce our dataset in Section III. In Section IV we describe how we extract features from the raw data. 
Next, in Section V we present our attribute inference attack, followed by the experimental evaluation in Section VI. In Section VII we present our user linkability attack, followed by its experimental evaluation in Section VIII. Finally, we conclude our paper in Section IX.

## II Related Work

It has been shown that it is possible to fingerprint Android devices by accessing the accelerometer API [5, 9] from a website opened on the phone. Bittel et al. [4] showed that the iPhone's accelerometer is just as accurate and precise as biomedical measurement devices. Accelerometer data from a phone carried in a pants pocket is sufficient to predict the owner's activity (e.g. sitting, standing, walking, running) [1, 21, 12]. Accelerometer sensor data reveals how users hold and touch smartphones by hand, based on which Davarci et al. [8] perform child/adult detection. Users' demographics inference has been studied in many previous works, which further confirms how important such data is for privacy. It has been demonstrated that users' gender and age can be inferred from their web browsing behaviors [14], mobile communication patterns [10] or application usage and web browsing on smartphones [17]. The potential benefits of activity tracking for health and well-being have received a lot of attention from both academia and industry. Shameli et al. [23] study how competitions affect physical activity using a dataset of competitions within the Argus smartphone app. Althoff et al. [3] study activity distribution from smartphones of 717,527 people across 111 countries. Using the same dataset, Pierson et al. [22] use Cyclic Hidden Markov Models to detect and model activity cycles. However, few have studied users' behaviors when sharing such personal fitness data and the privacy concerns that arise from the collection, aggregation, and sharing of this data. Vitak et al. [24] highlight the relationship between users' demographics, data sharing behaviors, privacy concerns, and internet skills. Hassan et al. [13] demonstrate an attack against Endpoint Privacy Zones that infers users' protected locations from their information in public posts, using activity data collected from the Strava app. Meteriz et al. [19] predict the location trajectory of users from publicly available elevation profiles. Nguyen et al. [20] demonstrate high-accuracy location tracking just through a smartphone's accelerometer and magnetometer footprints.

## III Dataset

Our step count dataset was collected by DZNE (German Center for Neurodegenerative Diseases) among inhabitants of a middle-sized German city, as part of their Rheinland Study to identify factors that influence adult health over the lifespan and into old age. The participants gave their explicit consent to use their pseudonymized data for research purposes. Even pseudonymized, the data is very sensitive; we ensured that it never leaves trusted devices, and we are not able to share it with researchers from other institutes. The data was collected using an ActivPal sensor, a small device worn on the thigh, which was carried by users for 7 consecutive days. This can, to a good extent, simulate step count data collected from any other wearable device (e.g. a smartwatch) or from a phone kept in a pocket throughout the day.

### _Data Description_

Our raw dataset consists of the number of steps in each 15s period, for 1000 participants. We exclude 3 users due to incomplete data. Each user has age, gender and education attributes. The age of participants in our dataset ranges from 30 to 91 years old.
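Since each participant wears the sensor for 7 consecutive days and steps are recorded per 15-second interval, one user's raw record contains 7 × 24 × 60 × 4 = 40,320 counts. The snippet below is only an illustrative sketch of this layout and of the median-based age binarization used for classification later; the file name and column names are hypothetical and are not part of the released data.

```python
import numpy as np
import pandas as pd

INTERVALS_PER_DAY = 24 * 60 * 4          # 15-second periods in one day (5,760)
N_INTERVALS = 7 * INTERVALS_PER_DAY      # 7 days of wear time (40,320)

# Hypothetical layout: one row per user with the attributes followed by
# the 40,320 raw 15-second step counts (s_raw).
df = pd.read_csv("step_counts.csv")      # hypothetical file name
steps_raw = df.drop(columns=["user_id", "age", "gender", "education"]).to_numpy()
assert steps_raw.shape[1] == N_INTERVALS

# Binary age label: younger than the median age vs. median age and older.
age_label = (df["age"] >= df["age"].median()).astype(int)
```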
Figure 1 shows the distribution of users of different ages in our dataset. We divide users into two classes based on the median age (55 years). Table I shows the percentage of users in each class by gender and education. For education there are 3 levels:
* **Low.** Early childhood education up to lower secondary education.
* **Middle.** Upper secondary education up to Bachelor degree or equivalent level.
* **High.** Master degree or equivalent level up to Doctoral degree or equivalent level.

For the classification task of education, we discard users with low education, since they comprise only \(1.8\%\) of all participants. Figure 2 shows the distribution of users among all 3 attributes. One can notice a weak negative correlation between age and education. To measure it, we encode education as \(0,1,2\) for low, medium, high respectively and gender as 0 for male and 1 for female. Table II confirms our observations. Figure 3 shows the result of performing Principal Component Analysis (PCA) on users' raw step counts. The two colors denote the two classes for each attribute. We note that PCA on raw data does not suffice to separate users by gender, age or education. Therefore, in the next section we present methods to preprocess this raw data in order to extract meaningful features for our tasks.

### _Ethical Considerations_

We note that our primary institution does not provide an IRB nor mandate (or enable) approval for such experiments. Meanwhile, the organization that collected the data followed the standard protocol for ethical compliance. We also store the data on an independent server with access control and, during experiments, we strictly follow the data processing guideline defined by the data collecting organization.

## IV Feature Extraction

In this section we describe our three conceptually different feature extraction methods, followed by three different normalization techniques we use. We refer to the sequence of the number of steps made by a user \(u\) every 15 seconds as the raw step count vector \(\vec{s}_{raw,u}\). These raw step count vectors \(\vec{s}_{raw,u}\) are time series and thus are hard to work with when directly used as feature vectors for machine learning tasks. Most standard machine learning techniques learn based on features that should be comparable between data points, which is not the case for time series.

Fig. 1: Distribution of age in our dataset

Fig. 2: Distribution of users between all three attributes; the darker the color, the more users with identical attributes

Fig. 3: PCA of users' raw step counts over the whole week (top) and split over 7 days (bottom) for each of our 3 attributes

A small displacement of an event in time would result in a feature vector much different from the original one (e.g., by cosine similarity). For example, if a user begins a morning walk even five minutes later than usual, the beginning and the end of the walk are now different features and this cannot be captured by traditional machine learning models. To this end, we use different feature extraction methods as described next. For a user \(u\), we collectively refer to the set of step count feature vectors from different days by \(S_{u}\), and a feature vector for a day \(d\) as \(\vec{s}_{u}^{d}\). These \(\vec{s}_{u}^{d}\) can either be the raw step counts or features created according to one of the methods below or their normalized versions. We omit the subscript \(u\) when it is clear that we are talking about one specific user.
Similarly, we omit the superscript \(d\) when \(d\) is irrelevant.

### _Statistical Method_

For each user, we first split \(\vec{s}_{raw}\) into smaller, non-overlapping subvectors of a defined window size \(w\) (the last subvector might be smaller). Then we calculate basic statistical values, which reflect different characteristics of a user's walking behavior: sum (how much the user walks), maximum (the user's highest speed), mean (how fast the user walked on average), median (the user's typical speed) and standard deviation (how much the user accelerated and decelerated) on each subvector, and combine all the calculated values into a single output vector \(\vec{s}_{stat}\). We use different subsets of statistics and different values of \(w\) in our experiments.

**Example 4.1**: _Let \(\vec{s}_{raw}=(5,0,0,2,3,4,3,0),w=3\), \(stat=\{\textit{sum},\textit{mean}\}\). We split \(\vec{s}_{raw}\) into three subvectors \((5,0,0)\), \((2,3,4)\) and \((3,0)\). Thus \(\vec{s}_{\{\textit{sum},\textit{mean}\}}=(\textit{sum}(5,0,0),\textit{mean}(5,0,0),\textit{sum}(2,3,4),\textit{mean}(2,3,4),\textit{sum}(3,0),\textit{mean}(3,0))=(5,1.67,9,3,3,1.5)\)._

### _Distributional Method_

We want to capture the distribution of the number of steps taken in each 15s period. We also want to retain information about the time of day when users walk more or less. Therefore, we split \(\vec{s}_{raw}\) into subvectors, calculate distributions on them and finally concatenate them. Precisely, we find the maximal number of steps \(\textit{max\_steps}\) any user makes in 15s. Then, for each user, we split \(\vec{s}_{raw}\) into smaller, non-overlapping subvectors of a defined window size \(w\) (the last subvector might be smaller). We group all possible numbers of steps \(0,1,2,\ldots,\textit{max\_steps}\) into buckets of size \(b\). Then, for each resulting subvector, we count the number of occurrences of steps in each bucket. Since \(0\) steps is the most common event, we have a bucket containing just \(0\) steps, and further buckets each containing \(b\) different numbers of steps. We combine the calculated value counts into a single output vector \(\vec{s}_{dist}\).

**Example 4.2**: _For \(\vec{s}_{raw}=(5,0,0,2,3,4,3,0)\), \(w=3\), \(b=3\), \(\textit{max\_steps}=6\), we split \(\vec{s}_{raw}\) into three subvectors \((5,0,0)\), \((2,3,4)\) and \((3,0)\). Each subvector will have three buckets, i.e., \(\{0\}\), \([1,3]\), and \([4,6]\). For the first subvector \((5,0,0)\), we have \(2\) instances in the first bucket, i.e., \(\{0\}\), \(0\) instances in the second bucket, i.e., \([1,3]\), and \(1\) instance in the third, i.e., \([4,6]\). Analogous calculations are done for the next two subvectors. The resulting feature vector is then \(\vec{s}_{dist}=(2,0,1,0,2,1,1,1,0)\)._
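To make the two window-based feature constructions concrete, the following is a minimal Python sketch of the statistical and distributional methods described above. The use of NumPy and the function names are our own choices and not part of the original pipeline; the statistics subset, window size \(w\) and bucket size \(b\) are the parameters introduced above.

```python
import numpy as np

STATS = {"sum": np.sum, "max": np.max, "mean": np.mean,
         "median": np.median, "std": np.std}

def split_windows(s_raw, w):
    # Non-overlapping subvectors of length w (the last one may be shorter).
    return [s_raw[i:i + w] for i in range(0, len(s_raw), w)]

def statistical_features(s_raw, w, stats=("sum", "mean")):
    # Concatenate the chosen statistics of every window into one output vector.
    feats = []
    for window in split_windows(np.asarray(s_raw), w):
        feats.extend(STATS[name](window) for name in stats)
    return np.array(feats)

def distributional_features(s_raw, w, b, max_steps):
    # One bucket for exactly 0 steps, then buckets of b consecutive step counts.
    edges = [0, 1] + list(range(1 + b, max_steps + b + 1, b))
    feats = []
    for window in split_windows(np.asarray(s_raw), w):
        counts, _ = np.histogram(window, bins=edges)
        feats.extend(counts)
    return np.array(feats)

# Reproducing Examples 4.1 and 4.2:
s = [5, 0, 0, 2, 3, 4, 3, 0]
print(statistical_features(s, w=3, stats=("sum", "mean")))  # ~ (5, 1.67, 9, 3, 3, 1.5)
print(distributional_features(s, w=3, b=3, max_steps=6))    # (2, 0, 1, 0, 2, 1, 1, 1, 0)
```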
### _Autoencoder_

Autoencoders are a type of neural network consisting of an encoder and a decoder. In general, the encoder compresses the input into a lower-dimensional vector and the decoder recreates the original input from it. The unsupervised training objective of the network is to reconstruct the original input with as little loss as possible. Because of the created "bottleneck", a data compression method is learnt by the neural network. Once trained, the autoencoder model can be used to transform data into compressed representations by only using the encoder part. Autoencoders are widely used for denoising, anomaly detection, feature extraction and more.

In this work, we first train an autoencoder on the normalized \(\vec{s}_{raw}\) of all the users in our training set. We then encode each vector into a compressed representation \(\vec{s}_{ae}\) with the trained encoder part of our network. We then use the resulting \(\vec{s}_{ae}\) as feature vectors for attacks. We create two variants of autoencoders: one consisting of densely connected layers and the other consisting of one-dimensional Convolutional Neural Network (1D CNN) layers.

Fig. 4: Illustration of a basic autoencoder.

### _Normalization_

We now describe the normalization techniques we use on the raw data as well as on the features generated by the methods above.

#### IV-D1 Feature-wise Normalization

In this method, the \(i\)-th value of a feature vector is divided by the maximum of the \(i\)-th values over all feature vectors. This normalization is mostly useful for statistics and distribution features.

#### IV-D2 Vector-wise Normalization

In this method, each element of an input vector is divided by the value of the largest element of this vector. For raw step count data, such processing removes information about how fast a user can walk, but keeps information about when she walks more and when less.

#### IV-D3 Probability Distribution

In this method, each element of a vector is divided by the sum of all of its elements. Such processing preserves information about the overall relative walking speed of a person, but loses information about the total number of steps made during the considered time period.

## V Attribute Inference Attack

It has been shown that social network users' personal attributes, e.g., gender, age, political beliefs, location, occupation, even if not disclosed, can be inferred from their friends [2], online behaviours [11] or from the content of their messages [7]. In this work we study whether gender, age and education can be inferred from users' step count data itself. Using a wide range of feature vectors extracted from raw step data, we train multiple different machine learning and deep neural network models on the attribute classification task and report the results in Section VI.

### _Experimental Setup_

Out of the 1000 users in our dataset, we filter out 3 due to step measurement errors. We focus on binary classification of 3 attributes: gender, age and education. For gender, we classify between male and female, where 56% of participants are female. For education, we focus on middle and high level education as explained before. The age of the participants ranges from 30 to 91 years; we choose the median (55 years) and classify participants as younger than 55, or 55 and older, giving us 50% of users in each group. We assume an adversary that has access to some amount of users' data with their attributes for training her models, and to unlabeled step count data of users whose attributes she wants to infer. To simulate this setting, for each attribute, we randomly choose 80% of users of each class as the training data set and the remaining 20% as the testing data set. We run an extensive number of attacks for attribute inference. We first use a wide range of feature extraction techniques on the raw dataset, and then train and test many different classifiers on those feature vectors.

### _Feature Extraction_

We will show our methods for feature extraction in a step-by-step manner.
1. We take the raw step vector of each user \(\vec{s}_{raw}\) either as a whole week \(\vec{s}_{raw}^{D}\) or split it into different days, resulting in \(7\) vectors \(\vec{s}_{raw}^{d}\) for each user. This results in \(2\) feature vector types.
2. We run statistical methods (as described in IV-A) for all possible subsets of \(\{\mathit{max},\mathit{mean},\mathit{median},\mathit{std},\mathit{sum}\}\) (other than \(\varnothing\)) for window sizes \(w\in\{12,24,48,60,120,240,480,720,960,1440,1920,2880,5760\}\) on \(\vec{s}_{raw}^{d}\) and for window sizes \(w\in\{240,480,720,960,1440,1920,2880,5760,40320\}\) on \(\vec{s}_{raw}^{D}\). This results in \((2^{5}-1)*(13+9)=682\) feature vector types.
3. We run distributions as described in IV-B on \(\vec{s}_{raw}^{D}\) and \(\vec{s}_{raw}^{d}\) for window sizes \(w\in\{240,720,1440,2880\}\) and for bucket sizes \(b\in\{2,4,8\}\). This results in \(2*4*3=24\) feature vector types.
4. We apply the three different normalization methods described in IV-D to all feature vector types we made so far. This results in \((2+682+24)*3=2124\) normalized feature vector types.
5. On each normalized feature vector type we train a densely connected autoencoder (see the sketch after this list). Because some of them are much shorter than others, we need to adjust the layer sizes to avoid compressing a shorter vector into a single value. The densely connected autoencoder consists of five densely connected layers of sizes \((l_{1},l_{2},l_{3},l_{4},l_{5})\), where \(l_{1}\) is equal to the length of the input feature vector, \[l_{2} =\begin{cases}\mathit{min}(2048,\lfloor l_{1}/4\rfloor),&\text{if }l_{1}>255\\ \lfloor l_{1}/2\rfloor,&\text{otherwise},\end{cases}\] \[l_{3} =\begin{cases}\lfloor l_{2}/4\rfloor,&\text{if }l_{2}>127\\ \lfloor l_{2}/2\rfloor,&\text{otherwise},\end{cases}\] \(l_{4}=l_{2}\), \(l_{5}=l_{1}\). We then take the third layer's output and use it as a new feature vector, which results in 2124 feature vector types.
6. On each normalized feature vector type we train a convolutional autoencoder. It consists of two convolution layers with 8 filters each and kernel sizes \(k_{1}\) and \(k_{2}\), respectively, with a max pooling layer in between with a pool size of \(p\). Then two transposed convolution layers with an unpooling layer between them, with corresponding parameters, follow. Because of how long it takes to train a CNN autoencoder, we use the parameter sets \((k_{1},k_{2})\in\{(6,6),(9,6),(9,9),(21,9)\}\) and \(p\in\{2,4\}\) only on weekly feature vectors \(\vec{s}_{raw}^{D}\) normalized with feature-wise normalization, and \(k_{1}=21,k_{2}=9,p=4\) for the rest. This results in \(4*2*1+2121=2129\) feature vector types.
7. Additionally, we use a third type of data splitting from step 1, which we call actions. We take a whole week of step data and extract all periods of maximal length that contain no subperiod of 8 consecutive 15s periods with 0 steps (2 minutes of rest), and use each such action, together with its length and starting time, as a feature vector. We then apply statistics (all statistics together) from step 2 and distributions from step 3 with the window size equal to the action length, to obtain 3 (together with raw) feature vector types. The reasoning behind this is that a combination of shorter activities may be better suited to predicting a person's attributes.

In total, we test 7085 feature vector types with either seven or one vector per user, plus 3 actions feature vector types.
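As referenced in step 5, the following is a minimal Keras sketch of how the densely connected autoencoder and its layer-size rule could look. Only the five layer sizes \((l_{1},\ldots,l_{5})\) follow the rule stated above; the activation functions, optimizer and loss are our own assumptions, since they are not specified in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def dense_autoencoder(l1):
    # Layer sizes according to the rule in step 5.
    l2 = min(2048, l1 // 4) if l1 > 255 else l1 // 2
    l3 = l2 // 4 if l2 > 127 else l2 // 2

    inputs = tf.keras.Input(shape=(l1,))
    h = layers.Dense(l2, activation="relu")(inputs)       # l2
    code = layers.Dense(l3, activation="relu")(h)         # l3 (bottleneck)
    h = layers.Dense(l2, activation="relu")(code)         # l4 = l2
    outputs = layers.Dense(l1, activation="sigmoid")(h)   # l5 = l1

    autoencoder = Model(inputs, outputs)
    encoder = Model(inputs, code)  # used to produce the compressed feature vectors
    autoencoder.compile(optimizer="adam", loss="mse")     # assumed training setup
    return autoencoder, encoder
```

After fitting the autoencoder on the normalized feature vectors, only the encoder part is applied to obtain the new feature vectors.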
### _Classifiers_

We classify on three different binary tasks: gender (male or female), education (medium or high) and age (below 55 or above). We use the following classifiers from sklearn on each task with each feature vector type: Random Forest, Linear Regression, Support Vector Machine, Linear Support Vector Machine and 3 Densely Connected Neural Networks, all consisting of densely connected and dropout layers. If \(l\) is the feature vector length and \(\texttt{d}\) denotes a 20% dropout layer, then the networks look as follows: \((l,\frac{1}{4}l,\texttt{d},1),(l,\frac{1}{2}l,\texttt{d},\frac{1}{8}l,1),(l,\frac{1}{2}l,\texttt{d},\frac{1}{4}l,\texttt{d},\frac{1}{16}l,1)\). Each feature vector set is split into 80% training and 20% testing data in a balanced way (e.g., if the classification is to be performed for gender, the 80-20 split is done on male and female separately and only then combined and shuffled together for both training and testing). For the feature vector sets where we have 7 vectors per user, we make sure that, for each user, all the vectors are in either the training or the testing set. In addition, for actions feature vectors, an aggregation step is needed, because our objective is to classify an attribute of a person and not the action itself. After obtaining scores from the classifier for each action of a specific person, we discard the 50% of the results that are least certain about the attribute and then calculate both the arithmetic mean and the majority vote.

Due to the long training time, we run CNN classifiers only on selected feature sets, namely raw daily and weekly step counts with both feature-wise and vector-wise normalization, all 3 actions, 4 distributions and 4 statistics (2 best performing on average and 2 with the highest performance, as measured on other classifiers). Each CNN classifier looks as follows: 2 convolution layers with 16 filters each and kernel sizes \(k_{1}\) and \(k_{2}\), respectively, a 50% dropout layer, a max pooling layer with pool size \(p\), a fully connected layer with 100 neurons and a fully connected layer with a single perceptron. We use \((k_{1},k_{2})\in\{(21,9),(6,6)\}\) and \(p\in\{2,4,8\}\).

We run 3 LSTM classifier types: regular LSTM, bidirectional LSTM and bidirectional LSTM with attention. We use a Luong-style dot-product attention layer [18]. We only run LSTMs on feature vectors resulting from fixed window size aggregations, i.e. statistical (with window size \(w\in\{60,240,720\}\)), distributional (with window size \(w\in\{60,240,720\}\), bucket size \(b\in\{2,4\}\)), statistical activities (with all the statistics combined) and distributional activities (with bucket size \(b=2\)). We split the feature vectors according to actions or windows and feed each subvector one by one to the LSTM. By analogy to training an LSTM on a sentence of words, where each word is encoded as a fixed-size vector, each window or activity is a separate word and the result of the distribution or statistics function is its embedding. All three LSTMs consist of one respective LSTM layer with 16 or 32 units, one dropout layer with a 0.2 dropout rate and a fully connected layer with a single perceptron.
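As an illustration, the following is a minimal Keras sketch of the three densely connected classifier architectures listed above, i.e. \((l,\frac{1}{4}l,\texttt{d},1)\), \((l,\frac{1}{2}l,\texttt{d},\frac{1}{8}l,1)\) and \((l,\frac{1}{2}l,\texttt{d},\frac{1}{4}l,\texttt{d},\frac{1}{16}l,1)\). The layer widths and the 20% dropout follow the description above; the ReLU/sigmoid activations, optimizer and loss are our own assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

# Each architecture: a list of (hidden width, dropout-after?) pairs, followed by
# a single sigmoid output neuron for the binary task.
ARCHITECTURES = [
    lambda l: [(l // 4, True)],
    lambda l: [(l // 2, True), (l // 8, False)],
    lambda l: [(l // 2, True), (l // 4, True), (l // 16, False)],
]

def dense_classifier(l, arch):
    model = Sequential([tf.keras.Input(shape=(l,))])
    for width, dropout in arch(l):
        model.add(layers.Dense(width, activation="relu"))
        if dropout:
            model.add(layers.Dropout(0.2))  # the 20% dropout layer denoted d
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

# e.g. the smallest network for feature vectors of length 672:
# model = dense_classifier(672, ARCHITECTURES[0])
```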
## VI Attribute Inference Evaluation

Our goal was not to train the best possible classifier for the attribute inference task, but rather to cover a wide range of different classifiers and feature vectors. We use AUC (Area Under the ROC Curve) to measure the performance of our classifiers. As a result, we have 149433 classification results in total: 7 non-CNN classifiers for 3 different attribute types run on 7085 feature vector types and 3 actions feature vector sets with either mean or majority vote; 6 CNN classifiers for 3 different attribute types run on 12 selected feature vector types and 3 actions feature vector sets with either mean or majority vote; and 6 LSTM classifiers for 3 different attribute types run on 9 selected feature vector types and 2 action feature vector sets.

### _Normalization Method_

The normalization methods do not have a strong influence on the results. On average, the best performing is feature-wise normalization, with the exception of the random forest classifier, where normalization is not needed and would, in almost all cases, cause a small drop in performance.

### _Statistical Method_

We first look at the best window size for the statistical methods. In Fig. 5 we can see the best results for single statistics on different windows on the task of predicting age. Additionally, we also plot the highest and average results of all the (combined) statistics. We can see that the best results are obtained for \(w=720\) and \(w=1920\), which correspond to 3-hour and 8-hour periods. The best performing single statistic is the maximal number of steps and the worst is the median. Sum and mean of the steps overlap on the plot, since the mean is just the sum divided by a constant. Especially interesting is the 8-hour period, since it reflects differences in people's habits during the night (from midnight to 8 a.m.), the working period (from 8 a.m. to 4 p.m.) and leisure time (4 p.m. to midnight). With only the maximal number of steps done in each period, we can already infer a person's age with an AUC score of 0.74 using a linear regression classifier.

In Fig. 6 we analyze the best performing subsets of statistics for window size \(w=720\). We can see that the easiest classification task is to predict a user's age, followed by education, and the hardest one is gender. Interestingly, the best performing combination for both gender and age prediction is a combination of the maximal and median values - the best and worst performing when taken alone. In general, combinations of more than two statistic values do not perform well. When a classifier has too many features and not enough data to train on, its performance will usually be lower than with fewer but well-chosen features. For all attributes, the best results are found on week-long step count series. The best performing for age was a random forest classifier trained on maximal and median values on windows of size \(w=720\) (3 hours), achieving 0.78 AUC. The best result for education is achieved by the smallest densely connected neural network classifier (with only 1 hidden layer) trained on maximal values of windows of size \(w=240\) (1 hour) with feature-wise normalization, achieving 0.68 AUC. The best performance for gender was a random forest classifier trained on maximal values of windows of size \(w=1440\) (6 hours), achieving 0.65 AUC. We can see that for age and gender the best discriminator is how fast people walk (at peak and on average), while for education it is more important how much people walk. We are confident in predictions regarding age, but much less so when it comes to education or gender.

### _Distributional Method_

In Fig. 7 we can see the best results for all bucket sizes, window sizes and target attributes.
We can observe that bucket size \(b=2\) always has the best (or almost the best) performance, meaning a more fine-grained distribution has a positive impact on the results (even though it generates more features that the classifier needs to learn how to process). The best results for each attribute are as follows:

* **Age.** Random forest classifier trained on \(b=2\) and \(w=240\) (1 hour) achieves 0.73 AUC.
* **Gender.** Support vector classifier trained on \(b=2\) and \(w=1440\) (6 hours) achieves 0.67 AUC.
* **Education.** Densely connected neural network (with 2 hidden layers) trained on \(b=8\) and \(w=720\) (3 hours) achieves 0.66 AUC.

### _Autoencoders_

We observe that autoencoders almost always cause a drop in AUC compared to the feature vectors on which they were trained. The best autoencoder was a densely connected autoencoder on weekly max and mean statistics on windows of size 960 with feature-wise normalization: for age classification, logistic regression achieved an AUC of 0.77, while the same classifier with the same feature vectors but without the autoencoder step achieved 0.73 AUC. The improvement, although non-negligible, is an exception rather than a trend. It might be the case that we do not have enough data for the autoencoders to efficiently learn intermediate representations, and training directly for the prediction task shows better results. It might, however, still be a useful technique if we have access to a large number of unlabeled step count time series and only know the attributes of a small percentage of them.

### _Actions_

On average, the arithmetic mean outperforms majority voting of the result scores by a small margin. However, in some of the best performing cases, the mean can bring a drastic improvement (e.g., for plain steps and a CNN gender classifier, after cross validation, the majority vote achieves 0.53 AUC, while the mean gives 0.78). The best results were obtained for gender prediction with raw actions and a random forest classifier, resulting in an AUC of 0.87. For age, the best was a densely connected neural network classifier (with 2 hidden layers) trained on distributions of actions, achieving an AUC of 0.69. For education, the best was logistic regression trained on distributions of actions, achieving an AUC of 0.61. While gender was difficult to predict with all other methods, actions can do it with very high confidence. This suggests that there exist certain walking patterns characteristic of either men or women.

### _LSTMs_

Because LSTMs consume feature vectors as a 2D matrix rather than a long vector, we do not include them in the previous evaluations of the feature selection methods, but instead dedicate this section to them. For gender and education, the distributional method outperforms the statistical ones, while for age the statistical method gives the best results. Splitting the data based on activities rather than a fixed window size causes a big drop in the AUC score for all the attributes. For both gender and age, window size \(w=240\) gives the best result. For distributional methods, the bucket size does not seem to have a big influence on the AUC score, but the best results are achieved for \(b=2\). Making the LSTM bidirectional or adding attention both improve the results for gender prediction, but do not achieve higher performance on the other two attributes. It is possible that, due to the larger number of trainable weights, more data would be needed to make use of these additions to the basic LSTM.
### _Summary_

Because of the number of classifiers trained, we were not able to cross-validate each experiment; instead, we take the best performing ones (described in detail in the previous subsections) and run them again with 5-fold cross validation. This approach results in us reporting potentially lower best performance values than possible, since we might have missed some well performing methods due to an unlucky split into training and testing data, while ensuring that we do not report high, lucky performance scores. In general, weekly walking patterns are much better than daily ones for the attribute prediction task. Daily patterns can differ drastically between weekdays and weekends and thus looking at a person's whole week is advised.

Fig. 5: Influence of window size on age prediction.
Fig. 6: Influence of the statistics subset on age predictions.
Fig. 7: Influence of the window and bucket size for the distributional method on the prediction quality.

The results for age prediction are in Fig. 9. It is the easiest attribute to predict and many classifiers achieve an AUC greater than 0.7. The strong performance of the statistical feature vector generation confirms our observation that, for age inference, the most important factor is how fast (in steps per minute) a person walks, as older people tend to walk slower. Data split into activities instead of fixed size windows does not improve the predictions, which means that no activities were found that would clearly indicate a person's age. LSTMs perform very well on both statistical and distributional methods; however, the best result is achieved by a CNN consisting of 2 convolution layers with 16 filters of size 6 each, a 0.5 dropout layer, a max pooling layer of pool size 8, a dense layer with 100 perceptrons and finally a dense layer with 1 perceptron, run on a single maximum statistic on windows of size 240 (the maximum number of steps done every hour). The average AUC over the 5-fold cross-validated training-test splits is 0.778 with a standard deviation of 0.047. In Fig. 8 we show the Receiver Operating Characteristic curve for the best cross validation run.

The results for gender prediction are in Fig. 10. In the case of gender prediction, simple statistics are not enough and the distributional method seems superior, as it contains more information. Again, LSTMs and CNNs outperform simpler machine learning methods. On most classifiers we can observe that activities perform better than the fixed window size split. This indicates the existence of certain walking patterns characteristic of males or females. We do not have data on what users were doing during those activities, but it would be very interesting to study which activities expose our gender. The best result is achieved by a CNN consisting of 2 convolution layers with 16 filters of size 6 each, a 0.5 dropout layer, a max pooling layer of pool size 4, a dense layer with 100 perceptrons and finally a dense layer with 1 perceptron, run on plain activities, with the prediction for a specific user being the arithmetic mean of the predictions of all their activities. The average AUC over the 5-fold cross-validated training-test splits is 0.780 with a standard deviation of 0.024.

The results for education prediction are in Fig. 11. Education is by far the hardest attribute to predict and we are not able to train a reliable classifier. LSTMs seem to perform slightly better than other methods, with the best being a standard LSTM with 16 units and a 0.2 dropout layer, trained on distributions with \(b=4\) and \(w=720\) (3h).
The average AUC over the 5-fold cross-validated training-test splits is 0.650 with a standard deviation of 0.037. The reason why we are able to perform better than a random guess is probably the correlation of age and education in our data. To verify this claim, we use the best performing (for age) CNN classifier, train it on the statistical features for the task of predicting age and use it to predict education. In this setting we managed to achieve an AUC of 0.637, which confirms our hypothesis.

## VII Linkability

Our linkability attack tries to identify whether two observations of step count data belong to the same individual. In our experiments we focus on daily observations, meaning for each user \(u\) we have 7 daily feature vectors \(\vec{s}_{u}^{d}\), where \(d\in D\) and \(D=\{\textit{Monday},\textit{Tuesday},\ldots,\textit{Sunday}\}\).

### _Unsupervised Attacks_

Our simplest attacks are unsupervised and threshold-based: we compute a distance between the two feature vectors, and if the distance is smaller than \(t\), then we predict that the samples came from the same user. We use the euclidean distance and the cosine distance. Other distance metrics have similar or lower performance and are therefore not included. For the euclidean distance, the attack function is instantiated as: \[f_{eucl}^{link}(\vec{s}_{u}^{d_{1}},\vec{s}_{v}^{d_{2}})=\sqrt{\sum_{i=1}^{n}(s_{u,i}^{d_{1}}-s_{v,i}^{d_{2}})^{2}}<t_{eucl}\] Similarly, for the cosine distance, the attack function is instantiated as: \[f_{cos}^{link}(\vec{s}_{u}^{d_{1}},\vec{s}_{v}^{d_{2}})=1-\frac{\vec{s}_{u}^{d_{1}}\,{\vec{s}_{v}^{d_{2}}}^{T}}{||\vec{s}_{u}^{d_{1}}||\,||\vec{s}_{v}^{d_{2}}||}<t_{cos}\] where \(\cdot^{T}\) denotes vector transposition and \(||\cdot||\) denotes the \(L_{2}\) norm of the vector. \(t_{cos}\) and \(t_{eucl}\) denote the thresholds for each attack. Experimenting with different values of the thresholds \(t_{cos}\) and \(t_{eucl}\) gives us a range of false and true positive values. Plotting the false-positive rate on the x-axis and the true-positive rate on the y-axis produces the ROC (Receiver Operating Characteristic) curve. We use the area under this curve, referred to as AUC, to analyze the success of the attack. Unlike other metrics, AUC summarizes the performance in a straightforward manner: 0.5 is as bad as random guessing and 1.0 indicates perfect prediction. We assume the attacker knows the best threshold \(t_{cos}\) or \(t_{eucl}\) and therefore obtain an upper bound for the privacy threat resulting from the unsupervised attack. We denote these attacks by Euclidean and Cosine.

### _Random Forest Classifier based Attack_

This attack uses a random forest classifier, which is a state-of-the-art supervised machine learning approach. We first apply a vector distance metric, namely the element-wise L1 distance, to the pair of samples. We then use the resulting distance vectors as features to fit random forest classifiers. As above, we use the AUC between the positive class probability of the random forest and the true class labels to evaluate the performance of the attack. L2, element-wise arithmetic mean and Hadamard distance have similar or lower performance and are therefore not included. We denote this attack by RF_standard.
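For illustration, the following is a minimal NumPy/scikit-learn sketch of these baseline linkability attacks. Sweeping the thresholds \(t_{eucl}\) and \(t_{cos}\) and reading off the area under the ROC curve is equivalent to calling `roc_auc_score` on the raw similarity scores; the function and variable names are our own.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def euclidean_scores(pairs_a, pairs_b):
    # Negative distance, so that a higher score means "same user".
    return -np.linalg.norm(pairs_a - pairs_b, axis=1)

def cosine_scores(pairs_a, pairs_b):
    dot = np.sum(pairs_a * pairs_b, axis=1)
    norms = np.linalg.norm(pairs_a, axis=1) * np.linalg.norm(pairs_b, axis=1)
    return dot / norms  # higher cosine similarity means "same user"

# pairs_a, pairs_b: arrays of shape (num_pairs, num_features) holding the two daily
# feature vectors of each pair; labels: 1 if both vectors come from the same user.
# auc_eucl = roc_auc_score(labels, euclidean_scores(pairs_a, pairs_b))
# auc_cos  = roc_auc_score(labels, cosine_scores(pairs_a, pairs_b))

# RF_standard: the element-wise L1 distance of a pair serves as its feature vector.
# rf = RandomForestClassifier().fit(np.abs(train_a - train_b), train_labels)
# auc_rf = roc_auc_score(test_labels,
#                        rf.predict_proba(np.abs(test_a - test_b))[:, 1])
```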
### _Siamese Neural Network based Attack_

One Shot Learning has become the state-of-the-art solution when large amounts of labelled training data are not available for standard classification with deep neural networks. This attack uses One Shot Learning with Siamese Neural Networks [6]. Siamese networks are a special type of neural network architecture that contains two identical sub-networks that have the same configuration, the same parameters and weights, and mirrored updates during training. Instead of learning to classify its inputs, this model learns to differentiate between two inputs and finds a similarity or relationship between them. Specifically, our Siamese network model leverages the semantic similarities between pairs of step count vectors of the same user and the differences between pairs of step count vectors of different users. Just one sample of the true identity of a user in training may be sufficient to predict or recognize this user in the future. Figure 12 shows an illustration of a basic Siamese Neural Network. We use the L1 distance to combine the outputs of the shared networks, followed by a fully connected layer with a sigmoid activation function. The hypothesis is that two inputs from different users will produce different feature embeddings at the inner layers, while inputs from the same user will result in similar feature embeddings. Hence, the element-wise absolute difference between the two feature embeddings must also be very different between the two cases. Therefore, the score generated by the output sigmoid layer must also be different in both cases. This model is similar to having an autoencoder and a distinguishing classifier, but instead we learn encodings tailored to the exact purpose of our task. We found that using autoencoders for this task does not produce representations resulting in good predictions and thus we exclude those experiments from the paper.

Fig. 12: Illustration of a basic Siamese Network.

We instantiate the two shared subnetworks with different state-of-the-art neural network layers to get five different attack variants as follows:

#### VII-C1 Dense Layers

In the first variant, we use two densely connected layers for the shared subnetworks. The first dense layer contains half as many neurons as the size of the input. The second dense layer further contains half as many neurons as the first one. We denote this attack by Dense_siamese.

#### VII-C2 LSTM Layers

In this variant, the shared subnetworks consist of an LSTM layer with 8 units and a dropout of 0.2. We denote this attack by LSTM_siamese.

#### VII-C3 Bidirectional LSTM Layers

In this variant, the shared subnetworks consist of a bidirectional LSTM layer with 8 units and a dropout of 0.2. We denote this attack by biLSTM_siamese.

#### VII-C4 Attention Layers

In this variant, the shared subnetworks consist of a Luong-style dot-product attention layer [18] applied to the output of the bidirectional LSTM layer in biLSTM_siamese. We denote this attack by Attention_siamese.

#### VII-C5 1D CNN Layers

In this variant, the shared subnetworks consist of two 1D CNN layers, followed by a max pooling layer of size 8. Each CNN layer has 16 filters and a kernel size of 6. The max pooling layer is followed by a flatten layer, which is followed by a densely connected layer of 100 neurons. The result of using a pooling layer and creating downsampled or pooled feature maps is a summarized version of the features detected in the input. This is useful because small changes in the location of a feature detected by the convolutional layer will still result in a pooled feature map with the feature in the same location. We denote this attack by CNN_siamese.
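The following is a minimal Keras sketch of the Dense_siamese variant described above: a shared two-layer dense subnetwork, the element-wise absolute (L1) difference of the two embeddings, and a sigmoid output. The activations, optimizer and loss are our own assumptions; the other variants replace the shared subnetwork with LSTM, bidirectional LSTM, attention or 1D CNN layers while keeping the same L1-difference and sigmoid head.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def dense_siamese(input_len):
    # Shared subnetwork: half, then a quarter of the input size.
    shared = tf.keras.Sequential([
        layers.Dense(input_len // 2, activation="relu"),
        layers.Dense(input_len // 4, activation="relu"),
    ])
    a = tf.keras.Input(shape=(input_len,))
    b = tf.keras.Input(shape=(input_len,))
    emb_a, emb_b = shared(a), shared(b)
    # Element-wise absolute (L1) difference of the two embeddings.
    l1 = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
    out = layers.Dense(1, activation="sigmoid")(l1)
    model = Model(inputs=[a, b], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model
```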
As before, we calculate the AUC between the true class labels and the result of the sigmoid function to evaluate the success of the attack.

## VIII Linkability - Evaluation

We now present the experimental evaluation of our linkability attacks on our dataset, namely our unsupervised attacks Euclidean and Cosine, our random forest attack RF_standard and our Siamese network attacks Dense_siamese, LSTM_siamese, biLSTM_siamese, Attention_siamese and CNN_siamese.

### _Experimental Setup_

We evaluate our linkability attacks on various step count features \(\vec{s}_{u}^{d}\), which include raw step counts \(\vec{s}_{raw,u}^{d}\) as well as statistics \(\vec{s}_{stat,u}^{d},stat\in\{\mathit{sum},max,\mathit{median},\mathit{mean}\}\) and distributions \(\vec{s}_{dist,u}^{d}\), over a period of 24 hours on a day \(d\in D\). We also evaluate the attacks on the normalized (feature-wise normalization) versions of each feature as described in IV-D1 and denote the results by the suffix _norm. For example, we denote the results of the attack Euclidean on the normalized features by Euclidean_norm. We remove features with variance less than \(0.001\) before fitting each model. We create all possible pairs of daily step counts \(\vec{s}_{u}^{d}\) out of 7 days of data for each user to obtain positive class samples (vectors representing two different days of the same user). Thus for each user \(u\) we have \(\binom{7}{2}\), i.e. 21 positive samples of the form \(\{\vec{s}_{u}^{d_{1}},\vec{s}_{u}^{d_{2}}\}\), \(d_{1},d_{2}\in D\), \(d_{1}\neq d_{2}\). For negative samples, we randomly pick an equal number of pairs of daily step counts from different users, of the form \(\{\vec{s}_{u}^{d_{1}},\vec{s}_{v}^{d_{2}}\}\), \(d_{1},d_{2}\in D\), \(u\neq v\). We perform a 5-fold cross validation to evaluate our attacks. The positive and negative samples are equally divided into both sets. We do not optimize any parameters for the supervised attacks. The results of our supervised attacks are therefore a lower bound on the privacy risk arising out of such an adversary model. Following the above strategies and removing symmetric user pairs, we get 20937 samples in each class and 41874 samples in total. We train on 33500 samples and test on 8374 samples in each iteration.

### _Results_

Figure 13 shows the results of our attacks on the features \(\vec{s}_{dist}\). The y-axis indicates the AUC (mean and standard deviation via an error bar over all cross validation folds). The x-axis indicates different bucket size \(b\) and window size \(w\) combinations for the distributions in the input file. We notice that the unsupervised attacks Euclidean and Cosine and the standard random forest classifier RF_standard have very low performance. However, our Dense_siamese and CNN_siamese classifiers hugely outperform them and achieve an AUC higher than 0.75 for most of the inputs. The RNN-based classifiers have only a slightly lower performance; the LSTM_siamese is outperformed by the biLSTM_siamese, which is further outperformed by the Attention_siamese, as expected. For large bucket and window sizes, we have a lower number of features. Therefore, the Dense_siamese attack does not have a significant advantage over the simpler attacks. We notice that its performance is almost the same as the unsupervised attacks for \(b\_w=8\_2880\). Figures 14-17 show the results of our attacks on the features \(\vec{s}_{max}\), \(\vec{s}_{sum}\), \(\vec{s}_{mean}\) and \(\vec{s}_{medi}\). The x-axis shows increasing window size \(w\) over which the statistic is calculated.
We observe that our neural network based Siamese attacks outperform RF_standard, Euclidean and Cosine for smaller window sizes. However, as the window size increases, the performance of the Siamese attacks drops quite fast. We also observe that CNN_siamese and biLSTM_siamese always achieve the top AUCs for each of the 5 feature extraction methods, closely followed by LSTM_siamese. The Attention_siamese struggles for small window sizes (larger number of timesteps); however, as the number of timesteps decreases, it outperforms biLSTM_siamese and LSTM_siamese. This shows that the success of the attention mechanism is limited to shorter sequences. We further compare the five different feature extraction methods with each other to see which captured the most information for user linkability. Figure 18 shows the AUCs (mean and standard deviation over all cross validation folds) achieved by the three best performing inputs of each feature extraction method. We see that features extracted by the distribution method \(\vec{s}_{dist}\) outperform the rest and, as expected, \(\vec{s}_{medi}\), i.e. the median statistic, shows the poorest performance. The highest average AUC, achieved by \(\vec{s}_{dist}\) with \(b\_w=4\_720\), is 0.78 using Dense_siamese.

Fig. 13: Performance of all attacks on the distribution feature \(\vec{s}_{dist}\). The x-axis indicates the bucket size \(b\) followed by the window size \(w\).
Fig. 14: Performance of all attacks on the statistical feature \(\vec{s}_{max}\).
Fig. 15: Performance of all attacks on the statistical feature \(\vec{s}_{sum}\).
Fig. 18: Top 3 AUCs achieved by input features from each feature extraction method (with the Dense_siamese classifier).

## IX Conclusion

We perform the first systematic analysis of privacy risks arising from physical activity data. Via an extensive evaluation on a real-life dataset, we demonstrate that step count data indeed poses a significant threat to various aspects of users' privacy. With a relatively small dataset of only 1000 users and using simple machine learning classifiers, we find that an adversary having access to actimetry (step count) data of different users can perform a linkability attack with high confidence. Attribute inference is harder, but still possible. Among all three attributes, prediction of age is the easiest, while education cannot be predicted reliably. For gender inference specifically, we obtain promising results by averaging predictions from classifying shorter walking periods (activities). Some of these short patterns strongly indicate users' gender but not education or age. Exploring such smaller patterns for different attributes would be an interesting direction for future work. Note that our goal is not to quantify the worst-case privacy risk of a specific attack but to explore a wide range of different feature extraction methods and classifiers in a spectrum of privacy attacks. Actimetry data is not currently regarded as privacy sensitive, but big players, having access to even bigger datasets, could deanonymize users and infer their hidden attributes much more easily. We would like to bring the attention of policymakers to the sensitivity of pedometer data and propose that users should be made aware of the risks of sharing their step count data. This highlights the need for further research on privacy-preserving ways of collecting such data.
2301.01398
Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories
In multi-agent dynamic games, the Nash equilibrium state trajectory of each agent is determined by its cost function and the information pattern of the game. However, the cost and trajectory of each agent may be unavailable to the other agents. Prior work on using partial observations to infer the costs in dynamic games assumes an open-loop information pattern. In this work, we demonstrate that the feedback Nash equilibrium concept is more expressive and encodes more complex behavior. It is desirable to develop specific tools for inferring players' objectives in feedback games. Therefore, we consider the dynamic game cost inference problem under the feedback information pattern, using only partial state observations and incomplete trajectory data. To this end, we first propose an inverse feedback game loss function, whose minimizer yields a feedback Nash equilibrium state trajectory closest to the observation data. We characterize the landscape and differentiability of the loss function. Given the difficulty of obtaining the exact gradient, our main contribution is an efficient gradient approximator, which enables a novel inverse feedback game solver that minimizes the loss using first-order optimization. In thorough empirical evaluations, we demonstrate that our algorithm converges reliably and has better robustness and generalization performance than the open-loop baseline method when the observation data reflects a group of players acting in a feedback Nash game.
Jingqi Li, Chih-Yuan Chiu, Lasse Peters, Somayeh Sojoudi, Claire Tomlin, David Fridovich-Keil
2023-01-04T01:25:49Z
http://arxiv.org/abs/2301.01398v1
# Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories

###### Abstract.

In multi-agent dynamic games, the Nash equilibrium state trajectory of each agent is determined by its cost function and the information pattern of the game. However, the cost and trajectory of each agent may be unavailable to the other agents. Prior work on using partial observations to infer the costs in dynamic games assumes an open-loop information pattern. In this work, we demonstrate that the feedback Nash equilibrium concept is more expressive and encodes more complex behavior. It is desirable to develop specific tools for inferring players' objectives in feedback games. Therefore, we consider the dynamic game cost inference problem under the feedback information pattern, using only partial state observations and incomplete trajectory data. To this end, we first propose an inverse feedback game loss function, whose minimizer yields a feedback Nash equilibrium state trajectory closest to the observation data. We characterize the landscape and differentiability of the loss function. Given the difficulty of obtaining the exact gradient, our main contribution is an efficient gradient approximator, which enables a novel inverse feedback game solver that minimizes the loss using first-order optimization. In thorough empirical evaluations, we demonstrate that our algorithm converges reliably and has better robustness and generalization performance than the open-loop baseline method when the observation data reflects a group of players acting in a feedback Nash game.

Inverse Games, Dynamic Game Theory, Nash Equilibrium
We propose an efficient gradient-based solver for the inverse feedback game problem, using noisy partial observations of an incomplete expert state trajectory. Experimental results show that our method reliably converges for inverse feedback games with nonlinear dynamics and is able to learn nonconvex costs. Moreover, the converged cost function can accurately predict the feedback Nash equilibrium state trajectories even for unseen initial states.

## 2. Related Work

### Non-cooperative Dynamic Games

Non-cooperative dynamic game theory (Games, 1990; Games, 1990) provides a formal framework for analyzing strategic interaction in a multi-agent setting (Games, 1990; Games, 1990; Games, 1990). In non-cooperative games, each player minimizes its own individual cost function; since players' costs may not be mutually aligned, the resulting equilibrium behavior is generally competitive. Among different equilibrium concepts, the Nash equilibrium has been extensively studied because of its power to capture many non-cooperative behaviors arising in real-world multi-agent systems (Games, 1990; Games, 1990).
Recent advances in the literature aim to develop efficient solutions to Nash equilibrium problems in dynamic games. Though the solutions for the open-loop and feedback Nash equilibrium in linear quadratic (LQ) games are well understood (Games, 1990), for nonlinear games there is no closed-form solution in general. The work (Kurz, 2000) characterizes the local Nash solution concept for open-loop Nash equilibrium. In the feedback setting, numerous approaches have been proposed under various special cases (Kurz, 2000; Games, 2000). A value iteration based approach for computing feedback Nash equilibria of nonlinear games without constraints is introduced in (Kurz, 2000). Recently, a set of KKT conditions for feedback Nash equilibria in constrained nonlinear games is derived in (Kurz, 2000). Computing a feedback Nash equilibrium is challenging due to the nested KKT conditions in different time steps. Our work draws upon the ILQGames (Deng et al., 2009) framework, which at each iteration solves a linear-quadratic game that approximates the original game. The construction of the approximate game parallels the iterative linearization and quadraticization methods of iterative LQR (Kurz, 2000), and the dynamic programming equations that characterize equilibrium strategies in linear quadratic dynamic games (Games, 1990). This approach differs from the ALGames (Deng et al., 2009) method, which computes an open-loop Nash equilibrium strategy. ### _Inverse_ Non-cooperative Dynamic Games In contrast to the forward game problem of computing a strategy in dynamic games, an inverse game problem amounts to finding objectives for all agents such that the corresponding strategic (e.g., Nash equilibrium) interactions reproduce expert demonstrations. The inverse game problem is important because it paves the way for an agent to understand the preferences which explain other agents' behavior, which may facilitate more efficient multi-agent interaction and coordination. The problem of inverse infinite-horizon LQ games is considered in (Games, 1990), where the set of cost functions whose feedback Nash equilibrium strategies coincide with an expert strategy is derived. In (Kurz, 2000; Games, 1990), the two-player inverse LQ game is solved by transforming the problem to an inverse optimal control under the assumption that the control input data of one player is known. Two methods based on the KKT conditions of an open-loop Nash equilibrium are proposed for open-loop general-sum differential games in (Kurz, 2000). Several necessary conditions for open-loop Nash equilibria are proposed in (Kurz, 2000) and used for developing an inverse game solution for some classes of open-loop games. Recently, an efficient bilevel optimization framework (Kurz, 2000) based on the open-loop Nash equilibrium KKT conditions was proposed for solving inverse games with an open-loop Nash assumption. Another line of work on inferring costs in open-loop games (Games, 1990; Games, 1990; Games, 1990) proposes to minimize the residual violation of the KKT conditions. This KKT residual framework assumes the knowledge of complete trajectory data and is a convex problem. Given the difficulty of evaluating KKT conditions for feedback Nash equilibria in nonlinear games (Kurz, 2000), the extension of the KKT residual method to feedback nonlinear games may be subject to numerical difficulty. 
A bilevel optimization approach for the inverse feedback game problem is proposed in (Kurz, 2000), with the assumption that both the expert state and control trajectories are observed without noise. In addition, an inverse game solver is proposed in (Kurz, 2000), where the players' cost functions are inferred under the assumption that the expert strategy follows a new concept called the Maximum Entropy Nash Equilibrium. To the best of the authors' knowledge, there is no work on inferring cost functions of nonlinear dynamic games under the feedback Nash equilibrium condition, from noisy partial state observations and incomplete trajectory data.

## 3. Preliminaries

Consider an \(N\)-player, \(T\)-stage, deterministic, discrete-time dynamic game, with a state \(x_{t}^{i}\in\mathbb{R}^{n_{i}}\) and control input \(u_{t}^{i}\in\mathbb{R}^{m_{i}}\) for each player \(i\in[N]:=\{1,\cdots,N\}\), \(t\in[T]\). Let the dimensions of the joint state and control input be \(n:=\sum_{i=1}^{N}n_{i}\) and \(m:=\sum_{i=1}^{N}m_{i}\), respectively. We denote by \(x_{t}:=[x_{t}^{1},\ldots,x_{t}^{N}]\in\mathbb{R}^{n}\) and \(u_{t}:=[u_{t}^{1},\ldots,u_{t}^{N}]\in\mathbb{R}^{m}\) the joint state and joint control at time \(t\in[T]\), respectively. The joint dynamics of the system are given by the differentiable dynamics map \(f_{t}(\cdot,\cdot):\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\):
\[x_{t+1}=f_{t}(x_{t},u_{t}),\qquad\forall\,t=1,\cdots,T. \tag{1}\]
We denote by \(\mathbf{f}:=\{f_{t}\}_{t=1}^{T}\) the set of dynamics across all the time instances within horizon \(T\). We define \(\mathbf{x}=\{x_{t}\}_{t=1}^{T}\) and \(\mathbf{u}=\{u_{t}\}_{t=1}^{T}\) to be a state trajectory and a control trajectory, respectively, if \(x_{t+1}=f_{t}(x_{t},u_{t})\), for each \(t\in[T]\). The objective of each agent \(i\) is to minimize its overall cost, given by the sum of its running costs \(g_{t}^{i}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) over the time horizon:
\[J^{i}(\mathbf{x},\mathbf{u}):=\sum_{t=1}^{T}g_{t}^{i}(x_{t},u_{t}) \tag{2}\]
Define \(g_{t}:=\{g_{t}^{1},g_{t}^{2},\cdots,g_{t}^{N}\}\), \(t\in[T]\). We denote by \(\mathbf{g}:=\{g_{t}\}_{t=1}^{T}\) the set of cost functions for all the agents within horizon \(T\). To minimize (2), each player uses their observations of the environment to design a sequence of control inputs to deploy during the discrete time interval \([T]\). The information available to each player at each time characterizes the _information pattern_ of the dynamic game, and plays a major role in shaping the optimal responses of each player (Games, 1990). Below, we explore two such information patterns--_feedback_ and _open-loop_.

### Nash Solutions in Feedback Strategies

Under the state feedback information pattern, each player observes the state \(x_{t}\) at each time \(t\), and uses this information to design a _feedback strategy_ \(Y_{t}^{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m_{i}}\), given by \(u_{t}^{i}:=Y_{t}^{i}(x_{t})\), for each \(i\in[N]\) and \(t\in[T]\). Let \(Y_{t}(x_{t}):=[Y_{t}^{1}(x_{t}),Y_{t}^{2}(x_{t}),\ldots,Y_{t}^{N}(x_{t})]\in\mathbb{R}^{m}\). Following the notation of (Bang et al., 2017), we denote by \(\Gamma_{t}^{i}\) the set of all state feedback strategies of player \(i\), for each \(i\in[N]\). Under this _feedback_ information pattern, the Nash equilibrium of the dynamic game is as defined below.

Definition 1 (**Feedback Nash Equilibrium (FBNE)** (Bang et al., 2017, Ch.
6)).: _The set of control strategies \(\{Y_{t}^{1*},\cdots,Y_{t}^{N*}\}_{t=1}^{T}\) is called a feedback Nash equilibrium if no player is incentivized to unilaterally alter its strategy. Formally:_ \[W_{t}^{i*}\big(x_{t},[Y_{t}^{1*}(x_{t}),\ldots,Y_{t}^{i*}(x_{t}),\ldots,Y_{t}^{N*}(x_{t})]\big)\leq W_{t}^{i*}\big(x_{t},[Y_{t}^{1*}(x_{t}),\ldots,Y_{t}^{i}(x_{t}),\ldots,Y_{t}^{N*}(x_{t})]\big),\ \forall Y_{t}^{i}\in\Gamma_{t}^{i},\ \forall t\in[T]. \tag{3}\] _where \(W_{t}^{i*}(\cdot,\cdot):\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\), \(t\in[T]\) is the optimal state-action function defined as follows,_ \[W_{T}^{i*}(x_{T},u_{T}) :=g_{T}^{i}(x_{T},u_{T})\] \[W_{t}^{i*}(x_{t},u_{t}) :=g_{t}^{i}(x_{t},u_{t})+V_{t+1}^{i*}(x_{t+1}),\forall t\in[T-1],\] \[V_{t}^{i*}(x_{t}) :=W_{t}^{i*}(x_{t},[Y_{t}^{1*}(x_{t}),\ldots,Y_{t}^{N*}(x_{t})]),\forall t\in[T]. \tag{4}\] We define \(\mathbf{x}\) and \(\mathbf{u}\) to be a FBNE state trajectory and a FBNE control trajectory, respectively, if \(u_{t}^{i}=Y_{t}^{i*}(x_{t})\), for each \(i\in[N]\) and \(t\in[T]\). We denote by \(\xi(\mathbf{f},\mathbf{g})\) the set of all FBNE state trajectories in the game defined by the dynamics \(\mathbf{f}\) and cost functions \(\mathbf{g}\). Remark 1 (Strong Time Consistency).: _The FBNE conditions of (3) implicitly enforce strong time-consistency (Bang et al., 2017, Def. 5.14) of the equilibrium strategies. That is, FBNE does not admit arbitrary feedback strategies, but imposes the additional condition that those strategies must also be in equilibrium for any subgame starting at a later stage from an arbitrary state._

### Nash Solutions in Open-loop Strategies

In contrast, under the open-loop information pattern, each player only observes the initial state \(x_{1}\). In this case, the strategy for each player \(i\in[N]\) is a map from \(x_{1}\) to \(\{u_{1}^{i},u_{2}^{i},\cdots,u_{T}^{i}\}\), which we denote by \(\phi^{i}(\cdot):\mathbb{R}^{n}\rightarrow\underbrace{\mathbb{R}^{m_{i}}\times\cdots\times\mathbb{R}^{m_{i}}}_{T}\). Let \(\Phi^{i}\) be the set of all open-loop strategies of player \(i\), \(i\in[N]\). The corresponding _open-loop Nash equilibrium_ is defined as follows. Definition 2 (**Open-Loop Nash Equilibrium (OLNE)**(Bang et al., 2017, Ch. 6)).: _The tuple of control strategies \(\{\phi^{1*},\cdots,\phi^{N*}\}\) is called an open-loop Nash equilibrium if no player is incentivized to unilaterally alter its sequence of control inputs. Formally:_ \[J^{i}\Big(\mathbf{x},[\phi^{1*}(x_{1}),\cdots,\phi^{i*}(x_{1}),\cdots,\phi^{N*}(x_{1})]\Big)\] \[\leq J^{i}\Big(\mathbf{x},[\phi^{1*}(x_{1}),\cdots,\phi^{i}(x_{1}),\cdots,\phi^{N*}(x_{1})]\Big),\forall\phi^{i}\in\Phi^{i},\forall x_{1}\in\mathbb{R}^{n}. \tag{5}\] Remark 2 ().: _The OLNE definition does not imply strong time consistency, in contrast to the feedback counterpart (Bang et al., 2017)._

### Feedback vs. Open-loop Nash Equilibria

In this subsection, we demonstrate the difference between open-loop and feedback Nash equilibria and show the necessity of developing specific solutions for cost inference problems with the feedback information pattern, instead of applying existing work with the open-loop assumption (Shen et al., 2018). To this end, we introduce below several linear-quadratic (LQ) games where the open-loop Nash equilibrium (OLNE) and feedback Nash equilibrium (FBNE) state trajectories differ substantially.
LQ games are a class of dynamic games with dynamics and player objectives of the form in (6) and (7), respectively, \[x_{t+1}=A_{t}x_{t}+\sum_{i\in[N]}B_{t}^{i}u_{t}^{i},\ \forall t\in[T], \tag{6}\] \[g_{t}^{i}(x_{t},u_{t})=\frac{1}{2}(x_{t}^{\top}Q_{t}^{i}x_{t}+\sum_{j\in[N]}{u_{t}^{j}}^{\top}R_{t}^{j}u_{t}^{j}),\forall t\in[T],\forall i\in[N], \tag{7}\] where the matrices \(\{A_{t},B_{t}^{i}\}\), the positive semidefinite matrices \(Q_{t}^{i}\) and the positive definite matrices \(R_{t}^{j}\) are defined with appropriate dimensions, for each \(i,j\in[N]\) and \(t\in[T]\). **Case Study:** We consider a two-player LQ game with a state vector \(x_{t}=[p_{x,t}^{1},p_{y,t}^{1},p_{x,t}^{2},p_{y,t}^{2}]\), where \(p_{x,t}^{i}\) and \(p_{y,t}^{i}\) are the x- and y-coordinates of agent \(i\in\{1,2\}\), respectively. Let \(u_{t}^{i}\in\mathbb{R}^{2}\) be the control input for the \(i\)-th agent, \(i\in\{1,2\}\). In this setting, we consider a class of games in which the first agent wants to drive the second agent to the origin, while the second agent wants to catch the first agent. The agents' joint dynamics and costs at time \(t\in[T]\) are specified as follows: \[x_{t+1} =\begin{bmatrix}I_{2}&0\\ 0&I_{2}\end{bmatrix}x_{t}+\begin{bmatrix}I_{2}\\ 0\end{bmatrix}u_{t}^{1}+\begin{bmatrix}0\\ I_{2}\end{bmatrix}u_{t}^{2},\] \[g_{t}^{1}(x_{t},u_{t}) =\|p_{x,t}^{2}\|_{2}^{2}+\|p_{y,t}^{2}\|_{2}^{2}+\|u_{t}^{1}\|_{2}^{2},\] \[g_{t}^{2}(x_{t},u_{t}) =\|p_{x,t}^{2}-p_{x,t}^{1}\|_{2}^{2}+\|p_{y,t}^{2}-p_{y,t}^{1}\|_{2}^{2}+\|u_{t}^{2}\|_{2}^{2}, \tag{8}\] where \(I_{2}\) is the \(2\times 2\) identity matrix.

Figure 1. Examples of cost functions that yield trajectories that are different under the OLNE and FBNE assumptions.

We visualize the unique FBNE and OLNE state trajectories of this example in the first row in Fig. 1. If we modify the cost function of the first player such that it wants to lead the \(x\)- and \(y\)-position of the second player to be aligned with each other, i.e., \[\hat{g}_{t}^{1}(x_{t},u_{t}):=\|p_{x,t}^{1}-p_{y,t}^{2}\|_{2}^{2}+\|u_{t}^{1}\|_{2}^{2}, \tag{9}\] then the unique FBNE and OLNE state trajectories are still different, as shown in the second row of Fig. 1. Moreover, observations of players may be noisy in practice. To illustrate this, we consider a task where the two agents want to catch each other, but the first player's observation of the second player's position is inaccurate. We modify the first player's cost in (8) as follows: \[\hat{g}_{t}^{1}(x_{t},u_{t})=\|p_{x,t}^{1}-2p_{x,t}^{2}\|_{2}^{2}+\|p_{y,t}^{1}-2p_{y,t}^{2}\|_{2}^{2}+\|u_{t}^{1}\|_{2}^{2}. \tag{10}\] The third row of Fig. 1 reveals that the FBNE state trajectory is robust to inaccurate observations, but the unique OLNE state trajectory is not. Thus, it is readily apparent that the OLNE and FBNE state trajectories can be substantially different even for fixed cost functions. This difference in expressive power may be understood as a consequence of the strong time consistency property, which is enforced in the feedback information structure but not in the open-loop setting, per Remarks 1 and 2. A similar problem arises in the cost inference problem, where the existing OLNE cost inference algorithms may fail to infer the correct cost function in feedback games.

## 4. Problem Statement

Let \(\mathbf{x}\) be an expert FBNE state trajectory under the nonlinear dynamics \(\mathbf{f}\) but unknown cost functions \(\{g_{t}^{i}\}_{t=1,i=1}^{T,N}\).
Let \(\mathcal{T}\subseteq[T]\) be the set of observed time indices of the trajectory \(\mathbf{x}\). We denote by \(\mathbf{y}_{\mathcal{T}}:=\{y_{t}\}_{t\in\mathcal{T}}\) the observation data of \(\mathbf{x}\), where \(y_{t}\in\mathbb{R}^{\ell}\) is a partial observation of the state, composed of certain coordinates of \(x_{t}\) corrupted by noise. The task is to infer the cost function of each player such that those inferred costs jointly yield a FBNE state trajectory that is as close as possible to the observed trajectory. We parameterize the cost of player \(i\) by a vector \(\theta^{i}\in\mathbb{R}^{d_{i}}\), and let \(\theta=[\theta^{1},\theta^{2},\dots,\theta^{N}]\in\mathbb{R}^{d}\). Denote by \(g_{t,\theta}^{i}(x_{t},u_{t})=\sum_{j=1}^{d_{i}}\theta_{j}^{i}b_{t,j}^{i}(x_{t},u_{t})\) player \(i\)'s parameterized cost at time \(t\in[T]\), for some basis functions \(\{\{b_{t,j}^{i}\}_{j=1}^{d_{i}}\}_{t=1,i=1}^{T,N}\). Define \(\mathbf{g}_{\theta}:=\{g_{t,\theta}^{i}\}_{t=1,i=1}^{T,N}\). Formally, this problem is of the form: \[\begin{array}{ll}\min_{\theta,x_{1},\hat{\mathbf{x}}}&-p(\mathbf{y}_{\mathcal{T}}|\hat{\mathbf{x}})\\ \text{s.t.}&\hat{\mathbf{x}}\in\xi(\mathbf{f},\mathbf{g}_{\theta},x_{1}),\end{array} \tag{11}\] where \(p(\cdot|\cdot)\) is the likelihood function corresponding to a known sensor model and \(\xi(\mathbf{f},\mathbf{g}_{\theta},x_{1})\) represents the set of state trajectories from the initial condition \(x_{1}\in\mathbb{R}^{n}\) following a FBNE strategy, under the cost set \(\mathbf{g}_{\theta}\). Due to the noisy partial observation, \(x_{1}\) is not assumed to be known and instead needs to be inferred as well in (11). Note that the above formulation can also be extended to the cases where multiple partially observed incomplete trajectories from different initial conditions are available. **Running example:** We consider a highway platooning scenario where player 1 wants to guide player 2 to a particular lane of the road. The joint state vector is \(x_{t}=[p_{x,t}^{1},p_{y,t}^{1},\beta_{t}^{1},v_{t}^{1},p_{x,t}^{2},p_{y,t}^{2},\beta_{t}^{2},v_{t}^{2}]\). The time horizon is \(T=40\). The dynamics model for player \(i\) is: \[\begin{bmatrix}p_{x,t+1}^{i}\\ p_{y,t+1}^{i}\\ \beta_{t+1}^{i}\\ v_{t+1}^{i}\end{bmatrix}=\begin{bmatrix}p_{x,t}^{i}\\ p_{y,t}^{i}\\ \beta_{t}^{i}\\ v_{t}^{i}\end{bmatrix}+\Delta T\begin{bmatrix}v_{t}^{i}\cos(\beta_{t}^{i})\\ v_{t}^{i}\sin(\beta_{t}^{i})\\ \alpha_{t}^{i}\\ a_{t}^{i}\end{bmatrix} \tag{12}\] where \(\Delta T\) is a time discretization constant and \(u_{t}^{i}=[\alpha_{t}^{i},a_{t}^{i}]\in\mathbb{R}^{2}\) is the control input for player \(i\in[N]\). Let \(p_{x}^{*}\) be the target lane that player 1 wants to guide player 2 to. We parameterize the cost function of player \(i\) by \(\theta^{i}\in\mathbb{R}^{2}\), \[\begin{array}{l}g_{t,\theta}^{1}(x_{t},u_{t})=\theta_{1}^{1}\|p_{x,t}^{1}\|_{2}^{2}+\theta_{2}^{1}\|p_{x,t}^{2}-p_{x}^{*}\|_{2}^{2}+\|u_{t}^{1}\|_{2}^{2}\\ g_{t,\theta}^{2}(x_{t},u_{t})=\theta_{1}^{2}\|p_{x,t}^{2}-p_{x,t}^{1}\|_{2}^{2}+\theta_{2}^{2}\|v_{t}^{2}-1\|_{2}^{2}+\|u_{t}^{2}\|_{2}^{2},\forall t\in[T].\end{array} \tag{13}\] The ground truth solution is \(\theta^{*}=[0,8,4,4]\). We assume that there is a period of occlusion from the time index \(t=11\) to \(t=19\), and the observed time index set is \(\mathcal{T}=\{1,2,\dots,10,20,21,\dots,40\}\).
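To make the running example concrete, the following minimal numpy sketch implements the per-vehicle dynamics (12) and the parameterized stage costs (13). It is illustrative only: the discretization constant \(\Delta T\) and the sample rollout are placeholders, while the initial condition, target lane, and ground-truth parameters are taken from the experiments reported later.

```python
import numpy as np

# State layout per player i: [p_x, p_y, beta, v]; the joint state stacks player 1 then player 2.
DT = 0.1          # time discretization Delta T (placeholder value, not specified in the text)
P_X_STAR = 0.0    # target lane that player 1 wants to guide player 2 to

def unicycle_step(x_i, u_i, dt=DT):
    """One step of the single-vehicle dynamics (12); u_i = [alpha (yaw rate), a (acceleration)]."""
    px, py, beta, v = x_i
    alpha, a = u_i
    return np.array([px + dt * v * np.cos(beta),
                     py + dt * v * np.sin(beta),
                     beta + dt * alpha,
                     v + dt * a])

def joint_step(x, u, dt=DT):
    """Joint dynamics f_t(x_t, u_t) for the two-vehicle game; x has 8 entries, u has 4."""
    return np.concatenate([unicycle_step(x[:4], u[:2], dt),
                           unicycle_step(x[4:], u[2:], dt)])

def stage_costs(x, u, theta1, theta2, px_star=P_X_STAR):
    """Parameterized stage costs g^1 and g^2 from (13); theta1, theta2 are length-2 vectors."""
    px1, px2, v2 = x[0], x[4], x[7]
    g1 = theta1[0] * px1 ** 2 + theta1[1] * (px2 - px_star) ** 2 + u[:2] @ u[:2]
    g2 = theta2[0] * (px2 - px1) ** 2 + theta2[1] * (v2 - 1.0) ** 2 + u[2:] @ u[2:]
    return g1, g2

# Example evaluation at the initial condition used in the experiments, with theta* = [0, 8, 4, 4]
x1 = np.array([0.0, 0.5, np.pi / 2, 1.0, 1.0, 0.0, np.pi / 2, 1.0])
u1 = np.zeros(4)
print(joint_step(x1, u1))
print(stage_costs(x1, u1, theta1=[0.0, 8.0], theta2=[4.0, 4.0]))
```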
Also, it may be difficult for a human driver to measure other vehicles' velocity accurately, and therefore we assume that partial observation data \(\mathbf{y}_{\mathcal{T}}\) excludes the velocity of both cars in the data set, and is further subject to Gaussian noise of standard deviation \(\sigma\). The initial condition \(x_{1}\) is not known and needs to be inferred. We visualize the ground truth solution in the first subplot of Fig. 2 and the noisy incomplete trajectory data in the second subplot of Fig. 2. The many challenges of the above problem include: (a) partial observation; (b) noisy and incomplete expert trajectory data; and (c) the difficulty of evaluating and differentiating the objective in (11), due to the challenge of computing a FBNE strategy in nonlinear games (Kang et al., 2016). In the following sections, we will characterize the complexity of this inverse feedback game problem and propose an efficient solution. ## 5. Results: From Characterization to Computation In this section, we first characterize the complexity of the inverse feedback game problem (11). In particular, we will show the non-convexity of the loss function and the existence of multiple isolated _global_ minima. Based on this observation, we discuss regularization schemes that can mitigate this issue. Our main contribution is to characterize the differentiability of the inverse feedback game loss function in (11). Finally, we present a gradient approximation scheme that can be used in a first-order optimization formulation. Figure 2. Visualization of the running example. ### Characterization of the Inverse Feedback Dynamic Game Problem The inverse feedback dynamic game problem (11) is a constrained optimization problem, which is hard to solve due to the nonconvexity of the set \(\xi(\mathbf{f},\mathbf{g}_{\theta},x_{1})\). With a slight abuse of notation, we denote by \(\hat{\mathbf{x}}(\mathbf{f},\mathbf{g}_{\theta},x_{1})\in\xi(\mathbf{f}, \mathbf{g}_{\theta},x_{1})\) a FBNE state trajectory. To simplify the problem, we transform (11) to an unconstrained problem by substituting a forward game solution \(\hat{\mathbf{x}}(\mathbf{f},\mathbf{g}_{\theta},x_{1})\) into the likelihood function \(p(\mathbf{y}_{\mathcal{T}}|\hat{\mathbf{x}})\), as follows: \[\hat{L}(\theta,x_{1})\coloneqq-p(\mathbf{y}_{\mathcal{T}}|\hat{\mathbf{x}}( \mathbf{f},\mathbf{g}_{\theta},x_{1})). \tag{14}\] The minimizer of (14) is a local optimum to the original problem (11) and becomes global when \(\xi(\mathbf{f},\mathbf{g}_{\theta},x_{1})\) contains only a single element. Before we dive into the nonlinear setting, let us first consider a simplified LQ case to highlight the main challenges associated with the optimization of this loss. In the LQ case, the _evaluation_ of the loss (14) is straightforward if there exists a closed-form expression for \(p(\mathbf{y}_{\mathcal{T}}|\hat{\mathbf{x}})\), e.g., under a Gaussian observation model. Even in that setting, however, it is important to realize that the problem remains nonconvex, as shown in Fig. 3. The following proposition makes this challenge explicit, and the proof can be found in the Appendix. Proposition 1 ().: _There exists an inverse LQ game problem (11): (a) whose global minima are isolated, and (b) for which there exist multiple cost functions that exactly match expert data from any initial condition, when there is no observation noise._ Remark 3 ().: _Proposition 1 does not imply that any inverse LQ game problem will suffer from the multiple global minima issue. 
Instead, Proposition 1 suggests that simply normalizing the cost vector does not rule out the possibility of having multiple global solutions. That is, there exist two cost parameter vectors which are linearly independent, but generate the same FBNE state trajectories for any given initial state. This non-injective mapping from the cost parameter space to the FBNE state trajectory space is a fundamental problem in inverse feedback games, and is not particular to the formulation (11). In practice, this multiple global minima issue could be mitigated by adding \(L_{2}\) regularization, as visualized in Fig. 3._ Though nonconvex, the loss function \(\hat{L}(\theta,x_{1})\) is differentiable with respect to both \(\theta\) and \(x_{1}\) under the condition of Theorem 3.2 in (Kang and Kemp, 2017), which follows from the implicit function theorem (Kemp, 2017). Inspired by the success of gradient-based methods in non-convex optimization with differentiable objective functions (Beng et al., 2016; Wang et al., 2017; Wang et al., 2017), one natural idea is to apply gradient descent to minimize \(\hat{L}(\theta,x_{1})\). In what follows, we discuss efficient ways to evaluate and differentiate \(\hat{L}(\theta,x_{1})\) in nonlinear games.

### Efficient Computation for a FBNE State Trajectory in Nonlinear Games

It is easy to evaluate \(\hat{L}(\theta,x_{1})\) for LQ games, but when dynamics are nonlinear or objectives are non-quadratic, the problem becomes more challenging (Kemp, 2017). In forward games, this challenge can be addressed by using the ILQGames algorithm (Deng et al., 2016), which finds approximate local FBNE solutions in smooth non-LQ dynamic games. Given the effectiveness of this approximation scheme in those domains, we also adopt it as a submodule for evaluating the loss \(\hat{L}(\theta,x_{1})\). Akin to the ILQR method (Deng et al., 2016; Wang et al., 2017), in each step of the ILQGames algorithm, the system dynamics \(x_{t+1}=f_{t}(x_{t},u_{t})\) and the costs \(\{g_{t}^{i}(x,u)\}_{t=1,i=1}^{T,N}\) are linearized and quadraticized, respectively, around a state trajectory \(\mathbf{x}\) and a control trajectory \(\mathbf{u}\). A FBNE strategy for each player of the derived LQ game is then used to update the state and control trajectories. This iteration continues until a convergence criterion is satisfied. To be more specific, we approximate \(\hat{L}(\theta,x_{1})\) by a new loss function \(\tilde{L}(\theta,x_{1})\) defined as, \[\hat{L}(\theta,x_{1})\approx\tilde{L}(\theta,x_{1})\coloneqq-p(\mathbf{y}_{\mathcal{T}}|\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})) \tag{15}\] where \(\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})\) represents a FBNE state trajectory from initial condition \(x_{1}\), for the LQ game defined by the linearized dynamics \(\tilde{\mathbf{f}}_{\theta}\) and the quadraticized costs \(\tilde{\mathbf{g}}_{\theta}\coloneqq\{\tilde{g}_{t,\theta}^{i}\}_{t=1,i=1}^{T,N}\), evaluated at the converged solution returned by the ILQGames solver. Note that the linearized dynamics \(\tilde{\mathbf{f}}_{\theta}\) depend upon \(\theta\) via the state trajectory about which \(\mathbf{f}\) is linearized; this trajectory is simulated under the feedback policy returned by ILQGames, where the policy depends upon costs \(\mathbf{g}_{\theta}\).
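To illustrate the linearize-and-quadraticize step that underlies this LQ approximation, the sketch below computes the Jacobians of a dynamics map and the gradient/Hessian blocks of a stage cost around a nominal point with central finite differences. This is a generic numerical stand-in for the step described above, not the ILQGames implementation; the toy dynamics and cost at the end are arbitrary examples.

```python
import numpy as np

def finite_diff_jacobian(fun, z, eps=1e-6):
    """Jacobian of fun at z via central differences."""
    z = np.asarray(z, dtype=float)
    f0 = np.asarray(fun(z))
    J = np.zeros((f0.size, z.size))
    for k in range(z.size):
        dz = np.zeros_like(z)
        dz[k] = eps
        J[:, k] = (np.asarray(fun(z + dz)) - np.asarray(fun(z - dz))) / (2 * eps)
    return J

def linearize_dynamics(f, x_bar, u_bar):
    """A_t = df/dx and B_t = df/du evaluated at the nominal point (x_bar, u_bar)."""
    A = finite_diff_jacobian(lambda x: f(x, u_bar), x_bar)
    B = finite_diff_jacobian(lambda u: f(x_bar, u), u_bar)
    return A, B

def quadraticize_cost(g, x_bar, u_bar):
    """Gradient and Hessian blocks of a scalar stage cost g(x, u) (cross terms omitted here)."""
    z_bar = np.concatenate([x_bar, u_bar])
    n = x_bar.size
    grad = finite_diff_jacobian(lambda z: np.array([g(z[:n], z[n:])]), z_bar).ravel()
    H = finite_diff_jacobian(
        lambda z: finite_diff_jacobian(lambda w: np.array([g(w[:n], w[n:])]), z).ravel(),
        z_bar)
    return grad[:n], grad[n:], H[:n, :n], H[n:, n:]   # l_x, l_u, Q_xx, R_uu

# Tiny demo on an arbitrary nonlinear system and a quadratic tracking cost
f = lambda x, u: np.array([x[0] + 0.1 * x[1] * np.cos(u[0]), x[1] + 0.1 * u[1]])
g = lambda x, u: (x[0] - 1.0) ** 2 + 0.5 * (u @ u)
A, B = linearize_dynamics(f, np.array([0.0, 1.0]), np.array([0.0, 0.0]))
print(A, B, quadraticize_cost(g, np.array([0.0, 1.0]), np.array([0.0, 0.0])), sep="\n")
```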
### Differentiating the Loss in the Inverse Feedback Game Problem

The challenge of computing a feedback Nash equilibrium strategy not only makes the evaluation of the loss function \(\hat{L}(\theta,x_{1})\) hard, but also renders differentiation difficult. In this work, we approximate the gradient of \(\hat{L}(\theta,x_{1})\) using a similar idea as the ILQGames algorithm in the previous section. In other words, we propose to use the LQ approximation of the nonlinear game specified by \(\tilde{\mathbf{f}}_{\theta}\) and \(\tilde{\mathbf{g}}_{\theta}\) to derive an approximation to the gradient of \(\hat{L}(\theta,x_{1})\). Note that \(\tilde{g}_{t,\theta}^{i}(x,u)=\sum_{j=1}^{d_{i}}\theta_{j}^{i}\tilde{b}_{t,j,\theta}^{i}(x,u)\), where \(\tilde{b}_{t,j,\theta}^{i}(x,u):\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\) is the \(j\)-th quadraticized cost basis function. By the chain rule, we have \[\frac{\partial\tilde{L}(\theta,x_{1})}{\partial\theta_{j}^{i}} =-\nabla_{\mathbf{x}}p(\mathbf{y}_{\mathcal{T}}|\mathbf{x})\Big|_{\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})}\cdot\frac{\partial\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})}{\partial\theta_{j}^{i}},\] \[\frac{\partial\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})}{\partial\theta_{j}^{i}} =\Big(\nabla_{\tilde{\mathbf{f}}_{\theta}}\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})\frac{\partial\tilde{\mathbf{f}}_{\theta}}{\partial\theta_{j}^{i}}+\nabla_{\tilde{\mathbf{g}}_{\theta}}\mathbf{x}(\tilde{\mathbf{f}}_{\theta},\tilde{\mathbf{g}}_{\theta},x_{1})\frac{\partial\tilde{\mathbf{g}}_{\theta}}{\partial\theta_{j}^{i}}\Big).\]

Figure 3. Visualization of the loss function \(L(\theta,x_{1})\) of the LQ game specified in (16) and (17), and its \(L_{2}\) regularization, with an initial condition \(x_{1}=1\). We adopt a Gaussian likelihood function. The yellow hyperplane is drawn according to \(2Q^{1}+Q^{2}=3\). With \(L_{2}\) regularization, the number of global minima is reduced.

The complexity of differentiating \(\tilde{L}(\theta,x_{1})\) comes from the fact that the linearized dynamics and the quadraticized costs are functions of \(\theta\) implicitly, which makes the total derivative hard to compute. We propose to approximate the above gradient by treating the linearized dynamics \(\tilde{\mathbf{f}}_{\theta}\) and each quadraticized cost basis function \(\tilde{b}^{i}_{t,j,\theta}\) as constants with respect to \(\theta\), denoted by \(\tilde{\mathbf{f}}\) and \(\tilde{b}^{i}_{t,j}\), and only compute the partial derivative with respect to \(\theta\), rather than the total derivative: \[\frac{\partial\tilde{L}(\theta,x_{1})}{\partial\theta_{j}^{i}}\simeq-\nabla_{\mathbf{x}}p(\mathbf{y}_{\mathcal{T}}|\mathbf{x})\Big|_{\mathbf{x}(\tilde{\mathbf{f}},\tilde{\mathbf{g}},x_{1})}\cdot\frac{\partial\mathbf{x}\big(\tilde{\mathbf{f}},\{\sum_{j=1}^{d_{i}}\theta_{j}^{i}\tilde{b}^{i}_{t,j}\}_{t=1,i=1}^{T,N},x_{1}\big)}{\partial\theta_{j}^{i}}.\] This is based on the observation that at the convergence of the forward ILQGames solver, the linearized dynamics are a good approximation of the full nonlinear dynamics \(\mathbf{f}\), so long as the perturbation of the cost parameter remains sufficiently small. We adopt a similar approximation for the gradient \(\nabla_{x_{1}}\tilde{L}(\theta,x_{1})\) by fixing the linearized dynamics and quadraticized costs and obtaining the partial derivative with respect to \(x_{1}\).
In summary, we approximate \(\nabla\hat{L}(\theta,x_{1})\) by \(\nabla\tilde{L}(\theta,x_{1})\). In practice, \(\nabla\tilde{L}(\theta,x_{1})\) can be efficiently computed by automatic differentiation (Zhao et al., 2017, Ch. 8). As exemplified in Fig. 4, the proposed gradient approximation is virtually always a descent direction and therefore aligns well with the true gradient of \(\hat{L}(\theta,x_{1})\).

### An Inverse Feedback Game Solver

In this subsection, we present a solver for the inverse feedback game problem (11). In what follows, we first discuss how the three challenges mentioned in Section 4 are handled in our solver. We then introduce the proposed solver in Algorithm 1. The first two challenges on noisy partial observation and incomplete trajectory data are handled by maintaining an estimate of the full initial condition and a noise-free state-input trajectory. As shown in Section 6, this procedure of joint reconstruction and filtering enables our solver to reliably recover player costs even in scenarios of substantial partial observability. The third difficulty of evaluating and differentiating the objective function in the inverse _feedback_ game problem is mitigated by the efficient approximation outlined in Section 5.3. To jointly infer the initial condition, the cost and the state-input trajectory, we adopt the coordinate gradient descent method, where gradient descent steps are first taken over the initial condition \(\hat{x}_{1}\) and then over the cost parameter. We update the estimate of the noise-free full state-input trajectory by computing a FBNE state trajectory from the inferred initial condition and the cost. ``` Data: Horizon \(T>0\), initial solution \(\theta^{(0)}\in\mathbb{R}^{d}\), observed time index set \(\mathcal{T}\subseteq[T]\), observation data \(\mathbf{y}_{\mathcal{T}}\), max iteration number \(K\), tolerance \(\epsilon\). Result: Inferred cost parameter \(\hat{\theta}\) and \(\hat{x}_{1}\) 1for\(k=0,1,\ldots,K\)do 2\(\big(\hat{\mathbf{x}}^{(k)},\{\tilde{b}_{t,j,\theta^{(k)}}^{i}\},\tilde{\mathbf{f}}_{\theta^{(k)}},\tilde{\mathbf{g}}_{\theta^{(k)}}\big)\leftarrow\text{ILQGames}(\mathbf{f},\mathbf{g}_{\theta^{(k)}},x_{1}^{(k)})\) 3\(\nabla_{x_{1}}\hat{L}(\theta^{(k)},x_{1}^{(k)})\leftarrow\) evaluated using \(\tilde{\mathbf{f}}_{\theta^{(k)}}\) and \(\tilde{\mathbf{g}}_{\theta^{(k)}}\) via Gradient Approximation in Section 5.3 4\(x_{1}^{(k+1)}\leftarrow x_{1}^{(k)}-\eta\nabla_{x_{1}}\hat{L}(\theta^{(k)},x_{1}^{(k)})\) with line search over \(\eta\) 5\(\big(\hat{\mathbf{x}}^{(k)},\{\tilde{b}_{t,j,\theta^{(k)}}^{i}\},\tilde{\mathbf{f}}_{\theta^{(k)}},\tilde{\mathbf{g}}_{\theta^{(k)}}\big)\leftarrow\text{ILQGames}(\mathbf{f},\mathbf{g}_{\theta^{(k)}},x_{1}^{(k+1)})\) 6\(\nabla_{\theta}\hat{L}(\theta^{(k)},x_{1}^{(k+1)})\leftarrow\) evaluated using \(\tilde{\mathbf{f}}_{\theta^{(k)}}\) and \(\tilde{\mathbf{g}}_{\theta^{(k)}}\) via Gradient Approximation in Section 5.3 7\(\theta^{(k+1)}\leftarrow\theta^{(k)}-\eta^{\prime}\nabla_{\theta}\hat{L}(\theta^{(k)},x_{1}^{(k+1)})\) with line search over \(\eta^{\prime}\) 8Return \((\theta^{(k+1)},x_{1}^{(k+1)})\) if \(\|\theta^{(k)}-\theta^{(k-1)}\|_{2}\leq\epsilon\), or Return \((\theta^{(k^{\prime})},x_{1}^{(k^{\prime})})\), where \(k^{\prime}\leftarrow\operatorname*{argmin}_{k}\tilde{L}(\theta^{(k)},x_{1}^{(k)})\), if the iteration number \(k\) reaches \(K\). 9 end for ``` **Algorithm 1** Inverse Iterative LQ (\(\mathbf{i}^{2}\)LQ) Games We summarize our proposed solver in Algorithm 1.
At the \(k\)-th iteration, we first compute an approximate FBNE state trajectory \(\tilde{x}^{(k)}\) and the associated LQ approximation via the ILQGames algorithm of (Friedman and Boyd, 2016). Using this LQ approximation, we estimate \(\nabla_{x_{1}}\hat{L}(\theta,x_{1}^{(k)})\) using the procedure outlined in Section 5.3. We then update the initial condition \(x_{1}^{(k)}\) by a step of gradient descent, where the stepsize is chosen by a suitable linesearch technique (Zhao et al., 2017, Ch. 3) such that the loss \(\hat{L}(\theta,x_{1})\) is sufficiently decreased. Given the updated initial condition \(x_{1}^{(k+1)}\), we find a new approximate FBNE state trajectory via the ILQGames algorithm again, which is then used to estimate \(\nabla_{\theta}\hat{L}(\theta^{(k)},x_{1}^{(k+1)})\) via the procedure in Section 5.3. With this gradient, we update \(\theta^{(k)}\) by one step of gradient descent with linesearch. We repeat this procedure until, at convergence, we find a locally optimal solution \((\hat{\theta},\hat{x}_{1})\). ## 6. Experiments In this section, we adopt the open-loop solution method of (Friedman and Boyd, 2016) as the baseline method and compare it to Algorithm 1. In particular, we evaluate Algorithm 1 in several Monte Carlo studies which aim to justify the following claims. * The proposed gradient approximation often aligns with a descent direction in the loss function. * Algorithm 1 is more robust than the open-loop baseline method (Friedman and Boyd, 2016) with respect to noise in, and incomplete observations of, the expert demonstration trajectory. * The cost functions inferred by Algorithm 1 can be generalized to predict trajectories from unseen initial conditions. * Algorithm 1 can infer nonconvex costs in nonlinear games. ### Gradient Approximation Quality We continue the 2-vehicle platooning example defined in (12) and (13). We measure the performance of Algorithm 1 in two settings, incomplete expert trajectory data with noisy partial state observation, and complete expert trajectory data with noisy full observation. In the first case, each player's partial observation only contains its x-position, y-position and heading angle. The time index set of the incomplete trajectory is \(\mathcal{T}=[T]\setminus\{11,12,\ldots,19\}\). In the second case, the expert data includes the noisy observation of all the states of both players at all \(t\in[T]\). The ground truth expert state trajectory follows a FBNE strategy from the initial condition \(x_{1}=[0,0.5,\frac{\pi}{2},1,1,0,\frac{\pi}{2},1]\) and the target lane is \(p_{x}^{*}=0.0\). At each variance level \(\sigma\in\{0.004,0.008,\ldots,0.04\}\), we generate 10 noisy observations of the ground truth expert trajectory, with isotropic zero-mean Gaussian noise. For each noisy expert data set \(y_{\mathcal{T}}\), we minimize the negative log-likelihood objective in (11), i.e., \(\sum_{t\in\mathcal{T}}\|y_{t}-h(x_{t})\|_{2}^{2}\), where \(h(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}^{\ell}\) maps a state \(x_{t}\) to its partial observation. As shown in Fig. 4, the loss decreases monotonically on the average. This indicates that the gradient approximation proposed in Section 5.3 provides a reliable descent direction. The inverse feedback game problem becomes challenging when there is only partial state observation and incomplete trajectory data, and the quality of inferred costs may degrade when the observation noise is high. 
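As a concrete illustration of the observation model used in this experiment, the sketch below generates noisy partial observations \(\mathbf{y}_{\mathcal{T}}\) (dropping both velocities and masking the occluded indices \(t=11,\ldots,19\)) and evaluates the negative log-likelihood objective \(\sum_{t\in\mathcal{T}}\|y_{t}-h(x_{t})\|_{2}^{2}\). The "expert" trajectory here is a random placeholder; in the actual experiments it is a FBNE state trajectory of the game.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 40
OBSERVED = [t for t in range(1, T + 1) if not (11 <= t <= 19)]   # occlusion from t = 11 to t = 19

def h(x):
    """Partial observation: keep positions and heading angles, drop the velocities (indices 3, 7)."""
    return x[[0, 1, 2, 4, 5, 6]]

def make_observations(traj, sigma):
    """Noisy partial observations y_t = h(x_t) + Gaussian noise, on observed time indices only."""
    return {t: h(traj[t - 1]) + sigma * rng.standard_normal(6) for t in OBSERVED}

def negative_log_likelihood(traj_hat, y_obs):
    """Sum of squared residuals over observed indices (Gaussian likelihood up to constants)."""
    return sum(np.sum((y_obs[t] - h(traj_hat[t - 1])) ** 2) for t in y_obs)

# Placeholder "expert" trajectory of shape (T, 8); not data from the paper
expert = np.cumsum(0.05 * rng.standard_normal((T, 8)), axis=0)
y = make_observations(expert, sigma=0.01)
print(negative_log_likelihood(expert, y))   # small residual, at the level of the injected noise
```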
### Robustness, Generalization and the Ability to Infer Nonconvex Costs

We continue the previous 2-vehicle example and compare Algorithm 1 and the baseline in a Monte Carlo study, where we infer the costs under 10 different levels of Gaussian noise with increasing variance. In particular, we evaluate three metrics in Fig. 5: (a) the distance between the noisy expert data and the FBNE state trajectory which results from players' inferred costs; (b) the distance between the computed FBNE state trajectory (under the players' inferred costs) and the ground truth expert data. An example of such a comparison is shown in Fig. 6. Finally, we evaluate (c) the distance between the inferred FBNE state trajectories and the FBNE state trajectory under the ground truth costs for some randomly sampled initial conditions, which is also visualized in Fig. 7. Collectively, the results demonstrate that _Algorithm 1 has better robustness and generalization performance than the open-loop baseline when the expert data follows the FBNE assumption._ To show that Algorithm 1 can infer nonconvex cost functions, we extend the previous 2-vehicle platooning example and assume that the 2-vehicle team encounters a third vehicle and the follower wants to stay close to the leader without colliding with the third vehicle. We model this scenario as a 3-vehicle game with a 12-dimensional state space and a horizon \(T=30\). The dynamics for each vehicle are the same as (12) and the costs are as follows, \[\begin{split} g_{t,\theta}^{1}(x_{t},u_{t})=&\theta_{1}^{1}\|p_{x,t}^{1}\|_{2}^{2}+\theta_{2}^{1}\|p_{x,t}^{2}-p_{x}^{*}\|_{2}^{2}+\|v_{t}^{1}-2\|_{2}^{2}\\ &+\|\beta_{t}^{1}-\frac{\pi}{2}\|_{2}^{2}+\|u_{t}^{1}\|_{2}^{2}\\ g_{t,\theta}^{2}(x_{t},u_{t})=&\theta_{1}^{2}\|p_{x,t}^{2}\|_{2}^{2}+\|\beta_{t}^{2}-\frac{\pi}{2}\|_{2}^{2}+\theta_{2}^{2}\|p_{x,t}^{2}-p_{x,t}^{1}\|_{2}^{2}+\|v_{t}^{2}-2\|_{2}^{2}\\ &-\frac{1}{2}\log(\|p_{x,t}^{2}-p_{x,t}^{3}\|_{2}^{2}+\|p_{y,t}^{2}-p_{y,t}^{3}\|_{2}^{2})+\|u_{t}^{2}\|_{2}^{2}\\ g_{t,\theta}^{3}(x_{t},u_{t})=&\theta_{1}^{3}\|p_{x,t}^{3}-\frac{1}{2}\|_{2}^{2}+\|u_{t}^{3}\|_{2}^{2}\end{split}\] where the ground truth \(\theta^{*}\in\mathbb{R}^{5}\) is \([0,4,0,4,2]\). The ground truth expert state trajectory follows a FBNE strategy from the initial condition \(x_{1}=[0,1,\frac{\pi}{2},2,0.3,0,\frac{\pi}{2},2,0.5,0.5,\frac{\pi}{2},2]\), where the last four elements encode the state of the third vehicle. The target lane in the expert data is \(p_{x}^{*}=0.2\). Similar to the 2-vehicle experiment, we consider two settings, incomplete trajectory data with partial state observation and complete trajectory data with full state observation. The partial state observation includes all the states of each vehicle except for the velocity of all the vehicles, and the time index set of the incomplete trajectory is \(\mathcal{T}=[T]\setminus\{11,12,\ldots,19\}\). The nonconvex cost of player 2 causes numerical problems in the baseline KKT OLNE solver.

## 7. Conclusion

In this work, we propose an efficient cost inference algorithm for inverse feedback nonlinear games, with only partial state observation and incomplete trajectory data. Empirical results show that the proposed solver converges reliably for inverse games with non-convex costs and has superior generalization performance to a state-of-the-art open-loop baseline method when the expert demonstration reflects a group of agents acting in a dynamic feedback game. There are many future directions.
We can investigate under what conditions the cost can be inferred exactly in feedback games. Active and online inference are also promising directions. In addition, we are eager to extend this work to settings of closed-loop interaction. In such an extension, rather than merely inferring the objectives of observed players, this information would be used to guide the decision-making of an autonomous agent in that scene.

## Appendix

Proof of Proposition 1.: Proposition 1 claims that there exists an inverse LQ game which has isolated global minima and whose induced FBNE state trajectories match the expert demonstration. Here, we show such a counterexample, which supports the claim. Consider a \(2\)-player horizon-\(3\) LQ game with the linear dynamics \[x_{t+1}=x_{t}+u_{t}^{1}+u_{t}^{2},\;\;t\in\{1,2,3\}, \tag{16}\] and the cost \[g_{t}^{1}(x_{t},u_{t}) =\frac{1}{2}(Q^{1}\|x_{t}\|_{2}^{2}+\|u_{t}^{1}\|_{2}^{2}),\;\;t\in\{1,2\},\] \[g_{t}^{2}(x_{t},u_{t}) =\frac{1}{2}(Q^{2}\|x_{t}\|_{2}^{2}+2\|u_{t}^{2}\|_{2}^{2}),\;\;t\in\{1,2\},\] \[g_{3}^{1}(x_{3},u_{3}) =\frac{1}{2}Q^{1}\|x_{3}\|_{2}^{2},\;g_{3}^{2}(x_{3},u_{3})=\frac{1}{2}Q^{2}\|x_{3}\|_{2}^{2}. \tag{17}\] We assume that the ground truth solution is \(Q^{1}=1\), \(Q^{2}=1\). We will show that there is also one extra solution, \(\hat{Q}^{1}=\frac{1}{2}\) and \(\hat{Q}^{2}=2\), which yields the same FBNE state trajectory as the ground truth for any initial condition. We follow the same definition of the variables \(\{Z_{t}^{i}\}_{t=1,i=1}^{3,2}\) as in (Brandt et al., 2016). By definition, we have \(Z_{t}^{i}\geq Q^{i}>0\), when \(Q^{1}\in\mathbb{R}_{+}\) and \(Q^{2}\in\mathbb{R}_{+}\). Following the notation of the FBNE condition in Corollary 6.1 of (Brandt et al., 2016), we consider the feedback matrices \(\{P_{t}^{i}\}_{t=1,i=1}^{2,2}\), \[\begin{bmatrix}P_{t}^{1}\\ P_{t}^{2}\end{bmatrix}=G_{t}^{-1}\begin{bmatrix}Z_{t+1}^{1}\\ Z_{t+1}^{2}\end{bmatrix},\quad G_{t}:=\begin{bmatrix}1+Z_{t+1}^{1}&Z_{t+1}^{1}\\ Z_{t+1}^{2}&2+Z_{t+1}^{2}\end{bmatrix},\;\;\forall t\in\{1,2\}, \tag{18}\] where the matrix \(G_{t}\) is invertible because \(\det(G_{t})=2+Z_{t+1}^{2}+2Z_{t+1}^{1}>0\). The above analysis suggests that the FBNE state trajectory for all \(Q^{1}>0\) and \(Q^{2}>0\) is uniquely determined. We consider the time instant \(t=2\), and observe \[\begin{bmatrix}P_{2}^{1}\\ P_{2}^{2}\end{bmatrix}=\begin{bmatrix}1+Q^{1}&Q^{1}\\ Q^{2}&2+Q^{2}\end{bmatrix}^{-1}\begin{bmatrix}Q^{1}\\ Q^{2}\end{bmatrix}=\frac{1}{2+2Q^{1}+Q^{2}}\begin{bmatrix}2Q^{1}\\ Q^{2}\end{bmatrix}. \tag{19}\] We then have the closed-loop dynamics \(x_{3}=(1-P_{2}^{1}-P_{2}^{2})x_{2}=\frac{2}{2+2Q^{1}+Q^{2}}x_{2}\), which yields that for two pairs of positive variables \((Q^{1},Q^{2})\) and \((\hat{Q}^{1},\hat{Q}^{2})\), a necessary condition for them to have the same FBNE trajectory is that \(2Q^{1}+Q^{2}=2\hat{Q}^{1}+\hat{Q}^{2}\). We have \(Z_{2}^{1}=Q^{1}+\frac{Q^{1}+(2Q^{1})^{2}}{(2+2Q^{1}+Q^{2})^{2}}\), \(Z_{2}^{2}=Q^{2}+\frac{Q^{2}+2(Q^{2})^{2}}{(2+2Q^{1}+Q^{2})^{2}}\). Similarly, for the time instant \(t=1\), we have \(x_{2}=(1-P_{1}^{1}-P_{1}^{2})x_{1}=\frac{2}{2+2Z_{2}^{1}+Z_{2}^{2}}x_{1}\).
A necessary condition for \((\hat{Q}^{1},\hat{Q}^{2})\) to have the same FBNE state trajectory as \((Q^{1},Q^{2})\) is that the following 2 equations are satisfied, \[2Q^{1}+Q^{2}=2\hat{Q}^{1}+\hat{Q}^{2}\] \[2\big(Q^{1}+\frac{Q^{1}+(2Q^{1})^{2}}{(2+2Q^{1}+Q^{2})^{2}}\big)+Q^{2}+\frac{Q^{2}+2(Q^{2})^{2}}{(2+2Q^{1}+Q^{2})^{2}}\] \[=2\big(\hat{Q}^{1}+\frac{\hat{Q}^{1}+(2\hat{Q}^{1})^{2}}{(2+2\hat{Q}^{1}+\hat{Q}^{2})^{2}}\big)+\hat{Q}^{2}+\frac{\hat{Q}^{2}+2(\hat{Q}^{2})^{2}}{(2+2\hat{Q}^{1}+\hat{Q}^{2})^{2}}. \tag{20}\] We substitute \(Q^{1}=1\), \(Q^{2}=1\) and \(\hat{Q}^{2}=3-2\hat{Q}^{1}\) into the second equation of (20), which reduces to a degree-2 polynomial in \(\hat{Q}^{1}\). By the fundamental theorem of algebra (Brandt et al., 2016), there exist at most 2 solutions for \(\hat{Q}^{1}\). The two pairs of \((\hat{Q}^{1},\hat{Q}^{2})\) satisfying (20) are \((1,1)\) and \(\big(\frac{1}{2},2\big)\). The two global minima are isolated. Since the dimension of the state \(x_{t}\) is \(1\), for all initial states \(x_{1}\in\mathbb{R}\), the FBNE state trajectories under the costs specified by the two pairs of cost parameters \((1,1)\) and \((\frac{1}{2},2)\) coincide with each other.

Figure 7. Generalization performance comparison. \(p_{x}^{*}\) is the target lane position that player 1 wants to guide player 2 toward. All the costs are inferred from partial observations and incomplete trajectory data, with a different noise variance specified in each subplot. The trajectories predicted by Algorithm 1 are closer to the ground truth than the baseline.

Figure 8. 3-vehicle platooning scenario. The bold lines and shaded areas represent the mean values and their standard error, i.e., the standard deviation divided by the square root of the sample size, respectively. As the noise variance grows, the converged loss value increases on average, as shown in the red curves. However, Algorithm 1 is still able to learn a more accurate cost and has less generalization error than the baseline, as shown in the blue and yellow curves, respectively.
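The counterexample above can also be checked numerically. The sketch below runs a standard scalar coupled-Riccati backward recursion for the feedback Nash gains of the game (16)–(17), written here directly from the value-function definition (so the intermediate quantities need not match the exact \(Z_{t}^{i}\) expressions above), and confirms that \((Q^{1},Q^{2})=(1,1)\) and \((\frac{1}{2},2)\) yield identical closed-loop FBNE state trajectories from any initial state.

```python
import numpy as np

def fbne_closed_loop_gains(Q1, Q2, T=3, r1=1.0, r2=2.0):
    """Feedback Nash gains for x_{t+1} = x_t + u^1 + u^2 with costs (1/2)(Q^i x^2 + r^i (u^i)^2)."""
    Z1, Z2 = Q1, Q2                      # terminal value-function kernels, Z_T^i = Q^i
    F = []                               # closed-loop multipliers 1 - P^1_t - P^2_t, for t = T-1, ..., 1
    for _ in range(T - 1):
        # Coupled stationarity conditions (cf. (18)): G_t [P^1_t; P^2_t] = [Z^1_{t+1}; Z^2_{t+1}]
        G = np.array([[r1 + Z1, Z1],
                      [Z2, r2 + Z2]])
        P1, P2 = np.linalg.solve(G, np.array([Z1, Z2]))
        f = 1.0 - P1 - P2
        # Value-function recursion: Z_t^i = Q^i + r^i (P^i_t)^2 + Z_{t+1}^i f^2
        Z1 = Q1 + r1 * P1 ** 2 + Z1 * f ** 2
        Z2 = Q2 + r2 * P2 ** 2 + Z2 * f ** 2
        F.append(f)
    return F[::-1]                       # multipliers ordered t = 1, ..., T-1

for pair in [(1.0, 1.0), (0.5, 2.0)]:
    print(pair, fbne_closed_loop_gains(*pair))
# Both pairs print the same closed-loop multipliers, hence identical FBNE state trajectories
# x_{t+1} = f_t x_t from any initial state x_1, as claimed in the proof.
```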
2306.05159
A model to explain the Q-increase by moderate-temperature treatment in Nb SRF cavities
It is well known that moderate temperature baking of niobium cavities can improve the surface resistance. Presently, it is believed that the diffusion of oxygen into the bulk, resulting in interstitial defects, is responsible for the change. In this note we propose that the damaged surface layer remaining after dissolution of the thin niobium pentoxide may in fact be the dominant contributor to the improved cavity quality factor by strongly pinning trapped flux lines. We propose some experiments to test this theory.
Yegor Tamashevich, Alena Prudnikava, Jens Knobloch
2023-06-08T12:45:56Z
http://arxiv.org/abs/2306.05159v1
# A model to explain the Q-increase by moderate-temperature treatment in Nb SRF cavities

###### Abstract

It is well known that moderate temperature baking of niobium cavities can improve the surface resistance. Presently, it is believed that the diffusion of oxygen into the bulk, resulting in interstitial defects, is responsible for the change. In this note we propose that the damaged surface layer remaining after dissolution of the thin niobium pentoxide may in fact be the dominant contributor to the improved cavity quality factor by strongly pinning trapped flux lines. We propose some experiments to test this theory.

## 1 Impact of moderate-temperature heat treatment on the RF surface

Currently, the increase of the intrinsic quality factor \(Q_{0}\) of Nb SRF cavities after heat treatment at moderate temperatures (120-400 \({}^{\circ}\)C) is attributed to a specific depth profile of oxygen interstitials in the niobium lattice. During the heat treatment, the niobium pentoxide layer "dissolves", and the released oxygen atoms diffuse into the bulk, forming an "oxygen-rich" layer. The oxygen interstitials are basically lattice defects and are believed to increase \(Q_{0}\) through a number of potential mechanisms: a decrease of the mean free path of the normal-conducting electrons, thus decreasing the BCS resistance \(R_{\text{BCS}}\); enhanced flux pinning to reduce the residual resistance \(R_{\text{res}}\); or by acting as hydrogen trapping centers to avoid hydride precipitation. Most of the recent studies measure and describe this profile of oxygen concentration and look for an optimal treatment procedure (temperature, time etc.) to tailor this profile. However, we believe an additional mechanism plays a role. The distorted crystal lattice at the surface of niobium left after the dissolution of the niobium pentoxide during heat treatment may in fact dominate the Q-increase. A qualitative explanation is shown in Figure 1. _Figure 1. Schematic of the RF surface in superconducting Nb cavities before and after mid-T heat treatment. (Note: lengths are not to scale; the RF layer is much deeper than the pentoxide thickness)._ In the top picture, the concentration of oxygen in the near-surface layer of an unbaked cavity is shown. It consists of a natural niobium pentoxide (Nb\({}_{2}\)O\({}_{5}\)), some Nb oxides of lower oxidation states (shown simplified, see [1] for details), oxygen interstitials and bulk niobium underneath. The pentoxide layer, being dielectric, does not contribute to rf losses. Thus, most of the superconducting current flows below the pentoxide in the layer limited to ca. three times the London penetration depth. Assuming a standard 800 \({}^{\circ}\)C annealed cavity, this layer consists of almost perfect niobium with some amount of oxygen atoms in a thin layer of lower niobium oxides right below the pentoxide layer. The pentoxide layer has a thickness of ca. 3.5-5 nm, the "lower Nb oxides" layer a thickness of ca. 1 nm, and the "RF-layer" a thickness of ca. 150 nm. The oxygen concentration in the pentoxide layer is ca. 70 at. %, in the simplified "lower Nb oxides" layer ca. 40-50 at. %. For simplicity, we can assume that initially there are no other interstitials or defects in the RF layer. The bottom picture represents the concentration of oxygen interstitials and the distorted Nb lattice in the near-surface layer of a cavity which was heat-treated at moderate temperature.
The pentoxide layer is partially or completely dissociated, and the released oxygen diffuses into bulk niobium, forming a profile of decaying concentration. During the baking process the inner border of the pentoxide phase effectively moves towards the surface but, importantly, the lattice distortion remains even in the dissociated region because the temperature is too low to recrystallize the niobium. What remains is a metallic region with a very high defect concentration. The rate of dissociation and diffusion is strongly temperature-dependent. Thus, the pentoxide thickness and the concentration profile of the oxygen interstitials can be tailored by adjusting the baking time and temperature. In general, if the treatment is performed for several hours at moderate temperatures (300-400 \({}^{\circ}\)C), the oxygen-interstitial profile extends beyond the "RF-layer" [2, 3, 4]. Oxide dissociation and diffusion are negligible at temperatures below ca. 120 \({}^{\circ}\)C. Thus, we assume that the oxygen profile remains constant after the heat treatment and the cavity is cooled down to room temperature or below.

## 2 Model for reduced surface resistance

The key point of the present note is that a distorted niobium lattice remains under the pentoxide in the region where the original pentoxide dissolved. All the vacancy defects left after the oxygen has diffused out of the pentoxide layer stay in place and are not "cured". The concentration of these defects is very high, reaching almost 70 at. %, i.e., the Nb lattice in this layer is severely damaged. The total number of these defects equals the total number of oxygen interstitials remaining in the bulk. However, since the profile may extend beyond the RF layer, the defects from the pentoxide near the surface may in fact well outnumber the oxygen interstitials _within the RF layer_. Furthermore, the concentration of these near-surface defects will be much higher than the concentration of interstitials deeper in the RF layer. As a result, we believe enhanced flux pinning at the surface will inhibit flux oscillation in the RF field and hence reduce the residual resistance. In light of the above, the layer of defects right below the pentoxide layer should have a significant (or even dominant) contribution to the Q-increase phenomenon.

## 3 Some consequences and tests of the model

Several consequences of the described hypothesis can be deduced. Examples include: 1) The originally increased quality factor upon baking should continuously decrease if a cavity is subsequently subjected to air, thereby regrowing the original pentoxide layer. The newly formed pentoxide is dielectric and is not a part of the RF-layer. The number of defects in the RF layer decreases and hence the surface resistance increases again. This process may require several days or even weeks before the original pentoxide thickness is re-established. A series of sequential tests of a heat-treated cavity subjected to air should demonstrate a gradual degradation of the quality factor. After the pentoxide layer is fully formed, any remaining improvement of the surface resistance (compared to an untreated cavity) can then be attributed to the oxygen interstitials in the RF layer. 2) In the absence of any trapped flux, the surface resistance of the distorted lattice will depend on its thickness and defect concentration. Hence, including trapped flux, there should be an optimal concentration of defects to produce a minimum of residual resistance (see Figure 2).
An untreated cavity with few defects will likely have a non-minimum residual resistance (as well as BCS resistance). As the defect concentration increases due to heat treatment, flux motion is reduced by trapping, and the residual surface resistance drops. However, the residual resistance due to other effects increases and for prolonged heat treatment may eventually (at very high defect concentration) dominate. Hence a minimum surface resistance (optimal heat treatment time/temperature) is likely. Successively longer heat treatments should therefore result in the residual resistance evolving similarly to Figure 2. The optimum must be found experimentally. 3) If strong flux pinning is responsible for the reduced residual resistance, then a reduction in sensitivity (\(dR_{\text{res}}/dB\)) should be observed following mid-T baking. This can be measured by applying varying external magnetic fields prior to cavity testing. 4) Niobium with good flux expulsion characteristics (e.g., single-crystal Nb) should decrease its ability to expel flux when the pentoxide layer is reduced by moderate-temperature baking, due to the increased number of trapping centers. Exposure to air should then re-establish the original flux expulsion performance. Such tests do not require cavities, but can be performed on samples (see e.g. [5, 6]). 5) The maximum number of defects is limited by the original thickness of the pentoxide layer. All the defects can be utilised only if the full pentoxide layer is dissociated. However, in that case the surface has no protective layer and significant contamination can occur (especially with carbon) (see [1]). If the initial pentoxide layer can be made thicker (for example, by anodic oxidation), more defects can be introduced into the RF layer. Currently we do not know if the total amount of defects in a natural pentoxide layer is enough to reach the minimum of \(R_{\text{res}}\). It is possible that with the natural pentoxide we can reach, for example, the Treatment #2 values in Figure 2.
2304.13142
Quantum Machine Learning Approach for the Prediction of Surface Roughness in Additive Manufactured Specimens
Surface roughness is a crucial factor influencing the performance and functionality of additive manufactured components. Accurate prediction of surface roughness is vital for optimizing manufacturing processes and ensuring the quality of the final product. Quantum computing has recently gained attention as a potential solution for tackling complex problems and creating precise predictive models. In this research paper, we conduct an in-depth comparison of three quantum algorithms, i.e., the Quantum Neural Network (QNN), Quantum Forest (Q-Forest), and Variational Quantum Classifier (VQC) adapted for regression, for predicting surface roughness in additive manufactured specimens for the first time. We assess the algorithms' performance using Mean Squared Error (MSE), Mean Absolute Error (MAE), and Explained Variance Score (EVS) as evaluation metrics. Our findings show that the Q-Forest algorithm surpasses the other algorithms, achieving an MSE of 56.905, MAE of 7.479, and an EVS of 0.2957. In contrast, the QNN algorithm displays a higher MSE of 60.840 and MAE of 7.671, coupled with a negative EVS of -0.444, indicating that it may not be appropriate for predicting surface roughness in this application. The VQC adapted for regression exhibits an MSE of 59.121, MAE of 7.597, and an EVS of -0.0106, suggesting its performance is also inferior to the Q-Forest algorithm.
Akshansh Mishra, Vijaykumar S. Jatti
2023-04-24T11:57:10Z
http://arxiv.org/abs/2304.13142v1
Quantum Machine Learning Approach for the Prediction of Surface Roughness in Additive Manufactured Specimens ###### Abstract Surface roughness is a crucial factor influencing the performance and functionality of additive manufactured components. Accurate prediction of surface roughness is vital for optimizing manufacturing processes and ensuring the quality of the final product. Quantum computing has recently gained attention as a potential solution for tackling complex problems and creating precise predictive models. In this research paper, we conduct an in-depth comparison of three quantum algorithms - the Quantum Neural Network (QNN), Quantum Forest (Q-Forest), and Variational Quantum Classifier (VQC) adapted for regression - for predicting surface roughness in additive manufactured specimens for the first time. We assess the algorithms' performance using Mean Squared Error (MSE), Mean Absolute Error (MAE), and Explained Variance Score (EVS) as evaluation metrics. Our findings show that the Q-Forest algorithm surpasses the other algorithms, achieving an MSE of 56.905, MAE of 7.479, and an EVS of 0.2957. In contrast, the QNN algorithm displays a higher MSE of 60.840 and MAE of 7.671, coupled with a negative EVS of -0.444, indicating that it may not be appropriate for predicting surface roughness in this application. The VQC adapted for regression exhibits an MSE of 59.121, MAE of 7.597, and an EVS of -0.0106, suggesting its performance is also inferior to the Q-Forest algorithm. Quantum Computing; Quantum Machine Learning; Additive Manufacturing; Quantum States ## 1 Introduction Quantum computing, an innovative field that draws on the principles of quantum mechanics, has attracted substantial interest in recent years due to its capacity to revolutionize various industries. Unlike classical computing, which uses bits to represent information as either 0 or 1, quantum computing employs qubits that can coexist in multiple states simultaneously, thanks to the phenomena of superposition and entanglement [1-5]. This unique characteristic allows quantum computers to perform intricate calculations at an exponentially faster rate than conventional computers, creating opportunities to solve previously unsolvable problems. The manufacturing industry, characterized by its elaborate and complex processes, stands to benefit significantly from advancements in quantum computing. Potential applications of quantum computing in manufacturing cover a wide range of areas, including optimization, simulation, material science, and predictive modeling. Integrating quantum computing into these fields could lead to considerable improvements in efficiency, cost reduction, and product quality [6-9]. Quantum computing can optimize various manufacturing processes, such as scheduling, resource allocation, and supply chain management, by identifying the most efficient and cost-effective solutions. Quantum optimization algorithms, like the Quantum Approximate Optimization Algorithm (QAOA) and Quantum Adiabatic Computing, hold the potential to outperform classical optimization techniques in tackling complex, large-scale problems. Quantum computers can simulate quantum systems more efficiently than classical computers, allowing researchers to gain a deeper understanding of materials and chemicals at the atomic and molecular levels. 
This enhanced understanding could accelerate the discovery of new materials and the development of cutting-edge manufacturing techniques, resulting in more sustainable and efficient production processes. In the realm of material science, quantum computing can expedite the discovery and design of novel materials with customized properties by simulating their behavior at the quantum level. This could lead to the creation of advanced materials with improved strength, durability, and energy efficiency, thereby enhancing the overall quality of manufactured products [10-13]. Quantum computing can also significantly improve predictive modeling in manufacturing by providing more accurate and efficient solutions for intricate problems. Quantum machine learning algorithms, such as the Quantum Neural Network (QNN), Quantum Forest (Q-Forest), and Variational Quantum Classifier (VQC), can be utilized to predict critical parameters, like surface roughness, machine wear, and defect rates, which influence product quality and process efficiency. These algorithms offer the potential to decrease manufacturing errors, minimize downtime, and optimize maintenance schedules. In the present work, we report the first-ever implementation of Quantum Machine Learning algorithms for determining the surface roughness of additive manufactured specimens. This pioneering approach demonstrates the potential of harnessing quantum computing techniques to enhance the accuracy and efficiency of predictive modeling in the field of additive manufacturing. ## 2 Quantum Machine Learning Framework Quantum machine learning is an emerging interdisciplinary field that integrates quantum computing with machine learning, aiming to harness the unique properties of quantum systems for more efficient complex calculations and optimization problem-solving than classical computing. As this field is still in its nascent stage, its potential applications are vast and varied. To comprehend the intricate workings of quantum machine learning, it is essential to understand several fundamental concepts of Quantum bits (qubits), Quantum entanglement, and Quantum gates. In quantum computing, qubits are the basic units that can exist in a superposition of both 0 and 1 states, which is unlike classical bits that can only exist in either 0 or 1 state. Qubits are represented as complex linear combinations of the basis states \(|0\rangle\) and \(|1\rangle\) shown in Figure 1. This property enables parallelism and allows quantum computers to process extensive information concurrently. To explain this with equations, let's consider a qubit, represented as \(|\psi\rangle\) as depicted in Equation 1. \[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle \tag{1}\] Here, \(\alpha\) and \(\beta\) are complex coefficients that determine the probability amplitudes of the qubit being in state \(|0\rangle\) or \(|1\rangle\), respectively. The probabilities of the qubit being in state \(|0\rangle\) or \(|1\rangle\) are given by Equation 2 and 3. \[\text{P}(|0\rangle) =|\alpha|^{2} \tag{2}\] \[\text{P}(|1\rangle) =|\beta|^{2} \tag{3}\] Since the qubit must be in either state \(|0\rangle\) or \(|1\rangle\), the sum of the probabilities must be equal to 1 as depicted in Equation 4. \[|\alpha|^{2}+\ |\beta|^{2}=1 \tag{4}\] This property of qubits allows them to exist in a superposition of states and enables quantum parallelism. 
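As a minimal illustration of Equations (1)-(4), the snippet below stores a single-qubit state as the complex amplitude vector \([\alpha,\beta]\) and recovers the measurement probabilities; the particular amplitudes are an arbitrary example, not values from the paper.

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, stored as the complex amplitude vector [alpha, beta]
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # example amplitudes (arbitrary choice)
psi = np.array([alpha, beta])

p0 = abs(alpha) ** 2        # P(|0>) = |alpha|^2, Equation (2)
p1 = abs(beta) ** 2         # P(|1>) = |beta|^2,  Equation (3)
print(p0, p1, np.isclose(p0 + p1, 1.0))   # normalization |alpha|^2 + |beta|^2 = 1, Equation (4)
```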
Figure 1: Schematic representation of Quantum Superposition

In a quantum computer with n qubits, there can be \(2^{n}\) different states that can be processed simultaneously. This is because the state of an n-qubit system can be written as Equation 5. \[|\psi_{n}\rangle=\Sigma_{i}\;c_{i}\;|i\rangle \tag{5}\] where i ranges from 0 to (2\({}^{n}\) - 1) and \(\Sigma_{i}\;|c_{i}|^{2}=1\). This parallelism allows quantum computers to perform complex computations that are infeasible for classical computers. Quantum entanglement is an exceptional phenomenon that occurs when two or more qubits become intricately interconnected, rendering them inseparable, as shown in Figure 2. This can be represented by the entangled state Equation 6. \[|\Psi\rangle=\alpha|00\rangle+\beta|11\rangle \tag{6}\] where \(|\Psi\rangle\) is the entangled state, \(\alpha\) and \(\beta\) are complex coefficients, and \(|00\rangle\) and \(|11\rangle\) are the basis states of the two entangled qubits. In this scenario, the state of one qubit cannot be described independently of the other, as the probabilities of the outcomes are determined by the combined state \(|\Psi\rangle\). Entanglement is a crucial resource in quantum computing, as it enables non-local correlations and augments computational capabilities. One example of this is the Bell states, which are a set of maximally entangled states that can be written as Equation 7. \[|\Phi^{+}\rangle=\tfrac{1}{\sqrt{2}}(|00\rangle+|11\rangle),\quad|\Phi^{-}\rangle=\tfrac{1}{\sqrt{2}}(|00\rangle-|11\rangle),\quad|\Psi^{+}\rangle=\tfrac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\quad|\Psi^{-}\rangle=\tfrac{1}{\sqrt{2}}(|01\rangle-|10\rangle) \tag{7}\]

Figure 2: The entanglement between the qubits

As the foundation of quantum circuits, quantum gates perform operations on qubits. Analogous to classical logic gates (e.g., AND, OR, NOT), quantum gates are reversible and can act on quantum state superpositions, as shown in Figure 3. The detailed mechanism of quantum machine learning is shown in Figure 4.

Figure 3: Quantum circuit with two qubits, a Hadamard gate (H) acting on Qubit 1, and a CNOT gate acting on both qubits

The initial step involves converting classical data into a quantum format, typically by encoding it into a quantum state or a set of qubits. This process employs quantum feature maps or embedding techniques that transform classical data into a high-dimensional Hilbert space, making it compatible with quantum computing. A quantum circuit tailored to a specific machine learning task, such as classification, regression, or clustering, is designed by selecting suitable quantum gates and arranging them sequentially to process input data and produce the desired output. The circuit's structure and complexity may vary according to the problem and required accuracy level.
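The circuit of Figure 3 can be reproduced directly as matrix algebra: applying a Hadamard gate to qubit 1 followed by a CNOT maps \(|00\rangle\) to the Bell state \(|\Phi^{+}\rangle\) of Equation 7. The sketch below uses plain numpy rather than a quantum SDK.

```python
import numpy as np

# Single-qubit and two-qubit gate matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # control = qubit 1, target = qubit 2

ket00 = np.array([1, 0, 0, 0], dtype=complex)    # |00>

# Circuit of Figure 3: H on qubit 1, then CNOT on both qubits
state = CNOT @ np.kron(H, I) @ ket00
print(state)                                     # ~[0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)                        # measurement probabilities: 1/2 for |00> and |11>
```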
For supervised learning tasks, quantum algorithm parameters must be optimized to minimize a cost function that measures the discrepancy between algorithm predictions and the true labels of the training data. Quantum algorithms often employ variational techniques in which a classical optimization algorithm updates the quantum circuit parameters based on feedback from the quantum computer. Upon processing input data through the quantum circuit, results must be extracted by measuring qubits, causing their superposition to collapse into a definite classical state (0 or 1) with specific probabilities. The classical output is then decoded and interpreted to provide insights or predictions concerning the original problem.

Figure 4: Framework of Quantum Machine Learning.

## 3 Materials and Methods

In order to maintain consistency in the model while reducing print size, material usage, and time, the ASTM E8 standard geometry was adopted with a uniform 50% reduction in dimensions. The Response Surface Methodology (RSM) design of experiments was utilized to create 30 distinct trial conditions, each with three input parameter levels (see Figure 5). Using Ultimaker Cura software, the CAD model was sliced to generate G-code (see Figure 6). The experimental investigation was carried out using the Creality 3D FDM printer (see Figure 7), with each print assigned a unique set of settings that varied in layer height, infill density, infill pattern, bed temperature, and nozzle temperature, and using Polylactic Acid (PLA) material. An input parameter datasheet was created, and the differences in length between each model and the original CAD file were measured using a digital vernier caliper. The data obtained from the experiment were converted into a CSV file and imported into the Google Colab platform, which was used to implement three different Quantum Machine Learning algorithms - Quantum Neural Network (QNN), Q-Forest, and Variational Quantum Classifier (VQC) - all of which were adapted for regression analysis. The performance of these algorithms was evaluated on the basis of three metrics: mean squared error (MSE), mean absolute error (MAE), and explained variance score.

Figure 5: Fabricated additive manufactured specimens.

Figure 6: Schematic sketch of the specimen.

## 4 Results and Discussion

The results of the surface roughness measurements obtained through various input parameter combinations are presented in Table 1.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Layer Height (mm) & Wall Thickness (mm) & Infill Density (\%) & Infill Pattern & Nozzle Temperature (°C) & Bed Temperature (°C) & Print Speed (mm/s) & Fan Speed (\%) & Surface Roughness (\(\mu m\)) \\ \hline 0.1 & 1 & 50 & honeycomb & 200 & 60 & 120 & 0 & 6.12275 \\ \hline 0.1 & 4 & 40 & grid & 205 & 65 & 120 & 25 & 6.35675 \\ \hline 0.1 & 3 & 50 & honeycomb & 210 & 70 & 120 & 50 & 5.957 \\ \hline 0.1 & 4 & 90 & grid & 215 & 75 & 120 & 75 & 5.92025 \\ \hline 0.1 & 1 & 30 & honeycomb & 220 & 80 & 120 & 100 & 6.08775 \\ \hline 0.15 & 3 & 80 & honeycomb & 200 & 60 & 60 & 0 & 6.0684 \\ \hline 0.15 & 4 & 50 & grid & 205 & 65 & 60 & 25 & 9.27525 \\ \hline 0.15 & 10 & 30 & honeycomb & 210 & 70 & 60 & 50 & 7.479 \\ \hline 0.15 & 6 & 40 & grid & 215 & 75 & 60 & 75 & 7.557 \\ \hline 0.15 & 1 & 10 & honeycomb & 220 & 80 & 60 & 100 & 8.48675 \\ \hline 0.2 & 5 & 60 & honeycomb & 200 & 60 & 40 & 0 & 8.4695 \\ \hline 0.2 & 4 & 20 & grid & 205 & 65 & 40 & 25 & 8.8785 \\ \hline 0.2 & 5 & 60 & honeycomb & 210 & 70 & 40 & 50 & 9.415 \\ \hline 0.2 & 7 & 40 & grid & 215 & 75 & 40 & 75 & 9.71375 \\ \hline 0.2 & 3 & 60 & honeycomb & 220 & 80 & 40 & 100 & 10.59625 \\ \hline 0.1 & 1 & 50 & triangle & 200 & 60 & 120 & 0 & 6.04925 \\ \hline \end{tabular} \end{table} Table 1: Experimental results

The results obtained by each of the implemented algorithms are discussed individually in the subsections below.

### Quantum Neural Network (QNN)

Quantum Neural Networks (QNNs) represent an innovative approach that merges the principles of quantum computing with classical neural network architectures to address complex problems more effectively. Utilizing quantum bits (qubits) as the primary information carriers, QNNs leverage the inherent quantum properties of superposition to allow for simultaneous representation of multiple states. At the core of QNNs lie quantum layers, which consist of sequential quantum gates. These gates are the fundamental mathematical components of quantum computing, responsible for transforming qubit states. Quantum gates are represented by unitary matrices, ensuring that the normalization of qubit states is maintained. Consequently, a quantum circuit is formed through a series of quantum gates, with the overall transformation resulting from the product of the matrices that represent the individual gates. In mathematical terms, a quantum state can be expressed as a complex vector in a Hilbert space, where each element corresponds to the probability amplitude of a distinct computational basis state. The quantum state comprising n qubits can be denoted as Equation 8. \[|\psi\rangle=\sum_{i}a_{i}\,|i\rangle \tag{8}\] Here, i ranges from 0 to \(2^{n}-1\), \(a_{i}\) signifies complex coefficients, and \(|i\rangle\) represents the corresponding computational basis state. The probability of measuring the state \(|i\rangle\) is given by the square of the modulus of \(a_{i}\). The transformation of a quantum state through a quantum gate can be depicted as a matrix-vector multiplication, as shown in Equation 9.
\[|\psi^{\prime}\rangle=U|\psi\rangle \tag{9}\] In this equation, U represents the unitary matrix corresponding to the quantum gate, \(|\psi\rangle\) denotes the initial state, and \(|\psi^{\prime}\rangle\) symbolizes the final state after applying the gate. To determine the output of a QNN, the quantum circuit is executed on either a quantum computer or a simulator. In our present work, the quantum circuit is executed on a simulator, as shown in Figure 7. The resulting output is a probability distribution over the computational basis states, which is subsequently processed by a classical neural network or other classical machine learning models to generate the final output.

Figure 7: Quantum circuit framework in the present work.

The QNN is defined by the function **qnn(params, x=None)**, which takes two arguments: **params**, representing the network's parameters, and an optional input feature vector **x**. Within the QNN, the input features are embedded into the quantum state using a series of rotations **(qml.RX and qml.RZ)**. These rotations act on the qubits of the quantum circuit, effectively encoding the classical data into the quantum system. The quantum layers are defined in a loop that iterates through the **params** array. For each layer, rotations are applied to the qubits using the parameters in the array, followed by a series of controlled NOT (CNOT) gates that create entanglement between adjacent qubits. This entanglement allows for the exploration of a larger solution space and enhances the network's expressivity. The output of the QNN is obtained by measuring the expectation value of the Pauli-Z operator on the first qubit. This expectation value represents the network's prediction for the given input. The cost function, **cost(params, X, y)**, calculates the MSE between the QNN's predictions and the target values. It does this by iterating through the input data **X** and calling the QNN function with the current parameters and input features. The mean squared error between the predictions and the ground-truth targets **y** is then computed and returned as the cost value. The QNN's parameters are initialized randomly and optimized using the gradient descent optimizer provided by the PennyLane library. The optimization is performed for 100 iterations, with the parameters updated at each step to minimize the cost function. Finally, the trained QNN is used to make predictions on the test set, and the performance is evaluated using three metrics: mean squared error (MSE), mean absolute error (MAE), and the explained variance score, as shown in Table 2. The plot of explained variance score vs. training iterations is typically used to track the performance of a machine learning algorithm during the training process, as shown in Figure 8. The explained variance score is a metric that measures the proportion of the variance in the target variable that is explained by the model. Its maximum value is 1, which indicates that the model explains all of the variance; a value of 0 indicates that the model does not explain any variance, and negative values indicate predictions that are worse than simply predicting the mean. The plot of explained variance score vs. training iterations can provide insights into how well the model is learning and converging to a solution. If the explained variance score is increasing with each iteration, it suggests that the model is improving and learning from the data. On the other hand, if the explained variance score is plateauing or decreasing, it suggests that the model is not improving or may be overfitting the data.
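A minimal PennyLane sketch of the kind of hybrid training loop described above is given below. It is our own simplified reconstruction rather than the authors' code: the qubit count, layer count, learning rate, and the placeholder arrays X_train and y_train (which stand in for the scaled process parameters and surface roughness values from the CSV file) are all illustrative assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(params, x):
    # Embed the classical features with RX/RZ rotations
    for i in range(n_qubits):
        qml.RX(x[i % len(x)], wires=i)
        qml.RZ(x[i % len(x)], wires=i)
    # Variational layers: trainable rotations followed by CNOT entanglement
    for layer in params:
        for i in range(n_qubits):
            qml.RX(layer[i, 0], wires=i)
            qml.RZ(layer[i, 1], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    # Expectation value of Pauli-Z on the first qubit is the prediction
    return qml.expval(qml.PauliZ(0))

def cost(params, X, y):
    # Mean squared error between circuit outputs and targets
    loss = 0.0
    for features, target in zip(X, y):
        loss = loss + (qnn(params, features) - target) ** 2
    return loss / len(X)

# Placeholder data standing in for the scaled process parameters (X) and
# surface roughness targets (y) loaded from the experimental CSV file.
X_train = np.array(np.random.uniform(0, np.pi, size=(10, n_qubits)), requires_grad=False)
y_train = np.array(np.random.uniform(-1, 1, size=10), requires_grad=False)

params = np.array(0.01 * np.random.randn(n_layers, n_qubits, 2), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(100):
    params = opt.step(lambda p: cost(p, X_train, y_train), params)
```

In practice the targets would also need to be rescaled to the range of the Pauli-Z expectation value (between -1 and 1) or the circuit output rescaled to the roughness range; this detail is omitted here for brevity.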
\begin{table} \begin{tabular}{|c|c|c|} \hline MSE & MAE & Explained Variance Score \\ \hline 60.840 & 7.671 & -0.444 \\ \hline \end{tabular} \end{table} Table 2: Obtained metric features for the QNN algorithm

### Q-Forest Algorithm

The Q-Forest algorithm, a quantum-inspired approach, is employed for clustering and classification tasks in large-scale data processing. Drawing upon principles of quantum mechanics such as quantum entanglement and quantum superposition, the algorithm enhances both efficiency and effectiveness. It begins by converting classical data into quantum states, assuming a dataset comprising N samples and d features, with each sample represented as a d-dimensional vector. Subsequently, the algorithm transforms classical data points into quantum states, generating a quantum superposition of the dataset, which enables concurrent manipulation of all data points. The algorithm calculates pairwise distances between data points using a distance metric to establish a quantum entanglement representation of the data, as shown in Figure 9. These pairwise distances are encoded into quantum states, such that the entangled states depict the relationships among data points. This encoding method permits the algorithm to evaluate distances between data points more effectively than classical approaches. The core procedure of the Q-Forest algorithm entails the construction of a quantum decision tree, in which data points are recursively divided into subsets until a termination criterion is satisfied. The quantum decision tree is formed by determining the optimal split point for each node to minimize an impurity measure. Quantum parallelism is employed during this process to efficiently explore all potential split points. Upon completion of the tree, it can be utilized to cluster or classify novel data points based on their quantum entanglement representation.

Figure 8: Explained variance score vs. training iterations of the QNN algorithm.

The obtained metric features are represented in Table 3. Figure 10 shows the variation of the explained variance score with the training iterations.

\begin{table} \begin{tabular}{|c|c|c|} \hline MSE & MAE & Explained Variance Score \\ \hline 56.905 & 7.479 & 0.2957 \\ \hline \end{tabular} \end{table} Table 3: Obtained metric features for the Q-Forest algorithm

Figure 9: Calculation of pairwise distances between data points using a distance metric to establish a quantum entanglement representation.

### Variational Quantum Classifier (VQC) adapted for regression

A Variational Quantum Classifier (VQC) adapted for regression is a hybrid quantum-classical machine learning algorithm that uses quantum circuits to perform regression tasks. In this approach, a parametrized quantum circuit is prepared, with adjustable parameters represented by \(\theta\). The quantum circuit acts as a feature map, encoding the input data into a high-dimensional quantum state. The encoded data is then processed by a second parametrized quantum circuit, known as the variational circuit, which is responsible for the actual regression. The output of the variational circuit is a continuous value, which is obtained by measuring the expectation value of a specific observable. To adapt the VQC for regression, the loss function is tailored to measure the difference between the predicted continuous values and the actual target values. This loss function is used to optimize the parameters of the variational circuit in a classical optimization loop. By iteratively updating the circuit parameters, the VQC model learns the underlying pattern in the data and becomes capable of making predictions for unseen data points.
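Whichever of the three quantum models produces the predictions, the reported error metrics are themselves computed classically. A minimal example using scikit-learn is shown below; the arrays y_test and y_pred are hypothetical placeholders for the measured surface roughness of held-out specimens and the corresponding model outputs.

```python
from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_squared_error)

# Hypothetical placeholders: measured roughness (y_test) and predictions (y_pred)
y_test = [6.12, 9.28, 8.47, 10.60]
y_pred = [6.50, 8.90, 8.10, 9.80]

print("MSE:", mean_squared_error(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))
print("EVS:", explained_variance_score(y_test, y_pred))
```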
The obtained metric features are represented in Table 4.

\begin{table} \begin{tabular}{|c|c|c|} \hline MSE & MAE & Explained Variance Score \\ \hline 59.121 & 7.597 & -0.0106 \\ \hline \end{tabular} \end{table} Table 4: Obtained metric features for the Variational Quantum Classifier (VQC) adapted for regression

Figure 10: Explained variance score vs. training iterations for the Q-Forest algorithm.

Figure 11 shows the variation of the explained variance score with the training iterations, and Figure 12 shows the overall comparison of the obtained results. Our results indicate that the Q-Forest algorithm outperforms the other two algorithms in terms of both MSE and MAE. Q-Forest achieved an MSE of 56.905 and an MAE of 7.479, while the QNN and VQC algorithms recorded an MSE of 60.840 and 59.121, and an MAE of 7.671 and 7.597, respectively. Lower MSE and MAE values indicate better performance in terms of prediction accuracy, demonstrating that Q-Forest is better suited for this particular regression task compared to the other algorithms. Also, the explained variance score (EVS) shows that the Q-Forest algorithm accounts for 29.57% of the total variance in the data, whereas the QNN and VQC algorithms record negative EVS values of -44.4% and -1.06%, respectively. A higher EVS value suggests that the model can better explain the variance in the dataset, and therefore the Q-Forest algorithm demonstrates superior performance in this aspect as well. These findings suggest that the Q-Forest algorithm is a more effective approach for solving regression tasks compared to the QNN and VQC algorithms, in the context of the dataset and problem studied. However, it is essential to note that the performance of quantum algorithms can vary depending on the specific problem, dataset, and hyperparameter settings. As such, further research is necessary to explore the generalizability of our findings to different datasets and regression tasks. Additionally, it would be interesting to examine the performance of these quantum algorithms in comparison to classical machine learning algorithms, which could offer valuable insights into the advantages and limitations of quantum computing in the field of regression analysis.

Figure 11: Explained variance score vs. training iterations for the Variational Quantum Classifier (VQC) adapted for regression.

## 5 Conclusion

The primary objective of this study was to compare the performance of three quantum algorithms, QNN, Q-Forest, and VQC adapted for regression, for predicting the surface roughness of additive manufactured specimens. Based on our analysis, the Q-Forest algorithm demonstrated the best performance, with an MSE of 56.905, an MAE of 7.479, and an EVS of 0.2957. This indicates that the Q-Forest algorithm not only provides more accurate predictions but also accounts for a greater proportion of the variance in the dataset compared to the other two algorithms. The QNN algorithm exhibited a higher MSE of 60.840 and MAE of 7.671, with a negative EVS of -0.444, suggesting that it may not be well-suited for predicting the surface roughness of additive manufactured specimens. Similarly, the VQC adapted for regression achieved an MSE of 59.121, an MAE of 7.597, and an EVS of -0.0106, indicating that its performance is also inferior to the Q-Forest algorithm.
2305.04717
Octet baryon isovector charges from $N_f = 2 + 1$ lattice QCD
We determine the axial, scalar and tensor isovector charges of the nucleon, sigma and cascade baryons as well as the difference between the up and down quark masses, $m_u-m_d$. We employ gauge ensembles with $N_f=2+1$ non-perturbatively improved Wilson fermions at six values of the lattice spacing in the range $a\approx (0.039 - 0.098) \,$fm, generated by the Coordinated Lattice Simulations (CLS) effort. The pion mass $M_\pi$ ranges from around $430 \, $MeV down to a near physical value of $130 \, $MeV and the linear spatial lattice extent $L$ varies from $6.5\,M_{\pi}^{-1}$ to $3.0\,M_{\pi}^{-1}$, where $L M_\pi \geq 4$ for the majority of the ensembles. This allows us to perform a controlled interpolation/extrapolation of the charges to the physical mass point in the infinite volume and continuum limit. Investigating SU(3) flavour symmetry, we find moderate symmetry breaking effects for the axial charges at the physical quark mass point, while no significant effects are found for the other charges within current uncertainties.
Gunnar S. Bali, Sara Collins, Simon Heybrock, Marius Löffler, Rudolf Rödl, Wolfgang Söldner, Simon Weishäupl
2023-05-08T14:03:52Z
http://arxiv.org/abs/2305.04717v3
# Octet baryon isovector charges from \(N_{f}=2+1\) lattice QCD ###### Abstract We determine the axial, scalar and tensor isovector charges of the nucleon, sigma and cascade baryons as well as the difference between the up and down quark masses, \(m_{u}-m_{d}\). We employ gauge ensembles with \(N_{f}=2+1\) non-perturbatively improved Wilson fermions at six values of the lattice spacing in the range \(a\approx(0.039-0.098)\,\mathrm{fm}\), generated by the Coordinated Lattice Simulations (CLS) effort. The pion mass \(M_{\pi}\) ranges from around \(430\,\mathrm{MeV}\) down to a near physical value of \(130\,\mathrm{MeV}\) and the linear spatial lattice extent \(L\) varies from \(6.5\,M_{\pi}^{-1}\) to \(3.0\,M_{\pi}^{-1}\), where \(LM_{\pi}\geq 4\) for the majority of the ensembles. This allows us to perform a controlled interpolation/extrapolation of the charges to the physical mass point in the infinite volume and continuum limit. Investigating SU(3) flavour symmetry, we find moderate symmetry breaking effects for the axial charges at the physical quark mass point, while no significant effects are found for the other charges within current uncertainties. + Footnote †: preprint: (RQCD Collaboration) ## I Introduction A charge of a hadron parameterizes the strength of its interaction at small momentum transfer with a particle that couples to this particular charge. For instance, the isovector axial charge determines the \(\beta\) decay rate of the neutron. At the same time, this charge corresponds to the difference between the contribution of the spin of the up quarks minus the spin of the down quarks to the total longitudinal spin of a nucleon in the light front frame that is used in the collinear description of deep inelastic scattering. This intimate connection to spin physics at large virtualities and, more specifically, to the decomposition of the longitudinal proton spin into contributions of the gluon total angular momentum and the spins and angular momenta for the different quark flavours [1; 2] opens up a whole area of intense experimental and theoretical research: the first Mellin moment of the helicity structure functions \(g_{1}(x)\) is related to the sum of the individual spins of the quarks within the proton. For lattice determinations of the individual quark contributions to its first and third moments, see, e.g., Refs. [3; 4; 5; 6; 7] and Ref. [8], respectively. Due to the lack of experimental data on \(g_{1}(x)\), in particular at small Bjorken-\(x\), and difficulties in the flavour separation, usually additional information is used in determinations of the helicity parton distribution functions (PDFs) from global fits to experimental data [9; 10; 11; 12; 13]. In addition to the axial charge \(g_{A}\) of the proton, this includes information from hyperon decays, in combination with SU(3) flavour symmetry relations whose validity need to be checked. In this article we establish the size of the corrections to SU(3) flavour symmetry in the axial sector and also for the scalar and the tensor isovector charges of the octet baryons: in analogy to the connection between axial charges and the first moments of helicity PDFs, the tensor charges are related to first moments of transversity PDFs. This was exploited recently in a global fit by the JAM Collaboration [14; 15]. 
Since no tensor or scalar couplings contribute to tree-level Standard Model processes, such interactions may hint at new physics and it is important to constrain new interactions (once discovered) using lattice QCD input, see, e.g., Ref. [16] for a detailed discussion. SU(3) flavour symmetry among the scalar charges is also instrumental regarding recent tensions between different determinations of the pion nucleon \(\sigma\) term, see Ref. [17] for a summary of latest phenomenological and lattice QCD results and, e.g., the discussion in section 10 of Ref. [18] about the connection between OZI violation, (approximate) SU(3) flavour symmetry and the value of the pion nucleon \(\sigma\) term. Finally, the scalar isovector charges relate the QCD part of the mass splitting between isospin partners to the difference of the up and down quark masses. Assuming SU(3) flavour symmetry, the charges for the whole baryon octet in a given channel only depend on two independent parameters. For the proton and the axial charge, this relation reads \(g_{A}=F_{A}+D_{A}\), where in the massless limit \(F_{A}\) and \(D_{A}\) correspond to the chiral perturbation theory (ChPT) low energy constants (LECs) \(F\) and \(D\), respectively. Already in the first lattice calculations of the axial charge of the proton [19; 20], that were carried out in the quenched approximation, \(F_{A}\) and \(D_{A}\) have been determined separately. However, in spite of the long history of nucleon structure calculations, SU(3) flavour symmetry breaking is relatively little explored using lattice QCD: only very few investigations of axial charges of the baryon octet exist to date [21; 22; 23; 24; 25] and only one of these includes the scalar and tensor charges [25].
2305.12175
Psychotic Markov Blankets: Striking a Free Energy Balance for Complex Adaptation
This paper proposes a framework for optimising the adaptation and attunement of a Complex Adaptive System (CAS) with its environments. The tendency towards stability can be explained by minimising free energy but high variability, noise, and over-specialized rigidity can lead to a "stuck state" in a CAS. Without perturbation (increasing free energy), the system remains stuck and unable to adapt to changing circumstances. The paper introduces the concept of 'psychotic' Markov blankets to understand and specify factors contributing to maladjustment conditions moving away from the minimum stuck state. The paper offers directions for optimising adaptation and attunement to be applied to real-world problems, from cells to behaviour, societies and ecosystems.
Inès Hipolito
2023-05-20T11:47:28Z
http://arxiv.org/abs/2305.12175v1
# 'Psychotic' Markov Blankets: Striking a Free Energy Balance for Complex Adaptation

###### Abstract

This paper proposes a framework for optimising the adaptation and attunement of a Complex Adaptive System (CAS) with its environments. The tendency towards stability can be explained by minimising free energy but high variability, noise, and over-specialized rigidity can lead to a "stuck state" in a CAS. Without perturbation (increasing free energy), the system remains stuck and unable to adapt to changing circumstances. The paper introduces the concept of 'psychotic' Markov blankets to understand and specify factors contributing to maladjustment conditions moving away from the minimum stuck state. The paper offers directions for optimising adaptation and attunement to be applied to real-world problems, from cells to behaviour, societies and ecosystems.

Complex adaptive systems (CAS), free energy minimisation, perturbation.

## Introduction

Living beings are complex, dynamical systems (CAS) that exhibit remarkable resilience and tenacity in the face of environmental challenges. While, by the second law of thermodynamics, open systems should tend towards dissipation and disorder, living systems appear to maintain and perpetuate themselves with precision and intentionality. Adapting to an ever-changing world is a generative process that requires not only dynamically interacting with the world, but interacting with a purpose: context-specific interaction that optimises adaptation and avoids increasing entropy (H-theorem). In order to adapt to changing environments, systems must engage in context-specific interactions that develop towards the optimisation of maintaining health and well-being. Complex Adaptive Systems (CAS) are systems that exhibit emergent behaviour arising from the interactions among their individual components. These systems are characterised by a large number of agents, which interact with one another and with their environment in a nonlinear and dynamic manner. A CAS is often studied in a variety of fields, including ecology, economics, social sciences, and computer science, among others. At the heart of CAS is the concept of generative processes, which refers to the system's ability to create new patterns of behaviour and organisation through interactions among its components. In other words, a CAS's processes are generative processes of embodied interaction with the environment. More precisely, these interactions are not simply passive responses to environmental stimuli, but rather active engagements that shape and modify the environment itself (Kim and Park, 2013; Chemero, 2009; van Geert and de Ruiter, 2022; Hipolito, van Geert and Pessoa, 2023). The embodiment of CAS is a critical aspect of the system's dynamics. Agents are not merely abstract entities that interact with the environment, but rather physical entities that possess agency and are capable of making decisions based on their perceptions of the environment. This embodiment allows agents to be situated within the environment, and to perceive, interpret, and act upon environmental stimuli in contextually appropriate ways. As CAS interact with their environment, they generate complex feedback loops that can lead to self-organisation and emergent behaviours. These feedback loops occur when the actions of one agent affect the environment, which in turn affects the actions of other agents. This can result in the emergence of patterns of behaviour and organisation that were not present in the system's initial state.
Understanding and modelling CAS is a challenging task because of the system's inherent complexity. A large number of interacting agents and the nonlinear dynamics of their interactions make it difficult to capture the full range of possible behaviours and outcomes. Even small changes in the initial conditions or parameters of the system can lead to significant changes in its behaviour and outcomes, making it challenging to predict the system's behaviour over time. Furthermore, because CAS are characterised by emergent behaviours and self-organisation, it is often impossible to predict these behaviours based solely on the properties of the individual components or the rules governing their interactions. Emergent behaviours and self-organisation arise from the interactions among agents and their environment, which are often context-dependent and influenced by factors that are difficult to observe or measure. A CAS's processes are generative processes of embodied interaction with the environment, while generative models refer to mathematical models that capture the statistical regularities in the environment that give rise to observed behaviour emerging from embodied interaction. The main difference between the two is that generative models are typically formulated in terms of probabilistic relationships between hidden causes and observed sensory data, while generative processes refer to the actual behaving living system. While generative models are useful for making predictions and inferring the causes of sensory data, they do not necessarily capture the full complexity of the underlying biological processes that generate the sensory data. Because of limited computational tractability, they employ dimensionality reduction, a technique used in machine learning to reduce the number of variables in a dataset while still retaining as much information as possible. This is often done to simplify models and make them more manageable, as models with fewer variables are less complex and easier to work with. Generative models are used as a simplification of the more complex generative processes that are thought to underlie perception and action. In other words, while generative models are a useful tool for making predictions and understanding the statistical structure of the environment, because they differ in levels of complexity they are not the same as the actual bio-behavioural processes. It is adaptability itself, and not low-dimensional models of it, in which complexity and, thereby, uncertainty emerge. Here lies the main issue in studying and developing real-world solutions for the optimisation of CAS, and specifically how to measure uncertainty in CAS. Even if generative models and processes can be correlated because they entail different levels of complexity, they are irreducible to one another. This paper develops a framework for the optimisation of CAS adaptation and attunement to their environment. Drawing from the Free Energy Principle (FEP), we stipulate that to remain alive a system must interact via purposefully context-specific actions that optimise environmentally adjusted development. The paper then begins by explaining that this tendency towards stability (i.e. seeking preferred states in state space) can be seen as minimising free energy through active inference. It then theorises that, within an interaction's dynamical geometry, high variability or noise and over-specialized rigidity can impede adaptation and lead to a "stuck state" (i.e.
a situation where a complex adaptive system (CAS) becomes overly specialised and rigid, hindering its ability to adapt to changing circumstances. The system's policy is optimised for a specific context but is not flexible enough to adapt to changes, leading to a state of low variability and high predictability. It then introduces the perspective that without perturbation (increasing the free energy), the system remains in the stuck state indefinitely. Finally, an important question is how much and what type of perturbation is required to move a CAS away from the minimum stuck state, which must be defined for a specific scale and the biological, psychological, and social factors unique to a Complex Adaptive System (CAS). We employ the construct of 'psychotic' Markov blankets to help understand and specify the variables and factors contributing to maladjustment conditions and offer treatment directions for the optimisation of living systems' adaptation and attunement to their environment from the neurobiological to psychological scales.

## 1 Modelling Embodied Attunement for Optimisation for Contextualised Action

Adapting means dynamically adjusting to the environment in an attuned way. It is a trajectory of being where a living system is able to dynamically interact with and adapt to its environment through optimised actions that are contextualised to the specific situation. Embodied attunement involves a reciprocal relationship between the body and the environment, where the body is attuned to the environment and the environment is attuned to the body. This means that our embodied interactions with the environment are not solely based on our sensory experiences, but also on our emotional experiences and our bodily movements and sensations. The embodiment of agents within Complex Adaptive Systems (CAS) is a critical aspect of the system's dynamics, as it enables physical entities to interact with their environment and make decisions based on their perceptions. This embodiment allows agents to be situated within the environment and interpret and act upon environmental stimuli in contextually appropriate ways. As CAS interact with their environment, they generate complex feedback loops that can lead to emergent behaviours and self-organization. These feedback loops occur when the actions of one agent affect the environment, which, in turn, affects the actions of other agents. The emergence of self-organization and the creation of emergent properties in CAS are a result of the interactions between agents and their environment, which lead to the emergence of patterns of behaviour and organization that were not present in the system's initial state. Therefore, it is crucial to consider the embodiment of agents within CAS to better understand their dynamics and the emergence of complex behaviours and organizations in such systems. The concept of multistable interaction is central to the understanding of attunement as it refers to the ability of a living system to maintain multiple stable states of being in response to different environmental conditions (Pisarchik and Hramov, 2022). This means that the system is not rigidly locked into a single state but can adapt and shift between states as needed: context-specific actions that are optimised for achieving a specific goal to maintain health and well-being by adapting to the ever-changing environmental demands.
This perspective emphasises the importance of flexibility, adaptability, and context-specificity in achieving and maintaining a state of health. It suggests that the key to health is not a fixed or static state, but rather a dynamic and adaptable process that involves ongoing interaction and optimisation with the environment - a form of embodied attunement. Context-specific actions are optimised for achieving a specific goal to adjust and attune to the environment in an optimal way to a context-specific situation. Adapting to the ever-changing environmental demands can be described as a tendency towards a stable point, where an agent strives to reach an optimal state that is conducive to its survival and flourishing. Drawing from complex and dynamical systems theory, living systems can be understood in terms of trajectory, attractors, and repellors. Trajectory refers to the path that a living system takes through its development over time. Attractors and repellors, on the other hand, are states of the system that represent stable or unstable patterns of behaviour, respectively (Westley et al., 2013; Guckenheimer and Holmes, 2015; Kappel and Helbing, 2019; Levin, 2019; Prokopenko and Ay, 2021). In the context of a CAS development, attractors can be thought of as the different states that the system tends to settle into as it matures. For example, crawling, walking, and running are different attractor states for the locomotion of an animal. These attractor states are stable, meaning that the system tends to remain in them once it has achieved them (Thelen and Smith, 2003; Witherington and Boom, 2019; Iverson, 2021). The tendency of systems to gravitate towards stable attractors can be explained by the principle of minimising free energy. Free energy is the difference between a system's predicted state and its actual state. When a system perceives its environment as uncertain or unpredictable, free energy increases, indicating a mismatch between the predicted and actual states of the system. At the level of individual agents, this translates to interactions with the environment to seek states that maintain their integrity. These agents are open systems that adjust and are attuned to their surroundings. Adaptive behaviour can be understood as active inference, where agents select actions that are most likely to lead to preferred outcomes while minimising the cost or surprise associated with sensory inputs. According to the FEP, living systems can be viewed as information-processing systems that strive to minimise their free energy. Free energy can be understood as the difference between the sensory information received from the environment and the predictions made by the internal models of the agent. The goal of the agent is to reduce this difference by acting upon the environment to make it more consistent with its internal models, which results in a reduction of the free energy. This process of minimising free energy can be understood in the context of active inference, where an agent selects actions that will reduce the discrepancy between its internal model and the sensory input from the environment. Active inference involves the integration of sensory information and prior beliefs to make predictions about the future and select actions that are expected to bring the sensory input closer to the predicted outcome. Repellers, on the other hand, represent unstable patterns of behaviour. 
They are points in the system's trajectory that the system tends to avoid because they lead to undesirable outcomes. For example, if an animal has a tendency to fall over when it tries to walk too fast, then walking too fast would be a repeller for that animal. In this case, the animal's trajectory would tend to steer it away from the walking-too-fast state, and towards a more stable attractor state, such as walking at a moderate pace. For adaptation, it is important to have a good balance between attractors and repellers. Attractors and repellers are not objectively "positive" or "negative", but it is the networked interaction between them that is relevant to understand the behaviour of a system in general, and to conjecture about its adaptive or adequate interaction with the environment. Driving these dynamics is purposeful action for the maintenance of the system. If, for some reason, the system does not purposefully optimise context-specific action for its maintenance (minimise free energy), there is a risk that its trajectory will develop into a stuck state. A stuck state is a state where action optimisation is maximised. While specialisation can be useful, in a stuck state the system is not flexible enough to adapt to changes. To adapt to the environment, it's important for a system to balance its attractions and repulsions. These forces aren't inherently "good" or "bad," but rather, it's the interaction between them that determines a system's behaviour and ability to adapt. The purpose of this interaction is to maintain the system, and if the system fails to do so by purposefully optimizing its actions for its environment, it risks becoming stuck in a fixed state. Specialization can be helpful, but in overspecialisation, the system loses its ability to adapt to changes. A stuck state is a state in which a system's behaviour becomes rigid and inflexible, limiting its ability to adapt to new or changing situations. In this state, the system has maximised its optimisation; over-specialised rigidity hinders the adaptation needed for its maintenance, and the system is no longer flexible enough to adapt to changes. Behaviour can tend to converge towards the attracting point, which represents a stable pattern of behaviour that minimises the CAS's uncertainty. This stable pattern is maintained for a period of time, indicating that the CAS has developed a policy that is optimised for this particular context.

Figure 1: This time series represents a Complex Adaptive System (CAS) moving towards three attracting points (y1, y2, and y3) over time. The system dynamics are defined by a simple linear function that calculates the difference between the current behaviour and the nearest attracting point. The plot shows the behaviour initially far from the attracting points and moving towards the closest one, but eventually getting stuck in a local minimum around one of the attracting points, indicating repetitive or rigid behaviour. The plot helps visualize how an individual's behaviour may be influenced by their environment and the tendency to seek stability in complex environments.

While optimisation is a natural tendency of purposeful action, over-optimisation carries a risk: specialisation can be advantageous, but when taken to the extreme, it can lead to a lack of adaptability within a system.
This can result in a "stuck state" where the system's behaviour becomes rigid and inflexible, limiting its ability to adjust to new or changing situations (Figure 2). If this state becomes too stable, it becomes a stuck state that is not conducive to developing towards other possible stable points (Figure 2). To facilitate the release of a system from a stuck state, a perturbation can be applied to increase the free energy and shift the system away from the minimum point. As illustrated in Figure 2B, the introduction of a perturbation causes the system to move away from the minimum point of free energy and explore other regions of the surface plot. In this state, the system has optimized its performance, and overspecialization can impede adaptation by maintaining inflexibility and rigidity. Thus, it is crucial to strike a balance between specialization and flexibility to ensure a system's ability to adapt to changing circumstances. Thus, it is crucial to consider the robustness of a CAS's policy when designing interventions or treatments aimed at supporting their behaviour. In Figure 3, when a high level of perturbation is introduced to the system at times 20 and 40, the CAS's behaviour deviates significantly from the attracting point and moves towards other attracting points, highlighting the importance of flexibility in behaviour and policy.

Figure 2: **A.** The surface plot diagram depicts two variables on the x and y axes, while the z-axis represents the "free energy" of the system. A "stuck state" occurs when the system reaches a minimum level of free energy. In this case, the minimum point of free energy is at (0,0), which corresponds to the bottom of the "bowl" in the surface plot. **B.** If the system is in a "stuck state," it means that it is located at or close to this minimum point and is incapable of moving away from it independently.

Consequently, the use of stable points for maintaining rigid or repetitive behaviour can be disrupted (by increasing free energy) to bring about therapeutic change, as the system moves from a local minimum to a new stable state with improved performance. Nonetheless, although stable points may provide a sense of security, they can also constrain an individual's capacity to adapt to new or evolving circumstances. An intuitive example is the rigidity that makes it difficult for people with mental health conditions, such as Autism Spectrum Conditions, to participate in new experiences or interact with others in a dynamic social setting (for detail see Hipolito and White, 2023). A critical open question pertains to the amount and type of perturbation required to break free from the stuck state, which must be defined for a specific scale and the biological, psychological, and social factors unique to a Complex Adaptive System (CAS). The subsequent section proposes the usefulness of the concept of 'psychotic' Markov blankets to comprehend and specify the variables and factors contributing to maladjustment conditions, and discusses the importance of defining the amount and type of perturbation required to move a CAS away from the minimum stuck state. As Markov blankets are scale-free, they can define the quantity and nature of perturbation required for personalized intervention, therefore affording therapeutic and clinical application.
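A minimal numerical sketch of the dynamics summarised in Figures 1-3 is given below. It is our own illustration rather than the analysis behind the original figures: the attractor positions, relaxation rate, noise level, and perturbation times are assumed values chosen only to reproduce the qualitative behaviour (drift towards the nearest attracting point, a stuck state in the absence of perturbation, and transitions between stable states when free energy is injected at steps 20 and 40).

```python
import numpy as np

attractors = np.array([1.0, 3.0, 5.0])   # three stable points (y1, y2, y3)
rate, n_steps = 0.2, 60                  # relaxation rate and simulation length
perturb_times = {20, 40}                 # steps at which free energy is injected

y, trajectory = 0.0, []                  # initial behaviour, far from the attractors
for t in range(n_steps):
    nearest = attractors[np.argmin(np.abs(attractors - y))]
    y += rate * (nearest - y)            # drift towards the nearest attracting point
    if t in perturb_times:
        y += np.random.normal(0.0, 2.0)  # perturbation pushes the state away
    trajectory.append(y)

# Without the perturbations, the state simply settles at 1.0 and stays there
# (a stuck state); with them, it can escape towards the other attracting points.
print(trajectory[-1])
```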
The following sections then apply this framework to the nested neurobiological and psychological scales of life.

Figure 3: The time series plot describes the behaviour of a system exhibiting multistability. Multistability refers to a system having multiple stable equilibrium points, also known as attracting points, that the system can settle into. In the case of the CAS depicted in the plot, there are three stable states that the system can settle into, depending on the initial conditions and any perturbations applied. At times 20 and 40, perturbations are introduced, causing the system to move away from the starting points.

## 2 Perturbing Stuck States via 'Psychotic' Markov Blankets

Footnote 1: The term 'psychotic' is often used in clinical settings to describe a state of mind that is characterised by a significant departure from reality. However, it is important to note that this term is not meant to be taken literally, but rather as a means of conveying the idea of a loss of touch with reality. The use of this term is intended to capture the sense of disconnection and insulation from the external environment that can be observed in individuals who are experiencing significant mental health challenges.

It is worth noting that when no perturbation is introduced, the system remains indefinitely in a state of rigidity, also known as a "stuck state." This indicates that the individual's policy has been optimised for a specific context but lacks the flexibility to adjust to changes in the environment. A stuck state can be further described through the formalism of Markov blankets, what we shall call a 'psychotic' Markov blanket. The Markov blanket is a statistical tool used to define a system's boundaries through conditional dependence or independence relationships, applicable to any self-organising system. It allows for the modelling of dependencies and dynamics between a system and its environment, emphasising the importance of understanding the interdependent relationship between them. The Markov blanket provides a framework for understanding the role of the environment in shaping a system's behaviour while maintaining its autonomy, by defining a set of variables that surround the system's internal states and labelling all other external variables. This set of variables is determined such that the internal states become conditionally independent from the external variables, i.e. b = Markov blanket of \(\mu\). Mathematically speaking, the following equation defines the set of variables 'b' that make the internal states '\(\mu\)' conditionally independent from all other external variables '\(\eta\)'. Simply put, if one knows the values of 'b', then predicting the behaviour of '\(\mu\)' becomes possible, and no additional information from '\(\eta\)' would be necessary to improve this prediction. Therefore, the equation can be written as: \[p(\mu\mid b,\eta)=p(\mu\mid b) \tag{1}\] Equation (1) includes the conditional probability distribution of '\(\mu\)' given both 'b' and '\(\eta\)', which is represented as \(p(\mu\mid b,\eta)\), and the conditional probability distribution of '\(\mu\)' given only 'b', which is represented as \(p(\mu\mid b)\). In a dynamic system, equation (1) indicates that the average rate of change of each component in a Markov blanketed system can be influenced by just two other types of states to maintain the equation's underlying structure.
\[\dot{\mu}=f_{\mu}(\mu,s,a),\qquad\dot{a}=f_{a}(\mu,s,a),\qquad\dot{\eta}=f_{\eta}(\eta,s,a),\qquad\dot{s}=f_{s}(\eta,s,a) \tag{2}\]

Equation 2 states that the internal and active states of a system are dependent on the blanket states, which include internal, sensory, and active states. Similarly, the external and sensory states of a system are dependent on the blanket states, which include external, sensory, and active states. This implies that the current state of a system is determined by the interactive dynamics between internal, sensory, and active states, while the current state of the environment is determined by the dynamics between external, sensory, and active states. Consequently, the internal and external states of a system indirectly influence each other in a reciprocal manner. Figure 3A illustrates this reciprocal influence. Active inference is a theoretical framework that aims to explain how biological systems maintain their internal states by minimizing prediction errors or surprise. To illustrate, consider a Complex Adaptive System (CAS) with a single internal state variable whose goal is to avoid a fixed point in state space. By using active inference, the CAS minimizes its prediction error and maintains its internal state away from the fixed point.

Figure 3: Internal (purple) and external (orange) states are conditionally independent of one another. Mathematically, this means that they reciprocally influence each other by the dynamical influences within blanket states, i.e. sensory (green) and active (blue) states. A balanced influence between blanket states translates into adapted behaviour. **B**. Should the reciprocal influences within blanket states be broken (red dotted arrow), then internal states will be insulated, i.e. they will not be influenced by the environment (external states) for flexibility in adapting to changing circumstances.

This behaviour can be modelled as a Markov decision process (MDP), where the behaviour at each time step depends on the current state of the system and a policy that maps the state to action. The goal is to minimize free energy, which measures the difference between actual and expected sensory input. Purposeful, contextually sensitive action is taken to minimise uncertainty. This can be modelled by a Markov Decision Process (MDP) consisting of a set of states and actions, with probabilities that describe the transition from one state to another based on the policy. By choosing an action that moves the system towards one of the attracting points, the policy reduces the uncertainty about the environment and thus minimises free energy. The policy can be optimised using techniques from reinforcement learning, which involves learning a mapping from states to actions that maximises a reward signal. An MDP is a stochastic control process that consists of a set of states, a set of actions, and a set of probabilities that describe the transition from one state to another, given a particular action. The probabilities that describe the transition from one state to another are determined by the policy that maps the state to action. The policy can be seen as a way to optimise the behaviour of the system in order to minimise free energy. However, if the internal state of the CAS fails to move towards the prior mean over time (Figure 3B) by engaging in activities that afford the minimisation of uncertainty, it could imply that the organism is not adapting to changes in the environment or is not sustaining a healthy relationship with it.
This suggests that the organism's beliefs about the environment are not being updated effectively to enable adaptation to changes in the environment. This can have negative consequences, such as the inability to cope with environmental changes or failure to maintain a sustainable relationship with the environment.

Figure 5: The diagram depicts a Markov Decision Process (MDP) with three states and three actions. The states represent stable equilibrium points, and actions represent attracting points for autistic people to reduce uncertainty about their environment and minimize free energy. At each time step, the system is in one of the three states, and the CAS can take one of the three actions. The transition probabilities indicate the likelihood of moving from one state to another when taking a specific action. For example, taking action 1 while in state 1 has a 0.9 probability of remaining in state 1 and a 0.05 probability of moving to state 2 or 3.

In conclusion, the system is insulated in a stuck state. The 'psychotic' Markov blanket provides a formal way to define the states and factors contributing to the stuck state and to understand how an individual's cognitive processes and behaviours are influenced by their interaction with the environment. Interpreting a "stuck state" in mental health conditions as an imbalance within the Markov blanket between sensory and active states can provide insights into the underlying mechanisms of the condition and guide targeted interventions to restore balance and promote recovery. Specifically, this means identifying the amount and type of perturbation necessary for moving a CAS away from a stuck state towards multistability, or the multistable interaction of embodied attunement. Further, the next section proposes that the insulation in a stuck state can be overcome by applying a perturbation to the Markov blanket, in other terms, by increasing the free energy in the system. This should move a CAS away from its dynamics conforming with those described as a 'psychotic' Markov blanket, thereby away from a stuck state. To sum up, over-specialized rigidity can lead to a "stuck state" which can be understood through the construct of a "psychotic" Markov blanket. Interventions can help a system overcome the stuck state and move towards multistability by applying sufficient perturbation. Such interventions can focus on altering attractors or repellers in the system or promoting greater adaptability to the environment. The construct of a "psychotic" Markov blanket offers a formal definition of the factors contributing to the stuck state, and understanding these factors can guide targeted interventions for promoting recovery by restoring balance. Identifying the amount and type of perturbation necessary for moving a complex adaptive system (CAS) away from a stuck state towards multistability is crucial. The next section proposes that increasing free energy in the system through perturbation can overcome the insulation in a stuck state associated with a "psychotic" Markov blanket. This can help CAS develop more adaptive patterns of behaviour, cognition, and emotion, and navigate their environment with greater ease, as we shall see in the next sections, from microscale cellular levels to behaviour.

## 3 Cellular CAS

At the **cellular level,** living organisms face the challenge of maintaining a stable internal environment despite external environmental fluctuations. This is accomplished through homeostatic mechanisms that serve to minimise free energy by preserving a constant internal environment.
For example, temperature regulation, osmoregulation, and pH regulation are all mechanisms that help to maintain cellular stability. These mechanisms can be seen as context-specific actions that are optimised to sustain cellular health and well-being by adapting to the changing environmental demands. The active inference process allows the cell to act upon the world to achieve preferred states, minimising free energy and maintaining stability. This highlights the importance of context-specific actions and the need for adaptation to achieve and maintain health at the cellular level. At the cellular level, the actions that are involved in maintaining a stable internal environment may include the uptake and excretion of certain molecules, regulation of metabolic processes, and communication with neighbouring cells. For example, cells may actively pump ions across their membrane to regulate the concentration of ions in the cytoplasm, which is important for maintaining proper cellular function. In the brain, systems and network neuroscience show that neural activity is highly integrated (Sporns, 2013; Wasserman and Wasserman, 2023), and the concept of Markov blankets is an important aspect of systems neuroscience that provides a framework for understanding the interactions between different levels of organisation in the brain. Markov blankets in the brain demarcate boundaries of couplings from pairs of neurons to cortical columns and brain-wide networks. The presence of Markov blankets in the brain enables partitions into single neurons, brain regions, and brain-wide networks, which can be used to study the connectivity and function of the brain at multiple scales (Hipolito et al., 2021; Friston et al., 2021). Markov blankets can be useful in understanding the complex interactions between different parts of the brain, such as in the case of the relationship between the prefrontal cortex and the basal ganglia in decision-making. By identifying the Markov blanket of each region, researchers can create more accurate models of how these regions interact and influence each other. Conversely, evidence shows that occurrences of lack of communication between neural cells may be associated with psychopathological conditions (Kaiser et al., 2015; van den Heuvel and Sporns, 2019), such as schizophrenia, which is known as the Dysconnection Hypothesis (Friston et al., 2016; Limong et al., 2023; Sapienza et al., 2023). The dysconnection hypothesis proposes a failure of functional integration in neuronal systems dependent on long-range connections, isolating abnormalities in brain function: the underlying cause of dysconnectivity in schizophrenia is a specific impairment of synaptic plasticity, which results from aberrant modulation of NMDAR function by DA, ACh, and 5-HT.

Figure 4: **A.** The Neuronal Markov blankets figure shows a pair of neurons separated by a Markov blanket, with constants A acting as connectivity strengths from the active state of one neuron to the external state of the other and from the sensory states of the latter to the internal states of the former. The sigma function converts potentials to firing rates. The structure has a unique feature whereby the sensory states can arise from many external states while the active states depend only on the conductance of the neuron being depolarized. The figure also distinguishes between excitatory and inhibitory influences using different arrowhead shapes. **B.** This panel describes cortical micro-circuitry and its representation using Markov blankets.
The upper schematic shows the connectivity of the canonical microcircuit consisting of four cell populations with a specific pattern of connectivity. The second row illustrates the Markov blankets that underlie the separation into distinct cortical regions. The final row shows a separation into a network of regions, with the middle two regions acting to insulate the far left and right regions. The dynamics of each neural population obey the equations given in Fig. 3A, where the likelihood mappings specify which populations are connected to one another. Feedforward connections originate predominantly from superficial layers, and feedback connections from deep layers. **C.** The image in this figure represents a Markov blanket of networks, where the connections between nodes in different networks are treated as dependencies between states. The networks themselves act as the active, sensory, internal, and external states, loosely structured around resting-state fMRI studies. The visual networks are treated as internal states that influence active states, while the default mode network represents sensory states mediating the influence between internal and external states. The assignment of these states is equally valid if reversed. For detail see Hipolito et al., 2021). synaptic plasticity, which results from aberrant modulation of NMDAR function by DA, ACh, and 5-HT Perturbation to move a CAS away from a stuck state (i.e. dysfunction of a 'psychotic' Markov blanket) for multistable integration could involve various methods such as (1) neurofeedback training, brain stimulation techniques (e.g. transcranial magnetic stimulation or transcranial direct current stimulation), or pharmacological interventions that target specific neurotransmitter systems known to affect functional connectivity; (2) Brain stimulation techniques can be targeted to specific regions or networks in the brain to enhance functional connectivity and reduce dysfunction. (3) Pharmacological interventions may also be used to enhance functional connectivity by targeting specific neurotransmitter systems. For example, drugs that modulate the levels of dopamine, glutamate, or GABA in the brain have been shown to affect functional connectivity and may be useful in treating dysconnection syndromes. By learning to modulate their brain activity, individuals may be able to improve functional integration and reduce dysconnection. Models can further investigate how multidimensional landscape determines the fate of a cell (Saez, Briscoe and Rand, 2022; as well as applying complexity-inspired frameworks, (1) complexity, (2) criticality, (3) controllability, and (4) coordination to analyse resting-state neuroimaging data and how they can provide insight into the organisation and dynamics of brain activity (Hancock et al., 2022). ## 4 Mental health and well-being as a CAS Studying mental health as a CAS, not as symptoms, mental health can be conceptualised as complex biopsychosocial systems that can tend towards a "stuck state". These conditions affect a person's thinking, feeling, behaviour, or mood and deeply impact day-to-day living, often affecting their ability to relate to others. However, it's important to note that mental health problems are not categorical idealizations, but rather complex biopsychosocial processes that unfold in individuals over time (Cramer et al., 2010; McNally, 2016; Fried et al., 2017; Cacioppo and Cacioppo, 2018; Roefs et al., 2022; Bringmann et al., 2023). 
Mental health as a CAS involves recognizing that mental health conditions are a product of a complex biopsychosocial system that includes both bottom-up (genetic and endocrinological) and top-down (social and psychological) influences. When this system is disrupted by adverse events, such as trauma or chronic stress, it can result in a state of "stuckness" where the individual's ability to adapt and recover is compromised. When an individual is in a "stuck state", it can be interpreted as an unbalance in the Markov blanket between sensory and active states. Sensory states involve the individual's perception and processing of external stimuli, while active states involve the individual's ability to take action and respond to the environment. An unbalance between these two states can result in a situation where the individual is unable to respond appropriately to external stimuli, leading to a "stuck state". For example, in individuals with post-traumatic stress disorder (PTSD), the Markov blanket may be unbalanced towards sensory states, where they may experience hyperarousal and hypervigilance to potential threats. This unbalance may result in the individual being unable to differentiate between real and perceived threats, leading to a "stuck state" where they are unable to respond appropriately to their environment. A psychotherapeutic intervention that aims to perturb a mental condition as a stuck state has the overarching goal of promoting recovery and restoring balance to the individual's mental health. The specific aim of this type of intervention is to disrupt the individual's existing patterns of thought and behaviour that may be contributing to their stuck state and encourage adaptation and growth. In a mental health condition, an individual feels trapped or unable to make progress towards their goals or desires. This can manifest in a variety of ways, such as feeling constantly sad or anxious, experiencing intrusive thoughts or memories, or feeling disconnected from one's environment or loved ones. A psychotherapeutic intervention that aims to perturb this stuck state seeks to help the individual break free from these patterns and develop new ways of thinking and behaving that are more adaptive and supportive of their mental health. The specific aims of a perturbation-based intervention will vary depending on the individual and their unique circumstances. For example, an intervention for depression may aim to increase the individual's sense of agency and control over their environment, while an intervention for anxiety may focus on reducing avoidance behaviours and increasing tolerance for uncertainty. The overall goal, however, is to create a perturbation or disruption that can help the individual move beyond their stuck state and towards a more adaptive and fulfilling way of life. Ultimately, perturbation-based interventions must be carefully tailored to the individual's needs and abilities. The goal is to create a challenge that is manageable and supportive of the individual's growth, rather than overwhelming or triggering. A skilled psychotherapist can help guide the individual through this process and provide support and guidance as they navigate the challenges of perturbation-based interventions. ## Conclusion In conclusion, this paper presents a novel framework for optimising the adaptation and attunement of complex adaptive systems (CAS) with their environments. 
The paper highlights the tendency towards stability in CAS through active inference, but also identifies the risks of over-specialization and rigidity leading to a "stuck state." To address this, the paper introduces the concept of 'psychotic' Markov blankets and emphasises the importance of perturbation to increase free energy and enable CAS to adapt to changing circumstances. The paper provides directions for applying this framework to real-world problems at various levels of complexity, from cells to societies and ecosystems. Overall, this framework has the potential to guide targeted interventions and promote optimal adaptation and attunement in CAS across a wide range of domains. **Glossary 1: Complex Systems Theory** **Attractors:** States towards which a system tends to move, and which it tends to remain in. **Complex system:** a system composed of interconnected and interdependent parts that exhibit emergent behaviour and are difficult to predict or control. **Emergence:** The appearance of new properties or patterns at higher levels of organization that cannot be explained solely by the properties or behaviours of the system's individual components. **Feedback loops:** Mechanisms by which a system's outputs are fed back into its inputs, creating a cycle of cause and effect. **Multistability:** The property of a system to have more than one stable state or attractor, which can result in alternative outcomes depending on initial conditions. **Nonlinearity:** the property of a system where the relationship between cause and effect is not proportional or additive, making it difficult to predict the behaviour of the system. **Repellers:** States away from which a system tends to move, and which it tends to avoid. **Stable states:** States that a system can achieve and maintain, which can be either attractors or repellers. **Trajectory:** the path that a system follows over time as it evolves and changes. This can refer to the position, state, or behaviour of individual components within the system, or to the overall behaviour of the system as a whole. **Glossary 2: Free Energy Principle** **Active inference:** The idea that organisms actively seek to minimize the free energy of their sensory states by making predictions about the world and adjusting their behaviour to match those predictions. **Free energy expectation:** The prediction generated by an organism's internal model of the world, which is compared to the actual sensory input it receives. **Free energy minimization:** The process by which an organism reduces the discrepancy between its internal model of the world and the sensory information it receives, in order to minimize the amount of free energy in its system. **Free Energy Principle:** The idea that organisms, including the brain, are driven by a fundamental imperative to minimize free energy in their internal states, by maintaining their internal models in alignment with the external world. **Free energy:** a measure of the amount of internal uncertainty or disorder in a system that a system seeks to minimize over time by converging towards stable patterns or attractors. **Generative models:** A mathematical framework that allows for the generation of predictions about the world based on internal models of the environment. **Generative processes:** The processes by which an organism generates predictions about the sensory input it expects to receive from the environment, based on its internal model of the world. 
**Markov blankets:** The boundary that separates a system from its environment, defining what is internal and what is external to the system. **Policy:** a set of rules or strategies that govern the behaviour of a system or agent in a particular context.
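To make the stuck-state dynamics described for Figure 5 concrete, the following toy simulation reproduces the quoted transition probabilities (0.9 for remaining in the current state, 0.05 for moving to each other state) and then mixes in uniform transition noise as a crude stand-in for the perturbation, or added free energy, discussed above. This is an illustrative sketch added here, not part of the original analysis; apart from the 0.9/0.05 figures, all numbers and names are assumptions.

```python
import numpy as np

# Transition matrix of the "stuck" regime sketched in Figure 5: under its
# preferred action the system stays in its current state with probability
# 0.9 and moves to either of the other two states with probability 0.05.
P_STUCK = np.array([
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])

def perturb(P, epsilon):
    """Mix the transition matrix with a uniform one.

    epsilon = 0 keeps the stuck dynamics; larger epsilon injects extra
    transition noise (a toy stand-in for raising free energy).
    """
    uniform = np.full_like(P, 1.0 / P.shape[0])
    return (1.0 - epsilon) * P + epsilon * uniform

def switch_rate(P, steps=10_000, start=0, seed=0):
    """Fraction of time steps on which the simulated state actually changes."""
    rng = np.random.default_rng(seed)
    state, switches = start, 0
    for _ in range(steps):
        nxt = rng.choice(P.shape[0], p=P[state])
        switches += int(nxt != state)
        state = nxt
    return switches / steps

if __name__ == "__main__":
    for eps in (0.0, 0.3, 0.6):
        rate = switch_rate(perturb(P_STUCK, eps))
        print(f"perturbation {eps:.1f}: switch rate {rate:.2f}, "
              f"mean dwell time {1.0 / rate:.1f} steps")
```

With no perturbation the chain switches state only about once every ten steps; as the injected noise grows, dwell times shorten and the system moves between its attractors again, which is the behaviour the perturbation-based interventions above aim to restore.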
2304.05720
Towards a more comprehensive open-source model for interdisciplinary smart integrated energy systems
The energy transition has recently experienced a further acceleration. In order to make the integration of renewable energies as cost-effective, secure and sustainable as possible and to develop new paradigms for the energy system, many energy system models have been developed in research in the past to evaluate the solutions. While model identification and dissemination of results are widely discussed in the literature, a detailed view of the methodology is often missing. This paper addresses this topic and proposes a methodology to build a comprehensive, publicly accessible database for modeling a multi-modal integrated energy system. The focus hereby is dynamic modeling of low- and medium-voltage grids consisting of prosumers, battery storages, heat pumps and electric cars. In addition, a district heating network is parameterized to match the electricity grid. Modelica and the TransiEnt-Library serves as the modeling tool. The methodology for creating the grid models is available via GitLab. A study case that uses the methodology to analyze the congestion situation within a medium-voltage distribution grid is presented.
Béla Wiegel, Tom Steffen, Davood Babazadeh, Christian Becker
2023-04-12T09:23:20Z
http://arxiv.org/abs/2304.05720v1
Towards a more comprehensive open-source model for interdisciplinary smart integrated energy systems ###### Abstract The energy transition has recently experienced a further acceleration. In order to make the integration of renewable energies as cost-effective, secure and sustainable as possible and to develop new paradigms for the energy system, many energy system models have been developed in research in the past to evaluate the solutions. While model identification and dissemination of results are widely discussed in the literature, a detailed view of the methodology is often missing. This paper addresses this topic and proposes a methodology to build a comprehensive, publicly accessible database for modeling a multi-modal integrated energy system. The focus hereby is dynamic modeling of low- and medium-voltage grids consisting of prosumers, battery storages, heat pumps and electric cars. In addition, a district heating network is parameterized to match the electricity grid. Modelica and the TransitEnt-Library serves as the modeling tool. The methodology for creating the grid models is available via GitHub. A study case that uses the methodology to analyze the congestion situation within a medium-voltage distribution grid is presented. energy system modeling, integrated energy system, multi-modal, open-source, database, Modelica + Footnote †: 979-8.3503-2682-5/23/$1.00 ©2021 IEEE ## I Introduction The transition towards a modern and carbon-free energy system accelerates worldwide. While the shutdown of single centralized conventional power plants and their substitution by wind and photovoltaic power plants is feasible in the current system, the comprehensive enrollment of renewable energies requires a paradigm change in construction and operation concepts of the future energy system [1]. Integrated energy systems, also known as multi-modal energy systems, couple the electricity, heat and gas grids together. This holds the possibility of multiple energy provisioning paths, reinforcing the system evolution. For the concept of Smart Integrated Energy Systems (SIES), the complexity is additionally expanded by aspects of Information and Communication Technology (ICT) investigated as a component necessary for good functioning [2, 3]. In order to understand and support the holistic design of complex systems of this kind - both in terms of structural and operational planning - modeling is an important tool. Therefore, many different energy system models were developed in the past, representing lots of concepts and use cases. For each model to be developed, the research question must be posed in advance to clearly design the requirements and create the model-specific methodology. While research questions and gap identification is project-specific, the work also consists of recurring tasks. Despite macroscopic differences, modeling often uses similar approaches and algorithms to solve the research question. This already starts with data collection and preparation, and continues with model building and evaluation methods. So that recurring tasks in the multitude of research projects do not have to be tackled in future projects again, the open research initiative has formed in the research community to make details of data collection and model-specific methodologies and implementation freely available. 
In Germany, an association called the National Research Data Infrastructure (nfdi) [4] with the sub-group _nfdi-energy_ was founded with the aim of endorsing the research cycle and utilizing community services [5]. The European EERAdata and EnerMaps projects also address this topic. In the ERIGrid projects, harmonizing scenario development and model validation is promoted [6]. A main aspect in the context of developing and publishing open-source methodologies is the FAIR principle: findability, accessibility, interoperability and reusability. These principles are promoted in current research [7, 8], and especially the challenges in data1 provision using open-source tools are discussed in [9]. Footnote 1: The importance of data is also emphasized during the annual global _Love Data Week_. In this context, the aim of this work is to develop a comprehensive energy system description in the form of an open-source data and modeling framework that is suitable for the goal of generating detailed grid models with their node-interconnecting energy conversion plants. In addition to modeling the electric power grid, a District Heating Network (DHN) is created and parameterized to match existing benchmark models for the electric power grid. While usually the dissemination of the modeling and the investigation of the research question is the main part, this paper addresses the step before that, depicted in Fig. 1. Data and models are published and maintained via GitLab [10]. The main contributions are:

* Developing a comprehensive open-source database description for modeling of integrated energy systems at distribution level
* Creation of a DHN appropriately parameterized to existing benchmark models for electricity grids
* Tool for generation of models for dynamic simulation of integrated distribution systems

Thus, the rest of the paper is organized as follows: section II gives an overview of existing approaches in publicly available modeling tools for integrated energy systems, section III provides the data and modeling framework description and concepts for model creation, which is applied in a use case in section IV, and section V gives final remarks and an outlook.

## II State and challenges in open modeling of integrated energy systems

Over the past decades, a large variety of energy system models have been created. Diverse requirements for the models in the complex environment of energy system research have led to manifold approaches. The ontology of models can be described with categories of spatial and temporal resolution, modeled energy carriers, physical detail, market representation and much more [11]. Over the years, constantly updated review articles have classified the models and placed them in context. Thereby, it becomes apparent that integrated energy systems, which couple different energy and application sectors, are becoming more and more important, especially in open-source modeling [12]. In order to analyze the physical energy system, two main methods have been established in current research: optimization and simulation [13]. Optimization is suitable for planning a system configuration or an operational schedule that satisfies certain requirements. Objective functions with different weightings of the energy policy triangle consisting of economic efficiency, sustainability and reliability resp. resiliency are applied for this purpose. Methodologically, mixed-integer linear programming is often used here, or also dynamic programming [13]. Simulation aims to analyze time-dependent and mostly coupled processes and their interactions.
In the case of quasi-stationary simulation, balance equations are used in which at each point in time the system is considered steady state. If storage-related time-dependent processes, e.g. in the form of an oscillation or transient processes within or across sectors, are to be investigated, dynamic simulation, in which transient processes are furthermore represented via the numerical solution of differential equation systems, is to be chosen. Only this approach gives insight into interactions between interconnected and controlled components. For optimization and simulation, it is necessary to capture the power system in the form of a mathematical model. In the planning and analysis of future energy systems, it is necessary in a preliminary step to design a system description in the form of energy system scenarios. Scenarios represent possible development paths and form the basis for the creation of the models used for simulation and optimization. Especially in interdisciplinary collaboration, which has many touch points in the planning phase, a common understanding of the energy system is of high importance so that consistent and concurring models can be created. In this context, multi-modality as a concept of the modern integrated energy system and its analysis requires the formation of grid benchmark models that are parameterized together with their energy conversion plants, such as heat pumps, combined heat and power or electrolyzers, in order to be able to analyze their interactions during operation properly. In ERIGrid and ERIGrid 2.0, an EU-wide transnational program with lab access and knowledge sharing in the context of SIES, concepts for harmonizing energy system validation approaches based on modeling and simulation are developed [6]. Benchmark models for the electricity sector are established and well known. The SimBench project provides documented electricity grids and time series for load and consumption on all relevant voltage levels [14], although some parts of the methodology are not fully explained. The Cigre benchmark system is another well-established methodology and is described in [15]. For the heat sector, different models for DHNs exist, e.g. in the _pandapipes_ library, for the investigation of different research tasks, but no open benchmark model exists. This is accompanied by the lack of a benchmark system for the multi-modal resp. integrated energy system, which is a notable gap in the literature. Taking this up, the following section describes a methodology to create a test system for multi-modal grids, especially on distribution level.

## III Open Database Description

The overall goal of this paper is to provide a freely available basis for modeling of lower-level integrated energy systems. Although the explained models are designed for dynamic simulation, it is also possible to use the same database to build linear optimization models, which was successfully tested. The special feature here is that the methodology as a whole is freely available, especially the data basis and the models themselves, and large parts of the physical structure of the integrated energy system are covered. This is achieved by using the freely available TransiEnt-Library [16], which contains component models of electricity, gas and heat grids and their most important coupling elements using differential-algebraic equation systems. The library is implemented in the modeling language Modelica [17]. With the dynamic energy system modeling, a bottom-up approach is chosen to gain a high degree of detail and represent the physical behavior of all components. The following described methodology takes this requirement as a basis.

Fig. 1: Scope of this paper within the research cycle.

### _Scenario Generation_

The overall methodology for creating bottom-up energy system models on living quarter level is depicted in Fig. 2. To obtain an overarching system description and prosumer parameterization, information on future energy systems is gathered in the form of a collection of different scenarios at first. Main sources for this are the TYNDP studies by European grid operators [18], scientific studies [19] or results in the form of exploratory scenarios of the before-mentioned optimization models, e.g. REMod [20]. The qualitative scenario description was previously published in [3]. After this, representative grid topologies for the electricity grid are taken as a basis, since these are well established in research. The SimBench [14] benchmark model is chosen first, but open-access Cigre or IEEE benchmark models are suitable as well. With a defined grid topology, the building structure is designed. Assumptions from the scenarios together with ISO standards - or in the case of Germany, DIN standards - are used as a basis for the parameterization of variables relevant for the model. Important variables here are the demand for electrical energy, the ground area of the building, thermodynamic properties such as heat transfer coefficients and their area fractions, and the resulting Nominal Heat Load (NHL) for the reference ambient temperature. With the defined building structure, a DHN with its topology is parameterized. Since well-established benchmark models for DHNs are scarce, a separate methodology was built. The following steps describe the procedure roughly: First, from different cycle-free grid topologies one is selected. Currently, either a topology based on the power grid or a minimum spanning forest with the vertex spacing as weighting factor is available. Based on the NHL values of each vertex and a comparatively low nominal supply temperature for modern DHNs in the nominal design case, the maximum required mass flow per vertex is calculated, which specifies the edge flows and thus the nominal width of the pipes at a defined maximum flow velocity (a minimal sketch of this sizing step is given below). All remaining components, especially Battery Electric Storage (BES), Electric Heat Pump (EHP), Battery Electric Vehicle (BEV), Photovoltaic Plant (PV) and properties for sensors like the Smart Meter (SM) components, are designed afterwards. Fig. 3 gives an overview of the existing modules in the so-called Scenario Generator, which constructs the scenario based on the qualitative scenario definition and different modules for the components of the modeled energy system. Technically, every aspect is implemented in individual Python scripts, connected in an overall Scenario Generator script [10]. Scenario studies are possible this way by analyzing the effect of input data in the scenario description.

### _Database description_

During the design phase mentioned before, all processed data is written into a database, called CyEntEE Database (CDB). In this case, an SQL database is used as back end, but any other back end like other database engines or file-based storages like JSON or XML files can be implemented. An advantage of an SQL database is that it can be used in a co-simulation environment in which modeling and simulation do not take place on one single computer only, but concurrently on multiple simulation platforms that need the same system information.
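To illustrate the DHN sizing step described in the Scenario Generation subsection above (nominal heat load to design mass flow to nominal pipe width at a maximum flow velocity), a minimal calculation is sketched below. The temperatures, water properties and velocity limit are illustrative assumptions, not the values used in the published Scenario Generator.

```python
import math

# Assumed design parameters (illustrative only, not the project's values).
T_SUPPLY_C = 70.0      # "comparatively low" supply temperature of a modern DHN
T_RETURN_C = 40.0      # assumed return temperature
CP_WATER = 4186.0      # J/(kg K)
RHO_WATER = 977.0      # kg/m^3, roughly water density at ~70 degC
V_MAX = 1.5            # m/s, assumed maximum flow velocity

def design_mass_flow(nominal_heat_load_w: float) -> float:
    """Mass flow (kg/s) needed to deliver the NHL at the design temperature spread."""
    delta_t = T_SUPPLY_C - T_RETURN_C
    return nominal_heat_load_w / (CP_WATER * delta_t)

def nominal_pipe_diameter(mass_flow_kg_s: float) -> float:
    """Inner diameter (m) such that the flow velocity stays below V_MAX."""
    volume_flow = mass_flow_kg_s / RHO_WATER   # m^3/s
    area = volume_flow / V_MAX                 # m^2
    return math.sqrt(4.0 * area / math.pi)

if __name__ == "__main__":
    # Example: a vertex aggregating 50 kW of nominal heat load.
    m_dot = design_mass_flow(50_000.0)
    print(f"mass flow: {m_dot:.3f} kg/s, "
          f"pipe diameter: {1000 * nominal_pipe_diameter(m_dot):.1f} mm")
```

For a vertex aggregating 50 kW this yields roughly 0.4 kg/s and an inner diameter of about 19 mm; in practice the result would presumably be rounded up to a standard nominal pipe width.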
The modeling database described was created as part of a project in a way that, in addition to modeling the physical energy system, also considers aspects of ICT and decentralized markets [3] in the form of a SIES. In order to simulate the interactions between these different modeling entities, a co-simulation platform, which is part of current investigations, is being developed. All components of the platform require data about the system configuration at simulation time. For this case, a relational database management system implementing SQL2 was used as a back end to allow simultaneous read and write operations across multiple workstations and to always provide fast data access. Footnote 2: In this case, _MariaDB_ is used as the open-source database management system. In the case of the living quarter described in this context, the Entity Relationship Model (ERM) is shown in Fig. 4. The tabular structure is shown exemplarily in Fig. 5.

Fig. 3: Scenario Generator: Overview of the modules contained in the repository for database and model creation.

Fig. 2: Methodology to create integrated test models. The scenario description is used to gain qualitative and – where accessible – quantitative information about the energy grid, which is quantified with all relevant components afterwards.

### _Model Generation_

As stated before, the database makes it possible to export and combine the scenario information in various different modeling environments, such as _pandapower_, _PyPSA_, _Matlab_ and more. The simulation modeling approach with the highest level of detail supported is dynamic modeling, which is chosen in this paper. Linear optimization models are also successfully implemented using the CDB. These models, in contrast to quasi-stationary or steady-state models, capture the time dependence of the state variables and their derivatives within time-based simulations. The CDB is prepared for the generation of those models. In order to make the conversion of the database contents into dynamic models particularly easy, a Python module was implemented within the Scenario Generator, see Fig. 3. This module exports and combines the scenario information within the CDB into a model in the modeling language Modelica. Modelica is a language for modeling of cyber-physical systems, which supports the acausal connection of components described by differential and algebraic equations [17]. This module includes the logic to extract the realistic load profiles, study-based driving profiles, associated weather data as well as the topology including the detailed household specification from the database. The extracted information is then linked to existing and parameterizable Modelica models from the _CyEntEE Models_ package, which is planned to be released with this publication within the TransiEnt-Library [16]. The most significant model in this package is the _Prosumer_ model, which represents future households with the capability to produce as well as consume power at different points in time. The _Prosumer_ is described in more detail in [21]. In its full configuration, this model consists of six main components: a PV, a BES, one or more BEVs, an EHP, an inflexible load and an SM. Additionally, the module offers the choice of which of the elements BEV and BES on household level are externally controllable. In the model itself this is realized by means of _RealInputs_ from the Modelica Standard Library, which offer the external specification of target values while simulating.
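To give a flavour of what the export module in the Model Generation subsection does, the sketch below reads one household record and renders it as a parameterized Modelica component declaration. It is only a schematic stand-in: the real CDB table layout, column names, Modelica model path and parameter names are not specified in this paper, so every identifier below is an assumption.

```python
import sqlite3

# Build a tiny stand-in for the CDB household table (all names are assumptions;
# the real back end is a MariaDB/SQL database with the ERM shown in Fig. 4).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE household (
    id INTEGER PRIMARY KEY, node_id INTEGER,
    pv_kwp REAL, battery_kwh REAL, heat_pump_kw REAL, annual_demand_kwh REAL)""")
db.execute("INSERT INTO household VALUES (1, 4, 7.5, 10.0, 6.0, 4200.0)")

# Hypothetical Modelica instantiation template; model path and parameter
# names do not correspond to the actual CyEntEE Models package.
TEMPLATE = (
    "  CyEntEE.Prosumer prosumer{id}(\n"
    "    P_PV_peak={pv_kwp},\n"
    "    E_battery={battery_kwh},\n"
    "    P_heatPump={heat_pump_kw},\n"
    "    E_demand_annual={annual_demand_kwh});\n"
)

def render_prosumer(row) -> str:
    """Turn one household record into Modelica component declaration text."""
    keys = ("id", "node_id", "pv_kwp", "battery_kwh", "heat_pump_kw",
            "annual_demand_kwh")
    return TEMPLATE.format(**dict(zip(keys, row)))

for record in db.execute("SELECT * FROM household"):
    print(render_prosumer(record))
```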
Furthermore, one can choose to enable the SM model's full potential, meaning normally distributed Gaussian noise with no systematic deviation, known mean and known standard deviation, which is applied to the measurement outputs of the SM model. Fig. 6 and Fig. 7 show the Prosumer model's composition as well as the example parameter set for the configuration of household heating systems. The individually configured Prosumers are then integrated into grid structures, as mentioned in section III. The graphical representation of a simple 13-Prosumer low voltage grid in Dymola, which is a commercial modeling and simulation environment based on Modelica, is given in Fig. 8. The grid model is based on the topology, meaning cable parameters, buses, household positions and transformer, given by the SimBench dataset. The Prosumer configuration is obtained from the scenario generation, see section III-A, and is therefore, based on the stochastic realization of the model, different for each Prosumer.

### _Model preparation for Co-Simulation_

In line with the goal of the database to obtain common system descriptions usable for different kinds of research studies, the preparation of dynamic models for co-simulation purposes was investigated and implemented. This is achieved by means of the Functional Mock-up Interface (FMI) Standard [22]. "The Functional Mock-up Interface is a free standard that defines a container and an interface to exchange dynamic simulation models using a combination of XML files, binaries and C code, distributed as a ZIP file." [23]. The transformation from the dynamic model within Modelica to the Functional Mock-up Unit (FMU) is again achieved by a Python module within the Scenario Generator, see Fig. 3.

Fig. 4: Entity Relationship Model for the table structure describing one living quarter. Each household, having a load profile, is connected by one line and by zero or one pipe to a node in the grid model, which itself is connected to a cell associated with a grid layer.

Fig. 5: Exemplary entity relationship model for the relationship between the household, node and line table. Details about the table structure are given in the code [10].

Fig. 6: Structure of the Prosumer model from the _CyEntEE Models_ package.

Fig. 7: Example parameter configuration for the Prosumer model's household heating system from the _CyEntEE Models_ package.

## IV Use Case

To show the potential given by the database, a use case model was generated within Modelica in the form of a rural medium voltage ring based on the SimBench _MV-rural-2-no-switches_ with detailed rural low voltage sub-grids, generated from the SimBench _LV-rural-1_ and _LV-rural-2_. Fig. 9 shows the schematic representation of this model, where the green grid represents the medium voltage ring with its lines and buses. The dark blue points and lines represent the buses within the underlying low voltage grids. They are connected by the grey lines, which represent a transformer with no spatial expansion. One bus, especially in the larger low voltage grid on the far left side, can connect more than one Prosumer to the grid. The single red point represents the interface, in the form of a 630 MVA transformer to the high voltage grid, which is modeled as a boundary condition in this simulation. The grids' specifications, meaning the share of BEV, PV and other components, are based on the _Distributed Energy Scenario_ from [3]. The weather data is taken from the Deutsche Wetterdienst (DWD) for Hamelin, a city in Lower Saxony in Germany, from the year 2020.
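The 48-hour evaluation reported below could equally be driven through the exported FMUs. As an illustration of how such a run might be launched and post-processed, the following sketch uses the FMPy package (one possible FMI simulation tool; the paper does not prescribe a specific one) and flags violations of an assumed ±10 % voltage band; the FMU file name and the monitored variable name are likewise assumptions.

```python
import numpy as np
from fmpy import simulate_fmu

# Simulate the exported grid model for 48 hours (times in seconds).
result = simulate_fmu(
    "rural_mv_ring.fmu",       # assumed file name of the exported FMU
    stop_time=48 * 3600.0,
    output=["bus07.v_pu"],     # assumed name of a monitored bus voltage
    output_interval=60.0,
)

time = result["time"]
v_pu = result["bus07.v_pu"]

# Flag samples outside an assumed +/-10 % voltage band.
violations = (v_pu < 0.9) | (v_pu > 1.1)
print(f"{violations.mean():.1%} of samples outside the 0.9-1.1 p.u. band")
if violations.any():
    print(f"first violation at t = {time[violations][0] / 3600.0:.2f} h")
```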
The load and electric vehicle driving profiles used are generated and selected based on studies. When investigating the situation in this grid, by means of the bus voltages given in Fig. 10, the line loadings given in Fig. 11 and the transformer loading given in Fig. 12, within a 48-hour simulation based on the historical weather from April 2020, it can be seen that even in a scenario with an air temperature of about 20\({}^{\circ}\)C and with high PV generation, future medium and low voltage grids will likely face congestion situations on a daily basis. Especially in the evening hours, between 5 and 10 pm, recurring load peaks can be observed due to the high share of BEVs.

Fig. 8: 13-Household rural low voltage grid with topology based on SimBench _LV-rural1-0-no_sw_.

Fig. 9: Rural medium voltage ring based on the SimBench _MV-rural-2-no-switches_ with detailed rural low voltage sub cells, generated from the SimBench _LV-rural-1_ and _LV-rural-2_. Blue points are prosumers at low voltage level.

Fig. 10: Voltages in per unit at low and medium voltage buses in the rural medium voltage ring scenario.

## V Conclusion and Outlook

In modern energy systems, different energy grids are coupled since renewable energy arises mostly as electricity, and the other energy grids (heat, gas and transportation) have to be supplied by the electricity sector, leading to the concept of integrated resp. multi-modal energy systems. The purpose of this paper is to describe a database and developed models to analyze the operation of an integrated power system at living quarter level. Since research is increasingly moving towards making data and models available, all developed algorithms and modeling tools are made open source. Publicly available data is used to create a qualitative scenario description of the physical energy system, which is then converted to a quantitative scenario description in different modules. The grid module parameterizes the electricity and heat grids, and the prosumer module parameterizes the relevant building energy components BES, EHP, BEV and PV, their thermodynamics and measuring devices. Finally, the model generation module transfers the database into a mathematical model using the Modelica language and the TransiEnt-Library. In a use case serving as an exemplary scenario, the congestion situation in a low- and medium-voltage grid is analyzed. Due to the modular structure and high level of detail of the model description, users can build different scenarios and, for example, investigate the penetration of renewable energies, interactions of different controller structures or new operating strategies. In the future, the aim is to further expand the described scenario generator and to specify and validate the created models within the framework of the ERIGrid 2 project.
2305.05409
Polynomial convexity of compacts that lies in certain Levi-flat hypersurfaces in $\mathbb{C}^2$
In this paper, we first prove that the totally real discs lying in certain Levi flat hypersurfaces are polynomially convex. As applications we prove that the totally real discs lying in the boundary of certain polynomial polyhedra are polynomially convex. We also provide an if and only if condition for polynomial convexity of totally real discs lying in the boundary of Hartog's triangle. We also provide sufficient conditions on general compact subsets lying on those hypersurfaces for polynomial convexity.
Sushil Gorai, Golam Mostafa Mondal
2023-05-09T12:56:49Z
http://arxiv.org/abs/2305.05409v1
# Polynomial convexity of compacts that lies in certain Levi-flat hypersurfaces in \(\mathbb{C}^{2}\) ###### Abstract. In this paper, we first prove that the totally real discs lying in certain Levi flat hypersurfaces are polynomially convex. As applications we prove that the totally real discs lying in the boundary of certain polynomial polyhedra are polynomially convex. We also provide an if and only if condition for polynomial convexity of totally real discs lying in the boundary of Hartog's triangle. We also provide sufficient conditions on general compact subsets lying on those hypersurfaces for polynomial convexity. Key words and phrases:Polynomial convexity; totally real disc, plurisubharmonic functions 2010 Mathematics Subject Classification: Primary: 32E20 ## 1. Introduction Let \(K\) be a compact subset of the complex Euclidean space \(\mathbb{C}^{n}.\) Let \(\mathcal{C}(K)\) be the space of continuous complex valued functions on \(K\) and \(\mathcal{P}(K)\) be the space of all uniform limits of polynomials on \(K\). The polynomial convex hull of a compact set \(K\) is denoted by \(\widehat{K}\) and define by \(\widehat{K}:=\{z\in\mathbb{C}^{n}:|p(z)|\leq\sup_{K}|p|,\forall_{p}\in\mathbb{ C}[z_{1},\cdots,z_{n}]\}.\)\(K\) is said to be polynomially convex if \(\widehat{K}=K\). We say a compact \(K\subset\mathbb{C}^{n}\) is rationally convex set if \(K=\{z\in\mathbb{C}^{n}:|f(z)|\leq\sup_{K}|f|,\forall f\in Rat(K)\},\) where \(Rat(K)\) is the collection of all rational functions in \(\mathbb{C}^{n}\) with poles outside \(K\). One of the fundamental question in the theory of uniform algebras is to characterize compact subset \(K\) of \(\mathbb{C}^{n}\) for which \[\mathcal{P}(K)=\mathcal{C}(K). \tag{1.1}\] Note that \(\mathcal{P}(K)\) and \(\mathcal{C}(K)\) both are commutative Banach algebra and if \(\mathcal{P}(K)=\mathcal{C}(K)\) then their maximal ideal spaces are also same. From the theory of uniform algebra, we know that the maximal ideal space of \(\mathcal{C}(K)\) can be identified with \(K,\) and that of \(\mathcal{P}(K)\) can be identified with \(\hat{K}.\) From the above discussion we can see that polynomial convexity arises naturally in the study of uniform approximation. Lavrentiev [10] showed that for \(K\subset\mathbb{C}\), \(\mathcal{P}(K)=\mathcal{C}(K)\) if and only if \(\widehat{K}=K\) and \(int(K)=\emptyset.\) No such characterization is known in the higher dimension, and it is generally challenging to decide which compacts of \(\mathbb{C}^{n}\) satisfy (1.1). So it is natural to consider the problem for particular classes of compact sets. In this paper, we consider compact subsets of totally real submanifolds of \(\mathbb{C}^{2}.\) Recall that a \(C^{1}\)-smooth submanifold \(M\) of \(\mathbb{C}^{n}\) is said to be _totally real_ at \(p\in M\) if \(T_{p}M\cap iT_{p}M=\{0\},\) where the tangent space \(T_{p}M\) is viewed as a real linear subspace of \(\mathbb{C}^{n}.\) The manifold \(M\) is said to be totally real if it is totally real at all points of \(M\). Totally real submanifolds play an important role because of the following reasons. 1. Such manifolds are locally polynomially convex (see [11]). 2. Let \(K\) be a compact subset of a totally real manifold \(M.\) Then any continuous function on \(K\) can be uniformly approximated by a holomorphic (in a neighborhood of \(K\)) function on \(K.\) If \(K\) is polynomially convex, then the holomorphic function can be replaced by the polynomial (see [8]). Hence, (1.1) holds in this case. 
For polynomially convex set \(K\), there are several papers, for instance see [2, 3, 14, 18, 20], that describe situations when (1.1) holds. A closed _totally real disc_ is a compact subset of a \(\mathcal{C}^{1}\) totally real submanifold, diffeomorphic to the closed planar disc. A totally real disc in \(\mathbb{C}^{2}\) may not be polynomially convex as the following example shows. **Example 1.1** (Wermer [8]).: Let \(M:=\{(z,f(z))\in\mathbb{C}^{2}:z\in\mathbb{C}\},\) where \[f(z)=-(1+i)\bar{z}+iz\bar{z}^{2}+z^{2}\bar{z}^{3}.\] Then \(M\) is totally-real. Let \(K:=\{(z,f(z))\in\mathbb{C}^{2}:z\in\overline{\mathbb{D}}\}\subset M.\) Then \(K\) is not polynomially convex. Studies of polynomial convexity of totally-real discs which are of graph form is done by Wermer [20], O'Farrell and Preskenis [12, 13] and Duval [5]. Duval's theorem is particularly interesting. We mention it here: **Result 1.2** (Duval).: _Let \(M=\{(z,f(z))\in\mathbb{C}^{2}:z\in\overline{\mathbb{D}}\}\), where \(f\) is \(\mathcal{C}^{1}\)-smooth in a neighbourhood of \(\mathbb{D}\). Assume \(\left|\frac{\partial f}{\partial\overline{z}}(a)\right|>\left|\frac{ \partial f}{\partial z}(a)\right|\) for all \(a\in\overline{\mathbb{D}}\). Then \(M\) is polynomially convex._ The study of totally real discs, which are not, in general, of graph form, appeared in the context of removable singularities of CR-functions. A deep result in this direction is due to Joricke [9]. **Result 1.3** (Joricke).: _Any \(\mathcal{C}^{2}\)-smooth totally real disc in the boundary of the unit ball in \(\mathbb{C}^{2}\) is removable._ Combining a result due to Lupacciolu and Stout (see [17]) it is evident that any \(\mathcal{C}^{2}\)-smooth totally real disc in \(\partial\mathbb{B}^{2}\) is polynomial convex. Stout mentioned in [17] that a similar argument will work for the \(\mathcal{C}^{2}\)-smooth totally real discs that lie in the boundary of a strictly pseudoconvex domain \(\Omega\) such that \(\overline{\Omega}\) is polynomially convex. In [1] Alexander proved the following result: **Result 1.4** (Alexander).: _Any \(\mathcal{C}^{2}\)-smooth totally real disc contained in \(\{(z_{1},z_{2})\in\mathbb{C}^{2}:|z_{1}|=1\}\) is polynomially convex_ After the works [9] and [1] were done, Joricke, in the problem book [7], asked the following question: **Question 1.5**.: _Which closed, totally real discs in \(\mathbb{C}^{2}\) are polynomially convex?_ The hypersurface that Alexander considered in [1] is Levi flat. This motivates us to look at some more examples of Levi-flat hypersurfaces and totally real discs lying in them. Recall that A hypersurface is said to be Levi flat if its Levi form vanishes identically. We also give partial answers to Joricke's question in some cases. Our first result demonstrates a class of totally real discs in \(\mathbb{C}^{2},\) lying in certain nonsingular Levi-flat hypersurfaces, that are polynomially convex. These generalizes Alexander's result. More precisely, we present: **Theorem 1.6**.: _Let \(\Omega\) be a Runge domain in \(\mathbb{C}^{2}\) and \(h\) be a holomorphic function on \(\Omega.\) Let \(M:=\{z\in\Omega:|h(z)|=1\}\) with \(dh(z)\neq 0\) for all \(z\in M.\) Then every totally real disc in \(M\) is polynomially convex._ We now mention a corollary which is somewhat interesting. 
**Corollary 1.7**.: _Let \(\Omega\) be a Runge domain in \(\mathbb{C}^{2}\) and \(g\) be a holomorphic function on \(\Omega.\) Let \(M:=\{z\in\Omega:\mathsf{Reg}(z)=0\}\) with \(dg(z)\neq 0\) for all \(z\in M.\) Then every totally real \(\mathcal{C}^{2}\)-smooth smooth disc in \(M\) is polynomially convex._ Here we give another class of domains in \(\mathbb{C}^{2},\) which lies outside the class of strictly pseudoconvex domains for which the totally real discs are polynomially convex. If \(p_{1},\cdots,p_{l}\) are holomorphic polynomials in \(\mathbb{C}^{n},\) then \[\mathfrak{D}_{l}:=\{z\in\Omega:|p_{1}(z)|<1,\cdots,|p_{l}(z)|<1\},l\in\mathbb{ N},\] is known as a polynomial polyhedron in \(\mathbb{C}^{2}\). Let us define * \(\Sigma_{j}:=\{z\in\mathbb{C}^{n}:|p_{j}(z)|=1\};\) * \(\Sigma_{J_{k}}:=\Sigma_{j_{1}}\cap\cdots\cap\Sigma_{j_{k}},\) where \(J_{k}=\{j_{1},\cdots,j_{k}\}.\) Clearly, the topological boundary \(\partial\mathfrak{D}_{l}\) of \(\mathfrak{D}_{l}\) is contained in \(\cup_{j}\Sigma_{j}.\) The polynomial polyhedron \(\mathfrak{D}_{l}\) is said to be _complex non-degenerate_ if for any increasing collection \(j_{1}<j_{2}<\cdots<j_{k},k\leq n,\) \[dp_{j_{1}}\wedge\cdots\wedge dp_{j_{k}}(z)\neq 0\ \forall z\in\Sigma_{J_{k}}.\] **Corollary 1.8**.: _Any totally real disc that lies in the non-singular part of the boundary of a complex non-degenerate polynomial polyhedron in \(\mathbb{C}^{2}\) is polynomially convex._ The following is also a simple corollary of Theorem 1.6. It deals with compact subsets that lies in the zero set of pluriharmonic functions. This needs simple connectedness of the domain as an assumption. **Corollary 1.9**.: _Let \(\Omega\) be a simply connected Runge domain in \(\mathbb{C}^{2}\) and \(h\) be a real valued pluriharmonic function on \(\Omega.\) Let \(M:=\{z\in\Omega:h(z)=0\}\) with \(dh(z)\neq 0\) for all \(z\in M.\) Then every totally real \(\mathcal{C}^{2}\)-smooth disc in \(M\) is polynomially convex._ Another situation which quite different from the above is as follows: **Theorem 1.10**.: _If \(h:\mathbb{C}\to\mathbb{R}\) be a smooth map and \(E_{h}:=\{x+iy\in\mathbb{C}:h(x,y)=0\}.\) If \(dh\neq 0\) on \(E_{h},\) then every \(\mathcal{C}^{2}\)-smooth totally real disc in \(M:=E_{h}\times\mathbb{C}\) is polynomially convex._ In Corollary 1.7 if we take \(\Omega=\mathbb{C}^{2}\) and \(g(z,w)=iz\) then we get: **Corollary 1.11**.: _Let \(M=\mathbb{R}\times\mathbb{C}\). Then every totally real \(\mathcal{C}^{2}\)-smooth disc in \(M\) is polynomially convex._ Corollary 1.11 also follows from Corollary 1.9 and Theorem 1.10 independently. In the second part of the paper, we will consider general compact subsets that lie inside certain Levi-flat hypersurfaces. The compacts now may not be totally real. We need some definition to state our results in this part. The following definitions can be found in Stolzenberg [16]. **Definition 1.12**.: Let \(K\) be a compact subset of \(\mathbb{C}^{n}.\) If \(f:K\to\mathbb{C}\setminus\{0\}\) is of the form \(f=\exp(\psi)\) for some map \(\psi:K\to\mathbb{C}\) we say that \(\ln(f)\) is defined and \(\psi\) is a branch of \(\ln(f).\) **Definition 1.13**.: Let \(K\) be a compact subset of \(\mathbb{C}^{n}.\)\(K\) is said to be _simply-coconnected_ if, for every map \(f:K\to\mathbb{C}\setminus\{0\},\)\(\ln(f)\) is defined. **Remark 1.14** (Stolzenberg).: \(K\) is simply-coconnected if and only if \(\check{H}^{1}(K;\mathbb{Z})=0.\) Hence, any contractible set is simply coconnected. 
**Definition 1.15**.: Let \(V\) be an analytic variety in \(\mathbb{C}^{n}.\)\(V\) is said to be a _Runge variety_ if for every \(K\subset V,\)\(\widehat{K}\subset V.\) **Definition 1.16**.: \(K\) is said to be _polynomially convex in dimension one_ if, for every one-dimensional Runge variety \(V\) such that \(K\cap V\) is compact, \(K\cap V\) is polynomially convex. We are now in a position to present our first theorem in this part. **Theorem 1.17**.: _Let \(P\) be a holomorphic polynomial in \(\mathbb{C}^{2}.\) Let \(M:=\{(z_{1},z_{2})\in\mathbb{C}^{2}:|P(z_{1},z_{2})|=1\}\) with \(dP(z)\neq 0\) for all \(z\in M,\) and \(K\subset M\) be compact, simply-coconnected. If \(P^{-1}\{c\}\cap K\) is polynomially convex for all \(c\in\partial\mathbb{D},\) then \(K\) is polynomially convex._ We also present a result analogous to Theorem 1.10 for general compact sets. **Theorem 1.18**.: _Let \(h:\mathbb{C}\to\mathbb{R}\) be a smooth map and \(E_{h}:=\{x+iy\in\mathbb{C}:h(x,y)=0\}\) with \(dh\neq 0\) on \(E_{h}.\) Let \(K\) be a connected simply-coconnected compact subset of \(M:=E_{h}\times\mathbb{C}\) such that each fiber \(K_{c}=\{w\in\mathbb{C}:(c,w)\in K\}\) is polynomially convex. Then \(K\) is polynomially convex._ We now turn our attention to singular Levi-flat hypersurfaces. A singular hypersurface is said to be Levi flat if its regular part is Levi flat. We now provide an example of a non-polynomially convex compact subset which totally real except the singular point of the hypersurface. It shows that the singularities in the Levi-flat hypersurface have a role in determining polynomial convexity. **Example 1.19**.: Let us consider \(M:=\{(z,w)\in\mathbb{C}^{2}:|z|=|w|\},\)\(K:=\{(z,w)\in\mathbb{C}^{2}:|z|=\mathsf{R}\mathsf{e}w,0\leq\mathsf{R} \mathsf{e}w\leq 1,\mathsf{Im}w=0\}\subset M.\) Then \(K\) is not polynomially convex because \(\{(z,w)\in\mathbb{C}^{2}:|z|\leq 1,\mathsf{R}\mathsf{e}w=1,\mathsf{Im}w=0\}\subset \widehat{K}.\) Note that \(K\) is contractible. Let \(\Omega\) be a Runge domain in \(\mathbb{C}^{2}\) and \(h:\Omega\to\mathbb{C}\) be a holomorphic function and \(M:=\{z\in\Omega:|h(z)|=1\}.\) Let \(M^{*}:=\{z\in G:|h(z)|=1\}\) be the regular part of \(M,\) where \(G:=\Omega\setminus\{z\in\Omega:|h(z)|=1,dh(z)=0\}.\) By \(M_{sing},\) we denote the singular part of \(M.\) **Theorem 1.20**.: _Let \(K\) be a totally real disc in \(M^{*}.\) Then \(K\) is polynomially convex if and only if \(\widehat{K}\cap M_{sing}=\emptyset.\)_ As an application of Theorem 1.20 we provide a necessary and sufficient condition for totally real discs in a particular type of Levi-flat hypersurface of the form \(\{(z_{1},z_{2})\in\mathbb{C}^{2}:\mathsf{Re}(z_{1}^{m}+z_{2}^{n})=0\},\)\(m,n\geq 2,\) to be polynomially convex. For \(n=2,m=2\) it is one of the normal form of the Levi-flat quadrics in the list given in [4]. **Corollary 1.21**.: _Every totally real disc \(K\) in the singular Levi-flat hypersurface \(M:=\{(z_{1},z_{2})\in\mathbb{C}^{2}:\mathsf{Re}(z_{1}^{m}+z_{2}^{n})=0\}\) is polynomially convex if and only if \((0,0)\notin\widehat{K}.\)_ Next we come back to the Levi-flat hypersurface \(\{(z,w)\in\mathbb{C}^{2}:|z|=|w|\}\) and provide an if and only if condition for a totally real disc lying there to be polynomially convex. This hypersurface also provides another normal form of Levi-flat quadrics in \(\mathbb{C}^{2}\) in the list of [4]. 
**Theorem 1.22**.: _Every totally real disc \(K\) in \(M:=\{(z,w)\in\mathbb{C}^{2}:|z|=|w|\}\setminus\{(0,0)\}\) is polynomially convex if and only if \(\widehat{K}\cap\{zw=0\}=\emptyset.\)_ We now state a corollary in the setting of the boundary of Hartogs triangle. **Corollary 1.23**.: _Every totally real disc \(K\) in the nonsingular part of the boundary of the Hartogs triangle \(\{(z_{1},z_{2})\in\mathbb{C}^{2}:|z_{1}|<|z_{2}|<1\}\) is polynomially convex if and only if \((0,0)\notin\widehat{K}.\)_ A proof of Corollary 1.23 follows from Result 1.4 and Theorem 1.22. ## 2. Technical Results In this section we first collect some results from the literature those will be used in our proofs. The following result can be found in [16, Corollary 27]. **Result 2.1** (Stolzenberg).: _Every rationally convex simply-coconnected set is polynomially convex in dimension one._ We also make use of the following result [16, Page 269, assertion (12)]. **Result 2.2** (Stolzenberg).: _Let \(K\) be a compact subset of the purely one-dimensional analytic variety \(V\) in \(\mathbb{C}^{n}\) such that \(\widehat{K}\subset V.\) Then \(K\) is rationally convex._ The following result is also from Stolzenberg [16]. **Result 2.3** (Stolzenberg).: _Let \(X\) be a compact subset of \(\mathbb{C}^{n}.\) If \(X\) is simply-coconnected and there is a function, \(f\), holomorphic in a neighborhood of \(\widehat{X}\) such that \(f(X)\cap f(\widehat{X}\setminus X)=\emptyset,\) then \(X\) is polynomially convex._ The following is due to Stolzenberg (see [6, Lemma 2.3]). **Result 2.4** (Stolzenberg).: _Let \(E\) be a compact subset of \(\mathbb{C}^{n}.\) Assume that \(\mathcal{P}(E)\) contains a function \(\phi\) such that \(\phi(E)\) has empty interior and \(\mathbb{C}\setminus\phi(E)\) is connected. Then \(E\) is polynomially convex if and only if \(\phi^{-1}(c)\cap E\) is polynomially convex for each \(c\in\phi(E).\)_ The following result from Stout's book will also be useful in our proofs. **Result 2.5**.: _([19, Lemma 1.6.18]) Let \(X\) be a compact, polynomially convex subset of \(\mathbb{C}.\) Then every point of \(\partial X\)(boundary of \(X\)) is a peak point for the algebra \(\mathcal{P}(X).\)_ The following result is due to Samuelsson and Wold [15, Proposition 4.7]. We will use this result crucially in our proofs. **Result 2.6** (Samuelsson-Wold).: _Let \(X\) be a compact subset of \(\mathbb{C}^{n}\) and \(F:\mathbb{C}^{n}\rightarrow\mathbb{C}^{m}\) be the uniform limit on \(X\) of entire functions. Let \(K=F(X).\) If \(\alpha\in K\) is a peak point for the algebra \(\mathcal{P}(K),\) then_ \[F^{-1}\{\alpha\}\cap\widehat{X}=F^{-1}\widehat{\{\alpha\}}\cap X.\] Next we mention couple of standard result from the theory of ordinary differential equations. Consider the system of differential equation \[\begin{cases}\frac{dx}{dt}&=F(x,y)\\ \frac{dy}{dt}&=G(x,y).\end{cases} \tag{2.1}\] **Result 2.7** (Poincare).: _Every closed path of the system (2.1) necessarily surrounds at least one critical point._ **Result 2.8** (Poincare-Bendixson).: _Let \(\Omega\) be a bounded region of the phase plane together its boundary, and assume that \(\Omega\) does not contain any critical points of the system (2.1). 
If \(V:=(x(t),y(t))\) is a path of (2.1) that lies in \(\Omega\) for all \(t\geq t_{0},\) then \(V\) is either itself a closed path or it spirals toward a closed path as \(t\rightarrow\infty.\) Thus in either case the system (2.1) has closed path in \(\Omega.\)_ Next we state and prove a few lemmas that will be used in our proofs. The first one is easy but useful in our context. **Lemma 2.9**.: _Let \(\Omega\) be a Runge domain in \(\mathbb{C}^{n}\) and \(f\) be a holomorphic function on \(\Omega.\) Then every level set of \(f\) is a Runge variety._ Proof.: Let \(Z_{c}:=\{z\in\Omega:f(z)=c\},\) where \(c\in\partial\mathbb{D}.\) Let \(K\) be any compact subset of \(Z_{c}.\) We claim that the polynomial convex hull \(\widehat{K}\) of \(K\) is also subset of \(Z_{c}.\) Since, \(\Omega\) is Runge, \(\widehat{K}\Subset\Omega.\) Suppose that \(\alpha\in\Omega\setminus Z_{c}.\) Then, \(f(\alpha)\neq c.\) Define \(g(z):=f(z)-c.\) Therefore, \[|g(\alpha)|>0=\sup_{K}|g|.\] Since \(\Omega\) is Runge, \(g\) can be approximated by holomorphic polynomials on \(K.\) Hence \(\alpha\notin\widehat{K}\) i.e. \(\widehat{K}\subset Z.\) Let \(M\) be a three-dimensional smooth submanifold of \(\mathbb{C}^{2}\) and \(N\) be a two-dimensional totally real submanifold of \(M\) with (possibly empty) boundary. A field of lines on \(N\) is defined as follows: for every \(p\in N,\) \[L_{p}=T_{p}N\cap T_{p}^{\mathbb{C}}M.\] Here, \(T_{p}^{\mathbb{C}}M\) is the unique complex line through \(p\) that is tangent to \(M\) (where \(T_{p}^{\mathbb{C}}M=T_{p}M\cap iT_{p}M\)). The curves that are always tangent to the lines \(L_{p}\) are the leaves of the foliation of \(N\) that is defined by this line field. In other words, a curve \(\gamma:(a,b)\to M\) is included in a leaf of the foliation if and only if the derivative \(\gamma^{\prime}(t)\) falls on the line \(L_{\gamma(t)}\) for every \(t\in(a,b).\) This gives a foliation of class \(\mathcal{C}^{1}\) of \(N,\) also known as the _characteristic foliation_ of \(N\) (see Stout [19, Page 258] for details). 
**Lemma 2.10**.: _Let \(\Omega\subset\mathbb{C}^{2}\) be a domain and \(\rho:\Omega\to\mathbb{R}\) is \(\mathcal{C}^{2}\)-smooth and \(d\rho\neq 0\) on \(M:=\{z\in\Omega:\rho(z)=0\}.\) Let \(K\) be a totally real \(\mathcal{C}^{2}\)-smooth disc in the 3-dimensional manifold \(M.\) Then there exists a characteristic foliation on \(K.\)_ Proof.: Since \(K\) is a totally real disc in \(M,\) there exists a neighborhood \(U\) of \(\overline{\mathbb{D}}\) and a totally real smooth submanifold \(N\) of \(\mathbb{C}^{2}\) and a \(C^{2}\)-smooth diffeomorphism \(\phi:U\to N\) such that \(\phi(\overline{\mathbb{D}})=K.\) By shrinking \(U\) (if necessary), we can assume \(U\) is also contractible and we take \(\triangle:=\phi(U).\) Therefore, \(\triangle\) is also a \(\mathcal{C}^{2}\)-smooth totally real disc in \(M\) which contains \(K.\) We take \[\phi(t,s)=(\phi_{1}(t,s),\phi_{2}(t,s),\phi_{3}(t,s),\phi_{4}(t,s)).\] Therefore, \[d\phi|_{(t,s)}=\begin{pmatrix}\frac{\partial\phi_{1}}{\partial t}(t,s)&\frac{ \partial\phi_{1}}{\partial s}(t,s)\\ \frac{\partial\phi_{2}}{\partial t}(t,s)&\frac{\partial\phi_{2}}{\partial s}( t,s)\\ \frac{\partial\phi_{3}}{\partial t}(t,s)&\frac{\partial\phi_{3}}{\partial s}(t,s)\\ \frac{\partial\phi_{4}}{\partial t}(t,s)&\frac{\partial\phi_{4}}{\partial s}( t,s)\end{pmatrix}_{4\times 2}=\begin{pmatrix}A(t,s)&B(t,s)\end{pmatrix},\] where \[A(t,s)=\begin{pmatrix}\frac{\partial\phi_{1}}{\partial t}(t,s)\\ \frac{\partial\phi_{2}}{\partial t}(t,s)\\ \frac{\partial\phi_{3}}{\partial t}(t,s)\\ \frac{\partial\phi_{4}}{\partial t}(t,s)\end{pmatrix}\text{ and }B(t,s)= \begin{pmatrix}\frac{\partial\phi_{1}}{\partial s}(t,s)\\ \frac{\partial\phi_{2}}{\partial s}(t,s)\\ \frac{\partial\phi_{3}}{\partial s}(t,s)\\ \frac{\partial\phi_{4}}{\partial s}(t,s)\end{pmatrix}.\] Let \(\lambda=a(t,s)\frac{\partial}{\partial t}+b(t,s)\frac{\partial}{\partial s}\) be a non-vanishing vector field on \(U.\) Now we compute, for smooth \(f\) on \(K,\) \[d\phi(\lambda)(f)=\lambda(f\circ\phi)\ =\left(a(t,s)\frac{\partial}{ \partial t}+b(t,s)\frac{\partial}{\partial s}\right)(f\circ\phi). \tag{2.2}\] Now, by chain rule, we get that \[\frac{\partial(f\circ\phi)}{\partial t}=\left(\frac{\partial\phi_{1}}{\partial t }\frac{\partial}{\partial x}\bigg{|}_{\phi(t,s)}+\frac{\partial\phi_{2}}{ \partial t}\frac{\partial}{\partial y}\bigg{|}_{\phi(t,s)}+\frac{\partial\phi_ {3}}{\partial t}\frac{\partial}{\partial u}\bigg{|}_{\phi(t,s)}+\frac{\partial \phi_{4}}{\partial t}\frac{\partial}{\partial v}\bigg{|}_{\phi(t,s)}\right)(f),\] and \[\frac{\partial(f\circ\phi)}{\partial s}=\left(\frac{\partial\phi_{1}}{ \partial s}\frac{\partial}{\partial x}\bigg{|}_{\phi(t,s)}+\frac{\partial \phi_{2}}{\partial s}\frac{\partial}{\partial y}\bigg{|}_{\phi(t,s)}+\frac{ \partial\phi_{3}}{\partial s}\frac{\partial}{\partial u}\bigg{|}_{\phi(t,s)}+ \frac{\partial\phi_{4}}{\partial s}\frac{\partial}{\partial v}\bigg{|}_{\phi(t,s)}\right)(f).\] Since \(\lambda\) is a non-vanishing vector-field on \(U\) and \(\phi\) is a diffeomorphism, therefore, \(\nu=a(t,s)A(t,s)+b(t,s)B(t,s)\) is also a non-vanishing vector field on \(\triangle\). 
Now we will see that the above vector field \(\nu\) gives a \(\mathcal{C}^{1}\)-smooth characteristic foliation on \(K.\) To see this we do some computations here: Let \(\nu_{\alpha}\in T^{\mathbb{C}}_{\alpha}M.\) Then \[\langle J(a(t,s)A(t,s)+b(t,s)B(t,s),\nabla\rho)\rangle=0\,\] \[\implies a(t,s)\langle JA(t,s),\nabla\rho\rangle+b(t,s)\langle JB (t,s),\nabla\rho\rangle=0,\] where \(J\) is the standard complex structure on \(\mathbb{C}^{2},\) and \(\langle,\rangle\) is the standard real inner product. A natural choice for the functions \(a\) and \(b\) is the following: \[\begin{cases}a(t,s)=\langle JB(t,s),\nabla\rho\rangle\ \ \text{and}\\ b(t,s)=-\langle JA(t,s),\nabla\rho\rangle.\end{cases} \tag{2.3}\] From (2.3), we get that \[a(t,s)=-\frac{\partial\phi_{2}}{\partial s}\frac{\partial\rho} {\partial x}+\frac{\partial\phi_{1}}{\partial s}\frac{\partial\rho}{\partial y }-\frac{\partial\phi_{4}}{\partial s}\frac{\partial\rho}{\partial u}+\frac{ \partial\phi_{3}}{\partial s}\frac{\partial\rho}{\partial v}\ \text{and}\] \[b(t,s)=\frac{\partial\phi_{2}}{\partial t}\frac{\partial\rho}{ \partial x}-\frac{\partial\phi_{1}}{\partial t}\frac{\partial\rho}{\partial y }+\frac{\partial\phi_{4}}{\partial t}\frac{\partial\rho}{\partial u}-\frac{ \partial\phi_{3}}{\partial t}\frac{\partial\rho}{\partial v}.\] Since \(\phi\) is \(C^{2}\)-smooth and \(\rho\) is \(C^{\infty}\) smooth, therefore from the above calculation we get that \(a(t,s)\) and \(b(t,s)\) are smooth functions. We assign each point \(\alpha\in K\) to \(\nu_{\alpha}\in\mathcal{L}_{\alpha}:=T_{\alpha}K\cap T^{\mathbb{C}}_{\alpha}M.\) Since \(K\) is totally real disc in three dimensional manifold \(M,\) dimension of \(\dim_{\mathbb{R}}\mathcal{L}_{\alpha}=1\ \forall\alpha\in K.\) Therefore, the above assignment is smooth one dimensional distribution on \(\triangle.\) Since every one dimensional distribution is integrable and hence gives the characteristic foliation on \(K.\) **Lemma 2.11**.: _If \(\rho(z):=|h(z)|^{2}-1,\) where \(h\) is a holomorphic function in a neighborhood of \(K\) in the Lemma 2.10, then \(h\) is constant along each leaf of the characteristic foliation of \(K.\)_ Proof.: Let the integral curve \(\gamma(t):=(\gamma_{1}(t)+i\gamma_{2}(t),\gamma_{3}(t)+i\gamma_{4}(t)):(a,b) \rightarrow\triangle\) in \(\triangle\) be a leaf of the characteristic foliation on \(\triangle.\) Then \(\gamma^{\prime}(t)\in T_{\gamma(t)}K\cap T^{\mathbb{C}}_{\gamma(t)}M\ \forall t\in(a,b).\) Note that \(\nabla\rho|_{\gamma(t)}=\left(\frac{\partial\rho}{\partial x},\frac{\partial \rho}{\partial y},\frac{\partial\rho}{\partial u},\frac{\partial\rho}{\partial v }\right)|_{\gamma(t)}\) is the normal to \(M\) at \(\gamma(t).\) Since \(\gamma^{\prime}(t)\in T^{\mathbb{C}}_{\gamma(t)}M,\)\(i\gamma^{\prime}(t)\in T_{\gamma(t)}M.\) Therefore, \[\langle\gamma^{\prime}(t),\nabla\rho|_{\gamma(t)}\rangle=0=\langle i\gamma^{ \prime}(t),\nabla\rho|_{\gamma(t)}\rangle.\] Hence \(\langle\gamma^{\prime}(t),\nabla\rho|_{\gamma(t)}\rangle=0\) implies \[\gamma^{\prime}_{1}(t)\frac{\partial\rho(\gamma(t))}{\partial x}+\gamma^{\prime }_{2}(t)\frac{\partial\rho(\gamma(t))}{\partial y}+\gamma^{\prime}_{3}(t)\frac{ \partial\rho(\gamma(t))}{\partial u}+\gamma^{\prime}_{4}(t)\frac{\partial\rho( \gamma(t))}{\partial v}=0, \tag{2.4}\] and \(\langle i\gamma^{\prime}(t),\nabla\rho|_{\gamma(t)}\rangle=0\) implies \[-\gamma^{\prime}_{2}(t)\frac{\partial\rho(\gamma(t))}{\partial x}+\gamma^{ \prime}_{1}(t)\frac{\partial\rho(\gamma(t))}{\partial y}-\gamma^{\prime}_{4}( 
t)\frac{\partial\rho(\gamma(t))}{\partial u}+\gamma^{\prime}_{3}(t)\frac{ \partial\rho(\gamma(t))}{\partial v}=0. \tag{2.5}\] We get from (2.4) and (2.5) that \[\left(\gamma^{\prime}_{1}(t)+i\gamma^{\prime}_{2}(t)\right) \frac{\partial\rho(\gamma(t))}{\partial z}+\left(\gamma^{\prime}_{3}(t)+i \gamma^{\prime}_{4}(t)\right)\frac{\partial\rho(\gamma(t))}{\partial w}+\left( \gamma^{\prime}_{1}(t)-i\gamma^{\prime}_{2}(t)\right)\frac{\partial\rho( \gamma(t))}{\partial\bar{z}}\\ +\left(\gamma^{\prime}_{3}(t)-i\gamma^{\prime}_{4}(t)\right)\frac {\partial\rho(\gamma(t))}{\partial\bar{w}}=0\] and \[i\left(\gamma^{\prime}_{1}(t)+i\gamma^{\prime}_{2}(t)\right) \frac{\partial\rho(\gamma(t))}{\partial z}+i\left(\gamma^{\prime}_{3}(t)+i \gamma^{\prime}_{4}(t)\right)\frac{\partial\rho(\gamma(t))}{\partial w}-i \left(\gamma^{\prime}_{1}(t)-i\gamma^{\prime}_{2}(t)\right)\frac{\partial\rho (\gamma(t))}{\partial\bar{z}}\\ -i\left(\gamma^{\prime}_{3}(t)-i\gamma^{\prime}_{4}(t)\right) \frac{\partial\rho(\gamma(t))}{\partial\bar{w}}=0.\] This implies \[\left(\gamma^{\prime}_{1}(t)+i\gamma^{\prime}_{2}(t)\right) \frac{\partial\rho(\gamma(t))}{\partial z}+\left(\gamma^{\prime}_{3}(t)+i \gamma^{\prime}_{4}(t)\right)\frac{\partial\rho(\gamma(t))}{\partial w}=0\] \[\implies \left(\gamma^{\prime}_{1}(t)+i\gamma^{\prime}_{2}(t)\right)\frac{ \partial h(\gamma(t))}{\partial z}+\left(\gamma^{\prime}_{3}(t)+i\gamma^{ \prime}_{4}(t)\right)\frac{\partial h(\gamma(t))}{\partial w}=0\] \[\implies d(h\circ\gamma)(t)=0\] \[\implies h\circ\gamma=c,\text{ where c is some constant.}\] Therefore, we can say that each integral curve of \(\nu\) in \(K\) lies in \(h^{-1}\{c\}\cap K\) for some constant \(c\). ## 3. Nonsingular Levi-flat hypersurfaces In this section we provide proofs of Theorem 1.6, Corollary 1.7, Corollary 1.8, Corollary 1.9, Theorem 1.17, Corollary 1.11 and Theorem 1.18. We first define \(\rho(z):=h(z)\overline{h(z)}-1\) and \(K\) be a smooth totally real disc in \(M:=\{z\in\Omega:\rho(z)=0\}.\) Since \(dh(z)\neq 0,\)\(M\) is a real \(3\)-dimensional submanifold of \(\Omega\subset\mathbb{C}^{2}.\) Then the complex dimension of the complex tangent space \(T^{\mathbb{C}}_{\alpha}M\) of \(M\) is \(1\) for each \(\alpha\in M.\) Proof of Theorem 1.6.: We break our proof of Theorem 1.6 into the following steps. **Step I: Showing that each fiber \(\boldsymbol{h^{-1}\{c\}\cap K}\) is polynomially convex.** Since \(dh|_{\alpha}\neq 0\) on \(h^{-1}\{c\},\)\(\{z\in\Omega:h(z)=c\}\) is a complex manifold of pure dimension \(1\). In view of Lemma 2.9, we can say that \(h^{-1}\{c\}\) is a Runge variety for each \(c\in\partial\mathbb{D},\) and we denote \(K_{c}:=h^{-1}\{c\}\cap K.\) The following two cases hold. **Case I:**\(\check{H}^{1}(K_{c},\mathbb{Z})=0.\) This implies \(K_{c}\) is simply-coconnected (see Remark 1.14). Since \(h^{-1}\{c\}\) is a Runge variety, by using Result 2.2, we get that \(K_{c}\) is rationally convex. Therefore, by using Result 2.1, we can say that \(K_{c}\) is polynomially convex in dimension one. Since \(h^{-1}\{c\}\) is a one-dimensional Runge variety, therefore, \(K_{c}\cap h^{-1}\{c\}=K_{c}\) is polynomially convex. 
**Case II:**\(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0.\) The leaves of the characteristic foliation of \(\triangle\) are curves \(\eta\) such that for all \(t,\)\(\eta^{\prime}(t)\in T_{\eta(t)}K\cap T_{\eta(t)}^{\mathbb{C}}M.\) On the disc \(\triangle\), there are nowhere vanishing vector fields \(V\) (see Lemma 2.10) such that the vector \(V_{x}\) is tangent to the leaf through \(x\) of the characteristic foliation at all points \(x\in\triangle.\) Thus, the solution curves of the differential equation \(x^{\prime}=V_{x}\) are contained in leaves of the foliation. According to the Poincare-Bendixson theorem (Result 2.8), all of the integral curves for this foliation are arcs, homeomorphic to the interval \([0,1],\) with endpoints on \(\partial\triangle.\) In fact, let \(\gamma\) be an integral curve for this foliation and suppose that \(\gamma\) lies entirely in \(\triangle\), i.e. it does not reach \(\partial\triangle.\) Then \(\gamma\) corresponds to an integral curve \(\widetilde{\gamma}\) of a non-vanishing vector field \(\widetilde{V}\) on \(U\) (where \(\phi(U)=\triangle\)). Therefore, by Result 2.8, either \(\widetilde{\gamma}\) is a closed path or it spirals toward a closed path as \(t\to\infty.\) In either case, by Result 2.7, there exists a critical point in \(U.\) This is a contradiction because \(\widetilde{V}\) is a nowhere vanishing vector field on \(U.\) Note that \(\triangle\) contains all fibers \(h^{-1}\{c\}\cap K.\) Since \(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0,\)\(\triangle\setminus K_{c}\) is not connected. Take \(\alpha\in K\setminus K_{c}\) and let \(\gamma\) be the integral curve passing through \(\alpha.\) We claim that \(\gamma\cap K_{c}=\emptyset:\) if possible, assume that \(\gamma\cap K_{c}\neq\emptyset\) and \(\xi\in\gamma\cap K_{c}.\) Since each integral curve lies in \(h^{-1}\{c_{1}\}\) for some constant \(c_{1}\in\partial\mathbb{D},\) let \(\gamma\subset h^{-1}\{c_{1}\}.\) Then we have \(h(\alpha)=c_{1}.\) On the other hand \(h(\xi)=c\) and \(\xi\in\gamma.\) But we know that \(h\) is constant along each integral curve (by Lemma 2.11). This is a contradiction to the assumption that \(\gamma\cap K_{c}\neq\emptyset.\) Therefore, \(\gamma\cap K_{c}=\emptyset.\) This proves the claim. Next we claim that \(\gamma\cap K_{c}=\emptyset\) is not possible. By the Poincare-Bendixson theorem (Result 2.8) each integral curve \(\gamma\) through \(\alpha\) joins \(\alpha\) to the boundary of \(\triangle.\) But \(\triangle\setminus K_{c}\) is not connected, hence \(\gamma\cap K_{c}=\emptyset\) is not possible. Therefore, \(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\) is not possible, i.e. **Case II** can not happen. **Step II: Completing the proof.** We claim that \(h(K)\cap h(\widehat{K}\setminus K)=\emptyset.\) If possible, assume that \(h(K)\cap h(\widehat{K}\setminus K)\neq\emptyset.\) Let \(\alpha\in h(K)\cap h(\widehat{K}\setminus K).\) Then there exist \(\eta\in K\) and \(\xi\in\widehat{K}\setminus K\) such that \(h(\eta)=\alpha\) and \(h(\xi)=\alpha.\) We claim that \(\xi\in\widehat{h^{-1}\{\alpha\}\cap K},\) that is, the fiber \(h^{-1}\{\alpha\}\cap K\) is not polynomially convex. Let \(Y=h(K).\) Since \(h(K)\subset\partial\mathbb{D},\) each point of \(Y\) is a peak point for the uniform algebra \(\mathcal{P}(Y).\) Then, by Result 2.6, we obtain that \[h^{-1}\{\alpha\}\cap\widehat{K}=\widehat{h^{-1}\{\alpha\}\cap K}.\] Therefore, \[\xi\in h^{-1}\{\alpha\}\cap\widehat{K}=\widehat{h^{-1}\{\alpha\}\cap K}.\] Hence, \(h^{-1}\{\alpha\}\cap K\) is not polynomially convex. This is a contradiction to **Step I**.
Hence, \(h(K)\cap h(\widehat{K}\setminus K)=\emptyset.\) Therefore, by Result 2.3, \(K\) is polynomially convex. Proof of Corollary 1.7.: Proof of Corollary 1.7 follows from Theorem 1.6. To see this, we define \(h(z):=e^{g(z)}\) on \(\Omega.\) Then \(\{z\in\Omega:|e^{g(z)}|=1\}=\{z\in\Omega:\mathsf{Re}g(z)=0\}=M.\) Since \(\Omega\) is Runge and \(e^{g(z)}\) is holomorphic on \(\Omega,\) therefore every totally real smooth disc contained in \(\{z\in\Omega:|e^{g(z)}|=1\}\) is polynomially convex. This proves the theorem. Proof of Corollary 1.9.: Proof of Corollary 1.9 easily follows from Corollary 1.7. Since \(\Omega\) is simply-connected, there exists a holomorphic function \(P\) on \(\Omega\) such that \(h(z)=\mathsf{Re}P(z).\) Using Corollary 1.7, we can say that every smooth totally real disc in \(\{z\in\Omega:\mathsf{Re}P(z)=0\}\) is polynomially convex. Hence, every smooth totally real disc in \(\{z\in\Omega:h=\mathsf{Re}P(z)=0\}\) is polynomially convex. Proof of Theorem 1.17.: Since \(K\) is polynomially convex in dimension one, \(h^{-1}\{c\}\cap K\) is polynomially convex for all \(c\in\partial\mathbb{D}.\) Hence \(h(K)\cap h(\widehat{K}\setminus K)=\emptyset\) (see **Step II** of the proof of Theorem 1.6). Again \(K\) is simply-coconnected, therefore, by Result 2.3, we can say that \(K\) is polynomially convex. Proof of Theorem 1.10.: First, we claim that integral curves for the characteristic foliation for \(M\) are the curves contained in \(P^{-1}\{c\}\) for some \(c,\) where \(P:\mathbb{C}^{2}\to\mathbb{C},(z,w)\mapsto z\) is the projection map. To see this, let \((x,y,u,v)\) be the coordinates of \(\mathbb{R}^{4}=\mathbb{C}^{2}.\) Let us define \(\rho:\mathbb{C}^{2}\to\mathbb{R}\) by \(\rho(x,y,u,v)=h(x,y),\) then \(M=\rho^{-1}\{0\}.\) The normal vector to \(M\) at \((x,y,u,v)\) is given by \[\nabla\rho|_{(x,y,u,v)}=\left(\frac{\partial\rho}{\partial x},\frac{\partial \rho}{\partial y},\frac{\partial\rho}{\partial u},\frac{\partial\rho}{ \partial v}\right)\bigg{|}_{(x,y,u,v)}=\left(\frac{\partial h}{\partial x}, \frac{\partial h}{\partial y},0,0\right). \tag{3.1}\] Let \(\gamma(t)=(x(t),y(t),u(t),v(t))\) be a integral curve for the characteristic foliation for \(M.\) Since \(\gamma^{\prime}(t)\in T_{\gamma(t)}^{\mathbb{C}}M,\)\(i\gamma^{\prime}(t)\in T_{\gamma(t)}M,\) therefore, \[\langle\gamma^{\prime}(t),\nabla\rho|_{\gamma(t)}\rangle=0=\langle i\gamma^{ \prime}(t),\nabla\rho|_{\gamma(t)}\rangle.\] Hence \(\langle\gamma^{\prime}(t),\nabla\rho|_{\gamma(t)}\rangle=0\) implies \[\frac{\partial h}{\partial x}x^{\prime}(t)+\frac{\partial h}{\partial y}y^{ \prime}(t)=0, \tag{3.2}\] and \(\langle i\gamma^{\prime}(t),\nabla\rho|_{\gamma(t)}\rangle=0\) implies \[-\frac{\partial h}{\partial x}y^{\prime}(t)+\frac{\partial h}{\partial y}x^{ \prime}(t)=0. \tag{3.3}\] From (3.2) and (3.3), we get that \[\begin{pmatrix}\frac{\partial h}{\partial x}&\frac{\partial h}{\partial y}\\ \frac{\partial h}{\partial y}&-\frac{\partial h}{\partial x}\end{pmatrix} \begin{pmatrix}x^{\prime}(t)\\ y^{\prime}(t)\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}.\] Since \(\left(\frac{\partial h}{\partial x}\right)^{2}+\left(\frac{\partial h}{ \partial y}\right)^{2}\neq 0\) on \(M,\) therefore \(x^{\prime}(t)=0=y^{\prime}(t).\) This implies \(x(t)=c_{1},\ y(t)=c_{2}\) for some \((c_{1},c_{2})\in E_{h}.\) Hence \(\gamma\subset P^{-1}\{c\}\) which proves our claim. 
Let \(K\) be a smooth totally real disc in \(M.\) Following the proof of Theorem 1.6, we can prove that each fiber \(K_{c}=P^{-1}(c)\cap K=(\{c\}\times\mathbb{C})\cap K\) is polynomially convex. To see this, let us define \(K_{c}:=P^{-1}\{c\}\cap K\) and \(V_{c}=\{(z,w)\in\mathbb{C}^{2}:z=c\}.\) **Case I:**\(\check{H}^{1}(K_{c},\mathbb{Z})=0.\) This implies \(K_{c}\) is simply-coconnected (see Remark 1.14). Since \(V_{c}\) is a Runge variety, by using Result 2.2, we get that \(K_{c}\) is rationally convex. Therefore, by using Result 2.1, we can say that \(K_{c}\) is polynomially convex in dimension one. Since \(V_{c}\) is a one-dimensional Runge variety, therefore, \(K_{c}\cap V_{c}=K_{c}\) is polynomially convex. **Case II: \(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\)**. By the similar argument as Theorem 1.6 (**Step I**, **Case II**), \(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\) can not happen. Therefore, \(K_{c}=P^{-1}\{c\}\cap K\) is polynomially convex. \(K\) is polynomially convex then follows from Result 2.3. Proof of Corollary 1.11.: Let \(\Omega=\mathbb{C}^{2}\) and \(f(z_{1})=e^{-iz_{1}},\) Then \(M:=\{(z_{1},z_{2})\in\mathbb{C}^{2}:|e^{-iz_{1}}|=1\}:=\mathbb{R}\times\mathbb{C}.\) Proof of Corollary 1.11 now follows from Theorem 1.6. Alternatively, if we take \(h(z_{1},z_{2})=\mathsf{Im}z_{1},\) then \(\mathbb{R}\times\mathbb{C}=\{z\in\mathbb{C}^{2}:h(z)=0\}.\) Now proof follows from Theorem 1.10. Proof of Theorem 1.18.: Let \(P:\mathbb{C}^{2}\to\mathbb{C},P(z,w)=z\) be the projection map. Since \(K\) is polynomially convex in dimension one, \(P^{-1}\{c\}\cap K\) is polynomially convex for all \(c\in\mathbb{C}.\) We now claim that each point of \(Y:=P(K)\) is a peak point for the algebra \(\mathcal{P}(Y).\) First, we show that \(Y\subset\{\alpha\in\mathbb{C}:h(\alpha)=0\}:\) Let \(\alpha\in Y,\) then there exists \((\xi,\eta)\in K\subset E_{h}\times\mathbb{C}\) such that \(P(\xi,\eta)=\alpha.\) This implies \(\xi=\alpha\) and since \((\xi,\eta)\in E_{h}\times\mathbb{C},\)\(h(\xi)=0\) i.e., \(h(\alpha)=0.\) Therefore, \(Y\subset\{\alpha\in\mathbb{C}:h(\alpha)=0\}.\) Since \(dh\neq 0\) on \(E_{h},\) hence \(E_{h}\) is a smooth submanifold of \(\mathbb{C}\) of real dimension one. Note that \(Y=P(K)\) is a connected subset of \(E_{h}.\) **Case I**: \(\mathbb{C}\setminus P(K)\) is connected, then \(P(K)\) is polynomially convex. By Result 2.5, we can say that each point of \(\partial P(K)=P(K)\) is a peak point for the algebra \(\mathcal{P}(P(K)).\) **Case II**: \(\mathbb{C}\setminus P(K)\) is not connected. Then \(\widehat{P(K)}\) is the union of \(P(K)\) with all bounded components of \(\mathbb{C}\setminus P(K).\) Since \(P(K)\) is connected, hence \(\partial\widehat{P(K)}=P(K).\) By Result 2.5, we can say that each point of \(P(K)\) is a peak point for the algebra \(\mathcal{P}(P(K)).\) By Result 2.6, we get that \[P^{-1}\{c\}\cap\widehat{K}=\widehat{P^{-1}\{c\}\cap K}\] for all \(c\in\mathbb{C}.\) Therefore, \(P(K)\cap P(\widehat{K}\setminus K)=\emptyset.\) Since \(K\) is simply-coconnected, by Result 2.3, \(K\) is polynomially convex. ## 4. Singular Levi-flat hypersurfaces In this section we provide the proofs of the theorems concerning singular Levi-flat hypersurfaces. First we give a proof of Theorem 1.20. Proof of Theorem Theorem 1.20.: If \(K\) is polynomially convex, then obviously \(\widehat{K}\cap M_{sing}=\emptyset.\) We now prove the converse part. 
Since \(dh|_{\alpha}\neq 0\) on \(h^{-1}\{c\}\cap G,\) the set \(\{z\in G:h(z)=c\}\) is a complex manifold of pure dimension \(1.\) By Lemma 2.10, there exists a characteristic foliation on \(K,\) and by Lemma 2.11, we get that the function \(h(z,w)\) is constant along each leaf of the characteristic foliation of \(\triangle.\) **Step I: Showing that each fiber \(h^{-1}\{c\}\cap G\cap K\) is polynomially convex.** We define \(K_{c}:=h^{-1}\{c\}\cap G\cap K\) and \(V_{c}=\{z\in\mathbb{C}^{2}:h(z)=c\}.\) Since \(K\cap M_{sing}=\emptyset,\) we have \(K_{c}=V_{c}\cap K.\) Hence, it is enough to show that \(V_{c}\cap K\) is polynomially convex. **Case I:**\(\check{H}^{1}(K_{c},\mathbb{Z})=0.\) This implies \(K_{c}\) is simply-coconnected (see Remark 1.14). It is easy to see that \(V_{c}\) is a Runge variety. Then by using Result 2.2, we get that \(K_{c}\) is rationally convex. Therefore, by using Result 2.1, we can say that \(K_{c}\) is polynomially convex in dimension one. Since \(V_{c}\) is a one-dimensional Runge variety, \(K_{c}\cap V_{c}=K_{c}\) is polynomially convex. **Case II:**\(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\). By the same argument as in Theorem 1.6 (**Step I**, **Case II**), \(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\) can not happen. Therefore, \(K_{c}=h^{-1}\{c\}\cap G\cap K\) is polynomially convex. **Step II: Completing the proof:** Now we show that \(K\) is polynomially convex. Note that \(h\) is holomorphic in \(\mathbb{C}^{2}.\) We claim that \(h(K)\cap h(\widehat{K}\setminus K)=\emptyset.\) If possible, assume that \(h(K)\cap h(\widehat{K}\setminus K)\neq\emptyset.\) Let \(\alpha\in h(K)\cap h(\widehat{K}\setminus K).\) Then there exist \(\eta\in K\) and \(\xi\in\widehat{K}\setminus K\) such that \(h(\eta)=\alpha\) and \(h(\xi)=\alpha.\) We claim that \(\xi\in\widehat{h^{-1}\{\alpha\}\cap K},\) that is, the fiber \(h^{-1}\{\alpha\}\cap K\) is not polynomially convex. Let \(Y=h(K).\) Since \(h(K)\subset\partial\mathbb{D},\) each point of \(Y\) is a peak point for the uniform algebra \(\mathcal{P}(Y).\) Then, by Result 2.6, we obtain that \[h^{-1}\{\alpha\}\cap\widehat{K}=\widehat{h^{-1}\{\alpha\}\cap K}.\] Therefore, \[\xi\in h^{-1}\{\alpha\}\cap\widehat{K}=\widehat{h^{-1}\{\alpha\}\cap K}.\] Hence \(h^{-1}\{\alpha\}\cap K\) is not polynomially convex. This is a contradiction to **Step I**. Hence \(h(K)\cap h(\widehat{K}\setminus K)=\emptyset.\) Therefore, by Result 2.3, \(K\) is polynomially convex. Proof of Corollary 1.21.: We define the holomorphic function \(h:\mathbb{C}^{2}\rightarrow\mathbb{C}\) by \(h(z_{1},z_{2}):=e^{z_{1}^{m}+z_{2}^{n}}.\) Then \(\{(z_{1},z_{2})\in\mathbb{C}^{2}:|h(z_{1},z_{2})|=1\}=M\) and \(M_{sing}=\{(0,0)\}.\) Now the proof follows from Theorem 1.20. We now give a proof of Theorem 1.22. Proof of Theorem 1.22.: If \(K\) is polynomially convex, then obviously \(\widehat{K}\cap\{zw=0\}=\emptyset.\) We now prove the converse part.
Consider the domain \(G:=\mathbb{C}^{2}\setminus\{(z,w)\in\mathbb{C}^{2}:zw=0\}\) and the holomorphic function \(P(z,w)=\frac{z}{w}.\) Then \(M:=\rho^{-1}\{0\},\) where \(\rho(z,w)=\frac{z\bar{z}}{w\bar{w}}-1.\) Note that since \(dP|_{\alpha}\neq 0\) on \(P^{-1}\{c\},\) the set \(\{z\in G:P(z)=c\}\) is a complex manifold of pure dimension \(1.\) By Lemma 2.10, there exists a characteristic foliation on \(K,\) and by Lemma 2.11, we get that the function \(P(z,w)=\frac{z}{w}\) is constant along each leaf of the characteristic foliation of \(\triangle.\) **Step I: Showing that each fiber \(P^{-1}\{c\}\cap K\) is polynomially convex.** We define \(K_{c}:=P^{-1}\{c\}\cap K\) and \(V_{c}=\{(z,w)\in\mathbb{C}^{2}:z=cw\}.\) Since \(K\cap\{(z,w)\in\mathbb{C}^{2}:zw=0\}=\emptyset,\) we have \(K_{c}=V_{c}\cap K.\) Hence, it is enough to show that \(V_{c}\cap K\) is polynomially convex. **Case I:**\(\check{H}^{1}(K_{c},\mathbb{Z})=0.\) This implies \(K_{c}\) is simply-coconnected (see Remark 1.14). Since \(V_{c}\) is a Runge variety, by using Result 2.2, we get that \(K_{c}\) is rationally convex. Therefore, by using Result 2.1, we can say that \(K_{c}\) is polynomially convex in dimension one. Since \(V_{c}\) is a one-dimensional Runge variety, \(K_{c}\cap V_{c}=K_{c}\) is polynomially convex. **Case II:**\(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\). By the same argument as in Theorem 1.6 (**Step I**, **Case II**), \(\check{H}^{1}(K_{c},\mathbb{Z})\neq 0\) can not happen. Therefore, \(K_{c}=P^{-1}\{c\}\cap K\) is polynomially convex. **Step II: Completing the proof:** Now we show that \(K\) is polynomially convex. By assumption, we have \(\widehat{K}\cap\{(z,w)\in\mathbb{C}^{2}:zw=0\}=\emptyset,\) therefore, \(P\) is holomorphic in a neighborhood of \(\widehat{K}.\) We claim that \(P(K)\cap P(\widehat{K}\setminus K)=\emptyset.\) If possible, assume that \(P(K)\cap P(\widehat{K}\setminus K)\neq\emptyset.\) Let \(\alpha\in P(K)\cap P(\widehat{K}\setminus K).\) Then there exist \(\eta\in K\) and \(\xi\in\widehat{K}\setminus K\) such that \(P(\eta)=\alpha\) and \(P(\xi)=\alpha.\) We claim that \(\xi\in\widehat{P^{-1}\{\alpha\}\cap K},\) that is, the fiber \(P^{-1}\{\alpha\}\cap K\) is not polynomially convex. Let \(Y=P(K).\) Since \(P(K)\subset\partial\mathbb{D},\) each point of \(Y\) is a peak point for the uniform algebra \(\mathcal{P}(Y).\) Then, by Result 2.6, we obtain that \[P^{-1}\{\alpha\}\cap\widehat{K}=\widehat{P^{-1}\{\alpha\}\cap K}.\] Therefore, \[\xi\in P^{-1}\{\alpha\}\cap\widehat{K}=\widehat{P^{-1}\{\alpha\}\cap K}.\] Hence \(P^{-1}\{\alpha\}\cap K\) is not polynomially convex. This is a contradiction to **Step I**. Hence \(P(K)\cap P(\widehat{K}\setminus K)=\emptyset.\) Therefore, by Result 2.3, \(K\) is polynomially convex. Let \(h:\mathbb{C}\rightarrow\mathbb{R}\) be a smooth function and \(E_{h}:=\{z\in\mathbb{C}:h(z)=0\}.\) Let \(E_{h}^{*}:=\{z\in G:h(z)=0\}\) be the regular part of \(E_{h},\) where \(G:=\mathbb{C}\setminus\{z\in\mathbb{C}:h(z)=0,dh(z)=0\}.\) By \((E_{h})_{sing},\) we denote the singular part of \(E_{h}.\) **Theorem 4.1**.: _Let \(K\) be a totally real disc in the singular hypersurface \(E_{h}^{*}\times\mathbb{C}\subset\mathbb{C}^{2}.\) Then \(K\) is polynomially convex._ Proof.: The proof follows from Theorem 1.10. **Remark 4.2**.: Let \(M:=\{z\in\mathbb{C}^{2}:\rho(z)=0\}\) be a singular Levi-flat quadratic real hypersurface in \(\mathbb{C}^{2}\) with the following normal form (see [4]): 1. \(\rho(z)=(z_{1}^{2}+2z_{1}\bar{z}_{1}+\bar{z}_{1}^{2})\) 2.
\(\rho(z)=(z_{1}^{2}+2\lambda z_{1}\bar{z}_{1}+\bar{z}_{1})\)\(\lambda\in(0,1)\) 3. \(\rho(z)=\mathsf{Re}z_{1}\mathsf{Re}z_{2}\) In the first case, \(M\) is non singular. In the second case the singular set is \(\{o\}\times\mathbb{C}\) and in the third case the singular set is \(\{(iy_{1},iy_{2})\in\mathbb{C}^{2}:\;\;y_{j}\in\mathbb{R}\}\). If a totally real disc lie in \(M^{*}\), the non-singular part of \(M\), then, by Theorem 4.1, it is polynomially convex. ## 5. Examples and concluding remarks In this section we provide some examples supporting the theorems in this paper. **Example 5.1**.: Let \(E_{a,b}:=\{z\in\mathbb{C}:a^{2}x^{2}+b^{2}y^{2}=1\}\) (\(a,b\in\mathbb{R}\)), and \(M=E_{a,b}\times\mathbb{C}\) be a hypersurface \(\mathbb{C}^{2}\). Then, by Theorem 1.10, every totally real smooth disk in \(M\) is polynomially convex. **Example 5.2**.: Let us consider \(M:=\{(z_{1},z_{2})\in\mathbb{C}^{2}:|z_{2}+\phi(z_{1})|=1\}\), \(K:=\{(z_{1},z_{2})\in\mathbb{C}^{2}:z_{2}=e^{i(\mathsf{Re}z_{1})^{2}}-\phi(z_ {1}),|z_{1}|\leq 1\}\subset M,\) where \(\phi\) is a polynomial in \(z_{1}.\) Using Theorem 1.17, we can show that \(K\) is polynomially convex. Note that \(K\) is not totally real (at \((0,1-\phi(0)).\)\(K\) is the image of the closed unit disc under the smooth map \(\xi\to(\xi,e^{i(\mathsf{Re}\xi)^{2}}-\phi(\xi)),\) which is essentially the graph of \(e^{i(\mathsf{Re}z_{1})^{2}}-\phi(z_{1})\) over the closed unit disc. Hence, \(K\) is simply-coconnected (in fact, contractible). It remains to show that each fiber \(P^{-1}\{c\}\cap K\) is polynomially convex, where \(P(z_{1},z_{2})=z_{2}+\phi(z_{1})\). We compute: \[P^{-1}\{c\}\cap K= \{(z_{1},z_{2})\in\mathbb{C}^{2}:z_{2}+\phi(z_{1})=c,w=e^{i( \mathsf{Re}z_{1})^{2}}-\phi(z_{1}),|z_{1}|\leq 1\}\] \[= \{(z_{1},z_{2})\in\mathbb{C}^{2}:z_{2}+\phi(z_{1})=c,e^{i(\mathsf{ Re}z_{1})^{2}}=c=e^{it},|z_{1}|\leq 1,t\in[0,2\pi]\}\] \[= \{(z_{1},z_{2})\in\mathbb{C}^{2}:z_{2}+\phi(z_{1})=c,\mathsf{Re}z _{1}=\pm t,|z_{1}|\leq 1\}\] \[= \mathsf{Gr}_{h}(L_{1}\cup L_{2}),\] where \(h=c-\phi(z_{1})\), \(L_{1}\cup L_{2}\) is polynomially convex. Therefore, being a graph of a polynomial over polynomially convex set, \(P^{-1}\{c\}\cap K\) is polynomially convex. **Example 5.3**.: Let \(P(z_{1},z_{2})=\frac{1}{2}(z_{1}+z_{2})\) and \(M:=\{(z_{1},z_{2})\in\mathbb{C}^{2}:|P(z_{1},z_{2})|=1\}.\) Let us consider \(\phi:\overline{\mathbb{D}}\to\mathbb{C}^{2}\) be a map defined by \(\phi(t,s)=(\cos t+t,\sin s+s,\cos t-t,\sin s-s).\) Then \(\phi\) is a diffeomorphism and hence \(K:=\phi(\overline{\mathbb{D}})\) is a simply-coconnected compact subset of \(M.\) Note that \(P^{-1}\{c\}\cap K=\{(c+\overline{\xi},c-\overline{\xi}):|\xi|\leq 1\}\) is polynomially convex. Using Theorem 1.17, we conclude that \(K\) is polynomially convex. **Example 5.4**.: Let \(h:\mathbb{C}\to\mathbb{R}\) be defined by \(h(x,y)=x^{2}+y^{2}-1\) and \(\phi:\overline{\mathbb{D}}\to\mathbb{C}^{2}\) be defined by \(\phi(t,s)=(\cos t,\sin t,(t-3)^{3},(s-2)^{3}).\) Then \(\phi\) is a diffeomorphism. Hence \(K:=\phi(\overline{\mathbb{D}})\) is a simply-coconnected compact subset of \(E_{h}\times\mathbb{C}.\) Let \(P(z,w)=z\) be the projection map. Then \(P^{-1}\{c\}\cap K=\{(z,w):z=c,w=((t-3)^{3},(s-2)^{3})),t^{2}+s^{2}\leq 1\}\) is polynomially convex for all \(c\in\mathbb{C}.\) Using Theorem 1.18, we conclude that \(K\) is polynomially convex. We now make some concluding remarks. The hypersurfaces considered in this paper are all globally Levi-flat, i.e. pseudoconvex from both sides. 
In all the non-singular hypersurfaces considered here, any totally real disc lying in them turned out to be polynomially convex. This encourages us to make our first conjecture: **Conjecture 5.5**.: _Any totally real disc lying in a non-singular Levi-flat hypersurface in \(\mathbb{C}^{2}\) is polynomially convex._ In view of Corollary 1.21, Theorem 1.22 and Remark 4.2 we infer that the totally real discs lying in the Levi-flat quadrics, which are the normal form given by Burns and Gong [4], are polynomially convex if the polynomial hull of a totally real disc does not have an intersection with the singular set. This allows us to make the next conjecture. **Conjecture 5.6**.: _Any totally real disc that \(K\) lying in a singular Levi-flat hypersurface is polynomially convex if \(\widehat{K}\) does not have a non-empty intersection with the singular set._ In the third hypersurface of Remark 4.2 the singular set is of the form \(S:=\{(iy_{1},iy_{2})\in\mathbb{C}^{2}:\quad y_{j}\in\mathbb{R}\}\), which is a totally real subspace of maximal dimension in \(\mathbb{C}^{2}\). Hence, any compact subset lying in \(S\) is polynomially convex. A totally real disc can lie there. Hence, we ask the following question: **Question 5.7**.: _Characterize the totally real discs in singular Levi-flat hypersurfaces in \(\mathbb{C}^{2}\) which are polynomially convex._ **Acknowledgements.** The first named author is supported partially by a MATRICS Research Grant (MTR/2017/000974) and by a Core Research Grant (CRG/2022/003560) of SERB, Dept. of Science and Technology, Govt. of India. The work of the second named author is partially supported by an INSPIRE Fellowship (IF 160487), Dept. of Science and Technology, Govt. of India, and by a research grant of SERB (30121582), Dept. of Science and Technology, Govt. of India.
2306.04247
The scaling relations of galaxies back in time: the road toward virialization
Context. The structural scaling relations (SSRs) of galaxies, i.e. the observed correlations between effective radius, effective surface intensity and velocity dispersion, are important tools for understanding how evolution proceeds. Aims. In this paper we aim to demonstrate that the evolution of the SSRs back in time is governed by the combination of the virial theorem (VT) and the relation $L=L'_0 \sigma^{\beta(t)}$, where the parameters $\beta$ and $L'_0$ vary with time and from galaxy to galaxy. Methods. Using the WINGS database for the galaxies at redshift $z=0$ and the Illustris-1 and Illustris-TNG databases of artificial galaxies, for the galaxies up to redshift $z=4$, we analyse the SSRs back in time and, by means of simple algebraic expressions for $L'_0$ and $\beta$ (functions of time and other physical quantities), we derive the expected paths followed by galaxies in the various SSRs toward the distributions observed at $z=0$. Results. The distribution of galaxies in the SSRs is ultimately related to the evolution in luminosity and velocity dispersion that are empirically mirrored by the $L=L'_0 \sigma^{\beta(t)}$ law. Furthermore, the $\beta$ parameter works as a thermometer of the virialization of a galaxy. This parameter can assume either positive or negative values, and its absolute value attains high values when the galaxy is close to the virial condition, while it tends to zero when the galaxy is far from it. Conclusions. As the SSRs change with time, the method we are proposing allows us to decipher the temporal evolution of galaxies.
Mauro D'Onofrio, Cesare Chiosi
2023-06-07T08:38:49Z
http://arxiv.org/abs/2306.04247v2
# The scaling relations of galaxies back in time: the road toward virialization
###### Abstract
Context: The structural scaling relations (SSRs) of galaxies, i.e. the observed correlations between effective radius, effective surface intensity and velocity dispersion, are important tools for understanding how evolution proceeds. Aims: In this paper we aim to demonstrate that the evolution of the SSRs back in time is governed by the combination of the virial theorem (VT) and the relation \(L=L_{0}^{\prime}(t)\sigma^{\beta(t)}\), where the parameters \(\beta\) and \(L_{0}^{\prime}\) vary with time and from galaxy to galaxy. Methods: Using the WINGS database for the galaxies at redshift \(z=0\) and the Illustris-1 and Illustris-TNG databases of artificial galaxies, for the galaxies up to redshift \(z=4\), we analyse the SSRs back in time and, by means of simple algebraic expressions for \(L_{0}^{\prime}\) and \(\beta\) (functions of time and other physical quantities), we derive the expected paths followed by galaxies in the various SSRs toward the distributions observed at \(z=0\). Results: The distribution of galaxies in the SSRs is ultimately related to the evolution in luminosity and velocity dispersion that are empirically mirrored by the \(L=L_{0}^{\prime}(t)\sigma^{\beta(t)}\) law. Furthermore, the \(\beta\) parameter works as a thermometer of the virialization of a galaxy. This parameter can assume either positive or negative values, and its absolute value attains high values when the galaxy is close to the virial condition, while it tends to zero when the galaxy is far from it. Conclusions: As the SSRs change with time, the method we are proposing allows us to decipher the temporal evolution of galaxies.
## 1 Introduction
The structural scaling relations (SSRs) of galaxies, i.e. the mutual correlations between the main measured structural parameters, such as the effective radius \(R_{e}\), the effective surface intensity \(I_{e}\), the total stellar mass \(M_{s}\), the luminosity \(L\), and the central velocity dispersion \(\sigma\), have been recognized long ago as important tools for understanding the evolution of these stellar systems and for deriving fundamental cosmological information (see e.g., Faber & Jackson 1976; Kormendy 1977; Dressler et al. 1987; Djorgovski & Davis 1987). In particular, the SSRs of early-type galaxies (ETGs), which are much easier to obtain, have been used in the past as distance indicators for measuring the Hubble constant (see e.g., Dressler 1987), for testing the expansion of the Universe (see e.g., Pahre et al. 1996), for mapping the velocity fields of galaxies (see e.g., Dressler & Faber 1990; Courteau et al. 1993), and for measuring the variation of the mass-to-light ratio across time (see e.g., Prugniel & Simien 1996; van Dokkum & Franx 1996; Busarello et al. 1998; Franx et al. 2008). Among the various SSRs, the Fundamental Plane (FP) relation for the ETGs (Djorgovski & Davis 1987; Dressler et al. 1987), \(\log R_{e}=a\log\sigma+b\log I_{e}+c\), is probably the most studied one of the last 30 years. The tilt of this relation with respect to the prediction of the virial theorem (VT) has been the subject of many studies (see among many others, Faber et al. 1987; Ciotti 1991; Jorgensen et al. 1996; Cappellari et al. 2006; D'Onofrio et al. 2006; Bolton et al. 2007), invoking different physical mechanisms at work. We remember for example: i) the systematic change of the stellar mass-to-light ratio (\(M_{s}/L\)) (see e.g., Faber et al.
1987; van Dokkum & Franx 1996; Cappellari et al. 2006; van Dokkum & van der Marel 2007; Holden et al. 2010; de Graaff et al. 2021a); ii) the structural and dynamical non-homology of ETGs (see e.g., Prugniel & Simien 1997; Busarello et al. 1998; Trujillo et al. 2004; D'Onofrio et al. 2008); iii) the dark matter content and distribution (see e.g., Ciotti et al. 1996; Borriello et al. 2003; Tortora et al. 2009; Taranu et al. 2015; de Graaff et al. 2021a); iv) the star formation history (SFH) and initial mass function (IMF) (see e.g., Renzini & Ciotti 1993; Chiosi et al. 1998; Chiosi & Carraro 2002; Allanson et al. 2009); v) the effects of environment (see e.g., Lucey et al. 1991; de Carvalho & Djorgovski 1992; Bernardi et al. 2003; D'Onofrio et al. 2008; La Barbera et al. 2010; Ibarra-Medel & Lopez-Cruz 2011; Samir et al. 2016); vi) the effects of dissipation-less mergers (Nipoti et al. 2003); vi) the gas dissipation (Robertson et al. 2006); viii) the non regular sequence of mergers with progressively decreasing mass ratios (Novak 2008); ix) the multiple dry mergers of spiral galaxies (Taranu et al. 2015). A similar long list can be compiled for the small intrinsic scatter of the FP (\(\approx 0.05\) dex in the V-band), where among the claimed possible physical causes, we have: 1) the variation in the formation epoch; 2) the dark matter content; 3) the metallicity or age trends; 4) the variations of the mass-to-light ratio \(M/L\) (see e.g., Faber et al. 1987; Gregg 1992; Guzman, R. et al. 1993; Forbes et al. 1998; Bernardi et al. 2003; Reda et al. 2005; Cappellari et al., 2006; Bolton et al., 2008; Auger et al., 2010; Magoulas et al., 2012), etc.. Despite all these efforts, it is still unclear today why the FP is so tight and uniform when seen edge-on, while in its projections (i.e., in the \(I_{e}-R_{e}\), the \(I_{e}-\sigma\) and \(R_{e}-\sigma\) planes) the distribution of galaxies presents well defined structures, where regions with large clumps of objects and big scatter are observed together with regions where no galaxies are present (the so called Zone of Exclusions, ZOE), and where clear non linear distributions are well visible. The mutual dependence of the SSRs, the peculiar shape of the observed distributions and the link among the various FP projections have never found a single and robust explanation in which the tilt and the scatter of the FP are understood. The same difficulties are encountered when we consider one particular projection of the FP: the Faber-Jackson relation (FJ) (Faber & Jackson, 1976), i.e. the correlation observed between the total luminosity \(L\) and the central velocity dispersion \(\sigma\). Even in this case the observed trend is not that predicted by the VT. In addition to this, it has been shown that the \(L-\sigma\) relation is not consistent with the distribution observed in the \(I_{e}-R_{e}\) plane, in the sense that it is not possible to transform one space into the other (and viceversa) adopting the observed classical correlations (see e.g., D'Onofrio & Chiosi, 2021). In other words, the underlying questions behind the nature of the SSRs of galaxies are: can we find a single explanation for the tilt of the FP and the shapes of the distributions observed in its projections? Is it possible to reconcile the FJ relation and the \(I_{e}-R_{e}\) plane? Is it possible to account for the mutual relationship among the various projections of the FP? How are these planes linked to each other? How the SSRs change going back in time? Why the FP is so tight? 
A new perspective to simultaneously explain the tilt of the FP and the observed distributions of galaxies in the FP projection planes has been advanced by D'Onofrio et al. (2017). The novelty of their approach is based on the assumption that the luminosity of galaxies follows a relation of the form: \[L(t)=L_{0}^{\prime}(t)\sigma(t)^{\beta(t)}. \tag{1}\] where \(t\) is the time, \(\sigma\) the velocity dispersion, and the proportionality coefficient \(L_{0}^{\prime}\) and the exponent \(\beta\) are all function of time and, even more importantly, they can vary from galaxy to galaxy. This empirical relation is formally equivalent to the FJ relation for ETGs, but has a profoundly different physical meaning. In this relation \(\beta\) and \(L_{0}^{\prime}\) are free time-dependent parameters that can vary considerably from galaxy to galaxy, according to the mass assembly history and the evolution of the stellar content of each object. The new relation mirrors the effects of the evolutionary history of a galaxy on its luminosity and stellar velocity dispersion, parameters that can both vary across time because galaxies evolve, merge, and interact. In previous papers on this subject we called attention on some of the advantages offered by the joint use of the VT and the \(L=L_{0}^{\prime}(t)\sigma^{\beta(t)}\) law (D'Onofrio et al., 2017, 2019, 2020; D'Onofrio & Chiosi, 2021, 2022, 2023). Accepting the idea of a variable \(\beta\) parameter, taking either positive or negative values, yields a simple explanation of the shifts of galaxies along the SSRs. Furthermore it allows us to understand the physical reasons for the observed distributions of galaxies in the various projection planes. This approach seems to be the correct one because it is able to simultaneously account for: i) the tilt of the FP, ii) the existence of the ZoE, and iii) the shifts of galaxies in the FP projections that are closely connected with the variations of \(\sigma\) and \(L\) through the \(\beta\) parameter. In the present study we take advantage of what we learned from joining the VT and the law \(L=L_{0}^{\prime}(t)\sigma^{\beta(t)}\) to analyze how galaxies move along the SSRs at high redshift. To this aim, since current observational data at high redshifts are not enough for our aims, we adopt the data of the Illustris-1 (Vogelsberger et al., 2014, 2014) and the Illustris-TNG (Springel et al., 2018; Nelson et al., 2018; Pillepich et al., 2018) simulations from \(z=0\) up to \(z=4\) and look at the possible changes in the properties of galaxies suggested by the simulations. The paper is organized as follows: in Sec. 2 we briefly describe the samples of galaxies (both real and simulated) we have used in our work, we present the basic SSRs at \(z=0\) and we explain why one can trust in the simulated data at higher redshift. In Sec. 3 we summarize the basic equations of the problem and in Sec. 4 we show how the SSRs change with redshift and how the \(\beta\) parameter is able to account of the observed distributions at each epoch. In Sec. 5 we discuss the \(\beta\) parameter as a thermometer of the virialization condition. In Sec. 6 we discuss the history of mass assembly for a few test galaxies and investigate how \(\beta\) changes as function of time and history of mass assembly. In Sec. 7 we present our conclusions. Finally, in Appendix A we present a toy model of dry and wet mergers to estimate the variation of a galaxy luminosity as consequence of merger and companion star formation. 
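Before proceeding, it may help to picture how the two free parameters of Eq. (1) can be read off the evolution of a single object. If \(L_{0}^{\prime}\) varies slowly between two nearby epochs (an assumption made here purely for illustration; in the papers quoted above \(\beta\) and \(L_{0}^{\prime}\) are obtained from the combination of the VT and Eq. (1)), then \(\beta\simeq\Delta\log L/\Delta\log\sigma\) and \(L_{0}^{\prime}=L/\sigma^{\beta}\). The short sketch below, with invented numbers, shows how \(\beta\) is positive when \(L\) and \(\sigma\) grow together and becomes large and negative when the luminosity fades at nearly constant velocity dispersion.

```python
import numpy as np

def beta_and_L0(L1, sigma1, L2, sigma2):
    """Finite-difference estimate of beta and L0' between two epochs,
    assuming L0' changes little over the interval (illustrative only)."""
    beta = (np.log10(L2) - np.log10(L1)) / (np.log10(sigma2) - np.log10(sigma1))
    L0 = L2 / sigma2**beta
    return beta, L0

# Invented numbers (solar luminosities, km/s), not taken from the paper.
# Star-forming phase: L and sigma both grow -> beta > 0.
print(beta_and_L0(1.0e10, 120.0, 2.5e10, 150.0))
# Passive fading: L drops while sigma barely changes -> large negative beta.
print(beta_and_L0(2.5e10, 150.0, 1.8e10, 152.0))
```

The second case illustrates, in this simplified picture, why \(|\beta|\) can attain very large values for objects whose velocity dispersion has essentially stopped evolving, consistent with the role of \(\beta\) as a thermometer of virialization discussed later in Sec. 5.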
For the sake of internal consistency with the previous studies of this series, in our calculations with the Illustris-1 database we adopt the same values of the \(A\)-CDM cosmology used by (Vogelsberger et al., 2014, 2014): \(\Omega_{m}=0.2726\), \(\Omega_{\Lambda}=0.7274\), \(\Omega_{b}=0.0456\), \(\sigma_{8}=0.809\), \(n_{s}=0.963\), \(H_{0}=70.4\,km\,s^{-1}\,Mpc^{-1}\). Slightly different cosmological parameters are used by for the Illustris-TNG simulations: \(\Omega_{m}=0.3089\), \(\Omega_{\Lambda}\)= 0.6911, \(\Omega_{b}=0.0486\), \(\sigma_{8}=0.816\), \(n_{s}=0.967\), \(H_{0}=67.74\,km\,s^{-1}\,Mpc^{-1}\)(Springel et al., 2018; Nelson et al., 2018; Pillepich et al., 2018). Since the systematic differences in \(M_{s},R_{e},L,I_{e}\), and \(\sigma\) are either small or nearly irrelevant to the aims of this study, no re-scaling of the data is applied. ## 2 Observational data and model galaxies The observational data used here are the same adopted in our previous works on this subject (see, D'Onofrio & Chiosi, 2022, 2023). The data at redshift \(z\sim 0\) have been extracted from the WINGS and Omega-WINGS databases (Fasano et al., 2006; Varela et al., 2009; Cava et al., 2009; Valentinuzzi et al., 2009; Moretti et al., 2014; D'Onofrio et al., 2014; Gullieuszik et al., 2015; Moretti et al., 2017; Cariddi et al., 2018; Biviano et al., 2017). The samples used here have not the same dimension in each plot because the spectroscopic database is only a sub-sample of the whole optical photometric sample (containing \(\sim 32700\) galaxies). For this reason in some of our plots we can appreciate the distribution of the whole photometric sample, while in others only the subsamples with available measured stellar velocity dispersion or available stellar masses are visible. The subsample with measured stellar masses \(M_{s}\) contains approximately 1200 galaxies. The masses were estimated by Fritz et al. (2007) by means of the spectral synthesis analysis. This provided the measurements of the stellar masses and of the star formation rate (SFR) at different cosmic epochs (among many others quantities). Footnote 1: The measured parameters for the real galaxies are always shown in our plots with the black dots. For the reason just explained in each plot, containing real observations, the number of galaxies is not always the same. The cross-match between the spectroscopic and photometric samples gives here only 480 ETGs with available masses and velocity dispersions. The sample span a magnitude range from \(M_{V}\sim-16\) to \(M_{V}\sim-23\) mag, a central velocity dispersion range from \(\sigma\sim 50\) to \(\sigma\sim 300\) km sec\({}^{-1}\) and masses from \(10^{8.5}\) to \(10^{12}\) solar masses1. Footnote 1: The measured parameters for the real galaxies are always shown in our plots with the black dots. For the reason just explained in each plot, containing real observations, the number of galaxies is not always the same. The morphological types of the galaxies were measured with the software MORPHOT for the whole photometric dataset. The final morphological type \(T\) is quite robust, coming from the combination of different approaches (see Fasano et al. 2012, for more details). The error on the measured parameters is \(\simeq 20\%\). These are not shown in our plots, because they are much lower than the Figure 1: Comparison between the data of Illustris-1 and Illustris-TNG100. The black lines mark the TNG data, while the red ones the Illustris-1 data. 
The solid line refers to the data at \(z=0\), while the dashed lines to the data at \(z=4\). From top left to bottom right we have: the effective radius \(R_{e}\) (enclosing half the total luminosity or half the total stellar mass (for TNG); the effective surface brightness \(I_{e}\); the central star velocity dispersion \(\sigma\); the stellar mass-to-light ratio (\(M^{\prime}/L\)); the total luminosity (in solar units) and the total stellar mass (in solar units). observed range of variation of the structural parameters in the SSRs. The small size of the errors does not affect the whole distribution of galaxies. Furthermore, no quantitative analysis has been made here, such as fits of data or statistical evaluations. The sample of real data at \(z\sim 0\) is used only to demonstrate that the simulated galaxies quite well reproduce the SSRs of the local objects and therefore there are good reasons to trust in the simulation when we look at the behavior of the SSRs at much higher redshift. The analysis of the SSRs at high redshift is unfortunately still difficult for galaxies above \(z\sim 1.0\), because the observational surveys at these redshifts contain only few and sparse data. Some empirical evidences however exists for a varying tilt of the FP with redshift (see e.g., di Serego Alighieri et al., 2005; Beifiori et al., 2017; Oldham et al., 2017; de Graaff et al., 2021). Given such difficulties we decided to perform our analysis of the SSRs at high redshift using the database of artificial galaxies provided by the Illustris-1 and Illustris-TNG simulations. The hydrodynamic simulations, like the Illustris databases, are today the best models available to compare theory with observations, despite the fact that several problems still bias their results. The first set of artificial galaxies, named Illustris-1 appeared on 2014 (see e.g. Vogelsberger et al., 2014; Genel et al., 2014; Nelson et al., 2015). Later on, a number of works demonstrated that Illustris-1 suffer from a number of problems: it yields an unrealistic population of ETGs with no correct colours, it lacks morphological information, the sizes of the less massive galaxies are too big, and the star formation rates are not always comparable with observations (see e.g., Snyder et al., 2015; Bottrell et al., 2017; Nelson et al., 2018; Rodriguez-Gomez et al., 2019; Huertas-Company et al., 2019; D'Onofrio & Chiosi, 2023). In addition to this, there is the claim in the literature that Illustris-1 does not produce a realistic red sequence of galaxies due to insufficient quenching of the star formation with too few red galaxies (Snyder et al., 2015; Bottrell et al., 2017; Nelson et al., 2018; Rodriguez-Gomez et al., 2019), while Illustris-TNG produces a much better result (Nelson et al., 2018; Rodriguez-Gomez et al., 2019). There is also the problem of the insufficient number of red galaxies with respect to the observed population of ETGs. For what concern the internal structure of the Illustris-1 galaxies, Bottrell et al. (2017) measured the Sersic index, the axis ratio and the radii, and found that too few bulge-dominated objects are produced in tension with observations. In contrast, the Illustris-TNG galaxies have much better internal structural parameters (Rodriguez-Gomez et al., 2019). For this reason Illustris-1 was superseded in 2018 by Illustris-TNG (Springel et al., 2018; Nelson et al., 2018; Pillepich et al., 2018). 
In this work we considered only the subsample named Illustris-TNG-100, which is briefly referred to below as Illustris-TNG. This sample has approximately the same volume and resolution of Illustris-1 and it used the same initial condition (updated for the different cosmology) adopted by Illustris-1. Among the many tabulated quantities provided for the galaxies of Illustris-1, we worked in particular with the V-band photometry, the mass and half-mass radii of the stellar particles (i.e., integrated stellar populations), for the most massive clusters, for which Cartesian comoving coordinates (x\({}^{*}\), y\({}^{*}\), z\({}^{*}\)) are available. We have analyzed in our previous papers the projected light and mass profiles using the z\({}^{*}\)=0 plane as a reference plane. Starting from the V magnitudes and positions of the stellar particles, we computed the effective radius \(R_{e}\), the radial surface brightness profile in units of \(r/R_{e}\), the best-fit Sersic index, and the line-of-sight velocity dispersion. The values of \(R_{e}\) were calculated considering only the star particles inside the friend-of-friends (FoFs) of galaxies and the galaxies inside the FoFs of clusters. We have set z\({}^{*}\)=0 to project the coordinates of the stellar particles inside galaxies so that the velocity dispersion is calculated along the z\({}^{*}\)-axis. The sample does not contain galaxies with masses lower than \(10^{9}\) solar masses at \(z=0\) because for these objects it was impossible to derive \(R_{e}\). The total stellar mass has been used here. The data-set for each value of the redshift extracted from the Illustris-1 simulation and used here contains \(\sim 2400\) galaxies of all morphological types. A full description of this data-set was given in Cariddi et al. (2018) and D'Onofrio et al. (2020). From the TNG-100 dataset we selected the first 1000 objects, ordered with decreasing stellar masses, coming out from the online Search Galaxy/Subhalo Catalog2. In this case we used the half-mass stellar radius instead of the effective radius \(R_{e}\). This radius is not so different from the effective radius and its use does not change in any way the conclusions reached here. The data have been extracted at redshift \(z=4\), \(z=3\), \(z=2\), \(z=1\) and \(z=0\) in order to be consistent with those used for Illustris-1. Footnote 2: See [https://www.tng-project.org/data/](https://www.tng-project.org/data/) The choice of using both Illustris-1 and Illustris-TNG has the following reasons: i) we want to be consistent with our previous works on this subject; ii) the differences in \(M_{s}\), \(R_{e}\), \(I_{e}\), \(L\), and \(\sigma\) of Illustris-1 and Illustris-TNG do not bias significantly the results on the values of the \(\beta\) and \(L_{0}^{\prime}\) parameters of the \(L=L_{0}^{\prime}\sigma^{\beta}\) law (see D'Onofrio & Chiosi, 2023); iii) the two data samples are in some way complementary, since Illustris-TNG has better measurements of the half-mass radii of less massive galaxies, while Illustris-1 seems much rich in massive objects; iv) the two simulations agree in the physical parameters of the massive objects. The detailed analysis of the differences between Illustris-1 and Illustris-TNG data has not been addressed here because there are already several studies on this subject (see e.g., Pillepich et al., 2018, 2018, 2019; Hoerta-Company et al., 2019). 
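For clarity, the two measurements described above reduce to a few lines of array manipulation. The sketch below (plain NumPy, with hypothetical variable names; it is not the actual pipeline used to build the databases) projects the star particles of one galaxy on the z*=0 plane, finds the radius of the circle enclosing half of the total luminosity, and computes the luminosity-weighted velocity dispersion along the z*-axis; \(I_{e}\) then follows as \(L/(2\pi R_{e}^{2})\).

```python
import numpy as np

def projected_half_light_radius(x, y, lum):
    """Radius of the circle, in the z*=0 projection plane, enclosing half of
    the total luminosity (coordinates assumed centred on the galaxy)."""
    r = np.hypot(x, y)
    order = np.argsort(r)
    cum = np.cumsum(lum[order])
    k = np.searchsorted(cum, 0.5 * cum[-1])
    return r[order][k]

def los_velocity_dispersion(vz, weights=None):
    """Weighted velocity dispersion along the line of sight (z*-axis)."""
    w = np.ones_like(vz) if weights is None else weights
    mean = np.average(vz, weights=w)
    return np.sqrt(np.average((vz - mean) ** 2, weights=w))

# Hypothetical particle arrays for one simulated galaxy:
# positions in kpc, velocities in km/s, V-band luminosities in L_sun.
rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 3.0, 50_000), rng.normal(0.0, 3.0, 50_000)
vz = rng.normal(0.0, 180.0, 50_000)
lum = rng.uniform(0.5, 1.5, 50_000)

R_e = projected_half_light_radius(x, y, lum)
sigma = los_velocity_dispersion(vz, lum)
I_e = lum.sum() / (2.0 * np.pi * R_e**2)
print(R_e, sigma, I_e)
```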
One of the issues of major tension between the two suites of models concerns the radii of the low mass galaxies (roughly \(M_{s}\leq 5\times 10^{10}\,M_{\odot}\)), where the Illustris-TNG radii are about a factor of two smaller than those of Illustris-1, while above this mass they are nearly equal (Pillepich et al., 2018, 2018, 2019; Rodriguez-Gomez et al., 2019; Huertas-Company et al., 2019). Figure 1 shows the distributions of the Illustris-1 (red lines) and Illustris-TNG100 (black lines) data for several parameters used here at two redshift epochs: \(z=0\) (solid lines) and \(z=4\) (dashed lines). From the figure we see that the effective radii of Illustris-1 are systematically a bit larger than those of TNG100. Another significant difference is found in the distribution of the total luminosity and total stellar masses. As already said, the Illustris-1 sample does not contain objects with masses lower than \(10^{9}\) solar masses at \(z=0\). It follows that the distribution of masses and luminosities appears different for the TNG sample: it is much smoother and flatter than that of Illustris-1, which seems to be peaked at 9 dex approximately for the objects at \(z=0\). The range covered by luminosities and masses however is quite similar. The other parameters appear more or less superposed. We will see later that such differences do not compromise the analysis done here as well as the main conclusions. The intrinsic problems of the simulations are of little relevance for our analysis because: i) we do not make use of the color of galaxies and of the SFRs; ii) we have demonstrated (see D'Onofrio & Chiosi, 2023) that the two samples of Illustris-1 and Illustris-TNG produce similar distributions of the \(\beta\) and \(L_{0}^{\prime}\) parameters of the \(L=L_{0}^{\prime}\sigma^{\beta}\) law; iii) we will show here that the SSRs at high redshift of the two samples are very similar; iv) the point mass view of the galaxies adopted here ensures that our analysis is not too much affected by the problems of the simulations. For both Illustris-1 and Illustris-TNG we did not extract information on the morphology of the galaxies. For this reason in our comparison ETGs and late-type galaxies (LTGs) are mixed in our plots. This choice originates from the observation that the SSRs of ETGs and LTGs are almost identical. This is clearly seen in Figure 2 showing the \(R_{e}\)-\(M^{*}\) (left panel) and the \(I_{e}-R_{e}\) (right panel) for the ETGs (open circles) and LTGs (filled black circles). The two distributions are very well superposed in both diagrams. The only exception is that very large \(R_{e}\) are observed only for the most massive ETGs. This is only partially in agreement with Fig. 11 of Huertas-Company et al. (2019), showing that ETGs and LTGs follow quite similar trends, but with small systematic differences for the two morphological types. The data of the WINGS database do not suggest any significant difference in the SSRs of LTGs and ETGs. We believe that the effective radii \(R_{e}\) measured in their work are affected by a systematic bias due to the method used to derive \(R_{e}\). While for the WINGS galaxies the effective radius was measured as the circle enclosing half the total luminosity, in the Huertas-Company work the semi-major axis of the best-fitting Sersic model was used. This choice can likely introduce a systematic effect due to the inclination of the galaxies and the intrinsic shape of the light profiles. In any case the inclusion of LTGs is a potential source of bias.
We remark in addition that the completeness of the samples is not critical for the conclusions drawn here. In fact we do not attempt any statistical analysis of the data nor do we fit any distribution to derive correlations. The data are only used to qualitatively show how the distribution of galaxies in the various planes can change with the different cosmic epochs and how the \(L=L_{0}^{\prime}\sigma^{\beta}\) law and the \(\beta\) parameter can at least qualitatively account for the variations expected/observed across time. The kind of analysis carried out here is indeed somehow independent of the level of precision reached by the models of the different sources, because we are mainly interested in presenting the method for deciphering the information encrypted in the observational data of the SSRs. The only hypothesis made here is that we can trust the results of simulations at high redshifts. This hypothesis is based on the fact that the simulations are able to reproduce some features of the distributions seen in the FP projections at redshifts \(z\sim 0\) and the tilt of the FP at \(z\sim 1\) (see below). The artificial galaxies match quite well the observations, reproducing the position of the brightest cluster galaxies and the existence of the Zone of Exclusion (ZoE). All this makes us confident that the simulations produce galaxies with luminosities, stellar masses and effective radii not too far from those of real galaxies. Figure 3 shows the four main important SSRs for the WINGS and Illustris data. The left upper panel plots the stellar mass \(M^{*}\) versus the effective radius \(R_{e}\)3. The WINGS data (black dots), the Illustris-1 data (red dots) and the Illustris-TNG data (blue dots) at \(z=0\) are well visible. We note that the \(R_{e}\)-\(M^{*}\) relation is clearly non-linear. The galaxies of small masses are distributed nearly horizontally, while the brightest galaxies follow a tail with a slope close to 1. The real and simulated data nicely superimpose each other over the same range of mass, even if the effective radii of Illustris-1 for the less luminous galaxies are systematically greater than the observational ones. In contrast, Illustris-TNG gives much smaller radii for the low mass galaxies. This is a well known fact already discussed in our papers of this series (see e.g., D'Onofrio et al. 2020; D'Onofrio and Chiosi 2022, 2023). Both observations and simulations suggest the presence of the tail for the brightest ETGs, in which radii and masses are almost identical in observations and simulations. The different number of objects in the tail is due to different volumes sampled and to the way in which the samples have been created: the WINGS and Illustris-1 datasets include only objects from clusters of galaxies, where large ETGs are frequent, while Illustris-TNG takes galaxies from the general field. In addition, the total volume of the surveys is different for WINGS, Illustris-1 and Illustris-TNG. Footnote 3: The symbols \(M_{s}\) and \(M^{*}\) both used in this work always refer to the total stellar mass in solar units. Figure 2: The \(R_{e}\)-\(M^{*}\) (left panel) and the \(I_{e}-R_{e}\) (right panel) planes for the WINGS galaxy sample. Late type galaxies (LTGs) (black dots) and ETGs (open circles) share the same distributions in these SSRs.
The effective radius \(R_{e}\) is given in kpc in the \(R_{e}\)-\(M^{*}\) plane and in pc in the \(I_{e}-R_{e}\) plane. Masses are given in solar units. The right upper panel of Fig. 3 shows the \(I_{e}-R_{e}\) plane obtained with the same data (here the sample of WINGS galaxies is much smaller than in Fig. 2 because only the subsample is involved). Also in this case the most important fact to note is that the simulations correctly reproduce the presence of the tail for the brightest ETGs, which is clearly separated from the cloudy distribution of the less luminous galaxies. This tail, already seen in the original paper of Kormendy (1977), has a slope close to \(-1\) (that predicted by the VT) and has been attributed to the peculiar evolution of the brightest galaxies that grow in mass by minor mergers (among others, see e.g., Capaccioli et al. 1992). Our conclusion is therefore that both simulations catch the presence of some peculiar features of the \(I_{e}-R_{e}\) plane: the cloudy distribution of the faint galaxies, the tail formed by the brightest ETGs, and the ZoE, i.e. the region totally empty of galaxies above the dashed black line in Fig. 2. The lower panels of Fig. 3 show the other projections of the FP: the \(I_{e}-\sigma\) and \(R_{e}-\sigma\) planes. Again we observe that both simulations are quite well superposed on the observational data. In particular the simulations are able to reproduce the curvature observed in the two distributions. The good agreement between observations and simulations at \(z=0\) is a good starting point. It tells us that simulations are able to reproduce the main features of the SSRs at \(z=0\). However, since our aim is to use simulations to infer the possible behavior of the SSRs at higher cosmic epochs, we need at least one further proof that simulations are able to capture the structural parameters of galaxies at much higher redshifts. To prove this we have used the data of di Serego Alighieri et al. (2005), who have analyzed the FP at \(z\sim 1\). In our Fig. 4 we can appreciate that the FP at this redshift epoch coming from the Illustris data (red and blue circles as before) is in good agreement with the observed one (black filled circles). The tilt of the plane is in practice identical, with a value of the \(a\) coefficient lower than 1 (for the Coma cluster the tilt provides \(a\sim 1.2\)). The tilt is different with respect to that measured for the local clusters, and this is an indication that there is an evolution of the structural parameters that seems to be reproduced by the simulations. Even in this case we note that the radii of the galaxies in the simulations are systematically a bit larger than those measured for the real galaxies, but this does not change the FP tilt of the simulated galaxies. Probably, since the total luminosity of the galaxies is quite well reproduced by the simulations, the combination of \(I_{e}\) and \(R_{e}\) is correct and the different effective radii simply change \(I_{e}\) in such a way that the galaxies shift along the FP and not orthogonally to it. Figure 3: The \(R_{e}\)-\(M^{*}\) plane and the three different projections of the FP. The black dots are the WINGS observational data. The red and blue dots are the data extracted from Illustris-1 and Illustris TNG-100 respectively. The galaxies with \(\log(R_{e})<2.5\) are not plotted here to better show the bulk of the galaxy distribution.
In concluding this section we observe that the artificial galaxies in the simulations are in quite good agreement with the real galaxies as far as the main structural parameters are concerned, even at much larger redshifts. The differences with respect to real galaxies mainly concern the stellar content, the colors and the star formation rates, but these differences do not seem to plague the general behavior of the SSRs. For this reason we believe that it is possible to extract some information on the evolution of galaxies by looking at the distributions of galaxies in the SSRs. When high-redshift data become available in large numbers, we will be able to better compare observations and simulations and extract useful information on the evolution of galaxies. ## 3 The basic equations of our framework Before starting the discussion of the main SSRs predicted for the most remote cosmic epochs, it is important to summarize here the main conclusions drawn by D'Onofrio & Chiosi (2022, 2023) using the combination of the VT and the \(L=L_{0}^{\prime}\sigma^{\beta}\) law4. This combination is the key novelty of their approach and a necessary premise for understanding what follows. The two equations representing the VT and the \(L=L_{0}^{\prime}\sigma^{\beta}\) law are: Footnote 4: From here on we drop the time notation for simplicity. \[\sigma^{2} = \frac{G}{k_{v}}\frac{M_{s}}{R_{e}} \tag{1}\] \[\sigma^{\beta} = \frac{L}{L_{0}^{\prime}}=\frac{2\pi I_{e}R_{e}^{2}}{L_{0}^{\prime}}. \tag{2}\] In these equations \(\beta\) and \(L_{0}^{\prime}\) are free time-dependent parameters that depend on the peculiar history of each object. From these equations one can derive all the mutual relationships existing among the parameters \(M_{s}\), \(R_{e}\), \(L\), \(I_{e}\), \(\sigma\) characterizing a galaxy. We find: \[I_{e}=\Pi R_{e}^{\gamma} \tag{3}\] for the \(I_{e}-R_{e}\) plane, where \[\gamma=\frac{(2/\beta)-(1/2)}{(1/2)-(1/\beta)}\] and \(\Pi\) is a factor that depends on \(k_{v}\), \(M/L\), \(\beta\), and \(L_{0}^{\prime}\). It is given by \[\Pi=\left[\left(\frac{2\pi}{L_{0}^{\prime}}\right)^{1/\beta}\left(\frac{L}{M_{s}}\right)^{(1/2)}\left(\frac{k_{v}}{2\pi G}\right)^{(1/2)}\right]^{\frac{1}{1/2-(1/\beta)}}.\] Then we have: \[I_{e}=\left[\frac{G}{k_{v}}\frac{L_{0}^{\prime}}{2\pi}M_{s}\Pi^{3/\gamma}\right]^{\frac{\beta-2}{1+(1/\beta)}}\sigma^{\frac{\beta-2}{1+(1/\beta)}} \tag{4}\] for the \(I_{e}-\sigma\) relation and \[R_{e}=\left[\frac{G}{k_{v}}\frac{L_{0}^{\prime}}{2\pi}\frac{M_{s}}{\Pi}\right]\sigma^{\frac{\beta-2}{1+(1/\beta)}} \tag{5}\] for the \(R_{e}-\sigma\) relation. In addition we have: \[R_{e}=\left[\frac{G}{k_{v}}\rho/2\frac{L_{0}^{\prime}}{2\pi}\frac{1}{\Pi}\right]^{\frac{\beta-(1+\beta)}{2-(1/\beta)}}M_{s}^{\frac{\beta-2}{1+(1/\beta)}} \tag{6}\] for the \(R_{e}\)-\(M^{*}\) relation. It is important to note here that in all these equations the slopes of the log relations depend only on \(\beta\). This means that when a galaxy changes its luminosity \(L\) and its velocity dispersion \(\sigma\), i.e. when \(\beta\) has a well defined value (either positive or negative), the effects of the motion in the \(L-\sigma\) plane are propagated into all the FP projections. In these planes the galaxies cannot move in arbitrary directions, but are forced to move only along the directions (slopes) predicted by the \(\beta\) parameter in the above equations. In this sense the \(\beta\) parameter is the link we are looking for between the FJ (and the FP) and the observed distributions in the FP projections.
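To make the dependence of the slopes on \(\beta\) concrete, consider the \(I_{e}-R_{e}\) slope \(\gamma\) of eq. (3) in the limit of large \(|\beta|\), a condition that, as discussed below, corresponds to galaxies close to full virialization: \[\lim_{|\beta|\to\infty}\gamma=\lim_{|\beta|\to\infty}\frac{(2/\beta)-(1/2)}{(1/2)-(1/\beta)}=\frac{-1/2}{1/2}=-1,\] which is the slope predicted by the VT and, as shown in the figures below, the value progressively approached by the arrows of the brightest galaxies as \(z\to 0\).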
In addition, the combination of eqs. (1) and (2) gives us another important equation. It is now possible to write a FP-like equation valid for each galaxy depending on the \(\beta\) and \(L_{0}^{\prime}\) parameters: \[\log R_{e}=a\log\sigma+b\langle\mu\rangle_{e}+c \tag{7}\] where the coefficients \[a=(2+\beta)/3,\qquad b=0.26,\qquad c=-10.0432+0.333\,[-\log(G/k_{v})-\log(M/L)-2\log(2\pi)-\log(L_{0}^{\prime})] \tag{8}\] are written in terms of \(\beta\) and \(L_{0}^{\prime}\). We note that this is the equation of a plane whose slope depends on \(\beta\) and whose zero-point depends on \(L_{0}^{\prime}\). The similarity with the FP equation is clear. The novelty is that the FP is an equation derived from the fit of a distribution of real objects, while here each galaxy independently follows an equation formally identical to the classical FP but of profoundly different physical meaning. In this case, since \(\beta\) and \(L_{0}^{\prime}\) are time dependent, the equation represents the instantaneous plane on which a generic galaxy is located in the \(\log(\sigma)-\log(I_{e})-\log(R_{e})\) space and consequently in all its projections. Figure 4: The FP at redshift \(z=1\). The black filled circles are the data of di Serego Alighieri et al. (2005). The red and blue circles are for Illustris-1 and Illustris-TNG respectively. The FP has been determined considering only the galaxies with masses in the same range of the observational sample (those with \(M>10^{10}M_{\odot}\)). Finally, the combination of the above equations allows us to determine the values of \(\beta\) and \(L_{0}^{\prime}\), the two critical evolutionary parameters. This is possible by writing the following equations: \[\beta[\log(I_{e})+\log(G/k_{v})+\log(M_{s}/L)+\log(2\pi)+\log(R_{e})]+2\log(L_{0}^{\prime})-2\log(2\pi)-4\log(R_{e})=0 \tag{9}\] \[\beta\log(\sigma)+\log(L_{0}^{\prime})+2\log(\sigma)+\log(k_{v}/G)-\log(M_{s})-\log(2\pi)-\log(I_{e})-\log(R_{e})=0. \tag{10}\] Defining now \[A=\log(I_{e})+\log(G/k_{v})+\log(M_{s}/L)+\log(2\pi)+\log(R_{e}),\qquad B=-2\log(2\pi)-4\log(R_{e}),\] \[A^{\prime}=\log(\sigma),\qquad B^{\prime}=2\log(\sigma)-\log(G/k_{v})-\log(M_{s})-\log(2\pi)-\log(I_{e})-\log(R_{e}), \tag{11}\] we obtain the following system: \[A\beta+2\log(L_{0}^{\prime})+B=0 \tag{12}\] \[A^{\prime}\beta+\log(L_{0}^{\prime})+B^{\prime}=0 \tag{13}\] with solutions: \[\beta=\frac{-2\log(L_{0}^{\prime})-B}{A} \tag{14}\] \[\log(L_{0}^{\prime})=\frac{A^{\prime}B/A-B^{\prime}}{1-2A^{\prime}/A}. \tag{15}\] The key result is that the parameters \(L\), \(M_{s}\), \(R_{e}\), \(I_{e}\) and \(\sigma\) of a galaxy fully determine the evolution that is encoded in the parameters \(\beta\) and \(L_{0}^{\prime}\). Considering the fact that each structural parameter is known with a maximum error of \(\sim 20\%\), the individual values of \(\beta\) cannot be trusted too much. On average, instead, we will show that the galaxies move in the SSRs only in the directions defined by \(\beta\). Given this premise, we proceed now to show the basic SSRs at much higher redshifts. ## 4 The SSRs at high redshift To explore the behavior of the SSRs at high redshifts we can only rely on simulations because we do not have enough observational data for galaxies at high redshift. Fortunately, despite the small systematic overestimate of the effective radii, the Illustris-1 and Illustris-TNG data are sufficiently good to be trusted even at high redshifts.
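As a concrete illustration of eqs. (11)-(15), the following minimal Python sketch (our own illustration, not code from the original paper; the function name, the use of numpy, and the unit bookkeeping are assumptions) solves the linear system for \(\beta\) and \(\log L_{0}^{\prime}\) of a single galaxy, given base-10 logarithms of its structural parameters in mutually consistent units:

```python
import numpy as np

def beta_logL0(log_Ie, log_Re, log_sigma, log_Ms, log_L, log_G_over_kv):
    """Solve eqs. (12)-(13) for beta and log(L0') of one galaxy.

    All inputs are base-10 logarithms of I_e, R_e, sigma, M_s, L and G/k_v,
    expressed in the same (mutually consistent) units adopted in the text.
    """
    log_2pi = np.log10(2.0 * np.pi)
    # Coefficients of eq. (11)
    A  = log_Ie + log_G_over_kv + (log_Ms - log_L) + log_2pi + log_Re
    B  = -2.0 * log_2pi - 4.0 * log_Re
    Ap = log_sigma
    Bp = 2.0 * log_sigma - log_G_over_kv - log_Ms - log_2pi - log_Ie - log_Re
    # Linear system:  A*beta + 2*logL0 + B = 0 ,  Ap*beta + logL0 + Bp = 0
    # The matrix becomes nearly singular when A ~ 2*Ap (i.e. 1 - 2A'/A -> 0),
    # which is exactly the near-virialization regime where |beta| diverges.
    M   = np.array([[A, 2.0], [Ap, 1.0]])
    rhs = np.array([-B, -Bp])
    beta, logL0 = np.linalg.solve(M, rhs)
    return beta, logL0
```

The near-singular case flagged in the comments corresponds to the divergence of \(\beta\) and \(\log L_{0}^{\prime}\) discussed in Sect. 5 when galaxies approach strict virial equilibrium.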
Furthermore, as shown by D'Onofrio & Chiosi (2023), both Illustris-1 and Illustris-TNG produce a very similar distribution for the \(\beta\) parameter. Thanks to this, the simulated data can provide a reliable insight on the evolution of the SSRs with time. In the following we will show the results for both the Illustris-1 and Illustris-TNG samples currently available to us. Figures 5 and 6 present the \(I_{e}-R_{e}\) plane from \(z=4\) (upper left) to \(z=0\) (bottom right) for the Illustris-1 and Illustris-TNG respectively. For Illustris-1 the whole sequence of redshifts is \(z=4\) (left upper panel), \(z=3\), \(z=2.2\), \(z=1.6\), \(z=1.0\), \(z=0.6\), \(0.2\), \(z=0\) (right bottom panel) as indicated. In all the panels, the colored dots indicate galaxies with \(\beta>0\) and the black points those with \(\beta<0\). Crosses and open squares indicate galaxies with SFR greater than the average \(<\)SFR\(>\) or lower than the average respectively at each redshift epoch. The same color code is also used for the arrows indicating the mean slope of \(\beta\), calculated from eq. (15) for each object of the simulation). Such mean value provides approximately the direction of motion in this plane for most of the galaxies as expected from eq. (3). The sequence of panels indicates that the tails well visible at \(z=0\) for the brightest galaxies start to appear at \(z\sim 1-1.5\). This epoch probably corresponds to the time in which minor mergers either with or without star formation on already formed massive objects became the typical event thus increasing both the mass and radius of galaxies (Naab et al., 2009). It is interesting to note that the directions of the arrows, whose slope depends on eq. (3) (see Table 1), flip progressively with \(z\), in particular for the positive \(\beta\)'s, assuming the value close to \(-1\) (as predicted by the VT) approximately at \(z\sim 0.6\), and remaining constant thereafter. Such slope gives the only possible direction of motion of galaxies in the \(I_{e}-R_{e}\) plane at each epoch when the evolution proceeds and \(\beta\) changes. The brightest galaxies, that have likely reached a full virial equilibrium far in the past, are no longer affected by strong episodes of star formation, and start to move along this direction at \(z\sim 1-1.5\), forming the tail we observe today. Notably, even the galaxies with \(\beta\leq 0\) progressively reach the same slope. This happens because several objects have large negative \(\beta\) values. As we will see later, both positive and negative values of \(\beta\) are possible and, as demonstrated by D'Onofrio & Chiosi (2021), this is a necessary condition for reproducing the \(L-\sigma\) distribution starting from the \(I_{e}-R_{e}\) distribution (and viceversa). A further thing to note is that the galaxies with strong SFR (greater than \(<\)SFR\(>\)) have in general a positive \(\beta\). When some object with negative \(\beta\) appears on top of the distribution, it has a SFR \(>\)\(<\)SFR\(>\). Only later on, when the present epoch is approached, we start to see in the upper part of the cloud, galaxies with negative \(\beta\) and SFR \(<\)SFR\(>\) (the black open squares). These objects might be relatively small compact galaxies where the SF is over (a possible candidate for this class of galaxies could be M32). Notably the galaxies with the higher surface brightness have positive \(\beta\) at high redshift and only later this region of the plot is populated by objects with negative \(\beta\) and low SFR. 
A very similar behavior is observed in Fig. 6 when we use the Illustris-TNG data. As in Fig. 5, the colored dots indicate the galaxies with positive \(\beta\) and the arrows the mean value of \(\beta\). With respect to the previous figure we note that the colored arrows do not change their directions very much. We attribute this behavior to the different sizes of the two samples. For the Illustris-TNG data the arrows always have a slope close to \(-1\), a fact that indicates quite large values of \(\beta\). In any case, the trend of populating the upper region of high surface brightness with objects with negative \(\beta\) and low \(<\)SFR\(>\) is confirmed. One might ask now what produces the tail for the brightest objects in the \(I_{e}-R_{e}\) plane, since the arrows point approximately in the same directions at all redshift epochs. This can be better understood by looking at the other FP projections and taking into account that the physical mechanisms at work in small and large galaxies can be different. Figure 7 is even more impressive in showing how the changes of \(\beta\) across time determine the motions in the FP projections (the symbols and color code have the same meaning as in Fig. 5). In the \(I_{e}-\sigma\) plane the curvature formed by the brightest galaxies is much more pronounced. We note again that as \(\beta\) increases, the colored arrows point in the direction of the tail that we see at \(z\sim 0\). In the figure we also note that the galaxies with negative \(\beta\) are preferentially at the bottom of the cloud distribution and their number decreases up to \(z\sim 0.6-1.0\). Then they increase again in number and tend to crowd the top region of the distribution. As before, the upper region of the distribution with high \(I_{e}\) is populated by objects with high SFR at the most remote epochs, and only approaching \(z=0\) do galaxies with negative \(\beta\) and low SFR appear on top of the distribution. Again the TNG data (Fig. 8) give a similar picture of the \(I_{e}-\sigma\) plane. Now the change of direction due to positive and negative values of \(\beta\) is more evident and we understand that the tail originates when \(\beta\) increases and the galaxies progressively become more virialized. Figures 9 and 10 display the \(R_{e}-\sigma\) plane with the Illustris-1 and Illustris-TNG data, respectively. Again the slopes of the arrows predicted by eqs. (3), (4), and (5) are in good agreement with those inferred from the observed distribution of real galaxies and explain the tail formed by the bright galaxies. The same can be said for the \(R_{e}\)-\(M^{*}\) plane (Figs. 11 and 12). In both planes the tail formed by the brightest galaxies stands out clearly. The slope of the tail in the \(R_{e}\)-\(M^{*}\) plane is very close to 1, as predicted by the VT. The bottom right panel (that at \(z=0\)) shows in particular that objects with both negative and positive large values of \(\beta\) begin to climb the tail as soon as their mass exceeds about \(10^{11}\)\(M_{\odot}\). As in the \(I_{e}-R_{e}\) plane, the TNG data exhibit quite similar mean values for the slopes predicted at the different redshifts. The slope is close to 1 and this means that the majority of the galaxies are quite well virialized at \(z=0\). A further notable thing is that the TNG100 data indicate the presence of quite massive objects with small radii, not visible in Illustris-1. These objects might be the class of compact massive galaxies with high \(I_{e}\) also visible in Figs. 6 and 8.
Notably we can see that all these objects have negative \(\beta\) and very low SFR. In other words they are isolated compact massive galaxies where the SF stopped a long time ago. The picture we have illustrated here using the Illustris-1 and Illustris-TNG data clearly reveals a progressive trend of the galaxies toward full virial equilibrium, as indicated by the slopes of the arrows when \(|\beta|\to\infty\). Such a condition is reached by the most massive galaxies approximately at \(z\sim 1.5-1.0\). Figure 5: The \(I_{e}-R_{e}\) plane at different redshift for Illustris-1. From top left to bottom right we can see the distribution of galaxies at \(z=4\), \(z=3\), \(z=2.2\), \(z=1.6\), \(z=1.0\), \(z=0.6\), \(z=0.2\) and \(z=0\). The colored dots mark the galaxies with \(\beta>0\) at each epoch, whereas the black symbols those with \(\beta\leq 0\). The crosses mark the galaxies with SFR greater than the mean \(<\)SFR\(>\) at each epoch, while the open squares those with SFR lower than \(<\)SFR\(>\). The arrows indicate the average direction of motion predicted on the basis of the values of \(\beta\) from eq. (3) (see also Table 1). The colored arrows mark the average for the galaxies with \(\beta>0\) and the black arrows those with \(\beta\leq 0\). Their length is not related or proportional to any other quantity: it has been chosen for graphical reasons. In general the galaxies with the largest radii have \(\beta>0\) both in simulations and observations. This behavior is compatible with the predictions of minor mergers, in which galaxies might increase their radius without significantly changing their mass and luminosity (Naab et al. 2009; Genel et al. 2018). Finally, we have to remark that the FJ relation does not change very much with redshift. As the redshift decreases from \(z=4\) to \(z=0\) through six intermediate steps, we observe that some galaxies have negative \(\beta\) and others have positive \(\beta\). The fits of the observed distributions reveal that the slope of the FJ relation progressively decreases, passing from nearly 4 (at \(z=4\)) to nearly 2 (at \(z=0\)). One may legitimately ask why the scatter of the FJ relation does not increase with time if there are objects that move nearly perpendicularly to the trend indicated by the observed distribution. The same trend is visible with the TNG data (not plotted here). We believe that the scatter cannot increase as a consequence of the merger activity because the maximum possible variation in luminosity that a galaxy might experience does not exceed a factor of two (when a galaxy approximately doubles its mass merging with a similar object of the same mass and stellar content), which in log units corresponds to a factor of \(\sim 0.3\), a very small shift compared with the scale spanned by the data values. To support this statement, in Appendix A we present a toy model predicting the effects on the total luminosity that a galaxy with mass \(M_{1}\), age \(T_{1}\) and luminosity \(L_{1}\) would undergo as a consequence of a merger with another object of mass \(M_{2}\), age \(T_{2}\) and luminosity \(L_{2}\). The event may or may not be followed by star formation engaging a certain amount of gas with mass \(M_{3}\). Using reasonable values for the masses and luminosities of the three components (see eq. (A.1) in Appendix A) we may expect that the total luminosity first increases and then decreases on a timescale that depends on the amount of matter engaged in the burst of activity.
In any case the luminosity evolution is fast up to a few \(10^{8}\) years after the burst (turnoff mass about \(3M_{\odot}\)), slows down up to \(10^{9}\) years (turnoff mass about \(2~{}M_{\odot}\)), and then becomes even slower afterwards. The estimated fading rate of the luminosity is about \(|\Delta(\log L/L_{\odot})|\simeq 0.015\) per Gyr and per unit SSP mass, and must be multiplied by 5.8 to get the real fading rate per Gyr (see the SSP database of Tantalo 2005). Consequently it is very unlikely to catch a galaxy exactly at the time of its maximum luminosity. Equation (A.1) allows us to quickly evaluate the effects of mergers with different combinations of masses and ages of the involved galaxies. Figure 6: The \(I_{e}-R_{e}\) plane at different redshift for the TNG-100 data. From top left to bottom right we can see the distribution of galaxies at \(z=4\), \(z=3\), \(z=2.2\), \(z=1.0\), and \(z=0\). The colored dots mark the galaxies with \(\beta>0\) at each epoch, whereas the black symbols those with \(\beta\leq 0\). The crosses mark the galaxies with SFR greater than the mean \(<\)SFR\(>\) at each epoch, while the open squares those with SFR lower than \(<\)SFR\(>\). The arrows indicate the average direction of motion predicted on the basis of the values of \(\beta\) from eq. (3) (see also Table 1). The colored arrows mark the average for the galaxies with \(\beta>0\) and the black arrows those with \(\beta\leq 0\). Their length is not related or proportional to any other quantity: it has been chosen for graphical reasons. However, the examples shown in Appendix A demonstrate that, except for the case of a merger between two objects of comparable mass, in which the luminosity and mass of the resulting object are double the original ones, mergers among objects of different mass and age, likely undergoing some star formation during the merger, generate objects that in practice do not keep trace of the merger but simply retain the properties (mass and luminosity) of the most massive component. More details are not of interest here. The main conclusion of this section can be summarized as follows. The hypothesis that the VT and the \(L=L_{0}^{\prime}\sigma^{\beta}\) law work together to govern the evolution of mass \(M_{s}\), luminosity \(L\), radius \(R_{e}\), surface brightness \(I_{e}\) and velocity dispersion \(\sigma\) leads to a coherent and self-consistent explanation of all the scale relations of galaxies, together with a reasonable explanation for the tilt of the FP, as demonstrated by D'Onofrio & Chiosi (2022, 2023). ## 5 The important role of \(\beta\) To better understand the effects played by \(\beta\) it is necessary to think about the possible variations of \(R_{e}\) and \(I_{e}\) when \(L\) and \(\sigma\) vary in the \(L-\sigma\) plane. There are six possible changes of \(L\) and \(\sigma\) in this plane: \(\sigma\) either decreases, increases or remains constant, and the same holds for \(L\). The effective relationship between the two variables depends in turn on \(\beta\), e.g. when \(\beta\) is negative there is not necessarily a decrease in luminosity, and when \(\beta\) is positive a decrease in luminosity might also occur (see D'Onofrio & Chiosi 2022, 2023, for a detailed discussion of this topic). The ambiguity in the direction of evolution can only be solved by looking at the movements of the galaxies in the different SSRs, in particular by observing the behavior of \(I_{e}\).
When the luminosity of a galaxy changes, both the effective radius \(R_{e}\) and the mean effective surface intensity \(I_{e}\) vary. This happens because \(R_{e}\) is not a true physical radius, like e.g. the virial radius (which depends only on the total mass), but it is the radius of the circle that encloses half the total luminosity of the galaxy. Since galaxies have different stellar populations with different ages and metallicities, it is very unlikely that a change in luminosity does not change the whole shape of the luminosity profile and therefore the value of \(R_{e}\). If the luminosity decreases passively, in general one could expect a decrease of \(R_{e}\) and an increase of \(I_{e}\). On the other hand, if a shock induced by harassment or stripping induces an increase of \(L\) (and a small decrease in \(\sigma\)), we might expect an increase of \(R_{e}\) and a decrease of \(I_{e}\). The observed variations of these parameters depend strongly on the type of event that a galaxy is experiencing (stripping, shocks, feedback, merging, etc.). In general, one should keep in mind that these three variables \(L\), \(R_{e}\) and \(I_{e}\) are strongly coupled to each other and that even a small variation in \(L\) might result in ample changes of \(R_{e}\) and \(I_{e}\). In summary, as already pointed out, the variations of the parameter \(\beta\) with time are responsible for all the changes observed in the FP projections. This means that the FP problem should be considered from an evolutionary point of view, where time plays an important role and the effects of evolution are visible in all the FP projections. Figure 7: The \(I_{e}-\sigma\) plane for the Illustris-1 data. Symbols and colors as in Fig. 5. The single SSRs are snapshots of an evolving situation. The \(L=L_{0}^{\prime}\sigma^{\beta}\) law catches such evolution in the right way, predicting the correct direction of motion of each galaxy in the basic diagnostic planes. We now show that the parameter \(\beta\) changes with the cosmic epochs and that such variations are in turn related mainly to the change of the mean surface intensity due to the natural variation of the star formation activity with time. We will see that \(\beta\) tends to be low when star formation is high and vice versa. A large scatter is however present at all epochs. Furthermore we will show that \(\beta\) increases considerably if and when the galaxy can attain the condition of full virialization, i.e. the two variables \(M_{s}\) and \(R_{e}\) combine in such a way as to yield the measured velocity dispersion (i.e. that measured for the stellar content). Figure 13 shows the \(\beta-\log(I_{e})\) plane. The dots of different colors represent the galaxies at different redshifts using the same color code as Fig. 5. From this plot it is clear that \(\beta\) increases and \(\log(I_{e})\) on average decreases as the cosmic epoch approaches \(z=0\) (light gray dots). In the remote epochs (\(z=4\)) and up to \(z\sim 1.5\) we observe an almost linear dependence of \(\beta\) on \(I_{e}\), in which \(\beta\) ranges from 0 to \(\sim 20\). This condition is an indication that at such epochs the galaxies are still far from full virialization. The real data of WINGS (black dots) are very well superposed on the simulation data, showing a large spread with large positive and negative values of \(\beta\). This behavior of \(\beta\) is connected with the average star formation rate at the different cosmic epochs. It is clearly seen in Fig.
14, where we note that, when the SFR is high, the values of \(\beta\) are close to 0-10. The large scatter in \(\beta\) starts to be visible with the gray dots at \(z\sim 0.6-1.0\), the same epoch at which we saw the tail of high-luminosity galaxies appear in the SSRs for the first time. Figure 15 is also helpful to understand that \(\beta\) takes large positive and negative values preferentially in galaxies with masses higher than \(\sim 10^{10}M_{\odot}\). Finally, Fig. 16 shows \(\beta\) versus the quantity \([\log(2T)-\log(\Omega)]\), which is a proxy of the virial condition, being the difference between the kinetic and potential energy of the stellar systems. The figure clearly indicates that \(|\beta|\) increases, while \(\beta\) can be either positive or negative, when the difference of the two energies approaches zero5. Note that at high redshift \(\beta\) remains very close to 0. This means that the galaxies are still far from virial equilibrium. In contrast, at low redshifts the peak of \(\beta\) falls in the interval 0 to 20 (\(z=1\)) and 0 to 50 (\(z=0\)), with larger and larger spreads toward both high positive and low negative values. Footnote 5: The difference predicted by the VT is 0, but the calculated energies depend on \(R_{e}\), which is not exactly the virial radius. Figure 8: The \(I_{e}-\sigma\) plane for the Illustris-TNG data. Symbols and colors as in Fig. 6. In Fig. 17 we show the histogram of the number frequency distribution (\(N/N_{tot}\)) of \(\beta\) in the model galaxies at different redshifts. There is not much difference between the histograms of the Illustris-1 and Illustris-TNG samples. The most remarkable features to note are that (i) at high redshifts (\(z\geq 2\)) the distribution peaks fall in the interval \(-4\leq\beta\leq 0\), with a small tail of positive values in the interval \(0\leq\beta\leq 4\); (ii) the distribution gradually spreads to higher values of \(|\beta|\) at low redshifts (1 and 0), where both positive and negative values of \(\beta\) are present; (iii) finally, at low redshifts the peaks are visible both in the positive and negative range of the \(\beta\) values and \(|\beta|\) can attain large values. The reason why we observe such a large dispersion in \(\beta\) is that the term \(1-2A^{\prime}/A\) in the denominator of eq. (15) becomes very close to zero. Consequently both \(\log(L_{0}^{\prime})\) and \(\beta\) diverge. According to the direction in which 0 is approached, one can have either very large positive or very large negative values of \(\beta\). As already discussed, this happens when the system is in conditions of strict virialization. The sign of \(\beta\) depends on the particular history of the variables \(M_{s}\), \(R_{e}\), \(L\), and \(I_{e}\), in other words on whether the term \(2A^{\prime}/A\) tends to 1 from below (\(\beta>0\)) or from above (\(\beta<0\)). From an operational point of view we may define a "state close to strict virialization" when \(|\beta|>20\). Notably the Illustris-1 and Illustris-TNG models agree very well with the observational data (black dots in these panels). The inclusion of real dynamics and the hierarchical scenario provide much better conditions to bring the action of virialization into evidence. The hierarchical scenario, through mergers, ablation of stars and gas, harassment, secondary star formation, inflation of dimensions by energy injections of various kinds, etc.,
induces strong variations of the fundamental parameters of a galaxy and hence strong temporary deviations from the virial conditions. However, after this has happened, the virial conditions are soon recovered over a suitable timescale. This can be short or long depending on the amount of mass engaged in the secondary star-forming activity and the amount of time elapsed since the star-forming event took place (see the burst experiments in Chiosi & Carraro 2002; Tantalo & Chiosi 2004). As a consequence of all this, detecting systems on their way back to virial equilibrium is likely a frequent event, thus explaining the high dispersion seen in the \(\beta\)-\(I_{e}\) plane. The value of \(\beta\) evaluated for each galaxy can provide a useful hint about the equilibrium state reached by the system. Most likely, the condition of strict virial equilibrium is a transient phenomenon that could occur several times during the life of a galaxy. This is suggested by the large number of galaxies with very small or very high \(\beta\). ## 6 The history of mass assembly The Illustris-1 and Illustris-TNG simulations have made clear that the history of mass assembly of galaxies is not simple, but goes through repeated episodes of mass accretion and mass removal. Figures 18 and 19 show the main SSRs and the \(\beta-z\) plane respectively. Eight single galaxies of different mass and evolutionary history, extracted randomly from the sample, are displayed in these plots. These galaxies are taken from our Illustris-1 sample. Each galaxy is indicated by a broken line of different color, while the mass assembly history of each object is represented by the series of dots of the same color. Along each line there are eight points, one for each value of the redshift from \(z=4\) to \(z=0\) according to the list already presented in the previous sections. Figure 9: The \(R_{e}-\sigma\) plane for Illustris-1. Symbols and colors as in Fig. 6. A very similar figure is obtained with the TNG data and therefore has not been plotted here. The \(\beta\) values of these 8 galaxies at different redshifts are shown in Fig. 19 and are very close to 0 at every epoch, with the exception of the blue track. Using the \(\beta\) values one should enter Table 1 and derive the possible directions of motion at each redshift in each of the planes. For example, the yellow and green objects always have values of \(\beta\) close to 0 (slightly positive). These correspond to slopes around \(\sim-2\) in the \(I_{e}-R_{e}\) plane, \(\sim 2.5\) in the \(R_{e}-\sigma\) plane and \(\sim-1.5\) in the \(I_{e}-\sigma\) plane. They are therefore objects of the "big cloud" where galaxies can move in every possible direction. The blue galaxy, on the other hand, reaches quite high values of \(\beta\) and it is possible to see that its movements are in the direction close to \(-1\) in the \(I_{e}-R_{e}\) plane. It is clear from Fig. 18 that the galaxies do not move in the planes of the SSRs in a continuous and uniform way; rather they randomly change their position at different epochs. In the same figure we can also note that the galaxies with blue, red, brown, and magenta colors are more massive, more luminous, and with higher \(\sigma\) than the others even at early epochs (\(z=4\)). Even more importantly, we emphasize that the same galaxies at epochs closer to us than \(z\sim 1.5\) are able to reach both large positive and negative values of \(\beta\). Their \(\beta\)s start low and gradually increase; in other words they may reach the state of virial equilibrium.
In contrast, the less massive and fainter galaxies always have low \(\beta\)s (see Figs. 18 and 19) close to 0 and are located in the region of low \(\sigma\), \(I_{e}\), and \(L\). The dwarf galaxies never reach the condition of full virialization. As already pointed out, the condition of full virial equilibrium can be a transient state in the sense that, once reached, it cannot be maintained forever if a galaxy undergoes events such as mergers, stripping and harassment that may push it away from this condition. However, the virial equilibrium can be recovered again on a suitable time scale that of course depends on the relative intensity of the disturbing event. For instance, in the case of a merger between two galaxies of comparable mass, most likely accompanied by intense star formation, the resulting system will not be in virial equilibrium and will take quite some time to reach this condition. On the contrary, the merger between two galaxies of significantly different mass, likely accompanied by modest star formation, will only slightly depart from the virial condition. If so, we expect that after a certain redshift only the massive galaxies remain unperturbed by mergers and can move toward the condition of strict virialization, while the low mass ones are still far from this ideal condition. The few objects displayed in Fig. 19 are typical examples of the above situations. ## 7 Discussion and conclusions The aim of this paper is to show that combining the VT with the \(L=L_{0}^{\prime}\sigma^{\beta}\) relation (as a proxy of evolution, in which \(\beta\) and \(L_{0}^{\prime}\) vary from galaxy to galaxy and in the course of time) is rich in positive consequences (see D'Onofrio et al. 2017, 2019, 2020; D'Onofrio & Chiosi 2021, for earlier studies along this line of thought). Figure 10: The \(R_{e}-\sigma\) plane for Illustris-TNG. Symbols and colors as in Fig. 7. The variation of \(\beta\) and \(L_{0}^{\prime}\) with time traces the path followed by each galaxy in the various SSRs. The \(L=L_{0}^{\prime}\sigma^{\beta}\) law together with the VT yields \(I_{e}-R_{e}\), \(R_{e}-\sigma\), \(I_{e}-\sigma\) and \(R_{e}-M^{*}\) relations that nicely reproduce the data and, more importantly, strongly suggest the existence of a system of two equations in the unknowns \(\beta\) and \(L_{0}^{\prime}\), whose coefficients are functions of \(M_{s}\), \(R_{e}\), \(L\), and \(I_{e}\), that for each galaxy determines the values of \(\beta\) and \(L_{0}^{\prime}\). With the aid of these relations we can determine the instantaneous position and direction of motion of a galaxy on the FP and its projection planes. Because of this, and limited to ETGs, we named these equations _fundamental equations of galaxy structure and evolution_ (D'Onofrio & Chiosi 2022, 2023). With this study we show that the Illustris-1 and Illustris-TNG databases give basic parameters of galaxies in satisfactory agreement with the observational data for the galaxies at \(z\approx 0\). They indeed reproduce some distinct features observed in the FP projections, such as the tail of the bright ETGs, the ZoE, and the clumps of small mass objects. Based on these simulated data, we look at the SSRs at different epochs (from redshift \(z=0\) up to redshift \(z=4\)), highlighting their expected behaviour. In summary we show that: 1. The SSRs change with time; 2. The variations of the SSRs can be explained with the variation of the \(\beta\) parameter driving the \(L=L_{0}^{\prime}(t)\sigma^{\beta(t)}\) law; 3.
When \(\beta\) varies with time, a galaxy can move in the SSRs only in some well defined directions that ultimately depend on \(\beta\). These directions change with time and, going toward \(z=0\), progressively acquire the slope exhibited by the most massive galaxies that lie along the tail of the bright ETGs at \(z=0\); 4. The parameter \(\beta\) can take both positive and negative values across time. Based on this, we suggest that the parameter \(\beta\) can be considered as a thermometer gauging the virialization conditions. As a matter of fact, \(\beta\) can be either large and positive or large and negative when the galaxies are close to virial equilibrium; 5. The only galaxies that can reach the virial state are those that became massive enough (above \(10^{9}-10^{10}M_{\odot}\)) already at high redshifts (\(z=4\)). These are no longer disturbed by merging events, which become rare after \(z\sim 1.5\) and/or are in any case not influential in terms of changing the mass ratio between donor and accretor; 6. Finally, the \(L=L_{0}^{\prime}\sigma^{\beta}\) relation can be considered as an empirical way of capturing the temporal evolution of galaxies. The values of \(\beta\) (and \(L_{0}^{\prime}\)) mirror the history of mass assembly and luminosity evolution of a galaxy. The conclusion is that the SSRs are full of astrophysical information about galaxy evolution. ###### Acknowledgements. M.D. thanks the Department of Physics and Astronomy of the Padua University for the financial support. Figure 11: The \(R_{e}\)-\(M^{*}\) plane for Illustris-1. Symbols and colors as in Fig. 5.
2306.03528
Adversarial Attacks and Defenses for Semantic Communication in Vehicular Metaverses
For vehicular metaverses, one of the ultimate user-centric goals is to optimize the immersive experience and Quality of Service (QoS) for users on board. Semantic Communication (SemCom) has been introduced as a revolutionary paradigm that significantly eases communication resource pressure for vehicular metaverse applications to achieve this goal. SemCom enables high-quality and ultra-efficient vehicular communication, even with explosively increasing data traffic among vehicles. In this article, we propose a hierarchical SemCom-enabled vehicular metaverses framework consisting of the global metaverse, local metaverses, SemCom module, and resource pool. The global and local metaverses are brand-new concepts from the metaverse's distribution standpoint. Considering the QoS of users, this article explores the potential security vulnerabilities of the proposed framework. To that purpose, this study highlights a specific security risk to the framework's SemCom module and offers a viable defense solution, so encouraging community researchers to focus more on vehicular metaverse security. Finally, we provide an overview of the open issues of secure SemCom in the vehicular metaverses, notably pointing out potential future research directions.
Jiawen Kang, Jiayi He, Hongyang Du, Zehui Xiong, Zhaohui Yang, Xumin Huang, Shengli Xie
2023-06-06T09:24:06Z
http://arxiv.org/abs/2306.03528v2
# Adversarial Attacks and Defenses for Semantic Communication in Vehicular Metaverses ###### Abstract For vehicular metaverses, one of the ultimate user-centric goals is to optimize the immersive experience and Quality of Service (QoS) for users on board. Semantic Communication (SemCom) has been introduced as a revolutionary paradigm that significantly eases communication resource pressure for vehicular metaverse applications to achieve this goal. SemCom enables high-quality and ultra-efficient vehicular communication, even with explosively increasing data traffic among vehicles. In this article, we propose a hierarchical SemCom-enabled vehicular metaverses framework consisting of the global metaverse, local metaverses, SemCom module, and resource pool. The global and local metaverse are brand-new concepts from the metaverse's distribution standpoint. Considering the QoS of users, this article explores the potential security vulnerabilities of the proposed framework. To that purpose, this study highlights a specific security risk to the framework's SemCom module and offers a viable defense solution, so encouraging community researchers to focus more on vehicular metaverse security. Finally, we provide an overview of the open issues of secure SemCom in the vehicular metaverses, notably pointing out potential future research directions. Vehicular metaverse, semantic communication, adversarial attacks, security defense. ## I Introduction With the advances of the Internet of Things and Artificial Intelligence, metaverse technology, regarded as the next-generation Internet, is rapidly emerging to build a virtual-physical integrated world with fully immersive and personalized experiences for users in many scenarios and applications. In particular, vehicular networks are revolutionizing towards vehicular metaverses that have vast potential to provide diverse, immersive, and personalized in-vehicle entertainment/services for both drivers and passengers [1]. In vehicular metaverses, vehicle data are mainly divided into two categories: i) static data including background information on roadside infrastructures and road networks, and ii) dynamic data involving information about pedestrians, moving vehicles, traffic flow, and so on. To support time-sensitive vehicular metaverse services (e.g., AR navigation), a large number of dynamic data is required for real-time updates of the vehicular physical-virtual world to avoid service quality degradation, while conventional communication based on Shannon Information Theory cannot support efficient communications and massive connectivity for vehicular metaverses due to limited wireless communication resources. Recently, Semantic Communication (SemCom) has been introduced as a revolutionary paradigm that breaks through the bandwidth bottleneck of classical communications, significantly improving communication efficiency by leveraging deep learning models to only transmit critical information sufficient for the receivers, thus dramatically reducing the number of bits transmitted. A deep learning-enabled end-to-end SemCom system with Deep Neural Networks (DNNs) as semantic codecs has been proposed in [2]. It is a promising solution to integrate SemCom into vehicular metaverses for enabling real-time transmission of vast data used to sustain the metaverse services. 
In vehicular metaverses, vehicles only send data requests and then download lots of static and dynamic data from edge servers in the way of SemCom with unprecedented communication efficiency in comparison to traditional communication manners. Although semantic communication plays an indispensable role in vehicular metaverses, SemCom-enabled vehicular metaverses are still in their infancy. There are many challenges to be resolved for realizing its potential, especially privacy and security issues. On the one hand, an eavesdropper can infer vehicle location and driving habits by analyzing communication data. Even though privately trained codecs provide SemCom with a natural barrier to being eavesdropped, this barrier does not work well in vehicular metaverses. This is because there exist similar and stationary communication tasks in neighbor vehicles and then the eavesdropper may obtain similar or even identical decoders to recover data from the wireless channels. The researches on [3, 4, 5] are committed to private communications. On the other hand, DNNs are vulnerable to attacks (e.g., poisoning attacks and adversarial attacks). Such weaknesses naturally exist in deep learning-based SemCom systems. Attackers can easily apply typical deep learning attacks to the SemCom systems and directly reduce the task/model accuracy [6]. Non-malicious perturbations may cause performance degradation of semantic communication systems as well [7]. In particular, the existing work ignores adversarial attacks of SemCom in vehicular metaverses. For adversarial attacks, the authors in [8] find that non-random perturbation on the test sample can arbitrarily manipulate the output of neural networks. In general, the attacker first obtains the structure and parameters of the target model. The adversarial samples are then initialized with the original data. After several training iterations, the elaborate adversarial samples similar to the original data are generated by maximizing the loss of the semantic encoder. Finally, the accuracy degradation of the target model is achieved in the face of adversarial samples. Hu et al. in [6] demonstrate that adversarial attacks are still feasible in the deep learning-enabled SemCom system. For vehicular metaverses, the adversarial attacks increase the probability of receiving wrong information, even causing traffic accidents if receivers are misled to take dangerous actions according to wrong information [6, 9]. To fully understand the risks of adversarial attacks for SemCom-enabled vehicular metaverses, in this article, we study the adversarial attacks and the corresponding defenses against these attacks. More specifically, we first design a hierarchical vehicular metaverse framework with a global metaverse and multiple local metaverses [10]. A new adversarial attack model and its defense scheme for this framework are then proposed, respectively. We discuss several security challenges of SemCom-enabled vehicular metaverses. The main contributions are summarized as follows: * According to vehicle data characteristics and service requirements, we design a new and hierarchical SemCom-enabled vehicular metaverse framework mainly consisting of a global metaverse, multiple local metaverses, and resource pools. This framework not only significantly reduces data transmission delay, but also efficiently utilizes the computing and storage resources of edge servers. 
* We propose a novel adversarial attack called Semantic Noise Attack (SNA) to generate adversarial samples by adding semantic noise and design a new defense scheme using the Semantic Distance Minimization (SDM) mechanism to weaken the adversarial samples, but almost without sacrificing transmission accuracy. * For use cases, we present two SemCom systems with different semantic encoders on traffic sign classification and license plate recognition, respectively. We examine the effects of SNA on the SemCom systems and evaluate the robustness of the defense scheme based on SDM in vehicular metaverses. The numerical results indicate that the proposed defense scheme significantly reduces the success rate of the attacks. ## II Semantic Communication for Vehicular Metaverses For vehicular metaverses, sensing data is divided into static and dynamic data based on the properties in the real world. Static data is the background information from the physical space, such as road sensors, roadside infrastructures, and the topology network of roads. Dynamic data is the "add-in" information from dynamic entities in the physical space including vehicles, pedestrians, traffic signs, etc. Actually, the dynamic data is used to update the digital twins of dynamic entities in the virtual space. Compared with other metaverse services, vehicular metaverses have stricter latency requirements to ensure user immersion and improve the QoS. ### _Semantic Communication and Vehicular Metaverses_ As shown in Fig. 1, a hierarchical vehicular metaverse consists of a global metaverse, \(n\) local metaverses, and \(m\) resource pools. More specifically, * _Global Metaverse_: a giant metaverse deployed over the cloud by Metaverse Service Providers (MSPs) for management globally. The global metaverse is a replica of the physical world in the virtual space, which contains both static data and dynamic data within an entire city or even wider area, e.g., a global city map with abundant traffic-related information in the navigation applications. Instead of massively communicating directly with users, it was updated periodically by the data from all the local metaverses. * _Local Metaverses_: as a component of the global metaverses, a local metaverse could be the metaverse mapping to a small-scale area, like a street block or a road intersection. The local metaverses are established on edge servers and are closely connected to nearby users, and thus the vehicles can communicate directly with their local metaverses to obtain real-time metaverse services without large latency [10]. * _Resource Pools_: the virtual resource pool consisting of physical entities with sufficient resources in the local metaverse, e.g., roadside units acting as edge servers. The edge servers in different regions establish a resource pool for their common local metaverse, which includes computing, storage, and bandwidth resources to support the metaverse services for users. The hierarchical framework has several new designs and thus obtains multiple advantages. First, proximity to the users allows the framework to avoid long-distance communication, and thus reduce communication time. Users only need communication with the closest node in the resource pool to access the services. Second, the framework trims the budget for deploying dense sensor nodes in physical space. The strategy for updating and collecting data does not require massive sensors [10]. Third, the framework efficiently utilizes the storage and network resources of servers and vehicles. 
Both the global and local metaverses only hold indispensable data to maintain normal service. The vehicles only download the data of the local metaverses corresponding to their location. Finally, the SemCom in the framework reduces the number of bits that need to be transmitted. Hence, the effects of channel instability caused by vehicle movement are largely mitigated. In vehicular metaverses, for AR navigation, the MSP collects massive static data in a low-cost way (e.g., from satellites) and deploys them on the global metaverse. As shown in Fig. 1, the static data at each intersection is regularly updated to the corresponding resource pools to build local metaverses. The resource pools collect dynamic data from data providers in the physical world to keep updating digital twins. Meanwhile, when a physical entity (e.g., a vehicle) moves from one local metaverse to another one, its digital twin may migrate to the new local metaverse according to its location. Note that all the changes in the collected static data will be updated to the global metaverse. Rather than encapsulating an entire city's dynamic and static data in one metaverse, the local metaverses split the data, thus relieving the communication and computing pressure of the cloud and resource pools and shortening the distance between users and data. Vehicles can interact with edge servers in the resource pools to access metaverse services. Before entering the coverage of a local metaverse, each user, i.e., vehicle, initiates a service request to the edge server. Once a user request is received, the edge server sends static information from the local metaverse to the vehicle. Meanwhile, the required dynamic data are constantly invoked and sent to the vehicle, allowing the driver to glimpse navigation and driving prompts. Congested wireless resources cannot support the transmission of such large amounts of data with low latency. To reduce latency, the edge servers only transmit key features of data to the vehicles by SemCom. In vehicular metaverses, vehicles do not need to recover the source data, but only need to complete pragmatic tasks (e.g., traffic sign recognition and pedestrian detection). Given that the tasks in vehicular metaverses are relatively fixed, we design task-oriented SemCom systems [11] to improve efficiency. The task-oriented SemCom aims to extract more condensed semantic information related to the task of the receiving end. The receiver no longer needs a complex semantic decoder to recover the source data but completes the task directly based on the semantic information with a simple model. Coincidentally, this is consistent with the computational power distribution between resource pools and vehicles. We consider that each edge server has a dedicated semantic encoder for every task. Due to the heterogeneity of hardware performance, driving habits, and knowledge bases, each vehicle has a customized task model. Hence, users need to train their models in conjunction with edge servers before enjoying the services. The edge servers open interfaces at night or during idle hours for user training. The interface mechanically extracts the semantic information of the original data given by the user and sends it back. Therefore, the user can train their task model using the semantic information and the labels of the original data. After completing the training, the user can access the metaverse to enjoy application services.
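To make this training flow concrete, the following sketch (our own illustration, not the paper's implementation; the `query_semantic_interface` callable, the feature dimension, and the training hyper-parameters are assumptions, with 43 classes used only as an example matching GTSRB traffic signs) shows how a vehicle could fit a lightweight task head on the semantic features returned by the edge-server interface:

```python
import torch
import torch.nn as nn

def train_task_head(batches, query_semantic_interface,
                    feature_dim=256, num_classes=43, epochs=10):
    """Fit a vehicle-side task model on semantic features from the edge server.

    `query_semantic_interface(images)` stands for the training interface exposed
    by the edge server: it runs its private semantic encoder on the user's raw
    data and returns only the extracted semantic information (a feature tensor).
    `batches` is an iterable of (images, labels) pairs held by the vehicle.
    """
    head = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(),
                         nn.Linear(128, num_classes))
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in batches:
            feats = query_semantic_interface(images)   # semantic information only
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```

Note that the vehicle never sees the encoder weights; it only consumes the returned semantic features, which is consistent with the division of computation between resource pools and vehicles described above.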
Three kinds of participants are involved in a service process as follows: * _User_: In vehicular metaverses, the vehicle usually acts as a user. The proposed framework and all the mechanisms are designed to provide the user with a safe, reliable, comfortable, and immersive experience. * _Edge Server_: The intermediary between the user and the data provider. The edge servers collect data from the data provider, store it temporarily, and send it to the user when receiving a request. * _Data Provider_: Any sensing node that wants to make money by selling data (e.g., some users, roadside cameras). A data provider located near an intersection opportunistically provides data to the edge servers in the corresponding resource pool. Signal lights, traffic signs, pedestrians, and other information are captured by the data providers' sensors and sent to the edge server. Based on SemCom, the edge server sends the data to the user with an acceptable delay after the user initiates the request. Through the task models, the driver can see the arrow guides and virtual traffic sign entities in front of the windshield in a timely manner, which can enhance the user's perception of road conditions. This driving paradigm can greatly improve the safety and comfort of driving, especially when the view is blocked by trucks or buildings. Passengers can also use this system to play metaverse games or work online. Fig. 1: The hierarchical vehicular metaverse architecture. Parts a and b show the normal service and attack flow respectively; the orange subfigure on the right-hand side depicts the steps involved in generating attack examples, while the blue subfigure illustrates the communication between the global metaverse and the local metaverses. ### _Security Challenges of Semantic Communication_ SemCom, as a promising communication paradigm for the future 6G era, shows good performance in various tasks. However, it is vulnerable to some well-designed attacks. Therefore, it is necessary to consider the possible threats and defend against these attacks. We summarize typical security challenges as follows: * _Eavesdropping Attack_: Eavesdropping is the behavior of decoding information in a physical channel in order to steal private information. In SemCom, the user-unique semantic codec acts as a barrier to eavesdroppers. Even though an eavesdropper obtains the transmitted semantic information, it cannot decode it. However, this barrier is not foolproof. When the SemCom system is widely used, users on the same communication link may have the same semantic decoder structure. Lu et al. [12] reveal the privacy issues of the SemCom system in the industrial Internet of Things. At the application layer, Luo et al. [3] propose an adversarial training method with key pairs to prevent such eavesdropping. Tung et al. [4] propose a joint source-channel and encryption coding scheme for wireless image transmission. At the physical level, Chorti et al. [13] consider physical layer security as an essential issue in next-generation communication. Du et al. [5] consider this problem from a physical perspective and protect wireless communications against eavesdropping by exploring and utilizing the inherent features of the physical medium. * _Adversarial Attack_: By adding imperceptible semantic noise to the transmitted data, adversarial attacks make the SemCom system produce errors in the encoding stage or decoding stage. The authors in [6] consider two types of attacks, one on the sending end and the other on the receiving end.
The former simulates the case where the attacker is the sender, while the latter simulates the case where the attacker initiates the attack through the channel. With all or part of the detailed information of the SemCom system, the attacker generates adversarial samples against the model. Xiang et al. [7] use a calibrated network and an expanded training set to counteract semantic noise in text transmission tasks.
* _Poisoning Attack_: Poisoning attacks aim at manipulating the training process to degrade model performance or to insert backdoors. SemCom systems are trained based on shared semantic knowledge bases. The communicating parties are likely to expand the training set with third-party data to obtain a better SemCom codec. Some low-quality and malicious data are inevitable and may carry wrong semantic information. Using data that contains false semantic information for training may degrade the performance of the SemCom system. This type of attack is called a _semantic data poisoning attack_ [14]. Research on poisoning attacks and their defenses in SemCom systems is still largely vacant.

We summarize the existing work in Table I. Unlike prior works, we propose a black-box attack model, which is more realistic, and use datasets that are consistent with vehicular metaverse scenarios. Moreover, we tailor semantic encoder models to different tasks.

\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Method** & **Background** & **Attack** & **Defense** & **Dataset** & **Modal** & **Model** \\ \hline
ESCS [3] & \(\bigcirc\) & Eavesdrop & Encryption & Europarl & Text & Transformer \\
VQ-VAE [6] & \(\bigcirc\) & Adversarial attack & Mask + Codebook & \begin{tabular}{c} CIFAR-10 \\ Cars196 \\ ImageNet \\ \end{tabular} & Image & ViT \\
R-DeepSC [7] & - & Semantic noise & \begin{tabular}{c} Calibrating \\ Augment training set \\ \end{tabular} & Europarl & Text & Transformer \\
DeepJSCEC [4] & \(\bigcirc\) & Eavesdrop & Encryption & CIFAR-10 & Image & DNNs \\
SNA\&SDM & \(\bullet\) & SNA & SDM & \begin{tabular}{c} GTSRB \\ CCPD \\ \end{tabular} & Image & \begin{tabular}{c} ResNet \\ LPRNet \\ \end{tabular} \\ \hline \hline
\end{tabular}
* The \(\bigcirc\) represents the white-box setting, while the \(\bullet\) means the black-box setting. In the white-box setting, the attacker possesses knowledge of the structure or parameters of the semantic codec model. In contrast, the attacker in the black-box setting is ignorant of them and typically attacks by probing the model's input and output.
\end{table} TABLE I: Existing Works Comparison.

## III Adversarial Attacks for Semantic Communication in Vehicular Metaverse

### _Adversarial Attacks_

Deep learning models are often targeted by adversarial attacks, where attackers generate adversarial samples that appear natural to human eyes but cause the model to produce incorrect outputs. Adversarial samples are typically created by adding specific noise that appears to be normal noise to clean samples. In addition to channel noise and attenuation, SemCom faces a special kind of noise called _semantic noise_ [6]. While channel noise is simulated to improve performance in real environments, semantic noise, which can subtly alter the meaning of data, is not typically considered. Semantic noise usually results from poor-quality data due to aging sensors and channel interference. By artificially adding semantic noise, attackers can efficiently produce adversarial samples that deal a devastating blow to the SemCom system.
We propose an attack based on semantic noise for vehicular metaverses. According to Section II, edge servers are semi-trusted nodes that do not actively attempt to attack the system but can still be deceived. These servers forward both accurate and inaccurate data to users. Typically, edge servers assess the reliability of data based on the reputation of its provider. However, this method fails when a provider is compromised or intentionally behaves in a trustworthy manner to gain credibility. The diagram in Fig. 1 illustrates how an attacker can impersonate a data provider and deceive well-behaved edge servers. Initially, the attacker collects natural data as if it were a legitimate provider. Based on the captured data, the attacker generates several adversarial samples through the SNA. At an opportune moment, the attacker submits these generated adversarial samples to the edge server. When a user requests information from the server, it extracts incorrect semantic details and sends them back to the user. This misinformation may lead users to make risky decisions. The process of generating adversarial samples through the SNA is further explained in Fig. 1 and described below:

* _Step 1_: The attacker gains access to the training interface offered by the edge server and initiates a training procedure.
* _Step 2_: The attacker creates a group of adversarial samples from the benign data and then transmits both sets to the interface.
* _Step 3_: The interface extracts semantic information using the semantic encoder and transmits it back to the attacker.
* _Step 4_: From the received semantic information, the attacker calculates, for every adversarial sample, the semantic distance to the benign data and takes its maximization as the loss.
* _Step 5_: Update the adversarial samples based on the loss and jump to _Step 2_ until the maximum number of iterations is reached.

Using the SNA, an attacker can calculate its loss by utilizing the interface provided by the edge server as environmental feedback, without knowing the structure and parameters of the semantic encoder. Edge servers aim to attract numerous data providers due to their requirement for dynamic data. As a result, detecting an infiltrating attacker is challenging, particularly when it attempts to enhance its reputation. It should be noted that the loss not only drives an effective attack but also keeps the perturbations at low frequencies [9] to minimize detection by human observers. Once the SNA is carried out, it brings incalculable losses to the MSP and users. For MSPs, the presence of an attacker means that pre-trained semantic encoders can no longer be used, resulting in financial loss. For users, incorrect semantic information causes drivers and passengers to see lower-quality virtual entities and thus reduces immersion. In some applications, incorrect semantic information can even manipulate user behaviors. For example, in AR navigation, an attacked stop sign may carry the semantic information of going straight; the user then continues to drive straight based on this semantic information, causing an accident.

### _Defense for Adversarial Attacks_

As aforementioned, the SNA aims to modify semantic information by introducing semantic noise. In essence, semantic noise is a disturbance added in the most effective direction, which can change the position of data in the semantic space with minimal perturbation. Generally speaking, one straightforward solution is to make the semantic encoder robust to semantic noise.
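
To make Steps 2-5 of the SNA concrete before detailing the defense, a minimal sketch is given below. The article does not specify the black-box update rule, so a simple random-search update and a dummy encoder are used as illustrative stand-ins; the real SNA additionally constrains the perturbation norm and keeps the perturbation at low frequencies.

```python
import torch

def query_interface(x):
    # Stand-in for Steps 2-3: the edge-server training interface returns the semantic
    # encoding of x. In the real attack this is a remote, black-box call.
    return torch.tanh(x.mean(dim=(2, 3)))  # dummy "encoder" for illustration only

def semantic_distance(a, b):
    return ((a - b) ** 2).sum(dim=1)        # squared L2 distance in semantic space

def sna_random_search(x_benign, eps=0.05, iters=50):
    z_benign = query_interface(x_benign)                               # semantics of the benign data
    x_adv = x_benign.clone()
    best = semantic_distance(query_interface(x_adv), z_benign)
    for _ in range(iters):                                              # Steps 2-5, repeated
        candidate = (x_adv + eps * torch.randn_like(x_adv)).clamp(0, 1)
        dist = semantic_distance(query_interface(candidate), z_benign)  # Step 4: semantic distance as loss
        improved = dist > best                                          # Step 5: keep samples that move further away
        x_adv[improved] = candidate[improved]
        best = torch.maximum(best, dist)
    return x_adv

x_adv = sna_random_search(torch.rand(8, 3, 32, 32))  # a batch of captured "benign" images
```
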
Adversarial training is an attractive alternative, but it comes at the cost of natural accuracy. This article introduces the trade-off between natural and robust accuracy [15] into SemCom for vehicular metaverses to achieve SDM. Adversarial training involves generating online adversarial samples that result in maximum semantic loss and allowing the model to learn their distribution. In SDM, as illustrated in Fig. 2, online adversarial samples are generated for each data point through several iterations to maximize the distance from the benign data. The KL divergence between the outputs of the semantic encoder is used to measure semantic distance. Both adversarial data and benign data are fed into the semantic encoder to calculate their semantic information. Following [15], the complete loss function includes two parts: a natural loss and a robust loss. The robust loss is the distance between the extracted semantic information of adversarial and benign data, weighted by a coefficient \(\lambda\). The natural loss refers to the objective of the original task (e.g., classification), i.e., improving model accuracy, while the robust loss aims to ensure that adversarial samples and benign data have similar semantic information.

Fig. 2: Semantic Distance Minimization defense process, where lines of the same color represent the same data stream and the dashed line indicates that the semantic encoder is updated according to the loss function.

## IV Case Study: Semantic Communication in Vehicular Metaverse

As a case study, we simulate a scenario to evaluate the proposed attack method and the defense performance. In this scenario, an attacker deduces the user's location, produces adversarial samples, and provides them to RoadSide Units (RSUs) before the user accesses a local metaverse. To achieve this goal, we build SemCom systems in the above framework that simulate two tasks of the AR navigation service in vehicular metaverses. The performance of the system on traffic sign recognition and license plate recognition is measured. We conduct the SNA against these tasks to assess the effectiveness of the attacks. Defensive SemCom systems are then trained according to SDM, and their performance is re-evaluated. The same attack is applied to the new SemCom systems to test them.

### _Simulation settings and Security Attacks_

We consider an attacker capable of deducing the user's location and providing adversarial samples to RSUs before the user enters the local metaverse. In this context, we develop SemCom systems to simulate two tasks related to AR navigation services in vehicular metaverses. Specifically, we design semantic codecs for each task, while the channel encoder and decoder both consist of dense layers with different numbers of units. During training, Additive White Gaussian Noise (AWGN) is added to the channel with a Signal-to-Noise Ratio (SNR) between 5 and 10 [2], allowing the SemCom system to adapt to various real communication environments.

#### IV-A1 SemCom for traffic sign recognition

The GTSRB dataset (Footnote 1) consists of more than 50,000 images ranging from \(15\times 15\) to \(250\times 250\) pixels, divided into 43 classes. The pragmatic task of traffic sign recognition is completed in the communication process without the need to train the pragmatic function separately. In this simulation, the semantic encoder is a ResNet, while the receiver uses a fully connected layer to get the classification.
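
A minimal sketch of the SDM training objective described above, as it would apply to such a classifier, is given below. It assumes a PyTorch-style setup; the tiny linear encoder, image sizes, step sizes, and iteration counts are illustrative stand-ins rather than the ResNet configuration used in the simulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))  # toy stand-in for the semantic encoder
head = nn.Linear(256, 43)                                            # task head, e.g., 43 sign classes
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
lam = 1.0                                                            # lambda (1 for signs, 0.1 for plates)

def semantic_kl(z_adv, z):
    # KL divergence between (softmax-normalised) encoder outputs as the semantic distance
    return F.kl_div(F.log_softmax(z_adv, dim=1), F.softmax(z, dim=1), reduction="batchmean")

for _ in range(10):
    x = torch.rand(16, 3, 32, 32)
    y = torch.randint(0, 43, (16,))
    # Inner maximisation: craft online adversarial samples that maximise the semantic distance.
    x_adv = x.clone().requires_grad_(True)
    for _ in range(10):
        dist = semantic_kl(encoder(x_adv), encoder(x).detach())
        grad = torch.autograd.grad(dist, x_adv)[0]
        x_adv = (x_adv + 0.01 * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    # Outer minimisation: natural loss + lambda * robust loss.
    z, z_adv = encoder(x), encoder(x_adv.detach())
    loss = F.cross_entropy(head(z), y) + lam * semantic_kl(z_adv, z)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
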
Footnote 1: [https://benchmark.ini.rub.de/](https://benchmark.ini.rub.de/)

#### IV-A2 SemCom for license plate recognition

The CCPD dataset (Footnote 2) consists of more than 250,000 vehicle images from China for license plate detection and recognition. In this simulation, the pragmatic task is license plate recognition, so we pre-process the dataset according to the annotations, cropping the license plate region of each image. The semantic encoder is an LPRNet (Footnote 3), while the receiver uses a greedy decoder to get the license plate information.

Footnote 2: [https://github.com/detectRecog/CCPD](https://github.com/detectRecog/CCPD)

Footnote 3: [https://arxiv.org/abs/1806.10447](https://arxiv.org/abs/1806.10447)

As for the attack, to simulate the SNA in vehicular metaverses, we generate adversarial samples of all the test images and perform the corresponding pragmatic tasks through the SemCom system. The same attack is also performed on the defended models to assess the effectiveness of the attacks. We set the same \(L_2\)-norm budget for all attacks to ensure that performance is compared at the same attack strength [6, 9].

### _Metrics and Defense in Semantic Communication_

The robust SemCom models are trained with the aforementioned defense method for the two pragmatic tasks. During the phase of generating online adversarial images, \(10\) iterations are performed to maximize the semantic loss. In the training phase, we set \(\lambda\) to \(1\) for the traffic sign recognition task and to \(0.1\) for the license plate recognition task to achieve better defensive performance. The same attack is then applied to the new SemCom systems to test them. The natural accuracy of the SemCom system on the test set is used to characterize its performance. For license plate recognition, a prediction is counted as correct only when all the symbols on the license plate are recognized correctly. We replace all images in the test set with the adversarial samples from the SNA and recalculate the accuracy, called the _robust accuracy_. To evaluate the effect of an attack, we use the natural accuracy as an upper bound. The degree of decline in robust accuracy compared to the upper bound indicates the effectiveness of an attack: a larger reduction in accuracy means a more successful attack. We also calculate the accuracy of the SDM SemCom systems to demonstrate the defensive effects. Conversely, a smaller reduction in accuracy means a more successful defense. In addition, the difference in natural accuracy between the original SemCom system and the SDM system demonstrates the defense overhead. These assessments are repeated in different SNR environments.

Fig. 3: Accuracy versus the SNR on the adversarial test set with \(l2\approx 3.6\), where an \(a\) vs. \(b\) curve represents the target model \(b\) attacked in attack mode \(a\). The random attack adds stochastic noise to the source images.

### _Numerical Results_

Figure 3 displays the accuracy-versus-SNR curves of different models when tested on adversarial samples. In the task of recognizing traffic signs, the SemCom model's accuracy is significantly reduced, by \(40\%\), due to the SNA. Such a reduction in accuracy cannot ensure the normal operation of services and may even mislead drivers into making dangerous decisions based on incorrect road signs. As random channel noise is considered during model training, adding random disturbance through attack methods only causes a slight decrease in accuracy. The SDM SemCom system exhibits higher accuracy than the original system against SNA attacks.
Additionally, the random defense adds random disturbance to benign images, instead of maximizing the semantic loss, to simulate attack samples, resulting in weaker defensive effects compared to our proposed approach, as shown in Figure 3. This phenomenon persists, with only a global deviation, in the license plate recognition task. Figure 4 shows the natural accuracy of the different models when tested with benign data. Specifically, in the task of recognizing traffic signs, the SDM system exhibits a natural accuracy loss of approximately \(5\%\), whereas the random defense approach results in a loss of \(13\%\). Therefore, compared to the random defense, SDM provides greater security while minimizing the reduction in natural accuracy.

## V Future Direction

### _Privacy Protection Schemes for Vehicular Metaverses_

In vehicular metaverses, the communication link between vehicles and RSUs forms a dynamic topological network structure due to vehicle mobility. During wireless communication, there is a risk of data eavesdropping. A malicious party could use eavesdropped data to learn private information such as user location and driving habits. Although there exists research on eavesdropping, the applicability of the defense schemes in dynamic environments has not been tested. Therefore, it is necessary to design a privacy protection scheme for the dynamic vehicular metaverse with a changing network topology.

### _Customized Attack Methods for Vehicular Metaverses_

Security is a major concern for both communication systems and vehicle applications. In vehicular metaverses, however, this issue has not received enough attention. Much of the work associated with adversarial attacks, poisoning attacks, and backdoor attacks focuses on deep learning. Moreover, a targeted attack, in which an attacker can control the information received by the vehicle rather than merely obfuscate it, is more harmful. If these attacks are carried out, they are bound to raise security concerns in vehicular metaverses. Therefore, diverse and stronger attack methods for SemCom systems need to be invented to support robustness research.

### _Efficient and Controllable Defenses for Attacks_

This article presents a basic defense solution, which trains a semantic encoder with the capability of extracting correct semantic information from adversarial samples. However, the accuracy loss caused by the defensive approach should be avoided in some cases. For relatively simple pragmatic tasks, we can obtain security at a small cost, but for complex tasks, the cost will increase. Further research is needed to develop solutions that offer greater resistance at a lower cost and with more controllability. Research on attacks and defenses can complement each other to further improve the security of semantic communication in vehicular metaverses.

## VI Conclusions

Considering the data characteristics and service requirements, we have developed a hierarchical framework for vehicular metaverses that addresses the challenges of large data volumes and strict latency requirements. To minimize data transfer, we incorporated SemCom into the framework, while also raising security concerns with an adversarial attack method specific to vehicular metaverses. To protect against such attacks, we proposed a training method for the SemCom system that can withstand semantic noise. Our evaluation showed that our defense is effective in resisting adversarial attacks, which is crucial for ensuring a secure and immersive vehicular metaverse.
We believe that applying security technologies from both deep learning and traditional security to semantic communication will provide valuable insights for creating a safe and comfortable vehicular metaverse.

Fig. 4: Accuracy versus the SNR on the benign test set.

## Acknowledgment

The work is supported by NSFC under grant No. 62102099, U22A2054, No. 62101594, No. 62001125, and the Pearl River Talent Recruitment Program under Grant 2021QN02S643, and is supported by the National Research Foundation (NRF) and Infocomm Media Development Authority under the Future Communications Research Development Programme (FCP). The research is also supported by the SUTD SRG-ISTD-2021-165, the SUTD-ZJU IDEA Grant (SUTD-ZJU (VP) 202102), and the Ministry of Education, Singapore, under its SUTD Kickstarter Initiative (SKI 20210204).
2302.09294
Platform-Independent and Curriculum-Oriented Intelligent Assistant for Higher Education
Miscommunication and communication challenges between instructors and students represent one of the primary barriers to post-secondary learning. Students often avoid or miss opportunities to ask questions during office hours due to insecurities or scheduling conflicts. Moreover, students need to work at their own pace to have the freedom and time for the self-contemplation needed to build conceptual understanding and develop creative thinking skills. To eliminate barriers to student engagement, academic institutions need to redefine their fundamental approach to education by proposing flexible educational pathways that recognize continuous learning. To this end, we developed an AI-augmented intelligent educational assistance framework based on a powerful language model (i.e., GPT-3) that automatically generates course-specific intelligent assistants regardless of discipline or academic level. The virtual intelligent teaching assistant (TA) system will serve as a voice-enabled helper capable of answering course-specific questions concerning curriculum, logistics and course policies. It is envisioned to improve access to course-related information for the students and reduce logistical workload for the instructors and TAs. Its GPT-3-based knowledge discovery component as well as the generalized system architecture are presented, accompanied by a methodical evaluation of the system's accuracy and performance.
Ramteja Sajja, Yusuf Sermet, David Cwiertny, Ibrahim Demir
2023-02-15T19:02:01Z
http://arxiv.org/abs/2302.09294v1
#### Platform-Independent and Curriculum-Oriented Intelligent Assistant for Higher Education ###### Abstract Miscommunication and communication challenges between instructors and students represents one of the primary barriers to post-secondary learning. Students often avoid or miss opportunities to ask questions during office hours due to insecurities or scheduling conflicts. Moreover, students need to work at their own pace to have the freedom and time for the self-contemplation needed to build conceptual understanding and develop creative thinking skills. To eliminate barriers to student engagement, academic institutions need to redefine their fundamental approach to education by proposing flexible educational pathways that recognize continuous learning. To this end, we developed an AI-augmented intelligent educational assistance framework based on a power language model (i.e., GPT-3) that automatically generates course-specific intelligent assistants regardless of discipline or academic level. The virtual intelligent teaching assistant (TA) system will serve as a voice-enabled helper capable of answering course-specific questions concerning curriculum, logistics and course policies. It is envisioned to improve access to course-related information for the students and reduce logistical workload for the instructors and TAs. Its GPT-3-based knowledge discovery component as well as the generalized system architecture is presented accompanied by a methodical evaluation of the system accuracy and performance. Artificial Intelligence, Natural Language Processing, Machine Learning, Transformers, GPT-3 ## 1 Introduction One of the main causes of the knowledge disparities that lead to learning gaps among both undergraduate and graduate students is instructors' inability to communicate with these students in ways that suit the students' learning schedules and styles [23]. It has been widely shown in the literature that it is particularly effective to teach in ways that allow students to build conceptual understanding of the subject they are studying [14]. This requires a certain degree of freedom and time for self-contemplation [16]. Not surprisingly, allowing students to learn at their own pace positively contributes to a substantial increase in learning motivation and the development of creative thinking skills (Ciampa, 2014). A significant portion of students avoid or miss the opportunity to visit teaching assistants and instructors during office hours due to scheduling conflicts, the feeling of not being prepared, the imposter syndrome, and shyness (Abdul-Wahab et al., 2019). Furthermore, most students study outside of regular work hours, which creates needs for assistance at odd times (Mounsey et al., 2013). The lack of immediate assistance can lead to discouragement and creates the feeling of being stuck despite the fact that many queries can be simply answered based on available material without in-depth expertise (Seeroo and Bekaroo, 2021). Teaching assistants can sometimes fill this void, but they have their own responsibilities (e.g., classes, research, grading) which may render them unavailable during times such as exam weeks when the students need them most (Howitz et al., 2020). Thus, it would be extraordinarily helpful to develop new and more readily available forms of student assistance if this can be done without decreasing the time TAs and instructors have to spend on higher-level instruction (Mirzajani et al., 2016). 
Information and communication tools and services are critical parts of instructional technology and the learning process. Web technologies support instructors in the delivery of curriculum on advanced modeling and analysis tools (Ewing et al., 2022), programming libraries (Ramirez et al., 2022), and engineering ethics using serious games (Ewing and Demir, 2021). AI has been used actively in two core areas, including information processing and knowledge communication. Deep learning models are commonly used for image processing (Li and Demir, 2023), data augmentation (Demiray et al., 2021), synthetic data generation (Gautam et al., 2022), and modeling studies (Sit et al., 2021). AI use cases for information communication and delivery are relatively limited in the engineering domain (Yesilkoy et al., 2022). Smart assistants based on customized ontologies (Sermet and Demir, 2019) have been actively used in public health care (Sermet et al., 2021) and environmental science (Sermet and Demir, 2018) studies. With the recent advancements (i.e., ChatGPT) in AI based communication, there is a significant interest in the research of chatbots, which can be defined as intelligent agents (i.e., assistants) that have the ability to comprehend natural language queries and produce a direct and factual response utilizing data and service providers (Brandtzaeg and Folstad, 2017). Voice-based assistants are actively used in education, environmental science, and operational systems to access real-time data, train first responders (Sermet and Demir, 2022), and facilitate decision support coupled with other communication technologies like virtual and augmented reality (Sermet and Demir, 2020). Technology companies have been taking the lead on operational virtual assistants integrated into their ecosystem which triggered a brand new and massive market that is forecasted to reach US$ 11.3 billion by 2024 (IMARC Group, 2019). Several studies emphasize the potential chatbots hold to serve as the next generation information communication tool and make the case for an urgent need for chatbot development and adoption in their respective fields (Androutsopoulou et al., 2019; Vaidyam et al., 2019; USACE, 2019; Miner et al., 2020). However, the usage of chatbots for effective and reliable information communication is not widespread among public, government, scientific communities, and universities (Schoemaker and Tetlock, 2017). The adoption of virtual assistants within the context of academic curriculum can help closing the learning gaps identified above and, in the literature, (Hwang and Chang, 2021). Considering the prevalence of mobile phones and computers among students along with the recent remote-interaction culture that is gained during the pandemic, such technological and web-based solutions are relevant and needed more than ever (Iglesias-Pradas et al., 2021). A recent report on the AI Market in the US Education Sector (TechNavio, 2018) emphasizes AI's focus on creating intelligent systems, discusses its increasing use in enhancing student learning, and states that intelligent interactive programs that are based on Machine Learning and Natural Language Processing help in overall learning of students. It is reported that the most significant market trend is the increased emphasis on chatbots (MindCommerce, 2019). 
The main aspect of how AI can be a vital tool in education is the utilization of AI in developing next-generation educational tools and solutions to provide a modern learning experience with the vision of personalized teaching, advising, and support (GATech, 2018; Ceha et al., 2021). We propose an AI-augmented intelligent educational assistance framework that automatically generates course-specific intelligent assistants based on provided documents (e.g., syllabus) regardless of discipline or academic level. It will serve as a message-enabled helper capable of answering course-specific questions concerning scope and logistics (e.g., syllabus, deadlines, policies). The students can converse with the assistant in natural language via web platforms as well as messaging applications. The framework is conceived to address the listed issues and to unlock the immense potential of conversational AI approaches for education and enhancing the learning experience. Core benefits and advantages of the framework include the availability of assistance regardless of time, more TA and instructor time for advanced and customized advising, answers to time consuming and repetitive questions, reduced human error due to miscommunication for course logistics, and accommodations for personal barriers, cultural, and disability related issues (e.g., language barrier). A case study is conducted to quantitatively measure the proposed approach's efficacy and reliability within the context of the cited benefits. The remainder of this article is organized as follows. Section 1.1 summarizes the relevant literature and identifies the knowledge gap. Section 2 presents the methodology of the design choices, development and implementation of a course-oriented intelligent assistance system based on syllabi. Section 3 describes the preliminary results and provides benchmark results and performance analysis. Section 4 concludes the articles with a summary of contributions and future work. ### Related Work There have been several initiatives to leverage conversational interfaces in education systems and higher learning (Hobert, 2019; Wollny et al., 2021; Chuaphan et al., 2021). Georgia Tech pioneered a virtual teaching assistant (TA) named "Jill Watson" and reported inspiring results for student satisfaction (GATech, 2018). Additionally, many students were inspired and created their own chatbots that converse about the courses, exhibiting increased interest in AI tools. The positive impacts of cultivating a teaching motivation for individual learning are successfully demonstrated in University of Waterloo's Curiosity Notebook research project, in which the students reported increased engagement upon conversation with an intelligent agent (i.e., Sigma) that asks Geology-related questions in a humorous manner (Ceha et al., 2021). Several universities have similar projects exploring AI's role in education including Stanford University (Ruan et al., 2019) and Carnegie Mellon University (Helmer, 2019). Further initiatives have been explored to utilize chatbots in certain aspects of campus life (Dibitonto et al., 2018; Duberry and Hamidi, 2021). Georgia State University developed a virtual assistant for admission support (i.e., Pounce) to incoming freshmen students. The Randomized Control Trial they implemented to assess effectiveness yielded that first-generation and underrepresented groups disproportionately benefited from the system which resulted in decreased gap in graduation rate among different demographics (Hart, 2019). 
Furthermore, 94% of the students recommended GSU to continue the service, citing their satisfaction in receiving instant responses any time of the day without the feeling of being judged or perceived as unintelligent (Mainstay, 2021). The process of creating a knowledge framework includes retrieving relevant documents and extracting answers from the retrieved documents (Zylich et al., 2020). One of the main documents that can be used to acquire course information to answer logistical questions is syllabus (Zoroayka, 2018; Zylich et al., 2020). Chatbots can also be extended to other works such as helping students with technical issues and questions (Chuaphan et al., 2021). The limits of TA's human resources can be addressed with the help of these chatbots. A chatbot was deployed at Stanford University to respond to student inquiries for a class by compiling information from their participation in an online forum (Chopra et al., 2016). Similarly, a solution to augment the staffing shortages is with an AI Teaching Assistant (Ho, 2018). In addition to assisting with staff shortages, the virtual teaching assistants improve students' educational experiences (du Boulay, 2016). The chatbots can be developed internally using readily available open source tools (Zylich et al., 2020) or through the use of cloud-based language models (Benedetto et al., 2019; Chuaphan et al., 2021; Ranavare and Kamath, 2020). The literature review clearly shows the importance of chatbots and how they can be used in the educational setting. Based on the survey done by Abdelhamid and Katz, 2020, more than 75% of students who responded to the study said they have previously used a chatbot service or other comparable system. 71% of the students stated that they find it challenging to meet with their teaching assistants for a variety of reasons. More than 95% of students claimed that having a chatbot available would be beneficial for providing some of their questions with answers. Though the previous work puts forth limited-scope case studies that clearly serve as proof of potential and benefits of utilizing conversational approaches in the educational setting, a complete and multidisciplinary solution has not been introduced to transform teaching and learning. A major distinction of the proposed framework in contrast to relevant work is the ability to automatically generate a ready-to-use intelligent assistant based on dynamic input provided in the format of a textual document, such as a curriculum summary and syllabus. It is both independent from the field and the technology (e.g., learning management systems) used for the content delivery. Furthermore, it relies on a Service-Oriented Architecture (SOA) to enable integration to any delivery channel. ## 2 Methodology ### Natural Language Inference In recent years, the rapid expansion of data volume was facilitated by technological innovation. A Forbes survey from a few years back revealed that 2.5 quintillion bytes of data were produced every single day. According to current estimates, unstructured data makes up more than 90% of all information stored (Kim et al., 2014). The introduction of language models as a foundation for numerous applications trying to extract useful insights from unstructured text was one of the major forces behind such research. In order to anticipate words, a language model analyzes the structure of human language. 
There are multiple available large language models such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2020), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), GPT-3 (Brown et al., 2020), GPT-2 (Radford et al., 2019), and PaLM (Chowdhery et al., 2022). OpenAI provides the generative pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to generate writing that resembles that of a human. GPT-3 can be utilized off the shelf as well as by using a few-shot learning technique and fine-tuning the model in order to adapt it to any desired application area.GPT-3 has been pre-trained on a large quantity of text from the public internet resources, which may be regarded as few-shot learning. When given only a few instances, it can typically figure out what task you're attempting to complete and offer a convincing solution. Fine-tuning builds on fine-tuning learning by training on many more instances that can fit in the prompt, allowing to obtain higher outcomes. Once a model has been fine-tuned, it no longer needs examples in the prompt. Fine tuning also reduces expenses and makes lower-latency requests possible. We decided to choose GPT-3 because the model is cloud based and has a developer friendly API. GPT-3's Davinci is the biggest model in terms of parameters that's available to use by researchers and general public. ### Syllabus Knowledge Model It is crucial to pick the correct questions to pose to the chatbot in order to test its accuracy. The key questions that could arise from the course syllabus were extracted using the literature on syllabus templates because the goal of this research was to create a chatbot that could answer questions using the course logistics and policies from syllabus. The important sections of a syllabus or course description is Course Information, Faculty Information, Instructional Materials and Methods, Learning Outcomes, Grading and Course Expectations, Policies, Course Schedule (Hess et al., 2007; Passman & Green, 2009). Similarly critical sections such as disability statements, academic misconduct, inclusivity, accessibility and harassment; and optional information such as mental health resources can also be included in the syllabus (Wagner et al., 2022). Based on this literature, we included questions that were related to the following topics: (1) Course Information, (2) Faculty Information, (3) Teaching Assistant Information, (4) Course Goals, (5) Course Calendar, (6) Attendance, (7) Grading, (8) Instructional Materials, and (9) Course and Academic Policies. Course Information, Faculty Information and TA Information sections of the knowledge graph developed to encompass a standard syllabus in higher education is shown in Table 1. After analyzing all the main categories specified above, we have generated 36 questions to reflect information included in these categories. We also used text and data augmentation techniques on these 36 questions to generate 120 questions in total to reflect different ways a question could be asked. This text and data augmentation approaches are reflected in Table A.1 in the form of competency questions. 
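
As a purely hypothetical illustration of this kind of augmentation (the paper does not publish its augmentation rules, so the templates below are ours, not the authors' method), a single base competency question can be expanded into several phrasings:

```python
def augment(base_slot):
    # Hypothetical template-based augmentation; the actual rules used in the paper may differ.
    templates = [
        "What is the {}?",
        "Can you tell me the {}?",
        "Where can I find the {}?",
        "I need the {}, please.",
    ]
    return [t.format(base_slot) for t in templates]

print(augment("instructor's office hours"))
```
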
\begin{table}
\begin{tabular}{|l|l|} \hline **Category** & **Syllabus Element** \\ \hline Course Information & Course Name \\ \cline{2-2} & Course Number \\ \cline{2-2} & Credit Hours \\ \cline{2-2} & Location and Class Times \\ \hline Faculty Information & Name \\ \cline{2-2} & Contact Information \\ \cline{2-2} & Office Location \\ \cline{2-2} & Office Hours \\ \hline TA/Teaching Assistant Information & Name \\ \cline{2-2} & Contact Information \\ \cline{2-2} & Office Location \\ \cline{2-2} & Office Hours \\ \hline Course Goals and/or Objectives & Course Objectives \\ \cline{2-2} & Expectations from the course \\ \hline Course Calendar & Due dates \\ \cline{2-2} & Assignment dates \\ \hline Attendance and classroom behavior & Attendance policy \\ \cline{2-2} & Expected Classroom behavior \\ \hline Grading and Assignments & Grading Criteria \\ \cline{2-2} & Tentative Exam Schedule \\ \hline Instructional Materials & Textbooks \\ \cline{2-2} & Other required materials for the course \\ \hline Policies & Late Assignments \\ \cline{2-2} & Academic Dishonesty \\ \cline{2-2} & Disability Statement \\ \cline{2-2} & Freedom of Speech \\ \cline{2-2} & Makeup Policy \\ \cline{2-2} & Mental Health Resources \\ \cline{2-2} & Absences for Religious Holy Days \\ \hline \end{tabular}
\end{table} Table 1: Knowledge Graph for a Syllabus in Higher Education

### System Architecture

The VirtualTA framework can be partitioned into four major cyber components with specialized functions (Figure 1). The first component is dedicated to curating and indexing appropriate classroom resources for information that falls within the scope of a syllabus, as described in Table 1. The second component contains the cyber framework to create, serve, and manage the smart assistant. It includes server management, API access from the perspectives of both students and instructors, data analytics, and smart assistant management. The Intelligent Services component is concerned with the deep-learning-powered natural language tools that are provided under the umbrella of the VirtualTA framework (e.g., inference and intent mapping, emotion detection, and lightening the mood with witty yet helpful responses). Finally, the integration component is concerned with the communication channels the smart assistant can be served from and entails the appropriate protocols, webhooks, and software.

Figure 1: System architecture and components of VirtualTA

#### 2.3.1 Intelligent Services

Course Knowledge Model Generation: In order to power the VirtualTA, the raw syllabus document needs to be parsed to create a knowledge model (Figure 2). The process for knowledge model generation entails utilizing GPT-3 to attempt to find relevant snippets in the unstructured text by using the competency questions provided in Table A.1. The retrieved information for each syllabus element is curated and stored in a JSON file. Upon post-processing and validation, the resulting knowledge model is used to fine-tune the model for question answering.

Pre-Processing: Once the syllabus file is parsed, the generated data is split into smaller pieces or documents to lower the cost of using the GPT-3 model and to reduce latency. The data was initially divided into documents of 2,000 characters, but this led to increased API request costs. We created the final version of the code to divide the syllabus content into documents of 200 characters without compromising accuracy or the model's affordability. We made sure that the split does not divide a word across two documents.
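
A minimal sketch of this pre-processing step is shown below (the function name and example text are ours; the paper does not publish its implementation):

```python
def split_into_documents(text, max_chars=200):
    """Split syllabus text into at-most-200-character documents without breaking words."""
    documents, current = [], ""
    for word in text.split():
        if current and len(current) + 1 + len(word) > max_chars:
            documents.append(current)
            current = word
        else:
            current = (current + " " + word).strip()
    if current:
        documents.append(current)
    return documents

example = "Office hours are held Mondays and Wednesdays from 2 to 4 pm in Room 101. " * 5
print(split_into_documents(example))
```
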
Post-processing: When the extracted text from the syllabus does not contain the information the question was intended for, the GPT-3 model may sometimes return snippets irrelevant to the asked question. In some cases, the model can return partial answers as well; partial in the sense that the response has correct information, yet it is not complete (e.g., it returns the information of 3 TAs out of 5 total TAs listed in the course description). To address and resolve these edge cases, the instructors (e.g., teaching assistant, faculty) are provided with the initial draft of the automatically populated knowledge graph and validate the information or modify it as needed before the graph can be fed to the model for question answering. This is a one-time process, where the instructor(s) or TAs can go through the template at the start of the semester to check the answers proposed by the model and modify the knowledge base with accurate information. Throughout the semester, this workflow can be repeated as needed if major changes occur in the syllabus.

Figure 2: Knowledge base population process with instructor revision

Question Answering (QA): The question-answering process relies on the provided course knowledge model and two models to understand the intent, map it to the requested resource, and produce a natural language response in the form of a to-the-point and concise answer. The question-answering framework from GPT-3 by OpenAI works in two parts. The first part of the QA process is the search model, which is used to search the provided documents. This model then lists the most applicable documents to answer the given question. For this, we created a fine-tuned model rather than using the models provided by OpenAI. Our fine-tuned model is trained on the Stanford Question Answering Dataset (SQuAD). Both the training and validation datasets have 1,314 entries each for our fine-tuned model. The second part of the QA process is the completion model, which is a built-in model provided by OpenAI named "text-davinci-002". Davinci is the most capable model family provided by OpenAI. This model is good at understanding content, summarization, and content generation. The completion model is used to generate the answer from the documents provided by the search model.

To expand the accessibility of the system and make its interactions more human-like and empathetic, so that it can seamlessly serve with an approachable persona, several enhancements were devised and implemented. The question-answering system operates in a variety of languages. This was accomplished through the use of GPT-3's language translation capabilities. Students can ask a question in any language supported by GPT-3, which we then translate to English, send to VirtualTA, receive an answer in English, and finally translate back to the language the question was asked in. VirtualTA can be tailored to the demands of the students by fine-tuning the model using the questions asked by the students and the answers provided by the model. This customization can be done at the domain or course level. The sentiment of a student's question can be analyzed by VirtualTA, and if negative emotions or stress are identified, the system will give positive comments or optimistic messages to lighten the situation and point them towards appropriate available resources.
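
The two-stage QA pipeline described above can be sketched as follows. The sketch assumes the legacy openai-python (pre-1.0) Completion API that was current when this work was written; the keyword-overlap ranker is only a stand-in for the fine-tuned search model described above, and the prompt wording is ours, not the authors'.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be configured by the caller

def rank_documents(question, documents, top_k=3):
    # Trivial keyword-overlap scorer: a stand-in for the SQuAD-fine-tuned search model,
    # used only to illustrate how candidate syllabus chunks feed the completion step.
    q = set(question.lower().split())
    scored = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def answer(question, documents):
    context = "\n".join(rank_documents(question, documents))
    prompt = ("Answer the question using only the context below. "
              "If the answer is not in the context, reply 'Response not found'.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    resp = openai.Completion.create(model="text-davinci-002", prompt=prompt,
                                    max_tokens=64, temperature=0)
    return resp["choices"][0]["text"].strip()
```
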
#### 2.3.2 Framework Integration The framework is founded upon a centralized web-based cyberinfrastructure for data acquisition, training deep learning models, storage and processing of course-specific information, as well as to host the generated chatbots for use in communication channels. The cyberinfrastructure entails an NGINX web server, NodeJS-based backend logic, a PostgreSQL database, accompanied with caching, and user and course management mechanisms. The core intelligent assistant is created based upon the Service-Oriented Architecture, allowing its plug-and-play integration into any web platform with webhooks. While the system can be integrated into numerous channels (e.g., augmented and virtual reality applications, automated workflows, learning management systems), several integrations have been realized as part of this paper to showcase its utility. Web-based Conversational Interface: To make asking questions and receiving responses easier, a web-based chatbot application user interface (UI) has been developed. The UI designed by Palace C was modified for this development using standard JS. Through the API we developed, VirtualTA's replies may be retrieved. The user sees this response on the chatbot provided by VirtualTA. This may be included into any web-based chatbot by using the API we established to receive an answer and utilize it as the chatbot's response. This procedure enables VirtualTA's functionality to be incorporated to any web-based conversational bot. Social Platforms: A Discord bot is created to allow students to include the VirtualTA into workspaces they already use for specific courses to facilitate easy access to pertinent information. The availability of VirtualTA into social messaging platforms students already utilize allows for easy adoption of the system as well as an organic and friendly interaction. Smart Apps and Devices: VirtualTA is integrated to work with Google Assistant. We have created an API to return an answer, when asked a question regarding a course. We have used Google DialogFlow to integrate the VirtualTA as a third-party action on Google Assistant. Students can access VirtualTA using Google Assistant on their mobile phones, smart home devices, Android TV, Android Auto and smart watches. This integration has been deployed in the test environment on this platform and screenshots of these implementations are shared in section 3 below. ### Case Study Design In order to establish the accuracy and performance of the VirtualTA, an assessment was conducted by collecting 112 syllabus files from a variety of institutions and domains, including Engineering, Math, Physics, History, Computer Science, English, Art, Business, Philosophy, Arabic, Anthropology, Accounting, Chemistry, Music, and Economics. We removed 12 of these files because the syllabi were in image format and text extraction from the image could hinder the benchmark of VirtualTA's capabilities. Hence, a case study is designed upon 100 syllabi and in two phases to assess the performance for (1) extracting data from syllabi and (2) mapping user questions to the extracted syllabi data. #### 2.4.1 Phase 1 - Knowledge Extraction We chose 38 files from the 100 syllabus files we collected. We asked the VirtualTA 36 questions we chose based on main categories for every syllabus file. The goal of this is to measure the accuracy of the bot on the frequently asked questions. 
We had three parameters we collected from this study: the number of questions answered correctly, the number of questions answered incorrectly, and the number of questions partially answered. These parameters can be seen below in the template created for one of the courses, which is in JSONL format.

Before Edits:
{"QUESTION":"What is the name of the course?","ANSWER":"BUS 100","isTrue":"Change this to TRUE or FALSE or PARTIAL"}
{"QUESTION":"What is the course number?","ANSWER":"The course number is BUS 100.","isTrue":"Change this to TRUE or FALSE or PARTIAL"}
{"QUESTION":"How many credit hours is this course worth?","ANSWER":"This course is worth 3 credit hours.","isTrue":"Change this to TRUE or FALSE or PARTIAL"}

After Edits:
{"QUESTION":"What is the name of the course?","ANSWER":"Introduction to Business","isTrue":"FALSE"}
{"QUESTION":"What is the course number?","ANSWER":"The course number is BUS 100.","isTrue":"TRUE"}
{"QUESTION":"How many credit hours is this course worth?","ANSWER":"This course is worth 3 credit hours.","isTrue":"TRUE"}

Once we collect the answers from the bot on all 36 questions for a syllabus file, we store these results in JSONL format. In the file, we have three fields: question, answer, and the isTrue flag. The question field contains the question asked to the bot, the answer field contains the answer we got from the bot, and the isTrue flag indicates whether the given answer is correct or incorrect. We manually went through all 38 JSONL files, checked the answers against the actual syllabus file, and changed the isTrue field to "TRUE" if the answer given by the bot is correct, "FALSE" if the answer is incorrect, and "PARTIAL" if the answer is partially correct. When an answer is false/incorrect, in addition to setting isTrue to "FALSE", we also change the answer to the correct version/information. These manual corrections were done to use this information for our second phase of testing. The manual corrections explained above can be seen in the "After Edits" template above, which shows the corrections made, with the isTrue field set to either "FALSE", "TRUE", or "PARTIAL"; these changes have been highlighted in green.

#### 2.4.2 Phase 2 - Question Answering

We use the manually corrected templates created in Phase 1. In this phase of testing, we increase the number of questions asked from 36 to 70. This was done using text augmentation to test the model's question-answering performance on different question-asking techniques or structures. Each question has at least one other variation, except for two questions. The rationale for leaving these two questions, "How do I submit my assignments?" and "When is the final exam?", out of data augmentation is that we could not discover a sensible approach to supplement or augment these queries. We had three parameters collected during this study: the number of questions answered correctly, the number of questions answered incorrectly, and the number of questions partially answered. Once we collect the answers from the bot on all 70 questions for the template file created in Phase 1, we store these results in JSONL format. In the file, we have three fields: question, answer, and the isTrue flag. The question field contains the question asked to the bot, the answer field contains the answer we got from the bot, and the isTrue field indicates whether the given answer is correct or incorrect.
We manually went through all 38 JSONL files, checked the answers against the actual syllabus file, and changed the isTrue field to "TRUE" if the answer given by the bot is correct, "FALSE" if the answer given by the bot is incorrect, and "PARTIAL" if the answer given by the bot is partially correct.

## 3 Results and Discussion

### Communication Channels

The user interface for the web platform shown in Figure 3 has been adapted from Palace (2021). Figure 3, shown below, is a web platform designed using vanilla JavaScript. It shows select competency questions asked to the model and the answers returned by the model for a history course.

Figure 3: Web-based chatbot user interface with questions and answers

Figure 4 shows a third-party integration of the VirtualTA model; in this case, the third-party application is Discord. The figure provides the questions asked to the model and the responses given by the model for a STEM course, specifically a CS course.

Figure 4: Integration of VirtualTA with the Discord social media application

Figure 5 shows the integration of VirtualTA with Google Assistant. The questions are asked to VirtualTA using voice. The command "talk to Virtual T.A." is needed to connect Google Assistant to the third-party action VirtualTA. These figures show the questions asked and the answers returned by VirtualTA for a history course.

Figure 5: Integration of VirtualTA with the Google Assistant application

Figure 6 illustrates the language translation capability of VirtualTA. Its support for Spanish, French, and German is shown in (A), (B), and (C), respectively. Any language that GPT-3 supports can be used by the user to ask a query, and VirtualTA will respond in that language. The possibilities of VirtualTA's sentiment analysis are displayed in Figure 7. The model responds with the standard response and also in a humorous or witty way to lighten the situation if it determines that the user is asking a question with a negative emotion or feeling. Figure 7 shows some examples of sentiment analysis in VirtualTA. Private information, including the instructor's email address, has been obscured in Figure 7 (center).

Figure 7: VirtualTA sentiment analysis for a history course

### Performance Evaluation

To quantify the model's effectiveness, precision (Equation 1), recall (Equation 2), and f1-score (Equation 3) metrics have been selected for this imbalanced classification problem with multiple classes, as formulated below (Sokolova and Lapalme, 2009). The value \(n\) in the equations below represents the number of different questions in the FAQ (i.e., classes). For computing results using Equations 1-3 (Sokolova & Lapalme, 2009), we have used the criteria listed below and computed two sets of results, where one includes "PARTIAL" as correct/TruePositive and the other does not consider "PARTIAL" as correct/TruePositive.

\[\text{Precision (multiclass)}=\frac{\sum_{0}^{n}TP}{\sum_{0}^{n}TP+\sum_{0}^{n}FP} \tag{1}\]

\[\text{Recall (multiclass)}=\frac{\sum_{0}^{n}TP}{\sum_{0}^{n}TP+\sum_{0}^{n}FN} \tag{2}\]

\[\text{f1-score (multiclass)}=\frac{2\times\text{precision}\times\text{recall}}{\text{precision}+\text{recall}} \tag{3}\]

The aim of the testing phase is to optimize the precision and recall values to build an accurate and complete system; however, a trade-off evaluation is necessary (He and Ma, 2013).
Depending on the use case, it may be necessary to maximize precision to ensure an accurate answer is provided, or to maximize recall to map as many questions as possible with a limited sacrifice of accuracy. For this use case, we tried to maximize the precision value of the model. For each of the 38 syllabus files utilized in the testing phase, these performance values were calculated and analyzed. The results in Table 3 from the knowledge extraction sub-section and Table 4 from the question answering sub-section represent the average accuracy values across all 38 files. VirtualTA prioritizes the accuracy of the responses over giving an answer. We want to provide the most accurate results to the students. It is better to respond with "Answer not found" than to give an incorrect answer, especially where the incorrect answer could misinform a student and lead to them missing office hours or homework deadlines. When the model is unsure of the answer or is unable to locate pertinent documents, it responds to the user with "Response not found," which prevents it from providing the user an incorrect answer and allows them to double-check the answer outside of VirtualTA with the instructor or TA.

#### 3.2.1 Knowledge Extraction

The values shown in Table 2 are calculated by computing the regular accuracy, i.e., correct answers over the total number of questions. The common problems faced by the model are in the "Teaching Assistant Information" section. This could be due to many reasons, and we identified some of these cases as the reason for the lower accuracy in our case study, as follows: (1) when a course or syllabus has multiple TAs (Teaching Assistants), the model fails to detect all the TAs from the text provided; (2) when a course does not have a TA listed, the model fills in the TA questions with the instructor's information (for instance, when there is no TA for the course and the user asks "When are the TA's office hours?", the model replies with the instructor's office hours); (3) the formatting of some syllabi we tested on was really confusing or messy, which resulted in the model missing simple questions such as the course number or course name. In calculating the results discussed in this section, we considered all of the above cases (1-3) as Incorrect/False.
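
One way to read Equations 1-3 for this setting is sketched below. The mapping of outcomes to TP/FP/FN is our assumption, since the paper does not spell it out: an answered question counts as a true positive if correct and a false positive if wrong, while a "Response not found" counts as a false negative.

```python
from collections import Counter

def micro_metrics(labels, count_partial_as_correct=False):
    # labels: one of "TRUE", "FALSE", "PARTIAL", "NOT_FOUND" per asked question
    c = Counter(labels)
    tp = c["TRUE"] + (c["PARTIAL"] if count_partial_as_correct else 0)
    fp = c["FALSE"] + (0 if count_partial_as_correct else c["PARTIAL"])
    fn = c["NOT_FOUND"]
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(micro_metrics(["TRUE", "TRUE", "PARTIAL", "FALSE", "NOT_FOUND"]))
print(micro_metrics(["TRUE", "TRUE", "PARTIAL", "FALSE", "NOT_FOUND"], count_partial_as_correct=True))
```
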
\begin{table} \begin{tabular}{|l|c|c|} \hline **Performance Metrics** & **Without Partial** & **Includes Partial** \\ \hline Recall & 0.87 & 0.88 \\ \hline Precision & 0.96 & 0.96 \\ \hline f1-score & 0.91 & 0.92 \\ \hline \end{tabular} \end{table} Table 4: Performance metrics for Phase 2 testing \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Category name** & \begin{tabular}{c} **Number of questions** \\ **per category** \\ \end{tabular} & **Accuracy** & **Accuracy with partial content** \\ \hline Course Information & 6 & 63.2\% & 69.7\% \\ \hline Faculty Information & 4 & 60.5\% & 73.0\% \\ \hline TA information & 4 & 49.3\% & 57.2\% \\ \hline Course Goals & 2 & 90.8\% & 94.7\% \\ \hline Course Calendar & 3 & 90.4\% & 92.1\% \\ \hline Attendance & 2 & 93.4\% & 97.4\% \\ \hline Grading & 4 & 63.8\% & 71.0\% \\ \hline Instructional Materials & 2 & 92.1\% & 96.0\% \\ \hline Policies & 9 & 95.0\% & 95.6\% \\ \hline Overall & 36 & 76.5\% & 81.6\% \\ \hline \end{tabular} \end{table} Table 2: Accuracy for Phase 1 testing results \begin{table} \begin{tabular}{|l|c|c|} \hline **Performance Metrics** & **Without Partial** & **Includes Partial** \\ \hline Recall & 0.64 & 0.69 \\ \hline Precision & 0.79 & 0.81 \\ \hline f1-score & 0.71 & 0.74 \\ \hline \end{tabular} \end{table} Table 3: Performance metrics for Phase 1 testing #### 3.2.2 Question Answering The values shown in Table 5 are calculated by computing the regular accuracy which is correct answers over the total number of questions. The common problems faced by the model in this phase is in the "Course Information" section, and this could be because of many edge cases, and we identified two main cases as the reason for lower accuracy in our case study as follows: (1) when the number of credit hours are not given in the context, the model tries to calculate the credit hours based on the number of lectures per week; (2) there is a small chance where the model can give different answers to similar questions, this could be based on the style of questioning. In calculating these results discussed in this section, we considered all of these above cases as Incorrect/False. ## 4 Conclusion and Future Work In this research, we designed an automated system for answering logistical questions in online course discussion boards, third party applications or educational platforms and highlighted how it can aid in the development of virtual teaching assistants. Specifically, the project's aims include enhancing course content quality and individualized student advising by delegating the time-consuming repetitive duties of instructors to virtual assistants and mitigating inequality among students in accessing knowledge to narrow retention and graduation gaps. This document outlines the entire procedure and displays the VirtualTA architecture. Through this architecture, VirtualTA can be integrated with third-party applications to enable access from a variety of intermediaries, such as web-based systems, agent-based bots (such as Microsoft Skype and Facebook Messenger), smartphone applications (such as smart assistants), and automated web workflows (e.g., IFTTT, MS Flow). Users will find it simple to access VirtualTA through any communication channel that is familiar to them and that they feel comfortable using. Additionally, it enables any number of users enrolled in the course to access the system. We want to expand upon our existing approach to include course content in addition to the syllabus or administrative information. 
\begin{table} \begin{tabular}{|l|c|c|c|} \hline **Category name** & \begin{tabular}{c} **Number of questions** \\ **per category** \\ \end{tabular} & **Accuracy** & \begin{tabular}{c} **Accuracy with** \\ **partial content** \\ \end{tabular} \\ \hline Course Information & 12 & 82.2\% & 83.6\% \\ \hline Faculty Information & 8 & 97.4\% & 99.0\% \\ \hline TA information & 8 & 98.7\% & 98.7\% \\ \hline Course Goals & 4 & 98.0\% & 99.3\% \\ \hline Course Calendar & 5 & 99.5\% & 100\% \\ \hline Attendance & 4 & 98.7\% & 100\% \\ \hline Grading & 7 & 95.5\% & 98.5\% \\ \hline Instructional Materials & 4 & 96.0\% & 97.4\% \\ \hline Policies & 18 & 99.0\% & 99.6\% \\ \hline Overall & 70 & 95.3\% & 96.5\% \\ \hline \end{tabular} \end{table} Table 5: Accuracy values for Phase 2 testing Furthermore, several future studies are possible, including: (1) a case study with students in a semester-long course across multiple fields/departments, (2) integrating VirtualTA with learning management systems, (3) creating course content assistance and search, quiz, and flash card mechanisms, (4) integration with other mainstream communication channels, (5) personalizing communication to match the pace and language of each student's level of understanding, (6) improving the system's accuracy and performance by fine-tuning the model on bigger datasets, (7) developing methods to understand different question-asking techniques, and (8) integrating necessary course information directly from learning management systems. **Acknowledgement:** This material is based upon work supported by the National Science Foundation under Grant Nos. #2137891 and #2230710.
2306.14054
Towards Understanding Gradient Approximation in Equality Constrained Deep Declarative Networks
We explore conditions for when the gradient of a deep declarative node can be approximated by ignoring constraint terms and still result in a descent direction for the global loss function. This has important practical application when training deep learning models since the approximation is often computationally much more efficient than the true gradient calculation. We provide theoretical analysis for problems with linear equality constraints and normalization constraints, and show examples where the approximation works well in practice as well as some cautionary tales for when it fails.
Stephen Gould, Ming Xu, Zhiwei Xu, Yanbin Liu
2023-06-24T20:46:33Z
http://arxiv.org/abs/2306.14054v1
# Towards Understanding Gradient Approximation in Equality Constrained Deep Declarative Networks ###### Abstract We explore conditions for when the gradient of a deep declarative node can be approximated by ignoring constraint terms and still result in a descent direction for the global loss function. This has important practical application when training deep learning models since the approximation is often computationally much more efficient than the true gradient calculation. We provide theoretical analysis for problems with linear equality constraints and normalization constraints, and show examples where the approximation works well in practice as well as some cautionary tales for when it fails. ## 1 Introduction This paper investigates certain approximations to gradient calculations for differentiable constrained optimization problems. Our focus is on continuous optimization problems that may be embedded within deep learning models (Gould et al., 2016; Amos and Kolter, 2017; Agrawal et al., 2019; Gould et al., 2021; Blondel et al., 2022). This is in contrast to works that compute search directions for back-propagating through discrete optimization problems where a true gradient does not exist or is uninformative (i.e., zero almost everywhere), e.g., (Blondel et al., 2020; Berthet et al., 2020; Vlastelica et al., 2020; Petersen et al., 2022). For continuous constrained optimization problems the gradient of a solution with respect to parameters of the problem (i.e., inputs) can be determined by implicit differentiation of the problem's optimality conditions (Amos and Kolter, 2017; Agrawal et al., 2019; Gould et al., 2021). One of the main computational difficulties in the presence of constraints is evaluating quantities of the form \((AH^{-1}A^{\mathsf{T}})^{-1}\) where \(A\) encodes first derivatives of the constraint functions and \(H\) encodes second derivatives of the objective and constraints. Gould et al. (2022) observed that for deep models involving optimal transport--a well-known differentiable optimization problem--ignoring the constraints in the backward pass, i.e., treating the problem as if it were unconstrained, still allows the model to learn while greatly speeding up the backward pass. This prompts the question explored in this paper: why and when does this gradient approximation work? ## 2 Gradient Approximation In this section we develop theoretical insights for when back-propagating through a differentiable optimization problem using an approximate gradient gives a descent direction for the global loss. Full proofs can be found in the appendix. The following result for the derivative of the solution to parametrized equality constrained optimisation problems comes from Gould et al. (2021)[Prop. 4.5]. **Proposition 2.1**.: (Gould et al., 2021)_Consider functions \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) and \(h:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{p}\). Let_ \[y(x)\in\left\{\begin{array}{ll}\text{argmin}_{u\in\mathbb{R}^{m}}&f(x,u)\\ \text{subject to}&h_{i}(x,u)=0,\quad i=1,\ldots,p\end{array}\right\}.\] _Assume that \(y(x)\) exists, that \(f\) and \(h=[h_{1},\ldots,h_{p}]^{\mathsf{T}}\) are \(2^{\text{nd}}\)-order differentiable in the neighborhood of \((x,y(x))\), and that \(\text{rank}(D_{Y}h(x,y))=p\). 
Then for \(H\) non-singular_ \[Dy(x)=H^{-1}A^{\mathsf{T}}\!\big{(}AH^{-1}\!A^{\mathsf{T}}\big{)}^{-1}\!\big{(} AH^{-1}B-C\big{)}-H^{-1}B\] _where \(A=D_{Y}h(x,y)\in\mathbb{R}^{p\times m}\), \(B=D_{XY}^{2}f(x,y)-\sum_{i=1}^{p}\lambda_{i}D_{XY}^{2}h_{i}(x,y)\in\mathbb{R} ^{m\times n}\), \(C=D_{X}h(x,y)\in\mathbb{R}^{p\times n}\), \(H=D_{YY}^{2}f(x,y)-\sum_{i=1}^{p}\lambda_{i}D_{YY}^{2}h_{i}(x,y)\in\mathbb{R }^{m\times m}\), and \(\lambda\in\mathbb{R}^{p}\) satisfies \(\lambda^{\mathsf{T}}\!A=D_{Y}f(x,y)\)._ Symbol \(D\) denotes the total or partial (with respect to the subscripted variable) derivative operator. We refer the reader to Gould et al. (2021) for the full derivation. Given an incoming gradient of a loss with respect to the output (i.e., solution) \(\mathsf{D}\mathcal{L}(y)\), back-propagation computes the gradient of the loss with respect to the input \(x\) via the chain rule of differentiation as \(\mathsf{D}\mathcal{L}(x)=\mathsf{D}\mathcal{L}(y)\mathsf{D}y(x)\). As mentioned, however, terms involving \(A\), namely \((AH^{-1}\!A^{\mathsf{T}})^{-1}\), may present significant computational challenges. Ignoring such terms gives a computationally much simpler expression, but how well does it approximate the true gradient? Formally, define \(\widehat{H}=\mathsf{D}^{2}_{YY}f(x,y)\) so that \(H=\widehat{H}-\sum_{i=1}^{p}\lambda_{i}\mathsf{D}^{2}_{YY}h_{i}(x,y)\) for a constrained problem. Let \(v^{\mathsf{T}}=\mathsf{D}\mathcal{L}(y)\in\mathbb{R}^{1\times m}\) be the incoming gradient of the loss \(\mathcal{L}\) with respect to output \(y\), let \(g^{\mathsf{T}}=v^{\mathsf{T}}\mathsf{D}y(x)\) be the true gradient of the loss with respect to input \(x\) and let \(\widehat{g}^{\mathsf{T}}=v^{\mathsf{T}}\widehat{\mathsf{D}}y(x)=-v^{\mathsf{ T}}\widehat{H}^{-1}B\) be the approximation obtained by ignoring constraints. We wish to understand when \(-\widehat{g}\) is a descent direction for \(\mathcal{L}\), i.e., when is \[g^{\mathsf{T}}\widehat{g}\geq 0\,? \tag{1}\] To simplify analysis and make progress towards some theoretical insights we will assume a single constraint function \(h(u)=0\) that is independent of \(x\). Furthermore, we will assume that the objective function takes the special form \(f(x,u)=x^{\mathsf{T}}u+\bar{f}(u)\). An example of this is the objective function for the optimal transport problem. Together, these assumptions imply that \(C=0\) and \(B=I\) in Prop. 2.1. Substituting for \(\mathsf{D}y(x)\) and \(\widehat{\mathsf{D}}y(x)\) under these assumptions we have that \(-\widehat{g}\) is a descent direction if and only if, \[v^{\mathsf{T}}\left(\!H^{-1}-\frac{H^{-1}aa^{\mathsf{T}}H^{-1}}{a^{\mathsf{T }}H^{-1}a}\right)\!\widehat{H}^{-1}v\geq 0, \tag{2}\] where we have written \(a^{\mathsf{T}}=A=\mathsf{D}_{Y}h(y)\in\mathbb{R}^{1\times m}\) to make it clear that we are only considering problems with a single constraint.1 We now explore two special cases. Footnote 1: We recognize that for a single constraint the quantity \(a^{\mathsf{T}}H^{-1}a\) is trivial to invert and hence the approximation here offers little computational advantage. Nevertheless, as we will see, analysis from this simplification is instructive for more general settings. ### Special Case: Linear Constraints Consider the case of a single linear equality constraint, \(a^{\mathsf{T}}u=d\). In this case we have \(\mathsf{D}^{2}_{YY}h(u)=0\) and therefore \(H=\widehat{H}\). 
The condition that our approximate gradient \(\widehat{\mathsf{D}}y(x)=-H^{-1}\) always leads to a descent direction is \[\min_{w}\,w^{\mathsf{T}}\left(I-\frac{aa^{\mathsf{T}}H^{-1}}{a^{\mathsf{T}}H^ {-1}a}\right)w\geq 0 \tag{3}\] which holds if and only if2 Footnote 2: See Appendix A.1 for complete derivation. \[\max_{\|w\|=1}\,w^{\mathsf{T}}\left(\frac{aa^{\mathsf{T}}H^{-1}}{a^{\mathsf{T }}H^{-1}a}\right)w\leq 1 \tag{4}\] where we have written \(w=H^{-1}v\) from Eqn. 2. Unfortunately this is only true when \(\text{cond}(H)=1\) as the following proposition shows. **Proposition 2.2**.: _Let \(H\in\mathbb{R}^{m\times m}\) be a non-singular symmetric matrix and let \(a\) be an arbitrary vector in \(\mathbb{R}^{n}\). Then,_ \[1\leq\max_{\|w\|=1}\,w^{\mathsf{T}}\left(\frac{aa^{\mathsf{T}}H^{-1}}{a^{ \mathsf{T}}H^{-1}a}\right)w\leq\frac{1}{2}+\frac{\text{cond}(H)}{2}.\] The lower bound is bad news. It states that, in general, we cannot guarantee that the approximation will be a descent direction for all incoming loss gradients (unless \(H\propto I\)). But let us not despair. This is in the worst case. The next result concerns the expected value of \(g^{\mathsf{T}}\widehat{g}\) and tells us that, if \(H^{-1}v\) is isotropic Gaussian distributed, then \(-v^{\mathsf{T}}\widehat{\mathsf{D}}y(x)\) is a descent direction of the loss on average. **Proposition 2.3**.: _Let \(w\sim\mathcal{N}(0,I)\). Then_ \[\mathbf{E}\left[w^{\mathsf{T}}\!\left(I-\frac{aa^{\mathsf{T}}H^{-1}}{a^{\mathsf{T }}H^{-1}a}\right)\!w\right]=m-1\geq 0.\] The result can be extended to multiple (\(1\leq p\leq m\)) linear equality constraints \(Au=d\) as follows. **Proposition 2.4**.: _Let \(w\sim\mathcal{N}(0,I)\). Then_ \[\mathbf{E}\left[w^{\mathsf{T}}\!\left(I-A^{\mathsf{T}}\!\left(AH^{-1}A^{\mathsf{T }}\right)^{-1}\!AH^{-1}\right)\!w\right]=m-p\geq 0.\] This result is encouraging: for linear equality constrained problems we can expect the approximate gradient to be a descent direction. Next we turn our attention to a non-linear constraint where the story is not as straightforward. ### Special Case: Normalization Constraint We now consider the case of a single non-linear constraint, the normalization constraint, \(\|u\|^{2}=1\), which occurs in many problems such as projection onto the \(L_{2}\)-sphere and eigen decomposition. Once again, let \(\widehat{H}=\mathsf{D}^{2}_{YY}f(x,y)\) and \(H=\mathsf{D}^{2}_{YY}f(x,y)-\lambda\mathsf{D}^{2}_{YY}h(y)=\widehat{H}-\lambda I\). We will assume that \(\widehat{H}^{-1}\) and \(H^{-1}\) exist.3 Here we have \(a\propto y\) so the general condition for the approximate gradient \(\widehat{g}=-v^{\mathsf{T}}\widehat{H}^{-1}\) to be a descent direction is Footnote 3: This implies, in particular, that \(\lambda\) is not an eigenvalue of \(\widehat{H}\), which is clearly not true for eigen decomposition (where we also have \(B\neq I\)). Still, some useful insights can be gained. A similar argument may be possible using pseudo-inverses or going back to the optimality conditions and deriving gradients directly. \[v^{\mathsf{T}}\left(H^{-1}-\frac{H^{-1}yy^{\mathsf{T}}H^{-1}}{y^{\mathsf{T}}H^ {-1}y}\right)\widehat{H}^{-1}v\geq 0. \tag{5}\] The left-hand side represents \(g^{\mathsf{T}}\widehat{g}\). As for the linear equality constrained case, we can compute its expected value. **Proposition 2.5**.: _Let \(H^{-1}v\sim\mathcal{N}(0,I)\) and other quantities as defined above for the normalization constrained special case. 
Then_ \[\mathbf{E}\left[g^{\mathsf{T}}\widehat{g}\right]=\sum_{i=1}^{m}\frac{\lambda_{i}- \lambda}{\lambda_{i}}-\frac{y^{\mathsf{T}}\widehat{H}^{-1}y}{y^{\mathsf{T}}H^ {-1}y}\] _where \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{m}\) are the eigenvalues of \(\widehat{H}\)._ The above result is for general Hessian matrix \(\widehat{H}\) and arbitrary \(\lambda\). Let us consider two important (non-exhaustive) cases to give concrete bounds. **Proposition 2.6**.: _Let \(\widehat{H}\succ 0\), and let \(g\) and \(\widehat{g}\) be the true and approximate gradients, respectively, as defined above._ 1. _If_ \(\lambda<\lambda_{1}\)_, then_ \(\mathbf{E}\left[g^{\mathsf{T}}\widehat{g}\right]\geq 0\)_;_ 2. _If_ \(\lambda>\lambda_{m}\)_, then_ \(\mathbf{E}\left[g^{\mathsf{T}}\widehat{g}\right]\leq 0\)_,_ _where \(\lambda_{1}\) and \(\lambda_{m}\) are the smallest and largest eigenvalues of \(\widehat{H}\), respectively._ In summary, for the former case \(-\widehat{g}\) is a descent direction on average, whereas for the latter case it is an ascent direction! Analogous results hold for \(\widehat{H}\prec 0\). ## 3 Examples and Experiments In this section we experimentally validate the findings from above on three different optimization problems. Our experimental setup is depicted in Fig. 1. Briefly, a data generating network provides input for a differentiable optimization problem. We train the data generating network so that the solution of the optimization problem matches some predetermined target. Further details are provided in Appendix B. ### Euclidean Projection onto \(L_{2}\)-sphere Let us start with the simple problem of projecting a point \(x\in\mathbb{R}^{n}\) onto the unit sphere, \[y(x)\in\left\{\begin{array}{ll}\text{argmin}&\frac{1}{2}\|u-x\|^{2}\\ \text{subject to}&\|u\|_{2}=1\end{array}\right\}. \tag{6}\] Here we have the closed-form solution \(y=\frac{1}{\|x\|}x\), with true and approximate gradients given by \[\text{D}y(x)=\frac{1}{\|x\|}\left(I-yy^{\mathsf{T}}\right)\quad\text{and} \quad\widehat{\text{D}y}(x)=I. \tag{7}\] The approximate gradient always gives a descent direction (when \(\text{D}y(x)\) exists) since \(I-yy^{\mathsf{T}}\) is positive semidefinite. Experimental results in Fig. 2 confirm that the approximate gradient is always a descent direction, i.e., \(g^{\mathsf{T}}\widehat{g}>0\) (bottom plots), and appears to work well for learning the parameters of the MLP especially during early iterations (top plots). ### Optimal Transport Entropy regularized optimal transport is a linear equality constrained optimization problem (Cuturi, 2013), \[\begin{array}{ll}\text{minimize}&\langle P,M\rangle+\frac{1}{\gamma}\text{ KL}(P\|rc^{\mathsf{T}})\\ \text{subject to}&P1=r\text{ and }P^{\mathsf{T}}1=c,\end{array} \tag{8}\] over variable \(P\in\mathbb{R}_{+}^{m\times n}\), where \(M\in\mathbb{R}^{m\times n}\) is an input cost matrix, \(r\) and \(c\) are positive vectors of row and column sum constraints (with \(1^{\mathsf{T}}r=1^{\mathsf{T}}c\)). Hyper-parameter \(\gamma>0\) controls the strength of the regularization term. Typical learning curves and gradient similarity per iteration are shown in Fig. 4, depicting behavior much like the previous example--the approximate gradient is always a descent direction and works especially well during the early stages of training. This is consistent with our analysis. Figure 1: Common experimental setup to compare behavior of approximate and exact gradients of constrained differentiable optimisation problems in a deep declarative network. 
Training data is a batch of randomly sampled input-target pairs \((z_{b},y_{b}^{*})\in\mathbb{R}^{d}\times\mathcal{Y}\). The input \(z_{b}\) passes through a multi-layer perceptron to generate the parametrization \(x_{b}\) for a declarative node whose output (i.e., optimal value) is \(y_{b}\). Thus \(y_{b}\) is ultimately a function of the input \(z_{b}\) and network parameters \(\theta\). Training aims to adjust \(\theta\) so as to minimize the square difference between output \(y_{b}\) and target \(y_{b}^{*}\). Figure 2: Learning curves (top) for exact and approximate gradients for projection onto the unit sphere experiments. Bottom curves show cosine similarity between approximate and exact gradients for each point on the approximate learning curve. Left versus right shows low- versus high-dimensional \(z_{b}\), respectively. ### Eigen Decomposition Given a real symmetric matrix \(X=X^{\mathsf{T}}\in\mathbb{R}^{m\times m}\), the (unit) eigenvector associated with the largest eigenvalue of \(X\) can be found by solving the following equality constrained optimization problem (Ghojogh et al., 2019), \[\begin{array}{ll}\text{maximize (over $u\in\mathbb{R}^{m}$)}&u^{\mathsf{T}}Xu\\ \text{subject to}&u^{\mathsf{T}}u=1.\end{array} \tag{9}\] Here we assume that the largest eigenvalue is simple otherwise a well-defined derivative does not exist. The optimality conditions for solution \(y\in\mathbb{R}^{m}\) are thus4, Footnote 4: Indeed, this holds for any simple eigenvalue-eigenvector pair. \[Xy-\lambda_{\text{max}}y=0_{m}\text{ and }y^{\mathsf{T}}y=1, \tag{10}\] which gives differentials (Magnus, 1985), \[\mathsf{d}y=(\lambda_{\text{max}}I-X)^{\dagger}(\mathsf{d}X)y \tag{11}\] where \({}^{\dagger}\) denotes pseudo-inverse. So with respect to the \((i,j)\)-th component of \(X\), and using symmetry, we have \[\text{D}_{X_{ij}}y(X)=-\frac{1}{2}(X-\lambda_{\text{max}}I)^{\dagger}(y_{j}e_ {i}+y_{i}e_{j}). \tag{12}\] Ignoring the equality constraint \(u^{\mathsf{T}}u=1\) we arrive at \[\widetilde{\text{D}_{X_{ij}}}y(X)=-\frac{1}{2}X^{\dagger}(y_{j}e_{i}+y_{i}e_{j }). \tag{13}\] There is no computational gain here unless we need derivatives for multiple different eigenvectors and hence require multiple pseudo-inverses \((X-\lambda_{k}I)^{\dagger}\) for the exact gradient. Moreover, results shown in Fig. 3 confirm our analysis that the approximation is a poor choice, and rarely a descent direction, unless \(y\) corresponds to the max. eigenvalue and all other eigenvalues are negative (equiv., the min. eigenvalue and all other eigenvalues are positive), as in Fig. 3(c). ## 4 Discussion We have shown that (for certain objective functions) ignoring linear constraints gives a descent direction on average but that this does not always hold for normalization constraints. Experiments verify our analysis, and also show that even when we have a descent direction, the approximation tends to only work well in early stages of training. Whenever using approximations their behavior should be well-understood. This work is a step towards understanding of gradient approximations in differentiable optimization. Figure 4: Learning curves (top) and corresponding gradient cosine similarity (bottom) for optimal transport experiments. Figure 3: Learning curves (top) and corresponding gradient cosine similarity (bottom) for eigen decomposition experiments. For (a) the loss is applied to all eigenvectors; for (b)–(d) it is only applied to the eigenvector corresponding to the largest eigenvalue.
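As a minimal stand-alone illustration of the unit-sphere example from Section 3.1 (Eqs. 6-7), the following sketch (ours, not the authors' released code) numerically confirms that the constraint-ignoring approximation always yields a descent direction for this problem.

```python
import numpy as np

# Numerical check of the unit-sphere example: the exact derivative is
# Dy(x) = (I - y y^T)/||x|| and the constraint-ignoring approximation is
# Dy(x) ~= I (Eq. 7). We verify g^T g_hat >= 0 for random inputs and incoming
# loss gradients, i.e. the approximate backward pass gives a descent direction.

rng = np.random.default_rng(0)
m = 5
cosines = []
for _ in range(1000):
    x = rng.normal(size=m)
    v = rng.normal(size=m)                  # incoming gradient dL/dy
    y = x / np.linalg.norm(x)               # solution of the projection problem

    Dy = (np.eye(m) - np.outer(y, y)) / np.linalg.norm(x)  # exact derivative
    g = Dy @ v                              # exact dL/dx (Dy is symmetric)
    g_hat = v                               # approximate dL/dx (Dy ~= I)

    assert g @ g_hat >= -1e-12              # descent direction, as predicted
    cosines.append(g @ g_hat / (np.linalg.norm(g) * np.linalg.norm(g_hat) + 1e-12))

print(f"min cos(g, g_hat) = {min(cosines):.3f}, mean = {np.mean(cosines):.3f}")
```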
2310.09085
Low-energy matrix elements of heavy-quark currents
In QCD at energies well below a heavy-quark threshold, the heavy-quark vector current can be represented via local operators made of the lighter quarks and of the gluon fields. We extract the leading perturbative matching coefficients for the two most important sets of operators from known results. As an application, we analytically determine the ${\rm O}(\alpha_s^3)m_c^2/m_b^2$ effect of the bottom quark current on the $R(s)$ ratio below the bottom but above the charm threshold. For the low-energy representation of the charm quark current, the two most important operators are given by the total divergence of dimension-six gluonic operators. We argue that the charm magnetic moment of the nucleon is effectively measuring the forward matrix elements of these gluonic operators and predict the corresponding bottom magnetic moment. Similarly, the contribution of the charm current to $R(s\approx 1\,{\rm GeV}^2)$, which is associated with quark-disconnected diagrams, is dominantly determined by the decay constants of the $\omega$ and $\phi$ mesons with respect to the two gluonic operators.
Harvey B. Meyer
2023-10-13T13:11:19Z
http://arxiv.org/abs/2310.09085v2
# Low-energy matrix elements of heavy-quark currents ###### Abstract In QCD at energies well below a heavy-quark threshold, the heavy-quark vector current can be represented via local operators made of the lighter quarks and of the gluon fields. We extract the leading perturbative matching coefficients for the two most important sets of operators from known results. As an application, we analytically determine the \(\mathrm{O}(\alpha_{s}^{3})m_{c}^{2}/m_{b}^{2}\) effect of the bottom quark current on the \(R(s)\) ratio below the bottom but above the charm threshold. For the low-energy representation of the charm quark current, the two most important operators are given by the total divergence of dimension-six gluonic operators. We argue that the charm magnetic moment of the nucleon is effectively measuring the forward matrix elements of these gluonic operators and predict the corresponding bottom magnetic moment. Similarly, the contribution of the charm current to \(R(s\approx 1\,\mathrm{GeV}^{2})\), which is associated with quark-disconnected diagrams, is dominantly determined by the decay constants of the \(\omega\) and \(\phi\) mesons with respect to the two gluonic operators. + Footnote †: preprint: MITP-23-056 ## I Introduction Imagine a world of strong and electromagnetic interactions in which the up, down and strange quarks are electrically neutral. What would be the size of the magnetic moment of the proton? With what signal strength would the \(\omega\) and \(\phi\) mesons show up in the famous ratio of cross-sections \[R(s)=\frac{\sigma(e^{+}e^{-}\to\mathrm{hadrons})}{\sigma(e^{+}e^{-}\to\mu^{+} \mu^{-})}\qquad? \tag{1}\] More generally, what form would \(R(s)\) take below the \(J/\psi\) resonance? These are the sorts of questions we are after in this paper. These questions may at first seem to be of purely academic interest. However, there are a few reasons to pay attention to them. One is that the observables in the aforementioned world can be isolated and computed rigorously in lattice QCD, since it is straightforward in that framework to keep the \((u,d,s)\) electric charges'switched off'. In particular, a first calculation of the nucleon charm magnetic moment has appeared [1], finding a negative value on the order of \(10^{-3}\) in units of the nuclear magneton. In the context of high-precision calculations of the hadronic vacuum polarisation (HVP), the sub-threshold effects due to the heavy quarks are associated with the quark-disconnected diagrams involving at least one heavy-quark loop. The charm disconnected diagrams have been reported to be very small [2] in a calculation of the hadronic vacuum polarisation contribution to the muon \((g-2)\) performed on a coarse lattice: less than one percent of the \((u,d,s)\) disconnected contribution. However, the charm disconnected loops are bound to be less suppressed at somewhat higher virtualities, in the vacuum polarisation or in the closely related running of the weak mixing angle [3; 4]. In any case, theoretical predictions for the charm-quark contributions can be compared unambiguously with lattice QCD calculations. A second motivation is that in the process of deriving these theoretical predictions, one gains insight into what matrix elements in the low-energy effective theory the heavy-quark contribution is really picking up. By the same token, the predictions can fairly straightforwardly be extended to the bottom quark, which is not easily handled dynamically in lattice QCD due to its large mass. 
A third, much longer-term motivation, is that the proton magnetic moment \(\mu_{p}\) is known experimentally to a precision of \(0.3\,\mathrm{ppb}\)[5]. In principle, it could be used to search for new physics, as is done for the anomalous magnetic moment of the muon \(a_{\mu}=(g-2)_{\mu}/2\), if the Standard Model prediction could be made significantly more precise. The generic sensitivity to heavy degrees of freedom is enhanced due to the heavier mass scale of the proton. Since its magnetic moment is a non-perturbative quantity from the outset, and not just starting at O(\(\alpha^{2}\)) as for \(a_{\mu}\), the hadronic uncertainties enter at O(1), making the theoretical task enormously harder. For instance, a recent lattice QCD calculation [6] achieved a precision of \(2.4\%\) on the magnetic moment of the proton. Nevertheless, from this point of view one might be curious to know whether the current experimental precision on the proton magnetic moment already makes it sensitive to the bottom, or even to the top quark contributions to the electromagnetic current. The questions formulated in the introductory paragraph can be addressed by 'integrating out' the heavy quark. In a low-energy matrix element, the heavy quark-antiquark pair coupling to the external photon annihilates in a short-distance process. Via a three-gluon intermediate state, it can act as a light-quark bilinear. The total divergence of the antisymmetric tensor current is the lowest-dimensional operator that has the right symmetry properties. However, it is a helicity-flip operator, and therefore cannot contribute if the light quarks are actually massless. This leads to an additional factor of the light-quark masses appearing in this operator, making it effectively of dimension five, and therefore suppressed by \(1/m_{Q}^{2}\). This operator is considered in section II. Looking for further, not chirally suppressed contributions, we note that it is not possible to construct a low-energy effective operator with just two field strength tensors, leading us to consider (dimension-seven) operators built out of three field strength tensors in section III. Operators such as \(m_{f}\partial_{\nu}(\bar{\psi}_{f}G_{\mu\nu}\psi_{f})\) are of the same dimension, but chirally suppressed, and we will therefore not consider them. Thus the low-energy representation of the heavy-quark vector current is power-suppressed. In this paper we will therefore be discussing effects that are very small, but this must be weighed against the fact that matrix elements of the electromagnetic current are known to very high precision and are continuously being improved upon. It is worth contrasting the low-energy representation of a heavy-quark _vector_ current with that of the corresponding _axial-vector_ current, which has been worked out long ago [7]. The leading result for the mapping between renormalisation-group invariant currents reads [8] \[(\bar{Q}\gamma^{\mu}\gamma^{5}Q)_{\rm RGI}\stackrel{{ m_{Q}\to \infty}}{{\longrightarrow}}-\frac{6}{33-2N_{f}}\Big{(}\frac{\alpha_{s}(m_{Q})}{ \pi}\Big{)}^{2}\sum_{f=1}^{N_{f}}(\bar{q}_{f}\gamma^{\mu}\gamma^{5}q_{f})_{\rm RGI}. \tag{2}\] That is, the low-energy matrix elements of the axial current of a heavy quark are asymptotically suppressed by \(1/(\log(m_{Q}^{2}/\Lambda_{\rm QCD}^{2}))^{2}\), as opposed to a power law. 
A further example is the Lagrangian mass term of a heavy quark, \(m_{Q}\bar{Q}Q\), which does not decouple as \(m_{Q}\to\infty\), but rather contributes to the trace anomaly of the low-energy effective theory [9]. In section II, we work out the matching coefficient of the light-quark tensor currents to the heavy-quark vector current. In section III, we work out the matching coefficient of the three-gluon operators to the heavy-quark vector current. We turn to physics applications in section IV and conclude in section V. ## II The tensor currents of the light quarks Consider QED with two types of leptons \(\ell\) and \(L\), respectively of masses \(m\) and \(M\). The latter is assumed to be far greater than the former, \(m\ll M\). At scales well below \(M\), and to leading order, the effective field theory is QED with the lepton \(\ell\) only. We refer the reader to the lecture notes [10] for more details. But what are the low-energy matrix elements of the heavy-lepton current \(\bar{L}\gamma^{\mu}L\)? Taking into account that it is a conserved current with vanishing forward matrix elements on a light-lepton state, the operator \(\partial_{\nu}(\bar{\ell}[\gamma^{\mu},\gamma^{\nu}]\ell)\) is the lowest-dimensional candidate1 In the chiral limit, this operator is however chirality-flipping, which \(\bar{L}\gamma^{\mu}L\) is not; therefore, we can anticipate that our operator will be accompanied by a factor \(m\). Footnote 1: Note that we use a Minkowski-space notation with ‘mostly minus’ metric and \(\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}\). A transcription into Euclidean notation is given at the beginning of section IV. The matching coefficient can be obtained from the following consideration. The contribution to the anomalous magnetic moment of lepton \(\ell\) of the current \(-i\partial_{\nu}(\bar{\ell}[\gamma^{\mu},\gamma^{\nu}]\ell)\) is \[\Delta F_{2}(q^{2}=0)=-4m, \tag{3}\] while the contribution to \(F_{1}(0)\) vanishes. On the other hand, the direct (three-loop) calculation of the anomalous magnetic moment contribution of the current \(\bar{L}\gamma^{\mu}L\) yields [11; 12] \[\Delta F_{2}(0) = c_{2}\left(\frac{\alpha}{\pi}\right)^{3}\,\left(\frac{m}{M} \right)^{2}+{\rm O}((m/M)^{4}), \tag{4}\] \[c_{2} = \frac{3}{2}\zeta(3)-\frac{19}{16}. \tag{5}\] Hence we have the following mapping \[\bar{L}\gamma^{\mu}L\longrightarrow d_{2}^{\rm QED}(M,m,\alpha)(-i )\partial_{\nu}(\bar{\ell}[\gamma^{\mu},\gamma^{\nu}]\ell), \tag{6}\] \[d_{2}^{\rm QED}(M,m,\alpha)=-\frac{c_{2}}{4}\frac{m}{M^{2}}\, \left(\frac{\alpha}{\pi}\right)^{3}, \tag{7}\] Figure 1: Left: representative diagram of the light-by-light contribution to the \(g-2\) of an electron via a muon loop. Right: Effective interaction between the electron and the photon induced by the muon loop, described by the total divergence of the antisymmetric tensor current. of the current in the complete theory into the low-energy EFT. The two-point function \[f^{\mu\nu}(x;M,m)\equiv\langle\bar{L}(x)\gamma^{\mu}L(x)\;\bar{\ell} (0)\gamma^{\nu}\ell(0)\rangle \tag{8}\] \[\to d_{2}^{\rm QED}(M,m,\alpha)\langle(-i)\partial_{\lambda}(\bar{ \ell}(x)[\gamma^{\mu},\gamma_{\lambda}]\ell(x))\;\bar{\ell}(0)\gamma^{\nu}\ell (0)\rangle\] can thus be evaluated within QED with a single light lepton for \(M|x|\gg 1\). Similar considerations apply to QCD with \(N_{f}\) 'light' quark flavours and one additional heavy flavour \(Q\). The heavy quark current \(\bar{Q}\gamma_{\mu}Q\) can be matched to an operator made of fields in the 'light' sector. 
The matching coefficient is the same as in QED, up to a colour factor. To determine this factor, consider the two-point function \[\langle\bar{Q}(x)\gamma^{\mu}Q(x)\;\bar{q}(0)\gamma^{\nu}q(0)\rangle=\frac{d^{ abc}}{4}\frac{d^{abc}}{4}f^{\mu\nu}(x;m_{c},m_{s}), \tag{9}\] where \(d^{abc}=2\,{\rm Tr}(\{T^{a},T^{b}\}T^{c})\), \({\rm Tr}(T^{a}T^{b})=\frac{\delta^{ab}}{2}\) and we have indicated the colour factor explicitly. For the gauge group SU(\(N\)), \(d^{abc}d^{abc}=(N^{2}-1)(N^{2}-4)/N\). and the tensor \(f_{\mu\nu}\) is the same as in Eq. (8), to leading perturbative order. In the low-energy EFT, i.e. QCD with \(N_{f}\) quark flavours, the two-point function of Eq. (9) is represented by the matching coefficient times \(\langle-i\partial_{\lambda}(\bar{q}(x)[\gamma^{\mu},\gamma^{\lambda}]q(x))\; \bar{q}(0)\gamma^{\nu}q(0)\rangle\). At leading non-trivial order in perturbation theory, the latter two-point function is simply \(N\) times the low-energy two-point function in the second line of Eq. (8). From this comparison, one obtains the colour factor and concludes that \[\bar{Q}\gamma^{\mu}Q\longrightarrow\sum_{f=1}^{N_{f}}d_{2}(M,m_{f },\alpha_{s})\,(-i)\partial_{\nu}(\bar{q}_{f}[\gamma^{\mu},\gamma^{\nu}]q_{f}),\] \[d_{2}(M,m,z)=\frac{(N^{2}-1)(N^{2}-4)}{16N^{2}}\;d_{2}^{\rm QED }(M,m,z)\stackrel{{ N=3}}{{=}}\frac{5}{18}\;d_{2}^{\rm QED}(M,m,z). \tag{10}\] Clearly, due to the chiral suppression, the largest contribution comes from the heaviest quark still considered as 'light'; typically, this would be the strange quark. In terms of large-\(N\) counting, \(d_{2}\) is of order \(\lambda_{H}^{3}/N\), where \(\lambda_{H}\equiv g_{s}^{2}N\) is the 't Hooft coupling. Finally, the scale at which \(\alpha_{s}\) should be evaluated in \(d_{2}\) is of order \(2M\). The tensor operator has an anomalous dimension, therefore it should be evolved from the scale \(M\) to the standard renormalisation scale at which it is defined, typically \(2\,{\rm GeV}\) in the \(\overline{\rm MS}\) scheme. The anomalous dimension is known to four-loop order in the \(\overline{\rm MS}\) and in the RI\({}^{\prime}\) schemes [13], and the non-perturbative renormalisation of the tensor current has also been determined very recently [14]. To leading order, its scale evolution reads \[{\cal O}_{T}(\mu)={\cal O}_{T}(2M)\left(\frac{\alpha_{s}(\mu)}{\alpha_{s}(2M)} \right)^{C_{F}/\beta_{0}},\quad C_{F}=\tfrac{N^{2}-1}{2N},\qquad\beta_{0}= \tfrac{11}{3}N-\tfrac{2}{3}N_{f}. \tag{11}\] Since we will mostly be interested in the charm quark, for which \(2M\) is not very different from \(\mu=2\,{\rm GeV}\), and since we are only making order-of-magnitude estimates, we will ignore this effect in the following numerical estimates. It should however be taken into account for making asymptotic statements on the \(M\) dependence of low-energy matrix elements of the heavy-quark current, and when estimating the effects of the bottom quark. One can now answer questions such as 'What would be the magnetic moment of the nucleon if the photon coupled only to an asymptotically heavy quark of mass \(M\)?' One finds \[\mu_{p,n}^{Q}\equiv G_{M}^{Q}(0)=F_{2}^{Q}(0)=-4m_{N}\sum_{f=1}^{N_{f}}d_{2}(M,m_ {f},\alpha_{s})\;g_{T}^{f}, \tag{12}\] where \(g_{T}^{f}\) is the tensor charge of the nucleon with respect to flavour \(f\), \[\langle N(p,s)|\bar{q}_{f}[\gamma_{\mu},\gamma_{\nu}]q_{f}|N(p,s)\rangle=g_{T} ^{f}\,\bar{U}(p,s)[\gamma_{\mu},\gamma_{\nu}]U(p,s). 
\tag{13}\] Although the quark-mass factor gives an enhanced weight to the strange quark, the strange tensor charge is expected to be much smaller than the sum of the up and down tensor charges. We return to numerical estimates for the charm-current contribution in section IV.1. A simple prediction of the considerations above is that the heavy-quark current contribution to the magnetic moments of the hyperons occuring via the tensor currents is expected to be much larger than in the nucleon. However, we shall see that the three-gluon operator actually dominates for the interesting cases of the charm and bottom quarks. ## III Gluonic operators and the Euler-Heisenberg Lagrangian In this section, we derive the low-energy representation of the heavy-quark current (a) in the case that the \(N_{f}\) light quarks are massless - in which case the tensor current does not contribute, or (b) in the \(N_{f}=0\) case, i.e. for the case that the low-energy is pure gluodynamics. The most efficient way to proceed is to write an effective Lagrangian for the coupling of the gluon fields to an external photon. This was the method used in [15] to derive the effective Lagrangian for the low-energy (\(\gamma\gamma gg\)) interaction induced by a heavy-quark loop connecting two electromagnetic vertices. Here we are treating the case of a heavy-quark loop emanating from a single electromagnetic vertex and inducing a \(\gamma ggg\) coupling. This case was considered by Combridge [16] in the perturbative regime. Both cases are closely related to the classic Euler-Heisenberg Lagrangian for the pure QED case [10; 17]. First, some notational conventions. The free fermion action is \(S=\int d^{4}x\;\bar{\psi}(i\partial_{\mu}\gamma^{\mu}-m)\psi\). The gluon action reads \[S^{(g)}=-\tfrac{1}{2}\int d^{4}x\;\operatorname{Tr}\{G_{\mu\nu}(x)G^{\mu\nu}( x)\}=-\tfrac{1}{4}\int d^{4}x\;G_{\mu\nu}^{a}(x)G^{\mu\nu,a}(x) \tag{14}\] with \(G_{\mu\nu}(x)=G_{\mu\nu}^{a}(x)T^{a}=(\partial_{\mu}A_{\nu}^{a}-\partial_{\nu }A_{\mu}^{a}+gf^{abc}A_{\mu}^{b}A_{\nu}^{c})T^{a}\) and \([T^{a},T^{b}]=if^{abc}T^{c}\). Following the conventions of [18], the interactions between the gauge fields and the fermions are written \[S^{(Q\gamma)}+S^{(Qg)}=\int d^{4}x\,\Big{(}-e_{Q}\,A_{\mu}\,\bar{Q}\gamma^{\mu }Q+g_{s}\,A_{\mu}^{a}\,\bar{Q}\gamma^{\mu}\,T^{a}Q\Big{)} \tag{15}\] with \(e_{Q}\) the electric charge of the heavy quark, for instance \(e_{Q}=-\frac{1}{3}|e|\) with \(\alpha=e^{2}/(4\pi)\) for the bottom quark and \(g_{s}\) the (positive) strong coupling constant. The interaction of the light quarks with the gluon field then takes the same form as in \(S^{(Qg)}\). 
The two possible terms in the action describing the interaction with a hypothetical photon coupling only to a heavy quark of electric charge \(e_{Q}\) are \[S_{\rm eff}^{(Q\gamma)} = \frac{-e_{Q}}{2}\int d^{4}x\;F^{\mu\nu}(x)\biggl{(}h_{1}\,{\rm Tr} \Bigl{(}G_{\mu\nu}(x)\,G_{\alpha\beta}(x)G^{\alpha\beta}(x)\Bigr{)}\] \[\qquad\qquad\qquad\qquad\qquad+h_{2}\,{\rm Tr}\Bigl{(}{ \frac{1}{2}}\{G_{\mu\alpha}(x),G_{\nu\beta}(x)\}G^{\alpha\beta}(x) \Bigr{)}\biggr{)},\] \[= -e_{Q}\frac{d^{abc}}{8}\int d^{4}x\;F^{\mu\nu}(x)\biggl{(}h_{1}\, G_{\alpha\beta}^{a}(x)G^{\alpha\beta,b}(x)G_{\mu\nu}^{c}(x)+h_{2}\,G_{\mu \alpha}^{a}(x)G_{\nu\beta}^{b}(x)G^{\alpha\beta,c}(x)\biggr{)}\] \[= -e_{Q}\int d^{4}x\;A^{\mu}(x)\;\biggl{(}h_{1}\partial^{\nu}\,{ \rm Tr}\Bigl{(}G_{\mu\nu}(x)\,G_{\alpha\beta}(x)G^{\alpha\beta}(x)\Bigr{)} \tag{18}\] \[\qquad\qquad\qquad\qquad+h_{2}\partial^{\nu}\,{\rm Tr}\Bigl{(}{ \frac{1}{2}}\{G_{\mu\alpha}(x),G_{\nu\beta}(x)\}G^{\alpha\beta}(x) \Bigr{)}\biggr{)},\] with \(\{A,B\}=AB+BA\) denoting the anticommutator. The heavy-quark loop diagram enabling a coupling between a photon and three gluons, as well as its effective representation in the low-energy effective theory, are shown in Fig. 2. To determine the matching coefficients \(h_{1}\) and \(h_{2}\), we consider the invariant amplitude \({\cal M}\) for the scattering process of a photon and a gluon into two gluons, \[\gamma(p,\sigma)+g(k,\tau,a)\;\longrightarrow\;g(p^{\prime},\sigma^{\prime},b) +g(k^{\prime},\tau^{\prime},c), \tag{19}\] which we write \({\cal M}=-e_{Q}(h_{1}{\cal M}^{(1)}+h_{2}{\cal M}^{(2)})\) in the low-energy effective field theory. The analogous \(\gamma\gamma\to\gamma\gamma\) light-by-light scattering amplitudes at low energies are for instance given in the tutorial [19]. These matching coefficients can be inferred from the well-known Euler-Heisenberg (EH) result for the pure QED case by applying the appropriate color and combinatorial factors. We thus start off from the EH result for the case of a heavy-lepton loop of mass \(M\)[20; 21; 22], \[h_{1}^{\rm QED}(M,e_{Q}) = \frac{1}{18}\frac{e_{Q}^{3}}{16\pi^{2}\,M^{4}}, \tag{20}\] \[h_{2}^{\rm QED}(M,e_{Q}) = -\frac{7}{45}\frac{e_{Q}^{3}}{16\pi^{2}\,M^{4}}. \tag{21}\] Let us first inspect the one-loop calculation of the amplitude \({\cal M}\) in the complete, \(N_{f}+1\) flavour theory with regard to the differences between the \(\gamma\gamma\to\gamma\gamma\) and \(\gamma g\to gg\) cases. Figure 2: Left: the heavy-quark loop inducing a coupling between a photon and three gluons. Right: Effective interaction between the photon and the three gluons, described by the effective action Eq. (16). First, the factor \(\frac{d^{abc}}{4}\) appears in the latter case, as we have seen in section II. The only other difference is that \(e_{Q}\) must be replaced by \((-g_{s})\). The sign stems from the opposite sign of the \(\bar{Q}Qg\) interaction term in \(S^{(Qg)}\) as compared to the \(\bar{Q}Q\gamma\) term in \(S^{(Q\gamma)}\). When computing the amplitude \(\gamma g\to gg\) in the low-energy effective field theory, there are two differences with respect to the \(\gamma\gamma\to\gamma\gamma\) case. First, the factor \(\frac{d^{abc}}{4}\) appears, as is clear from comparing Eq. (16) applied to QED and Eq. (17) applied to QCD. However, precisely the same factor also appears in the calculation of the quark-loop diagram in the \((N_{f}+1)\) flavour theory, as described in the previous paragraph, so that it does not affect the matching calculation. 
Secondly, there is a combinatorial effect. In Eq. (16) applied to QED, each field strength tensor plays the same role: the first operator consists of two contracted field strength tensors, the second of a single circular chain of such tensors. Now, in the calculation of the \(\gamma g\to gg\) amplitude, one specific field strength tensor is necessarily annihilating the initial-state photon, whereas in the QED case, it can be contracted with any one of the four photons involved in the amplitude. This results in the \(\gamma\gamma\to\gamma\gamma\) amplitudes \({\cal M}^{(1)}\) and \({\cal M}^{(2)}\) being four times larger than the corresponding expressions multiplying \(\frac{d^{abc}}{4}\) in the \(\gamma g\to gg\) case. The upshot is that, in order to compensate the reduced combinatorial factor, the matching coefficients must be four times larger in the \(\gamma g\to gg\) case. We thus conclude \[h_{1}(M,g_{s}) = -\frac{2}{9}\frac{g_{s}^{3}}{16\pi^{2}\,M^{4}}, \tag{22}\] \[h_{2}(M,g_{s}) = +\frac{28}{45}\frac{g_{s}^{3}}{16\pi^{2}\,M^{4}}. \tag{23}\] In summary, using Eq. (18)) we have found the following low-energy representation of the heavy-quark current, \[\bar{Q}\gamma^{\mu}Q \longrightarrow h_{1}(M,g_{s})\;\partial_{\nu}{\rm Tr}\Big{(}G^{\mu\nu}(x)\,G_{ \alpha\beta}(x)G^{\alpha\beta}(x)\Big{)}\] \[+h_{2}(M,g_{s})\;\partial_{\nu}{\rm Tr}\Big{(}\tfrac{1}{2}\{G^{ \mu\alpha}(x),G^{\nu\beta}(x)\}G_{\alpha\beta}(x)\Big{)}.\] To repeat, these two operators are parametrically the leading ones in the limit where the light-quark masses vanish, or indeed for the \(N_{f}=0\) (pure gauge) theory. Rewriting the gluonic operators in terms of the rescaled non-Abelian gauge field having for action \(S=\int d^{4}x(-\frac{1}{2g^{2}}){\rm Tr}\{{\cal G}_{\mu\nu}{\cal G}^{\mu\nu}\}\), the computed matching coefficients formally become independent of the gauge coupling. They are also independent of \(N\). We expect the gluonic operators \({\rm Tr}\{{\cal GGG}\}\) to have hadronic matrix elements of order unity, similar to the \({\rm Tr}({\cal G}_{\mu\nu}{\cal G}_{\mu\nu})\) 'trace anomaly' operator having an order-unity matrix element on the nucleon in units of the nucleon mass [23]. Thus, while the matching coefficients of the gluonic operators are suppressed by two additional powers of \(1/M\) relative to the tensor current of the light quarks, they are neither suppressed by light quark masses, nor by a factor of \(1/N\), nor by \((\alpha_{s}(M)/\pi)^{3}\). ## IV Physics applications Since some of our applications are closely related to lattice QCD calculations, we provide the low-energy representation of the heavy-quark current in Euclidean notation. In Euclidean space, we use hermitian Dirac matrices, \(\{\gamma_{\mu}^{\mbox{\tiny E}},\gamma_{\nu}^{\mbox{\tiny E}}\}=2\delta_{\mu\nu}\), so that the action for a free fermion reads \(S_{\mbox{\tiny E}}=\int d^{4}x\ \bar{\psi}(\gamma_{\mu}^{\mbox{\tiny E}}\,\partial_{ \mu}+m)\psi\), and the Euclidean action describing the heavy quark's interactions with the gauge fields takes the form \[S_{\mbox{\tiny E}}^{(Q\gamma)}+S_{\mbox{\tiny E}}^{(Qg)}=\int d^{4}x\,\Big{(}- ie_{Q}\,A_{\mu}\,\bar{Q}\gamma_{\mu}^{\mbox{\tiny E}}\,Q+ig\,A_{\mu}^{a}\,\bar{Q} \gamma_{\mu}^{\mbox{\tiny E}}\,T^{a}Q\Big{)}. 
\tag{25}\] We then have the operator mappings \[\bar{Q}\gamma_{\mu}^{\mbox{\tiny E}}Q\longrightarrow\sum_{f=1}^{N_{f}}d_{2}(M, m_{f},\alpha_{s})\;\partial_{\nu}(\bar{q}_{f}[\gamma_{\mu}^{\mbox{\tiny E }},\gamma_{\nu}^{\mbox{\tiny E}}]q_{f}), \tag{26}\] and \[\bar{Q}\gamma_{\mu}^{\mbox{\tiny E}}Q \longrightarrow h_{1}(M,g_{s})\;i\,\partial_{\nu}\mbox{Tr}\Big{(}G_{\mu\nu}(x) \,G_{\alpha\beta}(x)G_{\alpha\beta}(x)\Big{)}\] \[+h_{2}(M,g_{s})\;i\,\partial_{\nu}\mbox{Tr}\Big{(}{ \frac{1}{2}}\{G_{\mu\alpha}(x),G_{\nu\beta}(x)\}G_{\alpha\beta}(x) \Big{)}.\] The expressions for the coefficients \(d_{2}\), \(h_{1}\) and \(h_{2}\) are given in Eqs. (10, 22, 23). ### Charm and bottom magnetic moments of the proton In this section, we estimate the size of the charm magnetic moment of the nucleon in units of the nuclear magneton, \(\mu_{p,n}^{c}\). Consider first the contribution of the tensor currents of the light quarks in the low-energy EFT. For the following numerical estimates, we use the tensor charges given in Ref. [24] in the \(\overline{\mbox{MS}}\) scheme at \(2\,\)GeV, which have been computed in lattice QCD with dynamical up, down, strange and charm quarks; see also the recent results in Ref. [25]. \[\mu_{p,n}^{c}\Big{|}_{\rm tensor} = \frac{5}{18}\big{(}{\frac{3}{2}}\zeta(3)-{ \frac{19}{16}}\big{)}\,\Big{(}\frac{\alpha_{s}}{\pi}\Big{)}^{3}\,\frac{m_{N}}{ m_{c}^{2}}(m_{u}\,g_{T}^{u}+m_{d}\,g_{T}^{d}+m_{s}\,g_{T}^{s}) \tag{29}\] \[\simeq 9\times 10^{-8}(m_{u}\,g_{T}^{u}+m_{d}\,g_{T}^{d}+m_{s}\,g_{T}^{s} )\,/\,{\rm MeV}\simeq(3\pm 3)\times 10^{-8}.\] We have used \(m_{c}=1.27\,\)GeV and the light-quark masses from the Particle Data Group [26] (which are in the \(\overline{\mbox{MS}}\) scheme at \(2\,\)GeV), and set \(\alpha_{s}=0.3\). The uncertainty is large, due to a significant cancellation between the up and down quark contributions; the strange contribution is somewhat smaller, but not negligible, due to the \(m_{s}\) factor. In any case, the entire tensor contribution is minuscule. The same conclusion applies _a fortiori_ to the bottom magnetic moment of the nucleon, \(\mu_{p,n}^{b}\). We now turn to the O(\(1/m_{c}^{4}\)) contribution of the two gluonic operators. For the second gluonic operator, which has the larger matching coefficient, we obtain a contribution on the order of \[\mu_{p,n}^{c}\Big{|}_{\rm glue}=\frac{28}{45}\frac{1}{16\pi^{2}m_{c}^{4}} \cdot(\xi\,m_{N}^{4})=1.1\times 10^{-3}\xi, \tag{30}\] where \(\xi\) parametrizes the forward nucleon matrix of \({\frac{1}{2}}\mbox{Tr}(\{{\cal G}_{\mu\alpha},{\cal G}_{\nu\beta}\}{\cal G}^{ \alpha\beta})\). We expect \(|\xi|\) to be of order unity, in analogy with the matrix elements of the operator \(\mbox{Tr}({\cal G}_{\mu\nu}{\cal G}_{\mu\nu})\)[23]. Since the strangeness magnetic moment of the nucleon \(\mu_{p,n}^{s}\) is already negative (and on the order of \(-0.02\)[27; 28] with about 25% uncertainty), one might expect the same for the contribution of heavier quarks. The order of magnitude is consistent with the findings of Ref. [1] in lattice QCD, \[\mu_{p,n}^{c}=(-1.27\pm 0.38_{\rm stat}\pm 0.05_{\rm syst})\times 10^{-3}. \tag{31}\] Using the result of the latter publication, we can predict the order of magnitude of the bottom magnetic moment to be \[\mu_{p,n}^{b}\simeq\left(\frac{m_{c}}{m_{b}}\right)^{4}\mu_{p,n}^{c}\approx-1 \times 10^{-5}, \tag{32}\] while the top-quark contribution is at the level of \(-4\times 10^{-12}\). The physical magnetic moment of the proton, \(\mu_{p}\simeq 2.793\), is known to \(0.3\,\)ppb [5]. 
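As a quick arithmetic cross-check of the estimates above, the short Python sketch below (ours, not part of the original analysis) reproduces the prefactor in Eq. (29), the gluonic estimate of Eq. (30), and the rescaling in Eq. (32). The inputs are the values quoted in the text, plus an assumed bottom mass of roughly 4.18 GeV.

```python
import math

# Arithmetic cross-check of the order-of-magnitude estimates in Eqs. (29), (30)
# and (32). Inputs follow the text: m_c = 1.27 GeV, m_N = 0.938 GeV, alpha_s = 0.3;
# m_b ~ 4.18 GeV is an assumed (PDG-like) value not quoted in the text.

zeta3 = 1.2020569
alpha_s = 0.3
m_N, m_c, m_b = 0.938, 1.27, 4.18   # GeV

# Eq. (29): prefactor multiplying (m_u g_T^u + m_d g_T^d + m_s g_T^s), in MeV^-1
c2 = 1.5 * zeta3 - 19.0 / 16.0
pref_tensor = (5.0 / 18.0) * c2 * (alpha_s / math.pi) ** 3 * m_N / m_c**2  # GeV^-1
print(f"tensor prefactor ~ {pref_tensor * 1e-3:.1e} / MeV")   # ~ 9e-8 / MeV, cf. Eq. (29)

# Eq. (30): gluonic contribution per unit xi
mu_glue = (28.0 / 45.0) / (16.0 * math.pi**2 * m_c**4) * m_N**4
print(f"mu_c|glue ~ {mu_glue:.1e} * xi")                      # order 1e-3 * xi, cf. Eq. (30)

# Eq. (32): bottom moment from rescaling the lattice charm result of Eq. (31)
mu_c_lattice = -1.27e-3
print(f"mu_b ~ {(m_c / m_b) ** 4 * mu_c_lattice:.1e}")        # ~ -1e-5, cf. Eq. (32)
```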
Thus the present measurement is sensitive to the coupling of the photon to the bottom quark, but not yet to its coupling to the top quark. It is also worth noting that the charm contribution to the average anomalous magnetic moment of proton and neutron, \((\mu_{p}+\mu_{n}-1)/2\simeq-0.060\), can be estimated as \((+2/3)\) times Eq. (31), yielding a one to two percent contribution. Experimentally, the average nucleon anomalous magnetic moment is known at the level of \(3.7\,\)ppm [5], which is still precise enough to resolve the bottom current contribution, even taking into account the latter's small charge factor of \(-1/3\). ### Vacuum polarisation: heavy-quark contributions well below their threshold Consider the two-point correlation function of two quark currents in the Euclidean time-momentum representation, \[G^{f,f^{\prime}}(x_{0})=-\int d^{3}x\;\langle(\bar{q}_{f}\gamma_{1}^{\rm E}q_{ f})(x)\;(\bar{q}_{f^{\prime}}\gamma_{1}^{\rm E}q_{f^{\prime}})(0)\rangle=\frac{1}{ 12\pi^{2}}\int_{0}^{\infty}ds\,s\,R^{f,f^{\prime}}(s)\,\frac{e^{-\sqrt{s}|x_{0 }|}}{2\sqrt{s}}. \tag{33}\] We have indicated the spectral representation of the correlator [29], the spectral function being normalized as the well-known \(R\) ratio, such that \[R(s)=\sum_{f,f^{\prime}=1}^{N_{f}}\mathcal{Q}_{f}\mathcal{Q}_{f^{\prime}}R^{f,f^{\prime}}(s) \tag{34}\] with \(\mathcal{Q}_{f}\in\{\frac{2}{3},-\frac{1}{3}\}\) the quark charges. We begin with the case of two distinct quark flavours \(f\) and \(f^{\prime}\), the former being the more massive one. In that case, the correlator receives exclusively quark-disconnected contributions. #### iii.2.1 The tensor current contribution: perturbative regime In \(G^{b,c}(x_{0})\), at distances much greater than \(M_{\Upsilon}^{-1}\), we replace the bottom current by its low-energy representation in terms of the charm-quark tensor current according to Eq. (10). Due to the relatively large charm mass, no chiral suppression of this contribution takes place. Evaluating the charm vector-tensor correlator to lowest order in perturbation theory, we obtain \[R^{b,c}(s)=\frac{3}{64}\,\Big{(}3\zeta(3)-\tfrac{{}_{19}}{8}\Big{)}\frac{(N^{2}-1 )(N^{2}-4)}{N}\,\left(\frac{\alpha_{s}(M)}{\pi}\right)^{3}\frac{m_{c}^{2}}{m_{b }^{2}}\,\sqrt{1-\frac{4m_{c}^{2}}{s}}. \tag{35}\] In an expansion in \(1/m_{b}^{2}\), this expression is the leading perturbative contribution of the bottom current to the spectral function above the charm threshold, but well below the bottom threshold. We believe this result to be new. The quark-disconnected contribution has been computed for massless quarks, in which case it is known to O(\(\alpha_{s}^{4}\)) included [30; 31]. #### ii.1.2 The tensor current contribution: hadronic regime Consider the correlator \(G^{c,s}(x_{0})\) at distances much greater than \(M_{J/\psi}^{-1}\). In this subsection, we replace the charm current by its low-energy representation in terms of the strange-quark tensor current according to Eq. (10) and work out an estimate for the ratio \(G^{c,s}/G^{s,s}\). Moreover, we exploit the fact that the strange-strange correlator is dominated over a significant interval of Euclidean times by the \(\phi\) meson, of mass \(M_{\phi}\). 
Using the spectral decomposition, we then obtain, for \(x_{0}\) positive and on the order of \(1\,\)fm, \[G^{c,s}(x_{0})\simeq d_{2}(m_{c},m_{s},\alpha_{s})M_{\phi}\cdot\frac{e^{-M_{ \phi}x_{0}}}{2M_{\phi}}\,\sum_{\lambda=1}^{3}\langle 0|\bar{s}[\gamma_{1}^{ \rm E},\gamma_{0}]s|\phi,\vec{0},\lambda\rangle\ \langle\phi,\vec{0},\lambda|\bar{s}\gamma_{1}^{ \rm E}s|0\rangle. \tag{36}\] Now we insert the standard parametrization of the matrix elements of a massive vector particle, \[\langle 0|\bar{s}\gamma^{\mu}s|\phi,\vec{p},\lambda\rangle = \epsilon_{\lambda}^{\mu}\,f_{\phi}M_{\phi}, \tag{37}\] \[\langle 0|\bar{s}\tfrac{i}{2}[\gamma^{\mu},\gamma^{\nu}]s|\phi, \vec{p},\lambda\rangle = i\,(\epsilon_{\lambda}^{\mu}\,p^{\nu}-\epsilon_{\lambda}^{\nu}\,p ^{\mu})\,f_{\phi}^{\perp}. \tag{38}\] Thus we arrive at the expression \[G^{c,s}(x_{0})\simeq-d_{2}(m_{c},m_{s},\alpha_{s})(2f_{\phi}\,f_{\phi}^{\perp} \,M_{\phi}^{3})\frac{e^{-M_{\phi}x_{0}}}{2M_{\phi}}, \tag{39}\] the corresponding contribution to the spectral function reading \[\frac{R^{c,s}(s)}{12\pi^{2}}=-\left(d_{2}(m_{c},m_{s},\alpha_{s})2M_{\phi} \right)(f_{\phi}\,f_{\phi}^{\perp})\,\delta(s-M_{\phi}^{2}). \tag{40}\] On the other hand, the strangeness correlator is given by \[G^{s,s}(x_{0})\simeq f_{\phi}^{2}M_{\phi}^{2}\,\frac{e^{-M_{\phi}x_{0}}}{2M_{ \phi}}, \tag{41}\] corresponding to \[\frac{R^{s,s}(s)}{12\pi^{2}}=f_{\phi}^{2}\,\delta(s-M_{\phi}^{2}). \tag{42}\] Taking the ratio of correlators \[\frac{G^{c,s}(x_{0})}{G^{s,s}(x_{0})}\,\simeq\,-\left(d_{2}(m_{c},m_{s},\alpha_{s} )2M_{\phi}\right)\frac{f_{\phi}^{\perp}}{f_{\phi}}\simeq 5\cdot 10^{-6}\cdot\frac{f_{ \phi}^{\perp}}{f_{\phi}}\simeq 3\cdot 10^{-6}, \tag{43}\] we find a very small result. In the last step, we have assumed \(f_{\phi}^{\perp}/f_{\phi}\approx 2/3\) based on the lattice calculation [32], which was for the \(\rho\) meson, and references therein. Since the ratio (43) is very small, we turn to the role of the gluonic operators in the next subsection. For a long time, charmonium properties have been extracted on the lattice by neglecting the disconnected diagram in charm-current two-point functions. See however the dedicated study and discussion in Ref. [33], where the effect of the disconnected diagram on the extraction of the \(J/\psi\) mass could not be resolved, in contrast to the \(\eta_{c}\) channel. The effect of charm sea quarks, which is distinct from the one discussed here, has recently been addressed in [34]. Here we assess the relative importance of the disconnected diagram based on representing the charm current as a strange tensor current. We define \(G^{c,c}=G^{c,c}_{\rm conn}+G^{c,c}_{\rm disc}\) with \[G^{c,c}_{\rm conn}(x_{0}) = \int d^{3}x\;\langle{\rm Tr}\{\gamma_{1}S_{c}(x,0)\gamma_{1}S_{c }(0,x)\}\rangle, \tag{44}\] \[G^{c,c}_{\rm disc}(x_{0}) = -\int d^{3}x\;\langle{\rm Tr}\{\gamma_{1}S_{c}(x,x)\;{\rm Tr}\{ \gamma_{1}S_{c}(0,0)\}\rangle, \tag{45}\] where \(S_{c}(x,y)\) denotes the charm-quark propagator in a non-perturbative gauge field background, which is being averaged over. At distances well beyond the inverse \(J/\psi\) mass \(M_{\psi}^{-1}\), we have \[G^{c,c}_{\rm conn}(x_{0})\simeq f_{\psi}^{2}M_{\psi}^{2}\cdot\frac{e^{-M_{ \psi}x_{0}}}{2M_{\psi}}, \tag{46}\] whereas \[G^{c,c}_{\rm disc}(x_{0})\simeq d_{2}(m_{c},m_{s},\alpha_{s})^{2}\,(f_{\phi}^{ \perp})^{2}\;2M_{\phi}^{4}\cdot\frac{e^{-M_{\phi}x_{0}}}{2M_{\phi}}. 
\tag{47}\] Although the matching coefficient \(d_{2}\) is small, \(G^{c,c}_{\rm disc}\) falls off much more slowly than \(G^{c,c}_{\rm conn}(x_{0})\). With \(f_{\psi}\simeq 0.405\,\)GeV [35] and \(f_{\phi}^{\perp}\simeq 0.160\,\)GeV (an educated guess based on lattice results for \(f_{\rho}^{\perp}\)[32]), we reach the conclusion that the disconnected is about \(10\%\) of the connected at \(x_{0}\simeq 2.4\,\)fm, and of course its relative size increases proportionally to \(\exp((M_{\psi}-M_{\phi})x_{0})\) beyond that point. The effect on the effective mass (defined as the negative of the logarithmic derivative of \(G^{c,c}(x_{0})\)) reaches \(1\%\) at \(x_{0}\approx 2.2\,\)fm and increases rapidly thereafter. The \(\omega\) meson also contributes, for two reasons: one is through the matching of the charm current to the light-quark tensor current, which is however strongly chirally suppressed, and the other is via the coupling of the strange tensor current to the \(\omega\), whose size is unknown but presumably quite strongly Okubo-Zweig-Iizuka (OZI) suppressed. Asymptotically, the three-pion continuum dominates the charm correlator \(G^{c,c}(x_{0})\) in isospin-symmetric QCD. The values of \(x_{0}\) estimated above, at which the disconnected diagrams become significant, must be viewed as upper bounds, since in the next subsection we show that the representation of the charm current via gluonic operators probably yields a larger contribution. The contribution of the two gluonic operators Define the gluonic decay constants of the \(\phi\) meson as follows, \[\langle 0|{\rm Tr}\Big{(}{\cal G}^{\mu\nu}(x)\,{\cal G}_{\alpha \beta}(x){\cal G}^{\alpha\beta}(x)\Big{)}|\phi,\vec{p},\lambda\rangle = i\left(\epsilon_{\lambda}^{\mu}\,p^{\nu}-\epsilon_{\lambda}^{ \nu}\,p^{\mu}\right)f_{\phi}^{\mathcal{G},1}M_{\phi}^{3}, \tag{48}\] \[\langle 0|\tfrac{1}{2}{\rm Tr}\Big{(}\{{\cal G}^{\mu\alpha}(x),{ \cal G}^{\nu\beta}(x)\}{\cal G}_{\alpha\beta}(x)\Big{)}|\phi,\vec{p},\lambda\rangle = i\left(\epsilon_{\lambda}^{\mu}\,p^{\nu}-\epsilon_{\lambda}^{ \nu}\,p^{\mu}\right)f_{\phi}^{\mathcal{G},2}M_{\phi}^{3}, \tag{49}\] and similarly for the \(\omega\) meson. Based on the gluonic contribution, we then obtain \[G^{c,s}(x_{0})\simeq\Big{[}-\tfrac{2}{9}f_{\phi}^{\mathcal{G},1}+\tfrac{28}{45 }f_{\phi}^{\mathcal{G},2}\Big{]}\cdot\frac{M_{\phi}^{5}}{16\pi^{2}m_{c}^{4}}( M_{\phi}f_{\phi})\ \frac{e^{-M_{\phi}x_{0}}}{2M_{\phi}}, \tag{50}\] leading to the ratio of correlators \[\frac{G^{c,s}(x_{0})}{G^{s,s}(x_{0})}\simeq\frac{M_{\phi}^{4}}{16\pi^{2}m_{c}^ {4}}\cdot\Big{[}-\tfrac{2}{9}f_{\phi}^{\mathcal{G},1}+\tfrac{28}{45}f_{\phi}^{ \mathcal{G},2}\Big{]}\Big{/}f_{\phi}=2.6\cdot 10^{-3}\cdot\Big{[}-\tfrac{2}{9}f_{ \phi}^{\mathcal{G},1}+\tfrac{28}{45}f_{\phi}^{\mathcal{G},2}\Big{]}\Big{/}f_{ \phi}\;. \tag{51}\] We expect this contribution via the gluonic operators to be dominant over that of the tensor currents in Eq. (43) because we see no reason why the ratios \(f_{\phi}^{\mathcal{G},i}/f_{\phi}\) should be as small as \(10^{-3}\). By the same token, we expect \(G_{\rm disc}^{c,c}(x_{0})\) to become comparable to \(G_{\rm conn}^{c,c}(x_{0})\) at smaller \(x_{0}\) than estimated in the previous subsection. Moreover, via the gluonic operators, the \(\omega\) meson contributes to \((G^{c,u}+G^{c,d})/2\) analogously to the \(\phi\) meson in \(G^{c,s}\), without any chiral suppression, and both mesons contribute in a similar way to \(G_{\rm disc}^{c,c}(x_{0})\). 
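As a rough arithmetic illustration of the estimate above, the sketch below (ours) evaluates the tensor-current prediction for the ratio of disconnected to connected charm correlators using the inputs quoted or assumed in the text. It ignores the gluonic-operator contribution, which is argued to dominate, and the educated-guess value of \(f_{\phi}^{\perp}\) makes the resulting crossover point only indicative.

```python
import math

# Cross-check of the tensor-current estimate that G_disc^{c,c} reaches ~10% of
# G_conn^{c,c}, Eqs. (44)-(47). Inputs are values quoted in the text; f_phi_perp
# is the educated guess mentioned there. The gluonic contribution is not included.

zeta3 = 1.2020569
alpha_s = 0.3
m_s, m_c = 0.093, 1.27            # GeV, MS-bar masses at 2 GeV / PDG charm mass
M_phi, M_psi = 1.019, 3.097       # GeV
f_psi, f_phi_perp = 0.405, 0.160  # GeV
hbar_c = 0.1973                   # GeV * fm

# Matching coefficient d_2(m_c, m_s, alpha_s) from Eqs. (7) and (10)
c2 = 1.5 * zeta3 - 19.0 / 16.0
d2 = -(5.0 / 18.0) * (c2 / 4.0) * (m_s / m_c**2) * (alpha_s / math.pi) ** 3  # GeV^-1
print(f"|d2| * 2*M_phi = {abs(d2) * 2 * M_phi:.1e}")  # few times 1e-6, cf. ~5e-6 in Eq. (43)

# G_disc/G_conn = pref * exp((M_psi - M_phi) * x0), from Eqs. (46)-(47)
pref = d2**2 * f_phi_perp**2 * M_phi**3 / (0.5 * f_psi**2 * M_psi)

# Euclidean time at which the ratio reaches 10%
x0 = math.log(0.1 / pref) / (M_psi - M_phi)           # in GeV^-1
print(f"G_disc/G_conn = 0.1 at x0 ~ {x0 * hbar_c:.1f} fm")  # ~2.4-2.5 fm, cf. the text
```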
## V Conclusion We have derived the low-energy effective representation of heavy-quark vector currents. As a concrete perturbative result, we have obtained the bottom-current contribution to the \(R(s)\) ratio of order \((m_{c}^{2}/m_{b}^{2})\) in the sub-\(M_{\Upsilon}\) region; see Eq. (35). The leading contributions in \(1/m_{Q}\) to low-energy observables associated with the set of light-quark tensor currents can be estimated fairly reliably, but turn out to be very small in the hadronic vacuum polarisation or the charm magnetic moment of the nucleon. In the latter case, an existing direct lattice calculation strongly suggests that a different mechanism is responsible for the O(\(10^{-3}\)) size found for \(\mu^{c}\). Two (non-chirally suppressed) gluonic operators, whose matching coefficients we derived, can explain the size of \(\mu^{c}\) if their matrix elements are of order one in units of the nucleon mass. From here, we estimated the size of the bottom magnetic moment of the nucleon. Similarly in the \(R(s)\) ratio, the gluonic operators are bound to be the dominant ones in the low-energy representation of the charm current at \(s\lesssim 1\,{\rm GeV}^{2}\), unless the corresponding decay constants of the \(\omega\) and \(\phi\) mesons turn out to be enormously suppressed. The correlator \((G^{c,u}+G^{c,d})\) provides a clean way to probe the gluonic decay constants of the \(\omega\) meson. Since the charm quark can be treated dynamically in lattice calculations, the matching coefficients \((d_{2},h_{1},h_{2})\) could be determined non-perturbatively and the reliability of the low-energy expansion directly tested. Though technically more challenging, this type of study could also be carried out for the axial current to test the prediction of Eq. (2). Finally, the operator mappings derived in this paper might also be used for the algorithmic purpose of accelerating the stochastic calculation of disconnected charm loops over the entire lattice. ###### Acknowledgements. I thank Georg von Hippel and Konstantin Ottnad for discussions on the charm form factors of the nucleon. I acknowledge the support of Deutsche Forschungsgemeinschaft (DFG) through the research unit FOR 5327 "Photon-photon interactions in the Standard Model and beyond - exploiting the discovery potential from MESA to the LHC" (grant 458854507), and through the Cluster of Excellence "Precision Physics, Fundamental Interactions and Structure of Matter" (PRISMA+ EXC 2118/1) funded within the German Excellence Strategy (project ID 39083149).
2303.00827
Packing Odd Walks and Trails in Multiterminal Networks
Let $G$ be an undirected network with a distinguished set of terminals $T \subseteq V(G)$ and edge capacities $cap: E(G) \rightarrow \mathbb{R}_+$. By an odd $T$-walk we mean a walk in $G$ (with possible vertex and edge self-intersections) connecting two distinct terminals and consisting of an odd number of edges. Inspired by the work of Schrijver and Seymour on odd path packing for two terminals, we consider packings of odd $T$-walks subject to capacities $cap$. First, we present a strongly polynomial time algorithm for constructing a maximum fractional packing of odd $T$-walks. For even integer capacities, our algorithm constructs a packing that is half-integer. Additionally, if $cap(\delta(v))$ is divisible by 4 for any $v \in V(G) - T$, our algorithm constructs an integer packing. Second, we establish and prove the corresponding min-max relation. Third, if $G$ is inner Eulerian (i.e. degrees of all nodes in $V(G) - T$ are even) and $cap(e) = 2$ for all $e \in E$, we show that there exists an integer packing of odd $T$-trails (i.e. odd $T$-walks with no repeated edges) of the same value as in case of odd $T$-walks, and this packing can be found in polynomial time. To achieve the above goals, we establish a connection between packings of odd $T$-walks and $T$-trails and certain multiflow problems in undirected and bidirected graphs.
Maxim Akhmedov, Maxim Babenko
2023-03-01T21:25:26Z
http://arxiv.org/abs/2303.00827v1
# Packing Odd Walks and Trails in Multiterminal Networks ###### Abstract Let \(G\) be an undirected network with a distinguished set of _terminals_\(T\subseteq V(G)\) and edge _capacities_\(cap:E(G)\rightarrow\mathbb{R}_{+}\). By an _odd_\(T\)-walk we mean a walk in \(G\) (with possible vertex and edge self-intersections) connecting two distinct terminals and consisting of an odd number of edges. Inspired by the work of Schrijver and Seymour on odd path packing for two terminals, we consider packings of odd \(T\)-walks subject to capacities \(cap\). First, we present a strongly polynomial time algorithm for constructing a maximum fractional packing of odd \(T\)-walks. For even integer capacities, our algorithm constructs a packing that is half-integer. Additionally, if \(cap(\delta(v))\) is divisible by \(4\) for any \(v\in V(G)-T\), our algorithm constructs an integer packing. Second, we establish and prove the corresponding min-max relation. Third, if \(G\) is inner Eulerian (i.e. degrees of all nodes in \(V(G)-T\) are even) and \(cap(e)=2\) for all \(e\in E\), we show that there exists an integer packing of odd \(T\)-trails (i.e. odd \(T\)-walks with no repeated edges) of the same value as in case of odd \(T\)-walks, and this packing can be found in polynomial time. To achieve the above goals, we establish a connection between packings of odd \(T\)-walks and \(T\)-trails and certain multiflow problems in undirected and bidirected graphs. Odd path, signed and bidirected graph, multiflow, polynomial algorithm 2012 ## 1 Introduction Hereinafter, for graph \(G\) we use notation \(V(G)\) (resp. \(E(G)\)) to denote the set of vertices (resp. edges) of \(G\). Consider an undirected network \(G\) with a distinguished set of _terminals_\(T\subseteq V(G)\) and edge capacities \(cap:E(G)\rightarrow\mathbb{R}_{+}\). We use the notions of **walks** and **paths**; the former allow arbitrary edge and vertex self-intersections, while the latter forbid any self-intersections. Additionally, we consider **trails** that allow vertex self-intersections but not edge self-intersections. Any path is a trail and any trail is a walk, but not vice versa. A \(T\)**-walk** (resp. \(T\)**-trail** or \(T\)**-path**) is a walk (resp. trail or path) connecting two distinct vertices in \(T\) (note that its intermediate vertices may also be in \(T\)). By a (fractional) **packing** of \(T\)-walks (resp. \(T\)-trails, \(T\)-paths) subject to capacities \(cap\) we mean a weighted collection \(\mathcal{P}=\{\alpha_{1}\cdot W_{1},\ldots,\alpha_{m}\cdot W_{m}\}\), where \(W_{i}\) are \(T\)-walks (resp. \(T\)-trails, \(T\)-paths) and \(\alpha_{i}\in\mathbb{R}_{+}\) are **weights** such that \(\sum_{i}\alpha_{i}n_{i}(e)\leq cap(e)\) for any \(e\in E(G)\), where \(n_{i}(e)=0,1,\ldots\) denotes the number of occurrences of \(e\) in \(W_{i}\). If all \(\alpha_{i}\) are integer (resp. \(\frac{1}{k}\)-integer, i.e. become integer after multiplying by \(k\)) then the whole packing is said to be **integer** (resp. \(\frac{1}{k}\)**-integer**). The **value** of \(\mathcal{P}\) (denoted by \(\|\mathcal{P}\|\)) is \(\sum_{i}\alpha_{i}\); a packing of maximum value will be referred to as **maximum**. If one imposes no additional restrictions, the values of maximum packings of \(T\)-walks, \(T\)-trails and \(T\)-paths coincide; this follows from the fact that any walk can be reduced into a path by removing its cyclic parts. 
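The walk-to-path reduction invoked in the last sentence is the obvious greedy one; the following minimal sketch (walks represented as vertex sequences, an implementation choice of this illustration, not code from the paper) removes cyclic parts as they are detected.

```python
def decycle(walk):
    """Reduce a walk, given as a vertex sequence (v0, ..., vl), to a path with
    the same endpoints by cutting out the segment between two visits of the
    same vertex.  Walks are represented by vertices only, so parallel edges
    are not distinguished in this sketch."""
    path, pos = [], {}          # pos[v] = index of v in the current path
    for v in walk:
        if v in pos:
            for u in path[pos[v] + 1:]:   # drop the cyclic part after the first visit of v
                del pos[u]
            path = path[:pos[v] + 1]
        else:
            pos[v] = len(path)
            path.append(v)
    return path

print(decycle(["s", "a", "b", "a", "t"]))   # ['s', 'a', 't']
```

Since only closed sub-walks are removed, the endpoints stay fixed and the load on every edge can only decrease, so replacing each walk of a packing by its reduced path never violates the capacities.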
Also for \(|T|=2\) and integer edge capacities the value of a maximum fractional packing equals the value of a maximum integer packing (by the max-flow integrality theorem [10, Cor. 10.3a]), while for \(|T|\geq 3\) a maximum packing may be half-integer [10, Sec. 73.2]. Now consider a much harder case where \(T\)-walks (resp. \(T\)-trails, \(T\)-paths) comprising a packing are required to be **odd**, i.e. to consist of an odd number of edges. Now an attempt to transform a walk into a path (or even a walk into a trail) by a similar decycling approach fails since it may alter the parity. Our original source of inspiration lies in the work of Schrijver and Seymour [11], who established a min-max formula for the value of a maximum fractional packing of odd \(T\)-paths for \(T=\{s,t\}\). At its dual side, the formula involves enumerating (not necessarily induced) subgraphs \(H\) of \(G\) that contain both \(s\) and \(t\) but no odd \(s-t\) path and upper-bounding the value of packings by a certain "capacity" of \(H\). In a sense, such \(H\) is analogous to an \(s-t\) cut in the standard max-flow-min-cut-theorem [10, Th. 10.3] and is called an **odd path barrier**. The above result is established for just \(|T|=2\), only concerns fractional packings and, moreover, is non-constructive. In case of integer capacities one should ultimately aim for a min-max formula and a polynomial algorithm for constructing a maximum integer packing of odd \(T\)-paths. These questions, unfortunately, seem to be notoriously hard. In particular, [12, Sec. 3.3] shows that checking if a given graph contains a pair of edge-disjoint odd \(T\)-trails is NP-hard already for \(|T|=2\). ### Our results The present paper deals with the **multiterminal** version of the problem (allowing arbitrary number of terminals \(T\)) but considers packings of odd \(T\)-walks and \(T\)-trails rather than odd \(T\)-paths. **First**, for packings of odd \(T\)-walks and real-valued capacities we present a polynomial time reduction (Theorem 9) from a maximum odd \(T\)-walk packing problem to a maximum multiflow problem for a special commodity graph family due to Karzanov [6]. For even integer capacities, our algorithm produces a half-integer packing. Also, if capacities are even integers and \(cap(\delta(v))\) (which is, as usual, the sum of \(cap(e)\) for all edges \(e\) incident to \(v\)) is divisible by \(4\) for any \(v\in V(G)-T\), a maximum packing can be made integer. **Second**, we present a min-max formula (Theorem 12) for maximum odd \(T\)-walk packings. It is strikingly similar to the one due to Schrijver and Seymour [11] (for odd \(s-t\) paths) and involves, at its dual side, subgraphs of \(G\) containing no odd \(T\)-walks. **Third**, we extend the above results to odd \(T\)-trail packings. Consider the unit-capacity case. Then a Schrijver-Seymour-type min-max relation does not hold for integer packings even if one assumes that \(|T|=2\) and the underlying graph is **inner Eulerian** (i.e. degrees of all non-terminal verticies are even). An example of such a "bad" instance can be found in [11, Sec.3]. (There it was given for the case of odd \(T\)-paths rather than odd \(T\)-trails but it turns the example works in both cases.) The fractionality status of such a packing problem seems to be open. We partially resolve it by proving that for an inner Eulerian graph with unit capacities and an arbitrary number of terminals an optimum packing of \(T\)-trails can always be chosen half-integer (and also can be found in polynomial time). 
If all capacities are multiplied by \(2\), the inner Eulerianness condition becomes \(cap(\delta(v))\) being divisible by \(4\) for all \(v\in V(G)-T\), which is equivalent to the condition from the first result, and our optimum packing becomes integer. We prove (Theorem 14) that there exists a packing of odd \(T\)-trails of the same value as in the case of odd \(T\)-walks, and this packing can be found in polynomial time. In other words, odd \(T\)-walks forming a maximum integer packing can always be rearranged ("untangled") to ensure that none of them has edge self-intersections. ### Our techniques The algorithm that deals with odd \(T\)-walks is based on a reduction to a certain multiflow problem [6] and some graph symmetrization. For the min-max formula regarding packing of odd \(T\)-walks, we indicate how optimum collections of cuts (in the sense of the above multiflow problem) correspond to minimum odd \(T\)-walk barriers. The algorithm dealing with odd \(T\)-trails attracts additional combinatorial ideas. Loosely speaking, it constructs a maximum integer packing consisting of odd \(T\)-walks \(W_{i}\). If all of these \(T\)-walks \(W_{i}\) are already \(T\)-trails (i.e. have no edge self-intersections), then we are done. Otherwise, for walk \(W_{i}\) with edge self-intersections, we either simplify \(W_{i}\) (while maintaining its parity) or find a **redundant** edge in \(G\) whose removal does not decrease the number of \(T\)-walks in the current packing, drop it, and repeat. The existence of a redundant edge is proved by a novel characterization of integer odd \(T\)-walk packings in terms of \(T\)-trail packings in inner Eulerian **bidirected** graphs [1] and relies on the corresponding min-max theorem. ### Related work There is also a solid body of recent research devoted to path packings in unit-capacitated graphs. (Note that here the notions of integer walk and trail packings coincide.) While even for \(T=\{s,t\}\) the problem of finding a maximum integer packing of odd \(T\)-trails in general networks does not seem to be tractable, a certain lower bound for the maximum value of such packings (relating it to odd \(T\)-trail covers) is known [5] (also see [4] for a weaker bound). Note that if one is interested in packing odd \(T\)-trails (rather than odd \(T\)-walks) in graphs with integer capacities larger than \(1\), then the problem does not seem to be directly reducible to unit capacities. Indeed, splitting each edge and solving the problem in the unit-capacitated case, one will face challenges with edge self-intersections when attempting to return back to the original graph. These challenges seem to be quite fundamental, and, in particular, we are not aware of any prior art concerning capacitated versions of the maximum odd \(T\)-trail integral packing problem. Our algorithm for constructing a maximum packing of odd \(T\)-trails is able to deal with edge self-intersections by certain \(T\)-walk "untangling" but this battle is not won easily. Another related (but still substantially different) area of research concerns integer packing of vertex-disjoint \(A\)-paths in **group-labeled** graphs. Here each edge \(xy\in E\) is endowed with an element \(g(x,y)\) of group \(\Gamma\) (obeying \(g(x,y)=-g(y,x)\)). Path \(P\) with both (distinct) ends in \(A\subseteq V(G)\) is called a **non-zero \(A\)-path** if the sum of all group elements corresponding to (directed) edges of \(P\) is non-zero. (This also extends to non-Abelian groups.) 
In [3] a polynomial algorithm for constructing a maximum integer packing of vertex-disjoint \(A\)-paths is given. See also [9] for a similar treatment involving permutation groups. With an appropriate choice of group \(\Gamma\) and edge labels, non-zero \(A\)-paths may express various well-studied notions, e.g. the much-celebrated Mader's integer packings of vertex-disjoint \(\mathcal{S}\)-paths [8], [10, Sec. 73.1]. Note that if \(\Gamma=\mathbb{Z}_{2}\) and \(g(x,y)=1\) for all edges \(xy\) one gets the odd parity constraint for paths comprising a packing. The latter motivates adding such \(\Gamma\) as a direct group summand in the Mader's case above hoping to capture the parity restriction. This approach, however, will not work as expected: now a path could either be odd _or_ connect terminals in distinct \(\mathcal{S}\)-classes (while we were certainly hoping for paths that simultaneously have ends in distinct \(\mathcal{S}\)-classes _and_ have odd length). ## 2 Walks, trails, packings and other notation Consider an undirected loopless graph \(G\) with possible parallel edges. In this paper we deal with certain families of path-like objects in \(G\) differing in kinds of allowed self-intersections. Formally: Given \(x,y\in V(G)\), an \(x-y\)walk is a sequence \(W=(e_{1},e_{2},\ldots,e_{l})\), where \(e_{i}\in E\) are such that \(e_{i}=v_{i-1}v_{i}\) for \(v_{0}=x\), \(v_{l}=y\) and \(v_{1},\ldots,v_{l-1}\in V(G)\). Here \(l\) is called the **length** of \(W\). A walk is called **even** or **odd** depending on the parity of its length. Vertices \(x\) and \(y\) are called the **endpoints** of \(W\) and \(v_{1},\ldots,v_{l-1}\) are called **intermediate** (for \(W\)). Note that some of vertices \(v_{i}\) of \(W\) may coincide, allowing a walk to visit the same vertex multiple times and traverse same edge multiple times. An \(x-y\)**trail** (resp. **path**) is an \(x-y\) walk \(W\) with all edges (resp. vertices) being distinct. An \(x-x\) walk (resp. trail) is called **cyclic**. Let \(T\subseteq V(G)\) be a distinguished set of vertices called **terminals**. A \(T\)**-walk** (resp. \(T\)**-trail**, \(T\)**-path**) is an \(x-y\) walk (resp. \(x-y\) trail, \(x-y\) path) for two distinct \(x,y\in T\). (Note that unless explicitly stated otherwise, intermediate vertices of such walks are allowed to be terminals.) Graph \(G\) is called **inner Eulerian with respect to \(T\)** (or simply **inner Eulerian** if \(T\) is clear from context) if for any \(v\in V(G)-T\) the degree of \(v\) in \(G\) is even. Given edge capacities \(cap:E(G)\to\mathbb{R}_{+}\), a weighted multiset \(\mathcal{P}=\{\alpha_{1}\cdot W_{1},\ldots,\alpha_{m}\cdot W_{m}\}\), where \(\alpha_{i}\in\mathbb{R}_{+}\) are **weights** and each \(W_{i}\) is a walk, is said to be a (fractional) **walk packing** if for any \(e\in E(G)\) the **load**\(\mathcal{P}(e):=\sum_{i}\alpha_{i}n_{i}(e)\) of edge \(e\) does not exceed \(cap(e)\), where \(n_{i}(e)=0,1,\ldots\) is the number of occurrences of \(e\) in \(W_{i}\). \(\|\mathcal{P}\|:=\sum_{i}\alpha_{i}\) is called the **value** of \(\mathcal{P}\). If all \(\alpha_{i}\in\mathbb{Z}_{+}\) then \(\mathcal{P}\) is called **integer**. If \(\mathcal{P},\mathcal{Q}\) are packings and \(\alpha\in\mathbb{R}_{+}\), \(\mathcal{P}+\mathcal{Q}\) denotes a union of weighted multisets and \(\alpha\cdot\mathcal{P}\) denotes the result of multiplying all weights in \(\mathcal{P}\) by \(\alpha\). 
When walks comprising a packing are restricted in some way, the analogous terminology is applied to the packing as a whole. In particular, one may speak of \(T\)-walk (resp. \(T\)-trail, \(T\)-path) packings \(\mathcal{P}\) indicating that walks in \(\mathcal{P}\) are, in fact, \(T\)-walks (resp. \(T\)-trails, \(T\)-paths). A triple \((G,T,cap)\) consisting of an undirected graph \(G\), terminal set \(T\subseteq V(G)\) and capacity function \(cap:E(G)\to\mathbb{R}_{+}\), is called a **network**. Two notable special cases of constant capacity function to appear throughout our paper are \(\mathbf{1}(e):=1\) and \(\mathbf{2}(e):=2\) for any \(e\in E(G)\). Consider network \(N=(G,T,cap)\) together with undirected graph \(H\) such that \(V(H)=T\) (called the **commodity graph**). A **multi-commodity flow** (or simply a **multiflow**) in network \(N\) with commodity graph \(H\) is a \(T\)-walk packing \(\mathcal{P}\) such that for any \(T\)-walk \(W\) in \(\mathcal{P}\) its (distinct) endpoints are connected by an edge in \(H\). We also employ the following graph-theoretic notation: Given graph \(G\) and \(A\subseteq V(G),v\in V(G)\), \(\delta(v)\) denotes the set of edges incident to \(v\), \(\delta(A)\) denotes the set of edges with exactly one endpoint in \(A\) and \(\gamma(A)\) denotes the set of edges with both endpoints in \(A\); For function \(f:X\to\mathbb{R}\) and \(Y\subseteq X\), \(f(Y)\) is defined as \(\sum_{x\in Y}f(x)\); e.g. for a set of vertices \(A\), \(f(\delta(A))\) is the total value of \(f\) over all edges with exactly one endpoint in \(A\); Given edge capacities \(\mathit{cap}\colon E(G)\to\mathbb{R}_{+}\) in graph \(G\) and \(S,T\subseteq V(G),S\cap T=\varnothing\), an \(S-T\)**cut** is a vertex set \(C\) such that \(S\subseteq C\subseteq V(G)-T\); the **capacity** of cut \(C\) is \(\mathit{cap}(\delta(C))\); the minimum capacity of an \(S-T\) cut is denoted by \(\lambda(S,T)\); When graph \(G\) is not clear from the context, it is specified explicitly, e.g. \(\delta_{G}\), \(\gamma_{G}\) and \(\lambda_{G}\). ## 3 Odd \(T\)-walk packing algorithm Let \((G,T,\mathit{cap})\) be a network. In this section we introduce an auxiliary network \((\widetilde{G},\widetilde{T},\widetilde{\mathit{cap}}\widetilde{p})\) constructed from \(G\) and employ it to provide a strongly polynomial time algorithm for finding a maximum odd \(T\)-walk packing. This network also plays a crucial role in further sections. Construct graph \(\widetilde{G}\) with \(V(\widetilde{G}):=V(G)\sqcup V(G)^{\prime}\), where \(V(G)^{\prime}\) is a disjoint copy of \(V(G)\), i.e. each vertex \(v\in V(G)\) has its own copy \(v^{\prime}\in V(G)^{\prime}\), and \(E(\widetilde{G}):=\{u^{\prime}v,uv^{\prime}\mid uv\in E(G)\}\). Also, let \(v^{\prime\prime}:=v\) for \(v\in V(G)\). If \(x,y\in V(G)\), vertices \(x\) and \(x^{\prime}\) are called **symmetric** to each other, and similarly for edges \(xy^{\prime}\) and \(x^{\prime}y\). For a vertex set (resp. an edge set or a walk) \(X\), let \(X^{\prime}\) be the vertex set (resp. edge set or walk) consisting of vertices (or edges) symmetric to ones in \(X\). Let \(\widetilde{T}:=T\sqcup T^{\prime}\). Finally, define capacities on edges of \(\widetilde{G}\) as \(\widetilde{\mathit{cap}}(uv^{\prime}):=\widetilde{\mathit{cap}}\widetilde{p} (u^{\prime}):=\frac{1}{\mathit{cap}}\mathit{cap}(uv)\) for any \(uv\in E(G)\). The following theorem encapsulates the first of our results announced in Section 1. 
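Before that theorem, the construction of \((\widetilde{G},\widetilde{T},\widetilde{cap})\) can be made concrete. The sketch below (an illustration assuming an edge-list representation, not the paper's code) duplicates every vertex, replaces each edge \(uv\) by the pair \(uv^{\prime}\), \(u^{\prime}v\), and halves the capacities.

```python
def doubled_network(edges, cap, terminals):
    """Build the auxiliary network (G~, T~, cap~) described above.
    `edges` is a list of pairs (u, v), `cap` maps such a pair to its capacity.
    A vertex v of G is represented here as (v, 0) and its copy v' as (v, 1)."""
    new_edges, new_cap = [], {}
    for (u, v) in edges:
        for e in (((u, 0), (v, 1)), ((u, 1), (v, 0))):   # edges uv' and u'v
            new_edges.append(e)
            new_cap[e] = cap[(u, v)] / 2.0               # capacities are halved
    new_terminals = [(t, 0) for t in terminals] + [(t, 1) for t in terminals]
    return new_edges, new_terminals, new_cap
```

Every edge of \(\widetilde{G}\) joins an original vertex to a copy, so \(\widetilde{G}\) is bipartite; this is what ties the parity of walks in \(G\) to the choice of primed versus unprimed endpoints in \(\widetilde{G}\).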
[Odd \(T\)-walk packing] Given network \((G,T,cap)\), it is possible to construct a maximum fractional odd \(T\)-walk packing \(\mathcal{P}\) in \((G,T,cap)\) in strongly polynomial time. If all capacities are non-negative even integers, the resulting \(\mathcal{P}\) is half-integer. If additionally \(\mathit{cap}(\delta(v))\) is divisible by \(4\) for all \(v\in V(G)-T\), \(\mathcal{P}\) is integer. Proof.: Note that \(\widetilde{G}\) is bipartite, so for distinct \(x,y\in T\) any \(x-y^{\prime}\) walk in \(\widetilde{G}\) is odd and corresponds to an odd \(x-y\) walk in \(G\). Construct commodity graph \(H_{T}\) as follows: \(V(H_{T}):=\widetilde{T}\) and \(E(H_{T}):=\{t_{i}t_{j}^{\prime}\mid i\neq j\}\). Note that \(H_{T}\) is isomorphic to \(K_{[T],[T]}\) without a perfect matching (see Figure 1). Consider an arbitrary fractional odd \(T\)-walk packing \(\mathcal{P}=\{\alpha_{1}\cdot W_{1},\ldots,\alpha_{m}\cdot W_{m}\}\) in \((G,T,cap)\) of value \(p\). Denote endpoints of \(W_{k}\) as \(t_{k,1}\) and \(t_{k,2}\); this walk corresponds to a pair of \(t_{k,1}-t_{k,2}^{\prime}\) walk \(\widetilde{W}_{k}\) and \(t_{k,1}^{\prime}-t_{k,2}\) walk \(\widetilde{W}_{k}^{\prime}\) in \(\widetilde{G}\), which are symmetric to each other. Packing \(\widetilde{\mathcal{P}}:=\{\alpha_{1}\cdot\widetilde{W}_{1},\ldots,\alpha_{m} \cdot\widetilde{W}_{m}\}\) in \((\widetilde{G},\widetilde{T},\widetilde{\mathit{cap}}\widetilde{p})\) is of value \(p\). For any \(xy\in E(G)\) holds \(\widetilde{\mathcal{P}}(xy^{\prime})+\widetilde{\mathcal{P}}(x^{\prime}y)= \mathcal{P}(xy)\) since each occurrence of \(xy\) in some walk \(W_{k}\) in \(\mathcal{P}\) corresponds to exactly one occurrence of either \(x^{\prime}y\) or \(xy^{\prime}\) in \(\widetilde{W}_{k}\). The same properties hold for packing \(\widetilde{\mathcal{P}}^{\prime}=\{\alpha_{1}\cdot\widetilde{W}_{1}^{\prime}, \ldots,\alpha_{m}\cdot\widetilde{W}_{m}^{\prime}\}\), which is the symmetric counterpart of \(\widetilde{\mathcal{P}}\). Construct \(\mathcal{Q}:=\frac{1}{2}(\widetilde{\mathcal{P}}+\widetilde{\mathcal{P}}^{ \prime})\); the value of \(\mathcal{Q}\) is also \(p\). For any \(xy\in E(G)\) holds \(\widetilde{\mathcal{P}}^{\prime}(xy^{\prime})=\widetilde{\mathcal{P}}(x^{ \prime}y)\), thus \(\mathcal{Q}(xy^{\prime})=\frac{1}{2}(\widetilde{\mathcal{P}}(xy^{\prime})+ \widetilde{\mathcal{P}}^{\prime}(xy^{\prime}))=\frac{1}{2}(\widetilde{ \mathcal{P}}(xy^{\prime})+\widetilde{\mathcal{P}}(x^{\prime}y))=\frac{1}{2} \mathcal{P}(xy)\leq\frac{1}{2}\mathit{cap}(xy)=\frac{1}{\mathit{cap}}(xy^{ \prime})\), i.e. \(\mathcal{Q}\) is a (self-symmetric) multiflow in \((\widetilde{G},\widetilde{T},\widetilde{\mathit{cap}}\widetilde{p})\) with commodity graph \(H_{T}\). Thus \(p\) does not exceed the value of a maximum fractional multiflow in \((\widetilde{G},\widetilde{T},\widetilde{\mathit{cap}}\widetilde{p})\) with commodity graph \(H_{T}\). Conversely, consider a fractional multiflow \(\mathcal{Q}\) of value \(q\) in network \((\widetilde{G},\widetilde{T},\widetilde{\mathit{cap}}\widetilde{p})\) with commodity graph \(H_{T}\). Construct an odd \(T\)-walk packing \(\mathcal{P}\) of value \(q\) in \((G,T,cap)\) by taking preimages of all weighted walks in \(\mathcal{Q}\) with their respective weights. Clearly, for \(xy\in E(G)\) holds \(\mathcal{P}(xy)=\mathcal{Q}(xy^{\prime})+\mathcal{Q}(x^{\prime}y)\leq\widetilde{ \alpha\!\!\!ap}(xy^{\prime})+\widetilde{\alpha\!\!\!ap}(x^{\prime}y)=cap(xy)\). 
Thus, \(q\) does not exceed the value of a maximum fractional odd \(T\)-walk packing in \((G,T,cap)\). Therefore, the maximum value of a fractional odd \(T\)-walk packing in \((G,T,cap)\) equals the value of a maximum fractional multiflow in \((\widetilde{G},\widetilde{T},\widetilde{cap})\) with commodity graph \(H_{T}\). To conclude the proof, we utilize the following result due to Karzanov [6]: Let \((G,T,cap)\) be a network with commodity graph \(H\). Denote by \(\mathcal{A}\) the family of all inclusion-wise maximal anticliques (i.e. independent sets) in \(H\). Suppose \(\mathcal{A}\) can be split into two subfamilies \(\mathcal{A}_{1},\mathcal{A}_{2}\) such that all anticliques in each family \(\mathcal{A}_{i}\) are pairwise disjoint. Then a maximum multiflow in \((G,T,cap)\) can be found in strongly polynomial time. If, additionally, \(cap\) are integers and \(cap(\delta(v))\) is even for any \(v\in V(G)-T\), then the resulting multiflow is integer. Note that the family of anticliques in \(H_{T}\) obeys the property from Theorem 3. Indeed, define \(\mathcal{A}_{1}:=\{T,T^{\prime}\}\) and \(\mathcal{A}_{2}:=\{\{t,t^{\prime}\}\mid t\in T\}\) (see Figure 1). If some maximal anticlique contains a terminal and its symmetric copy, then it must belong to \(\mathcal{A}_{2}\); otherwise it cannot contain both a vertex from \(T\) and a vertex from \(T^{\prime}\), thus it belongs to \(\mathcal{A}_{1}\). Also, if \(cap(\delta(v))\) is divisble by \(4\) for any \(v\in V(G)-T\), then \(\widetilde{cap}(\delta(v))\) is even for any \(v\in V(\widetilde{G})-\widetilde{T}\). Therefore, applying Theorem 3.1 finishes the proof. ## 4 Odd \(T\)-walk barrier In this section we provide a combinatorial description of barrier structure that defines a tight upper bound for the value of a maximum odd \(T\)-walk packing, which is our second result announced in Section 1. This characterization is surprisingly similar to the corresponding barrier structure for maximum odd \(s-t\) path packings due to Schrijver and Seymour [11]. A strong duality is proven using the equivalence with multiflows from Section 3. Given network \((G,T,cap)\), a (not necessarily induced) subgraph \(B\) of \(G\) with \(T\subseteq V(B)\) is called an **odd \(T\)-walk barrier** if there is no odd \(T\)-walk in \(B\). The **capacity**\(cap(B)\) of barrier \(B\) is defined as \(\frac{1}{2}cap(I(B))+cap(U(B))\), where for an arbitrary (not necessarily induced) subgraph \(H\) of \(G\) we use the following notation: * \(I(H):=\{xy\in E(G)\mid xy\in\delta_{G}(V(H))\}\) (informally, the edge leaves \(H\) and does not return); * \(U(H):=\{xy\in E(G)\mid x,y\in V(H),\,xy\in E(G)-E(H)\}\) (informally, the edge takes a U-turn by leaving \(H\) and immediately returning back). It is easy to verify that the capacity of any barrier \(B\) is an upper bound for the value of any odd \(T\)-walk packing \(\mathcal{P}\). Indeed, any odd \(T\)-walk \(W\) endowed with weight \(\alpha\) in \(\mathcal{P}\) is Figure 1: Commodity graph \(H_{T}\); anticlique family \(\mathcal{A}_{1}\) is in red and \(\mathcal{A}_{2}\) is in blue. not entirely contained in \(B\); thus it either visits some vertex \(v\in V(G)-V(B)\) or traverses some edge \(xy\in E(G)-E(B)\) such that \(x,y\in V(B)\). In the former case it reserves \(\alpha\) units of capacity of at least two edges in \(I(B)\), and in the latter case it reserves \(\alpha\) units of capacity of at least one edge in \(U(B)\). Therefore \(\|\mathcal{P}\|\leq cap(B)\). 
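The capacity \(cap(B)\) just used as an upper bound is straightforward to evaluate from the definitions of \(I(B)\) and \(U(B)\); a minimal sketch follows (edges stored as ordered pairs with a fixed orientation, an implementation choice of this illustration only).

```python
def barrier_capacity(G_edges, cap, B_vertices, B_edges):
    """cap(B) = 1/2 * cap(I(B)) + cap(U(B)) for a subgraph B of G.
    Edges are pairs (u, v) stored with one fixed orientation; B_edges must use
    the same orientation.  Illustrative sketch of the definition above."""
    I = [e for e in G_edges
         if (e[0] in B_vertices) != (e[1] in B_vertices)]          # exactly one endpoint in V(B)
    U = [e for e in G_edges
         if e[0] in B_vertices and e[1] in B_vertices and e not in B_edges]
    return 0.5 * sum(cap[e] for e in I) + sum(cap[e] for e in U)
```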
The strong duality also holds: [see Appendix] Let \((G,T,cap)\) be a network. If \(\mathcal{P}\) ranges over odd \(T\)-walk packings and \(B\) ranges over odd \(T\)-walk barriers, then \(\max\limits_{\mathcal{P}}\|\mathcal{P}\|=\min\limits_{B}cap(B)\). The min-max formula above enables strengthening the statement of Theorem 3.2 as follows. Given network \((G,T,cap)\), let \(\mathcal{P}\) be a maximum fractional odd \(T\)-walk packing in \((G,T,cap)\). If all capacities are non-negative even integers, \(\|\mathcal{P}\|\) is integer. If additionally \(cap(\delta(v))\) is divisible by 4 for all \(v\in V(G)-T\), \(\|\mathcal{P}\|\) is even integer. Proof.: By Theorem 3.2, \(\|\mathcal{P}\|=\frac{1}{2}cap(I(B))+cap(U(B))\) for minimum odd \(T\)-walk barrier \(B\). If all capacities are even integers, then \(cap(U(B))\) and \(cap(I(B))\) are also even, therefore the first part of the statement is trivial. Let \(A:=V(G)-V(B)\), note that \(\delta(A)=I(B)\). Under the second condition, note the following congruence: \[0\equiv\sum\limits_{v\in A}cap(\delta(v))\equiv 2cap(\gamma(A))+cap(\delta(A)) \equiv cap(I(B))\pmod{4}\] Therefore, \(\frac{1}{2}cap(I(B))\) is also an even integer. ## 5 Odd \(T\)-trail packing algorithm Hereinafter we focus on network \((G,T,\mathbf{2})\) for an inner Eulerian graph \(G\). Since all capacities are 2, each edge can be traversed by at most two walks in an integer packing. Our ultimate goal is to construct a maximum integer \(T\)-trail packing. The third result announced in Section 1 is as follows: Given network \((G,T,\mathbf{2})\) with inner Eulerian \(G\), it is possible to construct a maximum integer packing of odd \(T\)-trails in polynomial time. This packing is also a maximum fractional packing of odd \(T\)-walks in \((G,T,\mathbf{2})\). We use a chemistry-inspired notation: replace each edge in \(G\) with two **valencies** each of which may be occupied by a walk. More formally: For edge \(e\in E(G)\), denote \(e^{1}\) and \(e^{2}\) to be two **valencies** of \(e\); edge \(e\) is called **underlying** for \(e^{1}\) and \(e^{2}\). Define the **valence graph**\(G^{12}\) to be the graph on the same vertices as \(G\) with valencies regarded as edges. Figure 2: An example of an odd \(T\)-walk barrier \(B\); vertices in \(T\) are crosses, vertices in \(V(B)-T\) are black dots, vertices not in \(V(B)\) are white dots; edges in \(E(B)\) are solid, edges not in \(E(B)\) are dashed; edges in \(I(B)\) are blue, edges in \(U(B)\) are red. In what follows, instead of integer odd \(T\)-walk packings in \((G,T,\mathbf{2})\) we shall be dealing with integer odd \(T\)-trail packings in \((G^{12},T,\mathbf{1})\), which effectively are sets of edge-disjoint odd \(T\)-trails in \(G^{12}\). However, a \(T\)-trail in \(G^{12}\) may correspond to a non edge-simple \(T\)-walk in \(G\) once we replace valencies with their underlying edges. This is captured as follows: Edge \(e\in E(G)\) is called **irregular** for \(T\)-trail \(W\) in \(G^{12}\) if \(W\) traverses both valencies \(e^{1},e^{2}\) and **regular** otherwise. Hence we are looking for a maximum set of edge-disjoint odd \(T\)-trails in \(G^{12}\) without irregular edges. A brief outline of our approach is as follows. In Section 5.1 we introduce **signing** on valencies that guide \(T\)-trails and ensure they have proper parities. 
Given a suitable signing, we prove the existence of an integer odd \(T\)-trail packing in \((G^{12},T,\mathbf{1})\) (with possible irregular edges) of the needed value by reduction to **bidirected networks**. We also prove that a suitable signing exists. In Section 5.2 we construct such a signing and also perform the so-called **terminal evacuation** by introducing an auxiliary terminal \(t^{\prime}\) for each \(t\in T\) that is connected to \(t\) with a proper number of valencies of certain signs. This transformation allows to assume that no trail in the packing contains any terminal as its intermediate vertex. Next, Section 5.3 ensures that inner vertices are of degree at most \(3\). Finally in Section 5.4 we deal with irregular edges. We show that whenever both valencies \(e^{1}\) and \(e^{2}\) of some edge \(e\) are used by some odd \(T\)-trail \(W\) in the packing, either \(W\) could be simplified (preserving its parity) or \(e\) is in fact **redundant**, i.e. \(G\) can be reduced by dropping \(e\). This reduction preserves the needed properties of \(G\); hence one can recompute a packing and iterate. These iterations continue until there are no more remaining irregular edges. ### Signed graphs We also use the framework of **signed graphs** whose edges are endowed with signs "\(\star\)" and "-". Intuitively, a signed valence graph introduces a convenient family of \(T\)-trails defined by the requirement of alternation which makes parity of the trail uniquely determined by the signs of the first and the last valence. A **signing** is an arbitrary function \(M:E(G^{12})\rightarrow\{\star,\neg\}\). Graph \(G^{12}\) together with some signing \(M\) forms a **signed valence graph**\((G^{12},M)\). In presence of terminal set \(T\subseteq V(G)\), a **signed valence network**\((G^{12},M,T,\mathbf{1})\) appears. A \(T\)-trail \(W\) in \((G^{12},M)\) is called **alternating** if signs of valencies alternate along \(W\). Signing \(M\) is called **inner balanced** if for any \(v\in V(G)-T\), the number of positive edges incident to \(v\) equals the number of negative edges incident to \(v\). We shall need the notion of **bidirected graphs**, which generalize digraphs and admit three possible kinds of edges: a usual **directed edge** (ingoing for one endpoint and outgoing for another), a **positive edge** (which is ingoing for both its endpoints) and a **negative edge** (which is outgoing for both its endpoints). The definition of a bidirected walk or a bidirected trail is similar to Definition 1 with the only difference that for any internal vertex \(v_{i}\) exactly one of \(e_{i},e_{i+1}\) is ingoing to \(v_{i}\) and another is outgoing from \(v_{i}\). Refer to [10, Ch. 36] for details. The notion of inner Eulerianness is extended to bidirected graphs as follows: a bidirected graph \(G\) is **inner Eulerian** with respect to terminal set \(T\) if for any \(v\in V(G)-T\) the number of edges ingoing to \(v\) is equal to the number of edges outgoing from \(v\). Similarly to the undirected case, triple \((G,T,cap)\) consisting of bidirected graph \(G\), terminal set \(T\) and capacity function \(cap\) is called a **bidirected network**. We rely on two theorems of a similar kind, one of which is due to Cherkassky [2] and Lovasz [7], and another is due to Babenko and Karzanov [1, Th. 1.1]: [Min-max formula for \(T\)-trail packings in inner Eulerian undirected graphs [2], [7]] Let \((G,T,\mathbf{1})\) be an inner Eulerian network. 
Then the value of a maximum packing of \(T\)-trails equals \(\frac{1}{2}\sum_{t\in T}\lambda(\{t\},T-\{t\})\). Such a packing can be chosen integer and can be constructed in polynomial time. [Min-max formula for \(T\)-trail packings in inner Eulerian bidirected graphs [1]] Let \((G,T,\mathbf{1})\) be an inner Eulerian bidirected network. Then the value of a maximum packing of bidirected \(T\)-trails equals \(\frac{1}{2}\sum_{t\in T}\lambda(\{t\},T-\{t\})\). Such a packing can be chosen integer and can be constructed in polynomial time. Note that the value of a maximum packing in the latter theorem does not depend on actual directions of edges. As we mentioned before, signings encode a certain family of odd \(T\)-walks. The following theorem describes why it is important for us. Let \((G^{12},M,T,\mathbf{1})\) be a signed valence network with an inner balanced signing \(M\). Let \(\mathcal{P}\) be a maximum packing of odd \(T\)-trails in \((G^{12},T,\mathbf{1})\), and \(\mathcal{S}\) be a maximum packing of alternating \(T\)-trails in \((G^{12},M,T,\mathbf{1})\). Then \(\|\mathcal{P}\|=\|\mathcal{S}\|\). Additionally, \(\mathcal{P}\) and \(\mathcal{S}\) can be chosen integer and can be constructed in polynomial time. Proof.: Construct an auxiliary bidirected graph \(\overleftrightarrow{G^{12}}\) corresponding to the signed valence graph \((G^{12},M)\) as follows: edges in \(\overleftrightarrow{G^{12}}\) correspond to valences in \(E(G^{12})\); an edge is positive if the sign of the valence is "+" and negative otherwise; \(M\) being inner balanced implies that \(\overleftrightarrow{G^{12}}\) is inner Eulerian, therefore Theorem 4.2 is applicable to \((\overleftrightarrow{G^{12}},T,\mathbf{1})\). Also note that bidirected \(T\)-trails in \(\overleftrightarrow{G^{12}}\) correspond to alternating \(T\)-trails in \((G^{12},M)\). Refer to Figures (a)a and (b)b for an example. Note that \(G^{12}\) is automatically inner Eulerian due to each vertex in \(V(G^{12})\) being adjacent to an even number of valencies, therefore Theorem 4.2 is applicable to \((G^{12},T,\mathbf{1})\). It follows that maximum packing \(\mathcal{P}\) of odd \(T\)-trails in \((G^{12},T,\mathbf{1})\) and maximum packing \(\overleftrightarrow{\mathcal{P}}\) of bidirected \(T\)-trails in \((\overleftrightarrow{G^{12}},T,\mathbf{1})\) are of the same value \(\frac{1}{2}\sum_{t\in T}\lambda(\{t\},T-\{t\})\). The latter packing can be chosen integer and can be constructed in polynomial time, and then transformed into a maximum integer packing \(\mathcal{S}\) of alternating \(T\)-trails in signed valence network \((G^{12},M,T,\mathbf{1})\). A signed valence network \((G^{12},M,T,\mathbf{1})\) with inner balanced signing \(M\) is \((p,q)\)**-tight** if 1) there exists an integer \(T\)-trail packing in \((G^{12},T,\mathbf{1})\) of value \(p+q\) and 2) the number of "-" valencies adjacent to terminals \(T\) is \(q\). Given a signed valence network \((G^{12},M,T,\mathbf{1})\) with a \((p,q)\)-tight inner balanced signing \(M\), there exists an integer packing \(\mathcal{P}+\mathcal{Q}\) in \((G^{12},M,T,\mathbf{1})\) of value \(p+q\), where \(\mathcal{P}\) consists of at least \(p\) odd alternating \(T\)-trails and \(\mathcal{Q}\) consists of at most \(q\) even alternating \(T\)-trails. Moreover, \(T\)-trails in \(\mathcal{P}+\mathcal{Q}\) can be chosen so as to avoid passing through terminals \(T\) as intermediate vertices. 
Proof.: The first tightness property implies existence of an integer \(T\)-trail packing in \((G^{12},T,\mathbf{1})\) of value \(p+q\). Then, by Theorem 4.2 we get a packing of \(p+q\) alternating \(T\)-trails in \((G^{12},M,T,\mathbf{1})\). Break this packing into two parts \(\mathcal{P}\) and \(\mathcal{Q}\), where \(\mathcal{P}\) consists of odd \(T\)-trails and \(\mathcal{Q}\) consists of even \(T\)-trails. The second tightness property implies \(\|\mathcal{Q}\|\leq q\) as each trail in \(\mathcal{Q}\) has a - valence incident to a terminal, therefore \(\|\mathcal{P}\|\geq p\), as needed. W.l.o.g. all these \(T\)-trails do not contain terminals as intermediate vertices (for otherwise, if some alternating \(T\)-trail \(W\) visits \(t\in T\) as its intermediate vertex, then \(W\) can be split into two subtrails \(W_{1},W_{2}\) at \(t\); among \(W_{1}\), \(W_{2}\) at least one, say \(W_{1}\) is a valid alternating \(T\)-trail; replace \(W\) with \(W_{1}\) and repeat). ### Initial signing and terminal evacuation In this section we present an algorithm for constructing a tight inner balanced signing \(M\). This is done with the help of network \((\widetilde{G},\widetilde{T},\widetilde{cap})\) from Section 3. Note that since \(cap=\mathbf{2}\), we have \(\widetilde{cap}=\mathbf{1}\). Degrees of vertices in \(\widetilde{G}\) coincide with degrees of their pre-images in \(G\), therefore \(\widetilde{G}\) is also inner Eulerian. Consider a maximum multiflow \(\mathcal{F}\) in \((\widetilde{G},\widetilde{T},\mathbf{1})\) with commodity graph \(H_{T}\). Theorem 3.1 ensures that \(\mathcal{F}\) can be chosen integer, i.e. \(\mathcal{F}\) is a collection of edge-disjoint \(\widetilde{T}\)-trails (endowed with weight \(1\)) in \(\widetilde{G}\) connecting vertex pairs of form \(t_{1}-t_{2}^{\prime}\) for distinct \(t_{1},t_{2}\in T\). Each of these \(T\)-trails is odd, therefore their pre-images are odd \(T\)-trails in \(G^{12}\) (see Figure 4). Denote the packing of these odd \(T\)-trails in \((G^{12},T,\mathbf{1})\) (taken with weight \(1\)) as \(\mathcal{P}\). Let \(p:=\|\mathcal{P}\|\). The proof of Theorem 3.1 implies: \(\mathcal{P}\) _is a maximum odd \(T\)-trail packing in \((G^{12},T,\mathbf{1})\)._ Consider the subgraph \(\widetilde{Z}\) of \(\widetilde{G}\) consisting of edges not appearing in \(T\)-trails of \(\mathcal{F}\). Since any vertex \(v\in V(\widetilde{G})-\widetilde{T}\) has even degree in \(\widetilde{G}\), \(\widetilde{Z}\) is also inner Eulerian with respect to terminals \(\widetilde{T}\). Therefore \(\widetilde{Z}\) decomposes into two families of edge-disjoint trails: a collection of cyclic trails and a collection of \(\widetilde{T}\)-trails. The former ones correspond to even cyclic trails in \(G^{12}\) (due to biparticity of \(\widetilde{G}\)); denote the packing (with unit weights) of these even cyclic trails in \((G^{12},T,\mathbf{1})\) as \(\mathcal{E}\). The latter ones may be further subdivided into two categories: (i) \(t_{1}-t_{2}\) or \(t_{1}^{\prime}-t_{2}^{\prime}\) trails for distinct \(t_{1},t_{2}\in T\); and (ii) \(t-t^{\prime}\) trails for \(t\in T\). (Note that \(t_{1}-t_{2}^{\prime}\) trails for \(t_{1}\neq t_{2}\) cannot appear due to maximality of \(\mathcal{F}\).) The first category corresponds to even \(T\)-trails in \((G^{12},T,\mathbf{1})\). The second category corresponds to odd cyclic trails passing through terminals in \((G^{12},T,\mathbf{1})\). 
Denote the packings in \((G^{12},T,\mathbf{1})\) (with unit weights) corresponding to these two categories as \(\mathcal{Q}\) and \(\mathcal{R}\) respectively, and let \(q:=\|\mathcal{Q}\|\) and \(r:=\|\mathcal{R}\|\). Figure 3: Correspondence between signed valence graph with inner balanced signing \((G^{12},M,T,\mathbf{1})\) and inner Eulerian bidirected graph \((\widetilde{G^{12}},T,\mathbf{1})\). Terminals are crosses, other vertices are black dots. Note that \(\mathcal{P}+\mathcal{Q}+\mathcal{R}+\mathcal{E}\) is an integer packing of (possibly cyclic) trails in \((G^{12},T,\mathbf{1})\) that traverses each edge in \(G^{12}\) exactly once. Starting from this moment we forget about graph \(\widetilde{G}\) and release the notation \((\cdot)^{\prime}\) of its meaning of symmetry in \(\widetilde{G}\). Perform **terminal evacuation** as follows. For each terminal \(t\in T\) introduce a new terminal \(t^{\prime}\) connected to \(t\) by a certain number of edges. Namely, extend each \(t_{1}-t_{2}\) trail (\(t_{1}\) and \(t_{2}\) may coincide) in \(\mathcal{P}+\mathcal{Q}+\mathcal{R}\) with new \(t^{\prime}_{1}-t_{1}\) and \(t_{2}-t^{\prime}_{2}\) valencies in \(G^{12}\), obtaining a new valence graph \(G^{\prime 12}\) and new integer packings \(\mathcal{P}^{\prime},\mathcal{Q}^{\prime},\mathcal{R}^{\prime}\) in \((G^{\prime 12},T^{\prime},\mathbf{1})\). Note that originally each terminal \(t\in T\) had an even number of adjacent edges in \(G^{12}\), therefore it serves as an endpoint for an even number of trails in \(\mathcal{P}+\mathcal{Q}+\mathcal{R}\) (counting endpoints of \(\mathcal{R}\) twice). Hence, for any \(t\in T\) the number of added \(t^{\prime}-t\) valencies is even, therefore the underlying graph \(G^{\prime}\) is well-defined and can be constructed by adding half the number of \(t^{\prime}-t\) valencies. Note that odd (resp. even) \(T^{\prime}\)-trails in \(G^{\prime 12}\) correspond to odd (resp. even) \(T\)-trails in \(G^{12}\). Now construct signing \(M^{\prime}\) for \(G^{\prime 12}\) by: turning trails in \(\mathcal{P}^{\prime}\) and \(\mathcal{R}^{\prime}\) into odd alternating trails starting and ending with + valencies; turning trails in \(\mathcal{Q}^{\prime}\) into even alternating trails (in any of two possible ways); turning each (cyclic) trail in \(\mathcal{E}\) alternating (in any of two possible ways). Clearly, such \(M^{\prime}\) is inner balanced w.r.t. \(T^{\prime}\). Also \(M^{\prime}\) is \((p,q)\)-tight. Indeed, \(\mathcal{P}^{\prime}+\mathcal{Q}^{\prime}\) is a \(T^{\prime}\)-trail packing of value \(p+q\). Finally, "-" valencies adjacent to terminals \(T^{\prime}\) correspond to \(T^{\prime}\)-trails in \(\mathcal{Q}^{\prime}\), hence there are exactly \(q\) of them. Hence we proved the following theorem. Given network \((G,T,\mathbf{2})\) with inner Eulerian \(G\) such that the maximum value of an odd \(T\)-walk packing in \((G,T,\mathbf{2})\) is \(p\), it is possible to construct in polynomial time a signed valence network \((G^{\prime 12},M^{\prime},T^{\prime},\mathbf{1})\) with an inner balanced signing \(M^{\prime}\) such that: * \(M^{\prime}\) is \((p,q)\)-tight for some \(q\); * any packing \(\mathcal{P}^{\prime}\) of odd \(T^{\prime}\)-trails in \((G^{\prime},T^{\prime},\mathbf{2})\) can be transformed into a packing \(\mathcal{P}\) of odd \(T\)-trails in \((G,T,\mathbf{2})\) of the same value in polynomial time. 
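The signing step in the construction before the theorem above is purely mechanical: walk along each trail of the decomposition and alternate signs, starting from "+" for the odd trails. A minimal sketch, under the assumption that trails are given as lists of valence identifiers (a hypothetical representation, not the paper's code):

```python
def sign_alternating(trail, start="+"):
    """Assign alternating signs to the valencies of a trail, given as a list of
    valence identifiers.  A trail of odd length signed this way starts and ends
    with '+'; an even trail ends with '-'."""
    signs, s = {}, start
    for valence in trail:
        signs[valence] = s
        s = "-" if s == "+" else "+"
    return signs

def is_inner_balanced(signs, incident, terminals):
    """Check inner balance: at every non-terminal vertex the numbers of '+' and
    '-' valencies coincide.  `incident` maps a vertex to its incident valencies."""
    for v, valencies in incident.items():
        if v in terminals:
            continue
        plus = sum(1 for e in valencies if signs.get(e) == "+")
        minus = sum(1 for e in valencies if signs.get(e) == "-")
        if plus != minus:
            return False
    return True
```

The second helper mirrors the claim that \(M^{\prime}\) is inner balanced: every trail passing through an inner vertex uses one "+" and one "-" valence there.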
### Subcubization In this section we prove that it is sufficient to solve the problem only for graphs with degree of non-terminal vertices not exceeding \(3\), which simplifies the subsequent case splitting. Valence network \((G^{12},T,\mathbf{1})\) is called **inner subcubic**, if \(\deg v\leq 3\) for any \(v\in V(G)-T\). Figure 4: Four possible trail-like components of \(\widetilde{G}\) and their images in \(G^{12}\): an odd \(T\)-trail, an odd cyclic trail passing through terminal, an even \(T\)-trail and an even cyclic trail. Terminals are crosses, other vertices are black dots. Define the **supercubicity** of \(G\) to be \[s(G):=\sum_{v\in V(G)-T}\max\{0,\deg v-3\}.\] Obviously, \(s(G)=0\) for inner subcubic networks. Let \((G^{12},M,T,\mathbf{1})\) be a signed valence network with a \((p,q)\)-tight inner balanced signing \(M\). Apply Lemma 23 to construct an integer packing \(\mathcal{P}+\mathcal{Q}\), where \(\mathcal{P}\) (resp. \(\mathcal{Q}\)) consists of at least \(p\) (resp. at most \(q\)) odd (resp. even) alternating \(T\)-trails. Consider an inner vertex \(v\) of degree \(d\geq 4\). Denote edges incident to \(v\) in \(G\) as \(\delta_{G}(v)=\{e_{1},\ldots,e_{d}\}\). Whenever some trail \(W\) in \(\mathcal{P}+\mathcal{Q}\) passes through \(v\), it contains a pair of consequent valencies corresponding to some edges \(\{e_{i},e_{j}\}\) in \(E(G)\); call \((e_{i},e_{j})\) for \(i\leq j\) an (ordered) **transit pair**. (Note that this ordering of \(e_{i}\) and \(e_{j}\) is not related to the order in which these edges are passed by \(W\).) Clearly, whenever an alternating trail passes through a transit pair, it takes valencies of opposite signs. Valencies corresponding to edges in \(\delta_{G}(v)\) not traversed by any of \(W_{i}\) could also be (arbitrarily) divided into pairs of opposite signs (due to signs balance). Fix some division; it generates (by replacing valencies with their preimages in \(G\)) more pairs \((e_{i},e_{j})\) for \(i\leq j\) that we also regard as transit. Totally we get exactly \(d\) transit pairs. One can partition the set of incident edges \(\delta_{G}(v)\) into two subsets \(L\sqcup R\) such that \(|L|,|R|\geq 2\) and there are at most two transit pairs \((e_{i},e_{j})\) (call them **split transit pairs**) such that \(e_{i},e_{j}\) belong to distinct subsets, i.e. \(e_{i}\in L\), \(e_{j}\in R\) or \(e_{i}\in R\), \(e_{j}\in L\). Proof.: Suppose there exists a transit pair \((e_{i},e_{j})\) for \(i<j\). Define \(L:=\{e_{i},e_{j}\}\) and \(R:=\delta_{G}(v)-L\). Each split transit pair must use another valence of \(e_{i}\) or \(e_{j}\), hence there could be at most two such pairs. On the other hand, if all transit pairs are of the form \((e_{i},e_{i})\), then an arbitrary partition \(L\sqcup R\) with \(|L|,|R|\geq 2\) will do. Construct a new valence network \((G^{\prime},M^{\prime},T^{\prime}=T,\mathbf{1})\) (Figure 5) by replacing vertex \(v\) with three vertices \(u\), \(v^{\prime}\), \(w\) and two edges \(uv^{\prime}\), \(v^{\prime}w\), and also replacing \(v\) with \(u\) in all edges from \(L\) and replacing \(v\) with \(w\) in all edges from \(R\). There are either \(0\) or \(2\) split transit pairs; if there are two of them, extend these trails by inserting valencies of \(uv^{\prime}\) and \(v^{\prime}w\) with suitable signs so that signing \(M^{\prime}\) is inner balanced. If there are no transit pairs, simply make both \(uv^{\prime}\) and \(v^{\prime}w\) have one positive and one negative valence. 
Thus, we also obtain a new packing \(\mathcal{P}^{\prime}+\mathcal{Q}^{\prime}\) in \((G^{\prime 12},M^{\prime},T,\mathbf{1})\), where \(\mathcal{P}^{\prime}\) (resp. \(\mathcal{Q}^{\prime}\)) contains at least \(p\) (resp. at most \(q\)) odd (resp. even) alternating \(T\)-trails not passing through terminals as intermediate vertices. \(s(G^{\prime})=s(G)-1\) for the resulting \(G^{\prime}\). Proof.: First of all, \(\deg v^{\prime}=2<3\), so we do not need to consider \(v^{\prime}\) when calculating the change of supercubicity. Then, \(\deg u=1+|L|\geq 3\) and \(\deg w=1+|R|\geq 3\); also Figure 5: Subcubization at \(v\) \(\deg u+\deg w=2+|L|+|R|=d+2\) and finally \(\deg v=d\). We conclude: \[s(G^{\prime})-s(G)=\max\{0,\deg u-3\}+\max\{0,\deg w-3\}-\max\{0, \deg v-3\}=\] \[(\deg u-3)+(\deg w-3)-(\deg v-3)=(\deg u+\deg w)-\deg v-3=(d+2)-d-3=-1\,.\] Repeat these transformations until there are no more inner vertices with degree more than 3. We obtain an inner subcubic signed valence network \((G^{\prime 12},M^{\prime},T^{\prime}=T,\mathbf{1})\) with an inner balanced signing \(M^{\prime}\). Note that any \(T^{\prime}\)-trail \(W^{\prime}\) in \((G^{\prime 12},T^{\prime},\mathbf{1})\) may easily be transformed into a \(T\)-trail \(W\) in \((G^{12},T,\mathbf{1})\) of the same parity by performing all actions in the reverse order and removing added parts of \(W^{\prime}\), if there are any. The resulting signing \(M^{\prime}\) is \((p,q)\)-tight. Proof.: The total number of "\(-\)" valencies adjacent to terminals does not change during subcubization. Also, packing \(\mathcal{P}^{\prime}+\mathcal{Q}^{\prime}\) has the same value as \(\mathcal{P}+\mathcal{Q}\), i.e. \(p+q\). Hence we proved the following theorem. If \((G^{12},M,T,\mathbf{1})\) is signed valence network with a \((p,q)\)-tight inner balanced signing \(M\), it is possible to construct a signed valence network \((G^{\prime 12},M^{\prime},T^{\prime}=T,\mathbf{1})\) with a \((p,q)\)-tight inner balanced signing \(M^{\prime}\) such that: * \(G^{\prime}\) is inner subcubic; * any packing \(\mathcal{P}^{\prime}\) of odd \(T^{\prime}\)-trails in \((G^{\prime},T^{\prime},\mathbf{2})\) may be transformed into packing \(\mathcal{P}\) of odd \(T\)-trails in \((G,T,\mathbf{2})\) of the same value in polynomial time. ### Regularization Let \((G^{12},M,T,\mathbf{1})\) be an inner subcubic signed valence network with a \((p,q)\)-tight inner balanced signing \(M\). Construct an integer packing \(\mathcal{P}+\mathcal{Q}\) of at least \(p\) odd alternating \(T\)-trails (denoted by \(\mathcal{P}\)) and at most \(q\) even alternating \(T\)-trails (denoted by \(\mathcal{Q}\)) using Lemma 4. Suppose there is edge \(xy\) in \(E(G)\) that is irregular for some \(T\)-trail \(W\) in \(\mathcal{P}+\mathcal{Q}\), i.e. \(W\) traverses both of \(xy\)'s valencies in \(G^{12}\). Denote the fragment of \(W\) between two occurrences of valencies of \(xy\) (but not including them) by \(C\). Note that \(xy\) is not adjacent to any terminal since all \(T\)-trails in \(\mathcal{P}+\mathcal{Q}\) are assumed to avoid passing through terminals as intermediate vertices. Consider cases as follows: **Case 1** (Figure 5(a)): valencies of \(xy\) have opposite signs and \(W\) traverses them in the same direction. Simplify \(W\) by dropping occurrences of both of these valencies. **Case 2** (Figure 5(b)): valencies of \(xy\) have opposite signs and \(W\) traverses them in the opposite directions. 
Simplify \(W\) by dropping occurrences of both of these valencies together with \(C\). **Case 3** (Figure 5(c)): valencies of \(xy\) have same signs and \(W\) traverses them in the same direction. Simplify \(W\) by dropping one of the occurrences of these valencies together with \(C\). **Case 4** (Figures 5(d) and 5(e)): valencies of \(xy\) have same signs and \(W\) traverses them in the opposite directions. Assume that \(xy\) is chosen such that \(C\) is the shortest possible. W.l.o.g. \(y\) belongs to \(C\). Finally assume that both valencies of \(xy\) are \(\star\); the remaining case is done analogously. In Case 4, \(\deg y=3\) and \(C\) starts with a negative valence of some edge \(yu\) and terminates with a negative valence of some edge \(vy\) with \(u\neq v\). Proof.: If \(\deg y=1\), the inner Eulerianess of \(y\) is contradicted as both of its adjacent valencies are positive. If \(\deg y=2\), two valencies of the remaining adjacent edge are both negative and \(W\) must follow both of them in order to be alternating. This contradicts the choice of \(xy\) with the shortest \(C\). Finally \(\deg y=3\) and \(C\) starts with some valence of \(yu\) and terminates with some valence of \(vy\), both of which are negative. If \(u=v\), this would again contradict the choice of \(xy\) with the shortest \(C\). Consider two remaining valencies of \(yu\) and \(yv\). For signing to be balanced at \(y\), one of them must be + and another must be -. Therefore, one of \(yu\) and \(yv\) has both a + and a - valence; assume it is \(yu\), the other case is done analogously. Call \(yu\)**redundant** and obtain a new signed valence network \((G^{\prime 12},M^{\prime},T^{\prime}=T,\mathbf{1})\) by removing \(yu\) in \(G^{\prime}\). **Lemma 32**.: _New signing \(M^{\prime}\) is inner balanced and \((p,q)\)-tight._ Proof.: Signing \(M^{\prime}\) is inner balanced since we remove two valencies of the same edge of opposite signs. Also, the removed edge is not adjacent to a terminal, therefore the total number of "-" valencies adjacent to terminals is preserved. Let us prove that a packing of \(T\)-trails of value at least \(p+q\) still remains. Namely, we alter \(\mathcal{P}+\mathcal{Q}\) so that none of its \(T\)-trails passes through valencies of the removed edge \(yu\). W.l.o.g. let \(e^{1}\) be the valence of \(e=yu\) that is the initial or the final valence of \(C\). Alter \(W\) by removing both valencies of \(xy\) and \(C\), obtaining a (non-alternating) subtrail \(W^{\prime}\) avoiding \(e^{1}\). Note that the remaining valence \(e^{2}\) may either: (i) not belong to any trail in \(\mathcal{P}+\mathcal{Q}\); (ii) belong to \(C\); (iii) belong to the same trail \(W\) outside of \(C\); (iv) belong to another trail in \(\mathcal{P}+\mathcal{Q}\). In subcases (i,ii) (Figure 6d) \(e^{2}\) is no longer used by any trail in \(\mathcal{P}+\mathcal{Q}\); replace \(W\) with \(W^{\prime}\). In subcases (iii,iv) (Figure 6e), consider trail containing \(e^{2}\) and replace \(e^{2}\) in it with the \(y-u\) fragment of \(C\) that is different from \(e^{1}\) (note that \(C\) is not used by \(W^{\prime}\) anymore). Figure 6: Regularization cases. Repeat the procedure until no more irregular edges exist. In Cases 1-3 the signed valence graph does not change, but the total length of odd \(T\)-trails in the packing decreases. Therefore, this step may be iterated until either there are no irregular edges or Case 4 happens and we obtain a new signed graph \((G^{\prime 12},M^{\prime})\). 
In other words, \((|E(G)|,L)\), where \(L\) is the total length of \(T\)-trails in \(\mathcal{P}\), decreases lexicographically in each case. Thus the total number of iterations is polynomial. Let us summarize the result of this section by the following theorem. If \((G^{12},M,T,\mathbf{1})\) is an inner subcubic signed valence network with a \((p,q)\)-tight inner balanced signing \(M\), then it is possible to construct an integer packing of odd \(T\)-trails in \((G,T,\mathbf{2})\) of value at least \(p\) in polynomial time. ### Concluding the proof Proof of Theorem 4.: Let \((G,T,\mathbf{2})\) be a inner Eulerian network and let \(p\) be the value of a maximum odd \(T\)-walk packing in it. Apply Theorem 4.2 to construct a signed valence network \((G^{\prime 12},M^{\prime},T^{\prime},\mathbf{1})\) with a \((p,q)\)-tight inner balanced signing \(M^{\prime}\) (for some \(q\)). By Theorem 4.2 the latter network can be replaced by a subcubic signed valence network \((G^{\prime 12},M^{\prime\prime},T^{\prime\prime},\mathbf{1})\) with a \((p,q)\)-tight inner balanced signing \(M^{\prime\prime}\). Now Theorem 4.2 implies the existence of packing \(\mathcal{P}^{\prime\prime}\) of odd \(T^{\prime\prime}\)-trails of value \(p\) in \((G^{\prime\prime},T^{\prime\prime},\mathbf{2})\). Finally reverse the changes applied to network: \(\mathcal{P}^{\prime\prime}\) gives rise to packing \(\mathcal{P}^{\prime}\) of odd \(T^{\prime}\)-trails of the same value \(p\) in \((G^{\prime},T^{\prime},\mathbf{2})\) (by Theorem 4.2); in its turn, \(\mathcal{P}^{\prime}\) generates packing \(\mathcal{P}\) of odd \(T\)-trails of value \(p\) in \((G,T,\mathbf{2})\) (by Theorem 4.2), as needed. Note that all of the above steps take polynomial time.
2306.02622
What Makes Entities Similar? A Similarity Flooding Perspective for Multi-sourced Knowledge Graph Embeddings
Joint representation learning over multi-sourced knowledge graphs (KGs) yields transferable and expressive embeddings that improve downstream tasks. Entity alignment (EA) is a critical step in this process. Despite recent considerable research progress in embedding-based EA, how it works remains to be explored. In this paper, we provide a similarity flooding perspective to explain existing translation-based and aggregation-based EA models. We prove that the embedding learning process of these models actually seeks a fixpoint of pairwise similarities between entities. We also provide experimental evidence to support our theoretical analysis. We propose two simple but effective methods inspired by the fixpoint computation in similarity flooding, and demonstrate their effectiveness on benchmark datasets. Our work bridges the gap between recent embedding-based models and the conventional similarity flooding algorithm. It would improve our understanding of and increase our faith in embedding-based EA.
Zequn Sun, Jiacheng Huang, Xiaozhou Xu, Qijin Chen, Weijun Ren, Wei Hu
2023-06-05T06:50:09Z
http://arxiv.org/abs/2306.02622v1
What Makes Entities Similar? A Similarity Flooding Perspective for Multi-sourced Knowledge Graph Embeddings ###### Abstract Joint representation learning over multi-sourced knowledge graphs (KGs) yields transferable and expressive embeddings that improve downstream tasks. Entity alignment (EA) is a critical step in this process. Despite recent considerable research progress in embedding-based EA, how it works remains to be explored. In this paper, we provide a similarity flooding perspective to explain existing translation-based and aggregation-based EA models. We prove that the embedding learning process of these models actually seeks a fix-point of pairwise similarities between entities. We also provide experimental evidence to support our theoretical analysis. We propose two simple but effective methods inspired by the fixpoint computation in similarity flooding, and demonstrate their effectiveness on benchmark datasets. Our work bridges the gap between recent embedding-based models and the conventional similarity flooding algorithm. It would improve our understanding of and increase our faith in embedding-based EA. Machine Learning, Knowledge Graph Embedding, Knowledge Graph Embedding ## 1 Introduction A knowledge graph (KG) is a set of relational triplets. Each triplet is in the form of (_subject entity_, _relation_, _object entity_), denoted by \((s,r,o)\) for short. A relational triplet indicates a relation between two entities, such as (_ICML 2023_, _host in_, _Hawai_). Different KGs are created by harvesting various webs of data. They could cover complementary knowledge from different sources and thus aid in resolving the incompleteness issue of each single KG. In recent years, representing multi-sourced KGs in a unified embedding space, as illustrated in Figure 1, has shown promising potential in promoting knowledge fusion and transfer (Trivedi et al., 2018). It uses entity alignment (EA) between different KGs to jump-start joint and transferable representation learning. EA refers to the match of identical entities from different KGs, such as "_ICML_" and "_International Conference on Machine Learning_". The goal of multi-sourced KG embedding is learning to distinguish between identical and dissimilar entities in different KGs while capturing their respective graph structures. By aligning the embeddings of identical entities, an entity in one KG can indirectly capture the graph structures of its counterpart in another KG, resulting in more informative representations to benefit downstream tasks. Therefore, as a fundamental task, embedding-based EA has drawn increasing attention (Chen et al., 2017; Guo et al., 2019; Sun et al., 2020; Zhao et al., 2022; Zeng et al., 2021; Zhang et al., 2022; Guo et al., 2022). The key of embedding-based EA lies in how to generate entity embeddings for alignment learning. Existing techniques fall into two groups, translation-based (Chen et al., 2017; Sun et al., 2017, 2019) and aggregation-based models. A translation-based model adopts TransE (Bordes et al., 2013) or its variants for embedding learning. Given a triplet \((s,r,o)\), TransE interprets a relation embedding as the translation vector from the subject entity embedding to the object entity. Another group of EA models uses graph convolutional networks (GCNs) (Kipf and Welling, 2017) to generate an entity representation by aggregating its neighbor embeddings. 
Despite the considerable technical progress in embedding-based EA, a critical question remains unanswered, i.e., _what makes entity embeddings similar in an EA model?_ This Figure 1: Illustration of representing two KGs in a unified space. question may also cause some researchers to misunderstand and distrust embedding-based EA techniques. Besides, the connection between embedding-based EA models and traditional symbolic methods remains unexplored. Under the circumstances, we seek to answer the question. We present a similarity flooding (SF) perspective to understand and improve embedding-based EA with both theoretical analysis and experimental evidence. SF is a widely-used algorithm for matching structured data (Melnik et al., 2002, 2001). We show that the essence of recent embedding-based EA is also a variant of SF, and learning embeddings is only a means. Our main contributions are summarized as follows: * We present the first theoretical analysis of embedding-based EA techniques to understand how they work. We provide a similarity flooding perspective to unify the basic translation- and aggregation-based EA models. We also build a close connection between embedding-based and traditional symbolic-based EA via the unified perspective of fixpoint computation for entity similarities. This work would improve our understanding of and increase our faith in embedding-based models. * We propose two simple but effective methods based on our theoretical analysis to improve EA. The first is a variant of similarity flooding that computes the fixpoint of entity similarities using the entity compositions induced from TransE or GCN. This method does not need to learn KG embeddings. The second is inspired by the fact that the similarity fixpoint indicates an embedding fixpoint. It introduces a self-propagation connection in neighborhood aggregation to let entity embeddings have a chance of propagating back to themselves. * We conduct experiments on DBP15K (Sun et al., 2017) and OpenEA (Sun et al., 2020) to validate the effectiveness of our EA methods and provide experimental evidence to support our theoretical conclusions. The source code is available at our GitHub repository1. Footnote 1: [https://github.com/nju-websoft/Unify-EA-SF](https://github.com/nju-websoft/Unify-EA-SF) ## 2 Preliminaries We first introduce the EA task, and then discuss the basic translation-based and aggregation-based models. We would like to know how to represent an entity in these models so that we can learn more about what factors influence entity similarities. Finally, we introduce the SF algorithm. ### Problem Definition Formally, let \(\mathcal{X}\) and \(\mathcal{Y}\) be the entity sets of the source and target KGs, respectively. In the supervised setting, we are given a set of seed entity alignment pairs \(\mathcal{A}\) as training data. For an aligned entity pair \((x,y)\in\mathcal{A}\), where \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\), the KG embeddings for EA are expected to hold: \(\mathbf{x}=\arg\min_{x^{\prime}\in\mathcal{X}}\pi(\mathbf{x}^{\prime}, \mathbf{y})\). Hereafter, we use boldface type to denote vector embeddings, e.g., \(\mathbf{x}\) and \(\mathbf{y}\) for the embeddings of \(x\) and \(y\), respectively. \(\pi(\mathbf{x},\mathbf{y})\) is a distance measure. In this paper, we consider the Euclidean distance, i.e., \(\pi(\mathbf{x},\mathbf{y})=||\mathbf{x}-\mathbf{y}||_{2}\), where \(||\cdot||_{2}\) denotes the \(L_{2}\) vector norm. 
It indicates that, if \(x\) and \(y\) are aligned entities, \(\mathbf{y}\) is expected to be the nearest cross-KG neighbor of \(\mathbf{x}\) in the embedding space. To achieve this goal, given a small set of seed alignment, \(\mathcal{A}\subset\{(x,y)|x\equiv y\}\), as training data, the general objective of alignment learning is to minimize the embedding distance of entity pairs in \(\mathcal{A}\)(Chen et al., 2017): \[\min_{(x,y)\in\mathcal{A}}\pi(\mathbf{x},\mathbf{y}). \tag{1}\] Although many models introduce various negative sampling methods (Sun et al., 2018) to generate dissimilar entity pairs and learn to separate the embeddings of dissimilar entities, Eq. (1) is the most common and indispensable learning objective, which is our focus in this paper. ### TransE-based EA The typical learning objective of TransE-based models is to solve two optimization problems, i.e., translational embedding learning and alignment learning, as shown in Eqs. (1) and (2), respectively. \[\min_{(s,r,o)\in\mathcal{T}}\|\mathbf{s}+\mathbf{r}-\mathbf{o}\|_{2}^{2}, \quad\text{s.t.}\quad\|\mathbf{e}\|_{2}^{2}=1,\,\forall\,e\in\mathcal{X}\cup \mathcal{Y}, \tag{2}\] where \(\mathcal{T}\) is the set of triplets and \(e\) denotes an entity. Considering that the two optimization problems have a trivial optimal solution with all entities and relations having zero vectors, most EA models normalize each entity embedding to a unit vector. Therefore, we further introduce a Lagrange term \(\lambda_{e}\sum_{e\in\mathcal{X}\cup\mathcal{Y}}(||\mathbf{e}||_{2}^{2}-1)\), where \(\lambda_{e}\) is the Lagrange multiplier. The combined optimization problem is \[\small\mathcal{L}(\mathbf{\Theta})=\sum_{(s,r,o)\in\mathcal{T}}\|\mathbf{s} +\mathbf{r}-\mathbf{o}\|_{2}^{2}+\sum_{(x,y)\in\mathcal{A}}\|\mathbf{x}- \mathbf{y}\|_{2}^{2}+\lambda_{e}\sum_{e\in\mathcal{X}\cup\mathcal{Y}}(|| \mathbf{o}||_{2}^{2}-1), \tag{3}\] where \(\mathbf{\Theta}\) denotes the entity and relation embeddings. The optimization problem then shifts to solving the following equation: \(\bigtriangledown_{\mathbf{\Theta},\lambda_{e}}\mathcal{L}(\mathbf{\Theta})= \mathbf{0}\). Then, we can derive the representations of relations and entities in the model. **Deriving relation representations.** We first consider relation embeddings and take the relation \(r\) as an example. We are interested in the gradients of the loss in Eq. (3) with respect to \(r\): \(\bigtriangledown_{r}\mathcal{L}(\mathbf{\Theta})=\bigtriangledown_{r}\sum_{( s,r,o)\in\mathcal{T}}\|\mathbf{s}+\mathbf{r}-\mathbf{o}\|_{2}^{2}\), where \(\mathcal{T}_{r}\) denotes the set of triplets involving \(r\). Letting the above derivative be zero, we can derive \[\mathbf{r}=\frac{1}{|\mathcal{T}_{r}|}\sum_{(s,r,o)\in\mathcal{T}_{r}}(\mathbf{ o}-\mathbf{s}). \tag{4}\] The equation aligns with the motivation of TransE that represents a relation as the translation vector between its subject and object entity embeddings. Given this equation, we can use the final entity embeddings to represent a relation. **Deriving entity representations.** An entity may appear as the subject or object in a triplet. To simplify the formulations without information loss, we introduce reverse triplets following the convention in KG embedding models (Guo et al., 2019). For each triplet \((s,r,o)\), we add a new triplet \((o,r^{-1},s)\), where \(r^{-1}\) denotes the reverse relation of \(r\). In this way, we only need to consider the outgoing edges of an entity, i.e., the triplets with the given entity as the subject. 
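To make the reverse-triplet convention and Eq. (4) concrete, here is a minimal illustrative sketch in Python; the toy data, the helper names and the "^-1" suffix for reverse relations are illustrative choices and not part of any released implementation.

```python
import numpy as np

def add_reverse_triplets(triplets):
    # For every (s, r, o) also add (o, r^-1, s), so that afterwards only
    # outgoing edges have to be considered; the "^-1" naming is illustrative.
    return triplets + [(o, r + "^-1", s) for (s, r, o) in triplets]

def relation_embedding(rel, triplets, ent_emb):
    # Eq. (4): a relation is the average translation o - s over the set
    # T_r of triplets in which it appears.
    diffs = [ent_emb[o] - ent_emb[s] for (s, r, o) in triplets if r == rel]
    return np.mean(diffs, axis=0)

# Toy usage with random entity embeddings.
triplets = add_reverse_triplets([("a", "r1", "b"), ("c", "r1", "d")])
entity_embeddings = {e: np.random.randn(4) for e in "abcd"}
print(relation_embedding("r1", triplets, entity_embeddings))
```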
The original ingoing edges are considered by including their reverse edges. We use \(\mathcal{T}_{e}\) to denote the triplets with \(e\) as the subject. Specifically, given entity \(e\), we are interested in the gradients of the loss in Eq. (3) with respect to embedding \(\mathbf{e}\), i.e., \(\bigtriangledown_{\mathbf{e}}\mathbf{C}(\mathbf{\Theta})=\bigtriangledown_{ \mathbf{e}}\sum_{(e,r)\in\mathcal{T}_{e}}\|\mathbf{e}+\mathbf{r}-\mathbf{o} \|_{2}^{2}+\mathbb{1}_{\exists(e,\hat{e})\in\mathcal{A}}\bigtriangledown_{ \mathbf{e}}\|\mathbf{e}-\hat{\mathbf{e}}\|_{2}^{2}+\mathbb{\alpha}_{e}\bigtriangledown _{\mathbf{e}}(||\mathbf{e}||_{2}^{2}-1)\), where \(\mathbb{1}\) is an indicator function. By setting the gradients to be zero vectors, we obtain \(\mathbf{e}=\frac{1}{|\mathcal{T}_{e}|+\lambda_{e}}\sum_{(e,r,o)\in\mathcal{T}_ {e}}(\mathbf{o}-\mathbf{r})+\mathbb{1}_{\exists(e,\hat{e})\in\mathcal{A}}( \mathbf{e}-\hat{\mathbf{e}})\). With proper EA training strategies, e.g., parameter sharing (Sun et al., 2020), \(e\) and \(\hat{e}\) would have the same embeddings, i.e., \(\mathbf{e}-\hat{\mathbf{e}}=\mathbf{0}\). In addition, we can apply normalization to \(\mathbf{e}\) to ensure \(||\mathbf{e}||=1\), and then we can replace \(|\mathcal{T}_{e}|+\lambda_{e}\) with \(|\mathcal{T}_{e}|\). Thus, we obtain \(\mathbf{e}=\frac{1}{|\mathcal{T}_{e}|}\sum_{(e,r,o)\in\mathcal{T}_{e}}( \mathbf{o}-\mathbf{r})\). Note that, in this equation, we still need relation embeddings to represent an entity. To get free of relation embeddings, we can replace them with the composition of related subject and object entity embeddings as shown in Eq. (4), and get \[\mathbf{e}=\frac{1}{|\mathcal{T}_{e}|}\sum_{(e,r,o)\in\mathcal{T}_{e}}\Big{(} \mathbf{o}-\frac{1}{|\mathcal{T}_{r}|}\sum_{(s^{\prime},r,o^{\prime})\in \mathcal{T}_{r}}(\mathbf{o}^{\prime}-\mathbf{s}^{\prime})\Big{)}. \tag{5}\] In this way, we represent an entity by the composition of its related entities in the same KG. ### GCN-based EA In a CGN-based EA method, an entity is first represented by aggregating its neighbors. For brevity, we consider a one-layer GCN layer (Kipf and Welling, 2017) with mean-pooling as the aggregation function, i.e., \(G(x)=\frac{1}{|N(x)|}\sum_{x^{\prime}\in N(x)}\mathbf{x}^{\prime}\). The entity representation in GCNs is: \[\mathbf{e}=\frac{1}{|N(e)|}\sum_{e^{\prime}\in N(e)}\mathbf{e}^{\prime}. \tag{6}\] Then, given the output representations, we minimize the embedding distance of identical entities in seed entity alignment for alignment learning, as shown in Eq. (1). Finally, we use \(k\)NN search to find the counterpart for a given entity. ### Similarity Flooding Similarity flooding (Melnik et al., 2002) is an iterative graph matching technique based on fixpoint computation. It is a fundamental algorithm and widely used in a variety of graph matching contexts, such as ontology mapping and database schema matching (Shoviako and Euzenat, 2013). Given two input graphs \(G_{1}\) and \(G_{2}\) with the aim of finding the mapping of identical nodes, the similarity flooding algorithm first creates a pairwise connectivity graph (PCG), which is an auxiliary data structure for similarity propagation. As shown in Figure 2, in a PCG, a node is an entity pair \((x_{1},y_{1})\) with the similarity of \(\sigma(x_{1},y_{1})\) (called a mapping pair), where the two entities are from the two graphs, respectively, i.e., \(x_{1}\in G_{1}\) and \(y_{1}\in G_{2}\). 
An edge \(\big{(}(x_{1},y_{1}),r_{1},(x_{2},y_{2})\big{)}\) of the PCG is induced from the two graphs having \((x_{1},r_{1},x_{2})\in G_{1}\) and \((y_{1},r_{1},y_{2})\in G_{2}\). The relation \(r_{1}\) would be further given a weight, called the propagation coefficient, which ranges from \(0\) to \(1\) and can be computed in different ways (Melnik et al., 2001). The directed weighted edge \(\big{(}(x_{1},y_{1}),r_{1},(x_{2},y_{2})\big{)}\) indicates how well the similarity of \((x_{1},y_{1})\) propagates to its neighbor \((x_{2},y_{2})\). Then, the algorithm propagates the similarity of each node (i.e., mapping pair) over the PCG using fixpoint computation and finally outputs the node mappings. The fixpoint formula for similarity flooding is \[\Omega=\texttt{normalize}\big{(}\Omega_{0}+\Omega+\varphi(\Omega_{0}+\Omega) \big{)}, \tag{7}\] where \(\Omega_{0}\) is the node similarity matrix, and \(\varphi\) is the propagation function. In conventional graph matching methods, \(\Omega_{0}\) can be computed by string matching. In our work, we follow the supervised setting of embedding-based EA, and use seed entity alignment to initialize \(\Omega_{0}\). _Remark 2.1_.: The pairwise connectivity graph construction requires the alignment of edge labels in the two graphs. _Remark 2.2_.: The propagation coefficients of edges in the pairwise connectivity graph are computed heuristically. ## 3 Connecting Embedding-based EA and SF Given the derived entity representations from TransE or GCN, we can compute entity similarities. Specifically, given two entity sets, \(\mathcal{X}=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(\mathcal{Y}=\{y_{1},y_{2},\ldots,y_{m}\}\), we denote the derived entity representations by \(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\}\) and \(\{\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{m}\}\), respectively. Their pairwise similarity matrix is \[\Omega=(\mathbf{x}_{1};\mathbf{x}_{2};\ldots;\mathbf{x}_{n})^{\top}(\mathbf{y}_ {1};\mathbf{y}_{2};\ldots;\mathbf{y}_{m})\in\mathbb{R}^{n\times m}. \tag{8}\] Figure 2: Illustration of how to build the pairwise connectivity graph given two graphs (redrawn based on (Melnik et al., 2002)). The similarity matrix determines entity alignment pairs. ### Unifying TransE- and GCN-based EA **Theorem 3.1**.: _The TransE-based EA model seeks a fixpoint of pairwise entity similarities via embedding learning._ Proof.: Eq. (5) shows that we can represent an entity with a composition of other entities. Thus, we can represent an entity (taking \(x_{i}\in\mathcal{X}\) for example) as \[\mathbf{x}_{i}=\lambda_{i,1}\mathbf{x}_{1}+\lambda_{i,2}\mathbf{x}_{2}+\cdots+ \lambda_{i,n}\mathbf{x}_{n}=\sum_{k=1}^{n}\lambda_{i,k}\mathbf{x}_{k}, \tag{9}\] where \(\lambda_{i,k}\) denotes the composition coefficient of entities \(x_{i}\) and \(x_{k}\), which can be computed from Eq. (5). Then, the similarity of entities \(x_{i}\in\mathcal{X}\) and \(y_{j}\in\mathcal{Y}\) can be calculated using the inner product2 as follows: Footnote 2: The inner product of two normalized vectors is equal to their cosine similarity. \[\omega_{i,j}=\mathbf{x}_{i}\cdot\mathbf{y}_{j}=\sum_{k=1}^{n}\sum_{l=1}^{m} \lambda_{i,k}\lambda_{j,l}^{\prime}\mathbf{x}_{k}\cdot\mathbf{y}_{l}=\sum_{k =1}^{n}\sum_{l=1}^{m}\lambda_{i,k}\lambda_{j,l}^{\prime}\omega_{k,l}. \tag{10}\] We can see from the above equation how the similarity of two entities affects their related neighbors. 
Let the matrix \(\Lambda=(\lambda_{i,j})_{i=1,j}^{n,m}=1\) consist of the lambda values for the source KG, and \(\Lambda^{\prime}=(\lambda_{i,j}^{\prime})_{i=1,j}^{m,m}\) for the target KG. Let \(\Omega=(\omega_{i,j})_{i=1,j}^{n,m}\) denote the pairwise entity similarities of the two KGs. We can rewrite Eq. (10) as \[\Lambda\Omega(\Lambda^{\prime})^{\top}=\Omega, \tag{11}\] where \((\Lambda^{\prime})^{\top}\) is the transposed matrix of \(\Lambda^{\prime}\). This equation shows that the entity embedding similarities learned by the translation-based model have a fixpoint of \(\Omega\). Next, we can connect aggregation-based EA models and similarity flooding in a similar way. **Theorem 3.2**.: _The GCN-based EA model seeks a fixpoint of pairwise entity similarities via embedding learning._ Proof.: Please refer to the proof of Theorem 3.1. The difference lies in how to compute the lambda values. We show how to calculate lambda values below. **Lambda values for TransE.** The lambda values for TransE are computed by counting the number of related triples: \[\lambda_{i,j}=\frac{1}{|\mathcal{T}_{x_{i}}|}\Big{(}|R(x_{i},x_{j})|+\sum_{r\in R }\frac{|\mathcal{T}_{x_{i},r}|}{|\mathcal{T}_{x_{i}}|}|(|\mathcal{T}_{x_{j},r} |-|\mathcal{T}_{x_{j},r^{-1}}|)\Big{)}, \tag{12}\] where \(R(x_{i},x_{j})\) denotes the set of relations that connect entities \(x_{i}\) and \(x_{j}\), and \(R\) is the set of all relations. \(\mathcal{T}_{x_{i},r}\) denotes the set of relation triplets with \(x_{i}\) as the subject and \(r\) as the relation. \(\mathcal{T}_{r}\) is the set of relation triplets with \(r\) as the relation. \(r^{-1}\) is the reverse relation for \(r\). **Lambda values for GCN.** For GCN, we have \[\lambda_{i,j}=\frac{\mathbb{I}_{(x_{i},r,x_{j})\in\mathcal{T}}}{|\mathcal{T}_{x _{i}}|}, \tag{13}\] where \(\mathbb{I}\) is an indicator function that returns 1 if there is a relation between \(x_{i}\) and \(x_{j}\), and 0 otherwise. _Remark 3.3_.: Embedding learning is just a means and the objective is to seek a fixpoint of pairwise entity similarities. ### An Interpretation of Embedding-based EA Given the fixpoint view of EA, we further discover a mathematical interpretation of TransE- and GCN-based models. We show that identical entities have isomorphic structures in the entity compositions of embedding-based EA. **Theorem 3.4**.: _The entity alignment pairs found by the above embedding-based models yield a function \(f:\{1,2,\ldots,n\}\rightarrow\{0,1,2,\ldots,m\}\), such that \(\forall i,j,f(i)>0\wedge f(j)>0\rightarrow\lambda_{f(i),f(j)}^{\prime}\approx \lambda_{i,j}\)._ Proof.: Let us consider aligning \(\mathcal{Y}\) with itself. We have \[\Lambda^{\prime}\mathbf{I}_{m}(\Lambda^{\prime})^{\top}\approx\mathbf{I}_{m}, \tag{14}\] where \(\mathbf{I}_{m}\) is an identity matrix. Suppose that the alignment found by the above embedding-based models is \(\hat{\mathcal{A}}\), which can be denoted by a 0-1 matrix \(\hat{\Omega}\) such that \(\hat{\omega}_{i,j}=1\) if and only if \((x_{i},x_{j})\in\hat{\mathcal{A}}\). Similar to most EA settings, we assume that in \(\hat{\mathcal{A}}\), each entity is aligned to at most one entity in another KG. Notice that \(\hat{\Omega}\) approximately equals a fixpoint of Eq. (7). 
Thus, we have \[\hat{\Omega}^{\top}\Lambda\hat{\Omega}(\Lambda^{\prime})^{\top}\approx\hat{ \Omega}^{\top}\hat{\Omega}=\hat{\mathbf{I}}_{m}, \tag{15}\] where \(\hat{\mathbf{I}}_{m}\) is a diagonal matrix, where \(\hat{\mathbf{I}}_{j,j}=1\) if and only if \(y_{j}\) appears in one pair in \(\hat{\mathcal{A}}\). As \(\Lambda^{\prime}\mathbf{I}_{m}(\Lambda^{\prime})^{\top}\approx\mathbf{I}_{m}\), we have \(\hat{\Omega}^{\top}\Lambda\hat{\Omega}\approx\hat{\mathbf{I}}_{m}\Lambda^{\prime}\). Let \(f\) be a function defined as \[f(i)=\begin{cases}j,&(x_{i},y_{j})\in\hat{\mathcal{A}}\\ 0,&\forall y_{j}\in\mathcal{Y},(x_{i},y_{j})\not\in\mathcal{A}\end{cases}. \tag{16}\] When \(f(i)>0\) and \(f(j)>0\), we have \((\hat{\Omega}^{\top}\Lambda\hat{\Omega})_{f(i),f(j)}=\lambda_{i,j}\), i.e., \(\lambda_{f(i),f(j)}^{\prime}\approx\lambda_{i,j}\). Based on Theorem 3.4, we find that for each KG, the entity compositions derived from EA models generate a matrix (e.g., \(\Lambda\)) that only depends on graph structures. It finds a mapping function that makes the two KGs' matrices the same and this function determines the alignment results. Although different KGs may have heterogeneous structures, the entity compositions in embedding-based EA models reconstruct a new structure (represented by \(\Lambda\)), in which aligned entities have isomorphic subgraphs. _Remark 3.5_.: If we view these matrices as edge weights between nodes in KGs, these embedding-based EA models mathematically conduct graph matching. ## 4 Experimental Evidence In this section, we propose two methods to improve EA: similarity flooding via entity compositions and self-propagation in GCNs. We evaluate them on benchmark datasets, providing experimental evidence to support our theorem. ### Similarity Flooding via Entity Compositions We have shown by Eqs. (5) and (6) that the entity representations derived from TransE- and GCN-based models can be reformulated to be independent from relations. Our theorems in Section 3 show that entity similarities are determined by other entity similarities, and entity embeddings are unnecessary in this computation. Then, a natural question arises: _Is the embedding learning process a prerequisite for achieving the fixpoint of entity similarities_? Given Eq. (11), we design a similarity flooding style algorithm to propagate the entity similarities that are computed based on the entity composition representations induced from an embedding model. It is presented in Algorithm 1. We first derive the entity compositions from the embedding model. Then, we calculate the lambda values \(\Lambda\) and \(\Lambda^{\prime}\) in the compositions. We initialize the similarity matrix \(\Omega\) to be a zero matrix and set the values that indicate seed EA similarities to be 1. The similarity matrix is further updated to achieve the fixpoint as shown in Eq. (11). After each update, we normalize the values to range from \(-1\) to \(1\). The computation is performed in an iterative manner until it converges or reaches the maximum number of iterations. We hereby discuss the advantages of the algorithm. First, it does not need relation alignment. It represents an entity without using relations and does not need to build the PCG. As different KGs usually have heterogeneous schemata, it is difficult to obtain the accurate relation alignment. By contrast, the conventional similarity flooding algorithm relies on relation alignment to build the PCG. 
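Before turning to the remaining advantages below, the following minimal sketch illustrates the iteration described in Algorithm 1; the function and variable names are illustrative, and the toy lambda matrices merely stand in for the values produced by Eqs. (12) and (13).

```python
import numpy as np

def flood_similarities(lam_src, lam_tgt, seed_pairs, max_iters=100, tol=1e-6):
    # Sketch of Algorithm 1: start from the seed alignment similarities and
    # iterate Eq. (11), Omega <- Lambda @ Omega @ Lambda'^T, rescaling the
    # entries into [-1, 1] after every update.
    omega = np.zeros((lam_src.shape[0], lam_tgt.shape[0]))
    for i, j in seed_pairs:
        omega[i, j] = 1.0
    for _ in range(max_iters):
        new = lam_src @ omega @ lam_tgt.T
        new = new / (np.abs(new).max() + 1e-12)
        if np.abs(new - omega).max() < tol:
            return new
        omega = new
    return omega

# Toy usage: lambda matrices of two tiny (identical) triangle graphs, with
# row entries 1/|T_e| on the neighbours as in Eq. (13).
lam_a = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
lam_b = lam_a.copy()
omega = flood_similarities(lam_a, lam_b, seed_pairs=[(0, 0)])
print(np.round(omega, 3))
```

Dividing by the largest absolute entry is one simple way of keeping the values in \([-1,1]\); the algorithm only requires that the entries stay in that range after each update.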
Second, our algorithm does not need to compute propagation coefficients for similarity flooding. The lambda values act as "propagation coefficients", but they are calculated by counting the number of related triplets without using heuristic methods. Third, our algorithm does not need to learn embeddings, but it needs an embedding model to derive the entity compositions. Our optimization objective is to directly achieve the fixpoint of entity similarities. Embedding-based models seek this goal by an indirect way of updating embeddings. Our algorithm has the disadvantage of requiring matrix manipulation. If the KG scale is large, it would consume a lot of memory. We can solve this problem by using advanced and parallel matrix manipulation implementations. #### 4.1.1 Evaluation We implement two variants of our algorithm, namely TransFood and GCNFfood, using TransE and GCN, respectively. **Baselines.** We choose the translation-based model MTransE and aggregation-based model GCN-Align as baselines. * **MTransE** (Chen et al., 2017) is one of the earliest studies that explore translational embeddings for EA. It uses TransE (Bordes et al., 2013) to learn the entity embeddings of two KGs meanwhile learning a linear mapping to find identical entities. * **GCN-Align** (Wang et al., 2018) is the first work that considers GCNs for KG EA. It employs the vanilla GCN (Kipf and Welling, 2017) to generate entity embeddings and uses the marginal ranking loss with uniform negative sampling for alignment learning. **Datasets.** We consider two datasets in our experiment. One is the widely-used dataset DBP15K (Sun et al., 2017) It aims to align the cross-lingual entities extracted from DBpedia (Lehmann et al., 2015). It has three EA settings: ZH-EN (Chinese-English), JA-EN (Japanese-English) and FR-EN (French-English). The triples in these KGs are extracted from the infobox data of multilingual Wikipedia. They have similar rather than identical schemata because the data is not mapped to a unified ontology. Each setting has \(15,000\) pairs of identical entities for alignment learning and test. We follow the data splits of DBP15K and use \(30\%\) of entity alignment as training data. The other dataset is OpenEA (Sun et al., 2020) and we choose its 15K V1 versions of D-W (DBpedia-Wikidata) and D-Y (DBpedia-YAGO), in each of which the two KGs have different schemata. Each setting also has \(15,000\) entity alignment pairs and we follow its data splits and use \(20\%\) of entity alignment as training data. **Metrics.** Following the conventions, we choose Hits@\(k\) (\(k=1,10\)) and mean reciprocal rank (MRR) as metrics to assess EA performance. Hits@\(k\) measures the proportion of correctly-aligned entities ranked in the top \(k\). MRR is the average of the reciprocal ranks. Higher Hits@\(k\) and MRR scores indicate better performance. **Main results.** We present the results in Table 1. We can observe that the proposed TransFlood achieves much better performance than MTransE on all datasets. For example, on FR-EN, the Hits@1 score of TransFlood is \(0.347\), outperforming MTransE by \(0.103\). We find that, as a learning model, MTransE is easy to overfit. Our model is also derived from TransE but our iteration algorithm can enable our model to get a more stable solution than the learning method. For aggregation-based EA, our GCNFood achieves comparative Hits@1 results and better Hits@10 scores compared to GCN-Align. 
Our GCNFood only considers one-hop neighbors to generate entity similarities (i.e., a one-layer GCN), whose information is less than that in GCN-Align (a two-layer GCN). However, its advantage lies in that it converges directly to the fixpoint, while the embedding learning method cannot guarantee this. Overall, TransFlood and GCNFood that do not need learning can achieve comparable or even better performance than embedding learning baselines. **Running time comparison.** We compare the running time of our algorithm variants against MTransE and GCN-Align on ZH-EN. This experiment is conducted using a personal workstation with an Intel Xeon E3 3.3GHz CPU, 128GB memory and a NVIDIA GeForce GTX 1080Ti GPU. The results are shown in Figure 3. We observed similar results on the other two datasets. MTransE uses the least time because it is a shallow model that can be easily optimized. GCN-Align takes the most time. We find that it converges very slowly and takes many training epochs. Our TransFlood and GCNFood take very similar time, which is also less than that of GCN-Align. In our algorithm, resolving Eq. (11) costs the most in training time. Overall, our algorithm, which does not need to learn embeddings, can achieve comparable or even better performance in both effectiveness and efficiency than embedding learning models. **Results using text features.** Our similarity flooding algorithm can also use text features to improve performance. We use multilingual word embeddings (Bojanowski et al., 2017) to encode entity names for computing the similarity matrix, which is further combined with \(\Omega\) in our Algorithm 1. We conduct experiments on DBP15K and present the results in Table 2. We choose RDGCN (Wu et al., 2019) as a baseline. We can see that our TransFlood + Text and GCNFood + Text achieve slightly lower results than RDGCN. On FR-EN, TransFlood + Text achieves comparable results with RDGCN. Moreover, by using text features, both TransFlood and GCNFood get greatly improved. These results show the generalization ability of our algorithm. ### Self-propagation in Neighborhood Aggregation Based on our theoretical analysis of embedding-based EA and similarity flooding, we derive a new aggregation scheme for EA: self-propagation and neighbor aggregation. As previously stated, an embedding-based EA model aims to establish a fixpoint of pairwise entity similarities by updating entity embeddings throughout the training process. Considering that entity similarities are computed using entity embeddings, the output of GCNs also achieves a fixpoint. We can rewrite neighborhood aggregation as \[\mathbf{e}=f\big{(}\mathbf{e},\oplus_{z\in N_{e}}(\mathbf{z})\big{)}, \tag{17}\] which means that the entity embeddings remain "unchanged" after aggregation. For brevity, we use the function \(G()\) to denote a GCN layer and consider a two-layer GCN. Given input embedding \(\mathbf{e}^{0}\), in the fixpoint, we expect to hold \[\mathbf{e}^{2}=G(\mathbf{e}^{1})=\mathbf{e}^{1}=G(\mathbf{e}^{0})=\mathbf{e}^ {0}. \tag{18}\] However, this equation has limitations. First, in this case, the aggregation function degenerates into an identity mapping. Second, it almost loses the neighborhood information. To resolve the issues, inspired by (Klicpera et al., 2019), we enable the GCN output to have a probability of backing to the input. 
The aggregation function is rewritten as: \[\mathbf{e}^{i+1}=(1-\alpha)\oplus_{z\in N_{e}}(\mathbf{z})+\alpha f(\mathbf{ e}^{i}), \tag{19}\] \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline Models & \multicolumn{3}{c}{DBP15K 2H-EN} & \multicolumn{3}{c}{DBP15K 1A-EN} & \multicolumn{3}{c}{DBP15K FR-EN} & \multicolumn{3}{c}{OpenEA D-W 15K} & \multicolumn{3}{c}{OpenEA D-Y 15K} \\ \cline{2-13} & Hits@1 & Hits@1 & Hits@1 & Hits@10 & MRR & Hits@1 & Hits@1 & Hits@10 & MRR & Hits@1 & Hits@10 & MRR \\ \hline MTransE & 0.308 & 0.614 & - & 0.279 & 0.575 & - & 0.244 & 0.556 & - & 0.259 & - & 0.354 & 0.463 & - & 0.559 \\ TransFlood (ours) & **0.315** & **0.707** & 0.451 & **0.372** & **0.757** & 0.505 & **0.347** & **0.752** & 0.484 & **0.294** & 0.699 & **0.427** & **0.503** & 0.880 & **0.641** \\ \hline GCN-Align & **0.413** & 0.744 & - & **0.399** & 0.745 & - & **0.373** & 0.745 & - & **0.364** & - & 0.461 & 0.465 & - & 0.536 \\ GCNFood (ours) & 0.349 & **0.761** & 0.490 & 0.376 & **0.770** & 0.512 & 0.349 & **0.761** & 0.490 & 0.358 & 0.739 & **0.486** & **0.478** & 0.754 & **0.583** \\ \hline \hline \end{tabular} \end{table} Table 1: EA results on DBP15K as well as OpenEA D-W and D-Y. The best scores in each group are marked in bold. The results of MTransE are taken from (Sun et al., 2017). The results of GCN-Align are taken from its paper. “-” denotes their unreported metrics. Figure 3: Total running time (in seconds) on ZH-EN. where \(\alpha\) is a hyper-parameter indicating the probability of backing to the input. Here, we use \(f()\) to denote a dense layer. \(\mathbf{e}^{0}\) is randomly initialized as the input embedding of entity \(e\). The output of the GCN, i.e., \(\mathbf{e}^{2}\) for a two-layer GCN, is used for alignment learning and search. Please note that, although the conventional GCNs also consider the entity itself in neighbor aggregation, the work (Klicpera et al., 2019) shows that they would still lose the local focus of the entity itself during layer-by-layer aggregation. #### 4.2.1 Properties of Self-propagation Taking a deep learning perspective, we find that the proposed self-propagation has several good properties. **Model complexity.** The proposed self-propagation can be easily combined with any aggregation function without adding additional computational complexity. It only introduces a dense layer for feature transformation. Considering that the number of entity embedding parameters is much larger than that of a dense layer, we argue that the parameter complexity of self-propagation remains similar to that of other aggregation functions. **Relation to PageRank-based GCNs.** PageRank-based GCNs introduce the possibility of resetting the neighborhood aggregation to its initial state during training (Klicpera et al., 2019; Roth & Liebig, 2022). These studies are relevant to random walks with restarts on graphs where the random walk has a probability of backing to the start node after several steps. The idea is similar to ours. The difference is that we do not seek the representation of an entity to return to itself after several times of neighborhood aggregation. Instead, we seek to increase the local focus on the entity representation itself within the iterative neighborhood aggregation. Self-propagation is also helpful to resolve the over-smoothing issue. **Relation to residual learning.** The self-propagation can be regarded as a special case of residual learning (He et al., 2016) because it builds a skipping connection between two GCN layers. 
Given the input \(\mathbf{x}\), let \(F(\mathbf{x})\) be a representation function (e.g., the \(G()\) in our paper), and \(H(\mathbf{x})\) be the expected output representation. Residual learning indicates that directly optimizing \(F(\mathbf{x})\) to fit \(H(\mathbf{x})\) is more difficult than letting \(F(\mathbf{x})\) fit the residual part \(H(\mathbf{x})-\mathbf{x}\). For aggregation-based EA, we cannot let \(H(\mathbf{x})=\mathbf{x}\), in which case the function \(F()\) has no representation ability. Therefore, we introduce the transformation function \(f()\), and let \(G(\mathbf{x})\) fit \(H(\mathbf{x})-f(\mathbf{x})\). A related work (Guo et al., 2019) shows that the skipping connection would also improve the optimization of KG embeddings. #### 4.2.2 Evaluation We present our experimental results on DBP15K and OpenEA in terms of Hits@1, Hits@10 and MRR scores. **Implementation.** The performance of an EA model relates to not only the embedding learning model (e.g., TransE or GCN) but also other modules, including the alignment learning loss, the negative sampling method, and even the tricks in deep learning such as the parameter initialization method and the loss optimizer. To study the real effectiveness of self-propagation, we do not develop a new aggregation-based model from scratch. Instead, we choose four representative aggregation-based models: GCN-Align (see Section 4.1.1), AliNet, Dual-AMN and RoadEA, and incorporate self-propagation into them to see performance changes. * **AliNet**(Sun et al., 2020) extends GCN-Align by introducing distant neighbors in the aggregation function. Its learning objective is to minimize the limit-based loss with truncated negative sampling (Sun et al., 2018). It concatenates the output of multiple layers as representations for alignment learning and search. * **Dual-AMN**(Mao et al., 2021) is the state-of-the-art aggregation-based model according to our knowledge. It designs several advanced implementations, including the proxy matching attention, normalized hard sample mining and loss normalization. It achieves prominent performance in both effectiveness and efficiency. * **RoadEA**(Sun et al., 2022) is a recent GCN-based EA method that considers relations in neighborhood aggregation. It combines relation embeddings and their corresponding neighbor embeddings as relation-neighbor representations and uses graph attention networks (Velickovic et al., 2017) to aggregate them. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c}{ZH-EN} & \multicolumn{3}{c}{JA-EN} & \multicolumn{3}{c}{FR-EN} \\ \cline{2-10} & Hits@1 & Hits@10 & MRR & Hits@1 & Hits@10 & MRR & Hits@1 & Hits@10 & MRR \\ \hline RDGCN (Wu et al., 2019) & 0.708 & 0.846 & - & 0.767 & 0.895 & - & 0.886 & 0.957 & - \\ \hline TransFood & 0.315 & 0.707 & 0.451 & 0.372 & 0.757 & 0.505 & 0.347 & 0.752 & 0.484 \\ GCNFood & 0.349 & 0.761 & 0.490 & 0.376 & 0.770 & 0.512 & 0.349 & 0.761 & 0.490 \\ \hline TransFood + Text & 0.670 & 0.786 & 0.713 & 0.747 & 0.868 & 0.794 & 0.881 & 0.949 & 0.908 \\ GCNFood + Text & 0.651 & 0.823 & 0.716 & 0.712 & 0.882 & 0.777 & 0.842 & 0.957 & 0.887 \\ \hline \hline \end{tabular} \end{table} Table 2: EA results using text features on DBP15K. For each baseline model, we adopt its official code and incorporate the proposed self-propagation into its aggregation function. To be specific, in each of their layers, we add a self-propagation connection between their input and output. 
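As an illustration of Eq. (19), here is a minimal numpy sketch of one such layer; the mean-pooling aggregator, the weight shapes and the toy data are simplifications made for the example, while the actual baselines keep their own aggregation functions as described above.

```python
import numpy as np

def self_propagation_layer(H, adj, W, alpha=0.1):
    # Eq. (19): (1 - alpha) * aggregated neighbour embeddings
    #           + alpha * f(e), where f is a single dense layer.
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    neighbour_mean = (adj @ H) / deg   # mean pooling stands in for the aggregator
    return (1.0 - alpha) * neighbour_mean + alpha * (H @ W)

# Toy usage: 4 entities, dimension 8, two stacked layers as in a 2-layer GCN.
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
H2 = self_propagation_layer(self_propagation_layer(H0, adj, W1), adj, W2)
print(H2.shape)  # the final representations would be used for alignment learning
```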
We leave other modules, including the alignment learning loss, the negative sampling method, and the alignment search strategy, unchanged. As a result, we get four GCN-based model variants, namely "GCN-Align + SPA", "AliNet + SPA", "Dual-AMN + SPA", and "RoadEA + SPA". **Settings.** To ensure a fair comparison, the hyper-parameter values in our experiment follow the default settings of the corresponding baselines. The only exception is that the embedding dimensions of the input and two GCN layers in AliNet+SP are \(384\), \(384\) and \(384\), respectively, which are different from the original settings of \(500\), \(400\) and \(300\) in AliNet. The reason that we keep these layers with the same output dimension is that we can directly compare the representations of an entity in different AliNet layers (see Section 4.2.2). Note that AliNet concatenates the output of all layers as the final entity representations for alignment learning and search. In our model, the final embedding dimension is \(384+384+384=1152\), slightly smaller than that of AliNet (\(500+400+300=1200\)). We find that such a small dimension difference has no observed impact on performance. In our models, \(\alpha=0.1\) for all datasets. **Main results.** Table 3 presents the EA results of baselines and our model variants on DBP15K. We can see that our model variants can bring stable improvement on DBP15K, especially on Hits@\(1\) and MRR, compared with the corresponding baselines. For example, AliNet+SPA outperforms AliNet by \(0.036\) on Hits@\(1\). Even when compared to the state-of-the-art model Dual-AMN, our Dual-AMN+SPA still achieves higher performance, especially on JA-EN and FR-EN, establishing a new state-of-the-art. As we have discussed in Section 4.2.2, Dual-AMN has many advanced designs to improve performance. Boosting its performance to a higher level is much more difficult than that for GCN-Align and AliNet. We find that RoadEA fails to achieve promising results on D-Y. We think this is because DBpedia and YAGO have an unbalanced number of relations, which affects the relational attention mechanism in RoadEA. However, our self-propagation still improves it, showing good robustness. To summarize, this comparison demonstrates the effectiveness and generalization of the proposed self-propagation for EA. We conduct additional experiments in the following two subsections to further investigate the reasons for the good performance of self-propagation. **Effectiveness of self-propagation against over-smoothing.** The over-smoothing issue of GCNs refers to the fact that the output representations tend to be similar if too many layers are used for neighborhood aggregation (Oono and Suzuki, 2020; Chen et al., 2020). It is obvious that such an issue has a negative impact on embedding-based EA. The default settings of GCN layer numbers in GCN-Align, AliNet and Dual-AMN are all \(2\). To investigate the over-smoothing issue in EA, we show in Figure 4 the Hits@\(1\) results of these baselines (in blue) and our model variants (in red) on ZH-EN when their layer numbers are set as \(1,2,3,4\), respectively. Both GCN-Align and AliNet suffer from over-smoothing. Their results decrease as the GCNs go deeper with more than two layers. By adding the self-propagation connection, their performance degradation is reduced. By contrast, Dual-AMN shows good robustness against over-smoothing. Its performance changes little when the layer number increases. Dual-AMN+SPA also benefits from such robustness. 
Dual-AMN uses the normalized hard sample mining method with a large number of negative examples, enabling dissimilar entities to have distinguishable representations. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c}{DBP15K ZH-EN} & \multicolumn{3}{c}{DBP15K JA-EN} & \multicolumn{3}{c}{DBP15K FR-EN} & \multicolumn{3}{c}{OpenEA D-W 15K} & \multicolumn{3}{c}{OpenEA D-Y 15K} \\ \cline{2-13} **Layer output representation comparison.** We further compare the output representation distance of the last two layers in AliNet and AliNet+SPA. Figure 5 shows the average Euclidean distance w.r.t. the first 80 training epochs on ZH-EN. We can see that the output representation distance of both AliNet and AliNet+SPA becomes smaller as the validation performance increases. Furthermore, by adding the proposed self-propagation connection, the layer output distance of AliNet+SPA is smaller than that of AliNet. These results provide experimental evidence to support our design of self-propagation to connect two GCN layers and increase the local focus on the entity embedding itself. ## 5 Related Work Our work is relevant to multi-sourced KG embedding learning and iteration-based node matching methods for graphs. ### Multi-sourced KG Embeddings Multi-sourced KG representation learning starts with the research on embedding-based EA. An embedding-based EA model learns and measures entity embeddings to compute entity similarities. It usually has two learning objectives. One is for embedding learning and the other is for alignment learning. Translation-based EA models (Chen et al., 2017; Zhu et al., 2017; Sun et al., 2017, 2019; Pei et al., 2019) adopt TransE (Bordes et al., 2013) or its variants for embedding learning. Aggregation-based EA models adopt GNNs to generate entity embeddings, including the vanilla GCNs (Wang et al., 2018), multi-hop GCNs (Sun et al., 2020), relational GCNs (Yu et al., 2020), graph attention networks (Zhu et al., 2020; Mao et al., 2021; Sun et al., 2022), self-supervised GCNs (Liu et al., 2022) and temporal GCNs (Xu et al., 2022). Our proposed self-propagation is a plug-in for GNNs. It adds a direct connection between entity representations and the aggregated neighbor representations. In addition to the above two types of basic models for EA, other studies consider using semi-supervised or active learning techniques to augment EA (Sun et al., 2018; Chen et al., 2018; Li & Song, 2022; Berrendorf et al., 2021; Liu et al., 2021; Zeng et al., 2021) or introduce some text features (e.g., entity names, attributes and descriptions) (Sun et al., 2017; Trisedya et al., 2019; Wu et al., 2019) or temporal information (Xu et al., 2021, 2022) to enhance embedding learning. These studies are not relevant to our work. Interested readers can refer to the survey (Zeng et al., 2021; Zhao et al., 2022; Zhang et al., 2022) for more details. However, our work can also benefit from side features. ### Iteration-based Graph Matching Computing node similarities in graphs is a long-standing research topic in many areas, such as databases. Our work is relevant to iteration-based similarity computation methods, including similarity flooding (Melnik et al., 2002), SimRank (Jeh & Widom, 2002) and NetAlignMP (Bayati et al., 2013). Their key assumption is that "two nodes are similar if their neighbors are similar". They first compute the similarity of some pairs of nodes. 
Then, they propagate these similarities to other related node pairs using different heuristic rules iteratively, until they achieve a fixpoint of node pairwise similarities. Our work shows that the embedding-based EA models follow the same key assumption as the conventional iteration-based graph alignment methods. We build a connection between the two types of methods, which would help users acquire deep insights into them. ## 6 Conclusions and Future Work In this paper, we present a similarity flooding perspective to understand translation-based and aggregation-based EA models. We prove that these models essentially seek a fixpoint of entity pairwise similarities through embedding learning. Based on this finding, we propose two methods, i.e., similarity flooding via entity compositions and self propagation, for improving EA. Experiments on benchmark datasets demonstrate their effectiveness. Our work fills the gap between recent embedding-based EA and the conventional iteration-based graph matching. We think there are two promising directions for future work. The first is to develop neural-symbolic EA models that take advantage of both the representation learning ability of neural models and the interpretability of conventional symbolic methods. The second is, given EA, to learn more expressive and transferable multi-sourced KG embeddings to improve downstream knowledge-enhanced tasks. A KG-enhanced task can be extended into a multi-sourced KG-enhanced task. The latter can benefit from the knowledge transfer in multi-sourced KGs and thus get further improvement. ## Acknowledgments This work is funded by the National Natural Science Foundation of China (No. 62272219) and the Alibaba Group through Alibaba Research Fellowship Program. Figure 5: Output representation distance of the last two layers in AliNet and AliNet+SPA on ZH-EN.
2304.03345
A note on regular polyhedra over finite fields
Grothendieck proposed a theory of regular polyhedra over finite fields in Section 4 of \textit{Esquisse d'un Programme}. He isolates certain key parameters from the automorphism groups of regular polyhedra, which can be extended to any genus and specialized to various rings. In this note we give an interpretation of his sketched theory which explains some of his observations. We are able to compute some explicit examples and address a question Grothendieck raised about them in connection to dessins d'enfants. Finally, we highlight some of Grothendieck's observations which remain unexplained by our current approach.
Caleb Ji
2023-04-06T19:47:58Z
http://arxiv.org/abs/2304.03345v1
# A note on regular polyhedra over finite fields ###### Abstract Grothendieck proposed a theory of regular polyhedra over finite fields in Section 4 of _Esquisse d'un Programme_. He isolates certain key parameters from the automorphism groups of regular polyhedra, which can be extended to any genus and specialized to various rings. In this note we give an interpretation of his sketched theory which explains some of his observations. We are able to compute some explicit examples and address a question Grothendieck raised about them in connection to dessins d'enfants. Finally, we highlight some of Grothendieck's observations which remain unexplained by our current approach. ## 1 Introduction ### Historical background In order to solve the quintic equation, Klein studied the algebraic geometry of the Platonic solids by considering them as finite branched covers of \(\mathbb{CP}^{1}\) in his book _Lectures on the Icosahedron_. This point of view on regular polyhedra is known today as part of the theory of dessins d'enfants, following Sections 2 and 3 of Grothendieck's proposal _Esquisse d'un Programme_[1]. The new element that Grothendieck adds is the action of the absolute Galois group \(\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) on the dessins, which led to his reflections on anabelian geometry. Since then, much work has been done both on calculations with dessins and in anabelian geometry. However, this present note is concerned with Grothendieck's new point of view on regular polyhedra over finite fields, which is the subject of Section 4 of [1]. This topic can be viewed as the study of dessins d'enfants that correspond to Galois coverings of \(\mathbb{P}^{1}-\{0,1,\infty\}\). The Galois condition restricts the combinatorial nature of the dessin considered and leads to a natural generalization of regular polyhedra to any base ring. Despite giving an outline of the theory and raising some concrete questions, this part of Grothendieck's proposal has not seemed to attract any attention. One possible reason for this is that some of the results he states seem difficult to properly understand and reproduce. The purpose of this note is to give one plausible interpretation of his theory and compute some examples with it. Though this interpretation does not explain all of Grothendieck's observations in this section, we hope that it will serve as an advertisement for others who may be interested in pursuing these ideas further. In the remainder of this section we will give a quick overview of dessins d'enfants. The purpose of Section 2 is to explain Grothendieck's definition of the universal regular polyhedron and his method of defining regular polyhedra over arbitrary base rings in detail. In Section 3 we will present some explicit calculations regarding regular polyhedra over finite fields, motivated by a question Grothendieck raised concerning which finite regular maps can be obtained from regular polyhedra over finite fields. **Acknowledgements.** We would like to thank Will Sawin for some interesting comments. ### Review of dessins d'enfants Here we give a brief introduction to dessins d'enfants, focusing only on what we need to study the Galois case. More comprehensive accounts can be found in e.g. [2]. A dessin d'enfant is given by a surface \(X\) with a graph drawn on it that divides the surface into open cells. A flag is defined to be the set consisting of a vertex, the midpoint of an edge containing the vertex, and the center of a face containing the edge. 
Following [1], define the cartographic group \(\underline{C}_{2}\) by \[\underline{C}_{2}\coloneqq\langle\sigma_{v},\sigma_{e},\sigma_{f}|\sigma_{v}^ {2}=\sigma_{e}^{2}=\sigma_{f}^{2}=(\sigma_{v}\sigma_{f})^{2}=1\rangle.\] Note that \(\underline{C}_{2}\) acts on the set of flags of \(X\) by allowing \(\sigma_{v},\sigma_{e},\sigma_{f}\) to act by reflections of vertices, edges, and faces respectively. It will be useful to consider an oriented version, where we consider only the oriented flags, namely the ones who in counterclockwise order are given by face - vertex - edge. Now define the oriented cartographic group by \[\underline{C}_{2}^{+}\coloneqq\langle\rho_{v},\rho_{e},\rho_{f}|\rho_{f}\rho _{e}\rho_{v}=\rho_{e}^{2}=1\rangle.\] Note that \(\underline{C_{2}}^{+}\) is the index 2 subgroup of \(\underline{C}_{2}\) generated by \(\rho_{v}=\sigma_{f}\sigma_{e},\rho_{e}=\sigma_{v}\sigma_{f},\rho_{0}=\sigma_ {e}\sigma_{v}\). Then \(\underline{C_{2}}^{+}\) acts on the set of oriented flags of a dessin by allowing the elements \(\rho_{v},\rho_{e},\rho_{f}\) to correspond to the counterclockwise rotation of the flag around the vertex, edge, and face, respectively. Every dessin is determined by the action of the oriented cartographic group on one of its flags. Thus a dessin corresponds to a quotient of \(\underline{C_{2}}^{+}\). Because \(\underline{C_{2}}^{+}\) is isomorphic to the fundamental group of \(\mathbb{P}^{1}\) minus 3 points with the additional relation \(\rho_{1}^{2}=1\), we see that the dessin in fact corresponds to a finite cover of \(\mathbb{P}^{1}\) branched only at \(0,1,\infty\) with ramification order 1 or 2 above 1. Note that by composing with the morphism \(f:\mathbb{P}^{1}\to\mathbb{P}^{1}\) defined by \(f(z)=4z(1-z)\), we can impose the latter condition on any curve satisfying the former. Thus by Belyi's theorem, an algebraic curve can be defined over \(\overline{\mathbb{Q}}\) if and only if it can be written in this way. ### The Galois case We now focus on the case when the dessin gives a Galois covering of \(\mathbb{P}^{1}-\{0,1,\infty\}\), which means that the automorphism group of the graph acts simply transitively on the set of flags. These dessins correspond to the finite quotients of \(\underline{C_{2}}^{+}\). However, as Grothendieck notes, it is important to actually consider all quotients of \(\underline{C_{2}}^{+}\). We will henceforth refer to'regular polyhedra' in this generalized sense. Let us begin with some examples. For each pair of positive integers \((p,q)\), we may consider the dessin associated to the quotient \[G_{p,q}=\underline{C}_{2}^{+}/\rho_{0}^{p}=\rho_{2}^{q}=1.\] The corresponding dessin may be drawn by beginning with a flag and transforming it according to all elements of \(G_{p,q}\). As long as \(p,q\geq 3\), the newly imposed conditions imply that we obtain a regular tiling with \(p\) faces to a vertex and \(q\) edges to a face. Note that \(G_{p,q}\) is finite if and only if \(p\cdot\frac{(q-2)\pi}{q}<2\pi\). These cases correspond precisely to the classical regular polyhedra, shown in Table 1. If \(p\cdot\frac{(q-2)\pi}{q}=2\pi\), then we obtain a regular tiling of the Euclidean plane. The three options here are \((p,q)=(3,6)\) (hexagons), \((4,4)\) (squares), and \((6,3)\) (triangles). Apart from these two finite lists of cases, we obtain regular tilings of the hyperbolic plane. The groups \(G_{p,q}\) do not exhaust all possible quotients of \(F_{2}\), whether we restrict to the finite ones or not. 
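As a quick sanity check of the finiteness condition above (a small illustrative snippet, not part of [1]), enumerating the pairs \(p,q\geq 3\) with \(p(q-2)/q<2\), equivalently \(1/p+1/q>1/2\), recovers exactly the five classical cases of Table 1.

```python
from fractions import Fraction

# (p, q) -> name, in the convention above: p faces to a vertex, q edges to a face.
CLASSICAL = {(3, 3): "tetrahedron", (3, 4): "cube", (4, 3): "octahedron",
             (3, 5): "dodecahedron", (5, 3): "icosahedron"}

def finite_cases(bound=30):
    # p*(q-2)/q < 2 is equivalent to 1/p + 1/q > 1/2, which fails for all
    # larger pairs with p, q >= 3, so a modest search bound suffices.
    return [(p, q) for p in range(3, bound) for q in range(3, bound)
            if Fraction(p * (q - 2), q) < 2]

for p, q in finite_cases():
    print((p, q), CLASSICAL[(p, q)])  # exactly the five classical polyhedra
```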
However, as Grothendieck pointed out in Section 4 of [1], one can realize the infinite regular polyhedra as covers of finite ones in a natural way. Namely, one defines the 'universal regular 2-polyhedron' via two parameters that all regular polyhedra satisfy, and by reducing these parameters modulo \(p\) one obtains finite regular polyhedra defined over \(\mathbb{F}_{p}\). This approach also allows one to define regular polyhedra over any base ring. One question Grothendieck raised is whether all finite regular polyhedra, or equivalently algebraic curves over \(\overline{\mathbb{Q}}\), can be obtained through regular polyhedra over finite fields. \begin{table} \begin{tabular}{c|c} (p,q) & Polyhedron \\ \hline (3, 3) & tetrahedron \\ (3, 4) & cube \\ (4, 3) & octahedron \\ (3, 5) & dodecahedron \\ (5, 3) & icosahedron \\ \end{tabular} \end{table} Table 1: The classical regular polyhedra ## 2 The universal regular polyhedron and specialization To define the universal 2-polyhedron, Grothendieck says the following in Section 4 of [1]. "For such a polyhedron, we find a canonical basis (or flag) of the ambient affine or projective space, such that the operations of the cartographic group \(\underline{C}_{n}\), generated by the fundamental reflections \(\sigma_{i}(0\leq i\leq n)\), are written in that basis by universal formulae, in terms of the \(n\) parameters \(\alpha_{1},\ldots,\alpha_{n}\), which can be geometrically interpreted as the doubles of the cosines of the "fundamental angles" of the polyhedron." We will now determine the formulas for \(n=2\). Given any regular polyhedron, let \(\theta\) be the angle of each face and let \(\gamma\) be the dihedral angle between two faces. Fix a basis \(v_{0},v_{1},v_{2}\) given by the vertex, midpoint of an edge, and center of a face of a flag. Let \(\sigma_{0},\sigma_{1},\sigma_{2}\) (the reflections \(\sigma_{v},\sigma_{e},\sigma_{f}\) above) represent reflection of a vertex, edge, and face respectively. Then by elementary geometry, we obtain: \[\sigma_{0}(v_{0}) =2v_{1}-v_{0},\] \[\sigma_{1}(v_{1}) =(1-\cos\theta)v_{0}-v_{1}+(1+\cos\theta)v_{2},\] \[\sigma_{2}(v_{2}) =(1-\cos\gamma)v_{1}-v_{2}.\] _Remark_.: We note that these formulas are in terms of the cosines of the angles of the polyhedra, not their doubles as Grothendieck states. We have been unable to resolve this discrepancy. Then we obtain \[\rho_{v}=\sigma_{2}\circ\sigma_{1}=\begin{bmatrix}1&1-\cos\theta&(1-\cos\gamma)(1-\cos\theta)\\ 0&-1&-1+\cos\gamma\\ 0&1+\cos\theta&-1+(1-\cos\gamma)(1+\cos\theta)\end{bmatrix},\] \[\rho_{e}=\sigma_{2}\circ\sigma_{0}=\begin{bmatrix}-1&0&0\\ 2&1&1-\cos\gamma\\ 0&0&-1\end{bmatrix},\] \[\rho_{f}=\sigma_{1}\circ\sigma_{0}=\begin{bmatrix}-1&-1+\cos\theta&0\\ 2&1-2\cos\theta&0\\ 0&1+\cos\theta&1\end{bmatrix}.\] In order to interpret regular polyhedra over arbitrary base rings, Grothendieck isolates these universal formulae in terms of \(\cos\theta\) and \(\cos\gamma\) as the key to giving a sound definition of regular polyhedra. He goes on to say: "In this game, there is no question of limiting oneself to finite regular polyhedra, nor even to regular polyhedra whose facets are of finite order, i.e. for which the parameters \(\alpha_{i}\) are roots of suitable "semicyclotomic" equations, expressing the fact that the "fundamental angles" (in the case where the base field is \(\mathbb{R}\)) are commensurable with \(2\pi\)." Thus for any ring \(R\), the regular polyhedra over \(R\) are defined through the above formulas where elements in \(R\) are substituted for \(\cos\theta\) and \(\cos\gamma\).
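The following small sketch (an illustration only, with exact rational arithmetic standing in for a general base ring \(R\)) builds \(\rho_{v},\rho_{e},\rho_{f}\) from chosen values of \(\cos\theta\) and \(\cos\gamma\) and computes the orders of \(\rho_{v}\) and \(\rho_{f}\); for the cube, where \(\cos\theta=\cos\gamma=0\), it returns 3 and 4, in agreement with Table 1.

```python
from fractions import Fraction as F

def universal_matrices(cos_theta, cos_gamma):
    # The three rotation matrices above, with ring elements substituted
    # for cos(theta) and cos(gamma).
    x, y = cos_theta, cos_gamma
    rho_v = [[1, 1 - x, (1 - y) * (1 - x)],
             [0, -1, -1 + y],
             [0, 1 + x, -1 + (1 - y) * (1 + x)]]
    rho_e = [[-1, 0, 0], [2, 1, 1 - y], [0, 0, -1]]
    rho_f = [[-1, -1 + x, 0], [2, 1 - 2 * x, 0], [0, 1 + x, 1]]
    return rho_v, rho_e, rho_f

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def order(m, limit=1000):
    identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    acc, n = m, 1
    while acc != identity and n < limit:
        acc, n = mat_mul(acc, m), n + 1
    return n

rho_v, rho_e, rho_f = universal_matrices(F(0), F(0))  # the cube
print(order(rho_v), order(rho_f))                     # 3 4
```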
We see that when \(R=\mathbb{R}\), each of the groups \(G_{p,q}\) considered above is obtained through particular choices of \(\cos\theta\) and \(\cos\gamma\). This approach also allows us to define specialization of regular polyhedra. This is explained in Section 4 of [1] as follows. "The situation is entirely different if we start from an infinite regular polyhedron, over a field such as \(\mathbb{Q}\), for instance, and "specialise" it to the prime fields \(\mathbb{F}_{p}\) (a well-defined operation for all \(p\) except a finite number of primes). It is clear that every regular polyhedron over a finite field is finite - we thus find an infinity of finite regular polyhedra as \(p\) varies, whose combinatorial type, or equivalently, whose automorphism group varies "arithmetically" with \(p\). This situation is particularly intriguing in the case where \(n=2\), where we can use the relation made explicit in the preceding paragraph between combinatorial \(2\)-maps and algebraic curves defined over number fields. In this case, an infinite regular polyhedron defined over any infinite field (and therefore, over a sub-\(\mathbb{Z}\)-algebra of it with two generators) thus gives rise to an infinity of algebraic curves defined over number fields, which are Galois coverings ramified only over 0, 1 and \(\infty\) of the standard projective line." Indeed, by considering the matrices defining \(\rho_{v},\rho_{e},\rho_{f}\) to be in \(GL(3,R)\), we obtain various quotients of \(F_{2}\), which are necessarily finite if \(R\) is finite. ## 3 Regular polyhedra over finite fields Grothendieck raised the following question in Section 4 of [1]: "Exactly which are the finite regular 2-maps, or equivalently, the finite quotients of the 2-cartographic group, which come from regular 2-polyhedra over finite fields? Do we obtain them all, and if yes: how?" In this section we will make some computations towards answering this question and at the same time give some examples of specialization. Because a single algebraic curve may be realized as a finite regular 2-map in multiple ways, it is also interesting to ask the same question where we are only interested in the isomorphism class of the curve in question. Since regular polyhedra are given by finite quotients of \(\underline{C}_{2}^{+}\), we can identify them with finite quotients of the group with presentation \(\langle\rho_{v},\rho_{f}|\rho_{v}^{p}=\rho_{f}^{q}=\rho_{v}\rho_{f}\rho_{v}\rho_{f}=1\rangle\) where \(\rho_{v}\) and \(\rho_{f}\) have order \(p\) and \(q\) respectively. One can use the Riemann-Hurwitz formula to compute the genus of the associated algebraic curve \(X\) in terms of \(p\), \(q\), and the order of the group. Indeed, say the map has \(V\) vertices, \(E\) edges, and \(F\) faces. If the order of the group is \(|G|\), then we obtain a degree \(|G|=2E\) map from \(X\) to \(\mathbb{P}^{1}\) ramified at 0, 1, and \(\infty\). The total ramification order over these points is given by \(3|G|-V-E-F\). On the other hand, we have \(E=\frac{pV}{2}=\frac{qF}{2}\). Therefore we have \[2g_{X}-2=-2|G|+3|G|-V-E-F=E\left(1-\frac{2}{p}-\frac{2}{q}\right).\] We will now give some examples for \(g=0,1\). ### Genus 0 If \(g=0\), then it is easy to classify all finite regular polyhedra. Indeed, this was already done in Table 1. To include the case when either \(p\) or \(q\) is less than 3, we also have the cases \((p,q,E)=(2,n,n),(n,2,n)\) for \(n\in\mathbb{Z}_{>0}\).
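The genus formula above is easy to evaluate mechanically. The short check below is our own illustration; the vertex/edge/face counts used are the standard ones and are quoted here only as examples.

```python
# Evaluate 2g - 2 = E(1 - 2/p - 2/q) on a few regular maps.
from fractions import Fraction

def genus(p, q, E):
    two_g_minus_2 = E * (1 - Fraction(2, p) - Fraction(2, q))
    g = (two_g_minus_2 + 2) / 2
    assert g.denominator == 1, "non-integral genus: inconsistent (p, q, E)"
    return int(g)

print(genus(3, 3, 6))    # tetrahedron (V=4, E=6, F=4): genus 0
print(genus(5, 3, 30))   # icosahedron (V=12, E=30, F=20): genus 0
print(genus(4, 4, 8))    # 2x2 square grid on a torus: genus 1
```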
We will now list the universal formulas for the five classical regular polyhedra and consider their reductions to finite fields. Looking at the values of the cosines, this operation is well-defined except when \(p=2,3\) for the tetrahedron and octahedron, always for the cube, except \(p=2,3,5\) for the dodecahedron, and except \(p=2,5\) for the icosahedron. Setting \(x=\cos\theta,y=\cos\gamma\), we recall that the formulas for the rotations are given by \[\rho_{v}=\begin{bmatrix}1&1-x&(1-x)(1-y)\\ 0&-1&-1+y\\ 0&1+x&-1+(1+x)(1-y)\end{bmatrix},\rho_{e}=\begin{bmatrix}-1&0&0\\ 2&1&1-y\\ 0&0&-1\end{bmatrix},\rho_{f}=\begin{bmatrix}-1&-1+x&0\\ 2&1-2x&0\\ 0&1+x&1\end{bmatrix}.\] The values of \(\cos\theta\) and \(\cos\gamma\) for the regular polyhedra are given in the following table. **Proposition 3.1**.: _The classical regular polyhedra retain their automorphism groups when reduced to \(\mathbb{F}_{p}\) when this reduction is defined, except that the cube in \(\mathbb{F}_{2}\) has automorphism group \(S_{3}\)._ Proof.: The automorphism group of the tetrahedron is \(A_{4}\). Its normal subgroups are the trivial one, one of order 4, and itself. Therefore, its specialization at some prime \(p\) can only change if the automorphism group becomes trivial or of order 3. For each \(p\), it is clear that \(\rho_{v}\) and \(\rho_{f}\) do not reduce to the identity and are distinct, so the result follows. The normal subgroups of \(S_{4}\) are the trivial one, one of order 4, \(A_{4}\), and itself. One checks that for the cube in \(\mathbb{F}_{2}\), the element \(\rho_{v}\) still has order 3 but \(\rho_{f}\) has order 2. This forces the automorphism group to be the quotient of \(S_{4}\) by its normal subgroup of order 4, which is isomorphic to \(S_{3}\). In other characteristics \(\rho_{f}\) still has order 4 and \(\rho_{v}\) has order 3, so we obtain \(S_{4}\). This last statement also holds for the octahedron in characteristics larger than 3. \begin{table} \begin{tabular}{c|c|c} Polyhedron & \(\cos\theta\) & \(\cos\gamma\) \\ \hline tetrahedron & \(\frac{1}{2}\) & \(\frac{1}{3}\) \\ cube & 0 & 0 \\ octahedron & \(\frac{1}{2}\) & \(-\frac{1}{3}\) \\ dodecahedron & \(\frac{1-\sqrt{5}}{4}\) & \(-\frac{\sqrt{5}}{5}\) \\ icosahedron & \(\frac{1}{2}\) & \(-\frac{\sqrt{5}}{3}\) \\ \end{tabular} \end{table} Table 2: Angles of the classical regular polyhedra The automorphism groups of the dodecahedron and icosahedron are \(A_{5}\), which is simple. It is easy to check that \(\rho_{e}\) and \(\rho_{f}\) never reduce to the identity modulo \(p\), so their automorphism groups remain \(A_{5}\). _Remark_.: This confirms Grothendieck's statement in [1] that an icosahedron specialized to a finite field remains an icosahedron. However, he seems to claim this in all cases. ### Genus 1 Every finite regular polyhedron of genus 1 must satisfy \((p,q)=(3,6),(4,4)\), or \((6,3)\), where, as before, \(p\) and \(q\) are the orders of the elements \(\rho_{v}\) and \(\rho_{f}\) respectively. Taking into account the isomorphism between \(G_{3,6}\) and \(G_{6,3}\), we see they can all be realized as finite quotients of \(G_{4,4}=\underline{C}_{2}^{+}/\rho_{v}^{4}=\rho_{f}^{4}=1\) or \(G_{6,3}=\underline{C}_{2}^{+}/\rho_{v}^{6}=\rho_{f}^{3}=1\). We begin by giving an explicit example of specializing these infinite polyhedra to finite base rings.
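Before turning to that example, the order statements used in the proof of Proposition 3.1 are easy to check by machine for the cube (\(\cos\theta=\cos\gamma=0\)). The sketch below is our own illustration, not part of [1]; it reduces the rotation matrices modulo a prime and computes their multiplicative orders.

```python
# Orders of the cube's rotations modulo small primes (plain integer matrices).

def mat_mul(A, B, p):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % p for j in range(3)]
            for i in range(3)]

def order(M, p, bound=100):
    identity = [[int(i == j) for j in range(3)] for i in range(3)]
    P = [[a % p for a in row] for row in M]
    for k in range(1, bound):
        if P == identity:
            return k
        P = mat_mul(P, M, p)
    return None

rho_v = [[1, 1, 1], [0, -1, -1], [0, 1, 0]]      # x = y = 0 in the formulas above
rho_f = [[-1, -1, 0], [2, 1, 0], [0, 1, 1]]

for p in (2, 3, 5, 7):
    print(p, order(rho_v, p), order(rho_f, p))
# p = 2: orders (3, 2), consistent with the S_3 quotient in Proposition 3.1;
# p = 3, 5, 7: orders (3, 4), as over the rationals.
```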
**Proposition 3.2**.: _For \(n\geq 2\), reducing \(G_{4,4}\) modulo \(n\) gives the map defined by an \(n/2\times n/2\) square grid if \(n\) is even and an \(n\times n\) square grid if \(n\) is odd, which in both cases correspond to the elliptic curve defined by \(y^{2}=x^{3}+x\), with lattice \(\langle 1,i\rangle\) and \(j\)-invariant \(1728\)._ Proof.: In the case of \(G_{4,4}\) we have \(\cos\theta=0,\cos\gamma=-1\), so the matrices of interest are given by \[\rho_{v}=\begin{bmatrix}1&1&2\\ 0&-1&-2\\ 0&1&1\end{bmatrix},\rho_{f}=\begin{bmatrix}-1&-1&0\\ 2&1&0\\ 0&1&1\end{bmatrix}.\] Taken in \(\operatorname{GL}(3,\mathbb{Z})\) these two matrices generate \(G_{4,4}\), which is represented by an infinite square grid which we may identify with the coordinate plane. Reducing modulo \(n\) is equivalent to considering their image in \(\operatorname{GL}(3,\mathbb{Z}/n\mathbb{Z})\). The corresponding map can thus be computed by analyzing which transformations of the flag are represented by a matrix equivalent to the identity in \(\operatorname{GL}(3,\mathbb{Z}/n\mathbb{Z})\), and quotienting out by the subgroup they generate. Starting with the flag with coordinates \(V=(1,0),E=(1/2,0),F=(1/2,1/2)\), we see that the matrices \(I_{3}+R\) and \(I_{3}+U\), where \[R=\begin{bmatrix}-2&-2&-2\\ 2&2&2\\ 0&0&0\end{bmatrix},U=\begin{bmatrix}0&0&0\\ -2&-2&-2\\ 2&2&2\end{bmatrix}\] send it to the corresponding flag in the square to the right and to the square above, respectively. On the other hand, the rotations are represented by the matrices \[\begin{bmatrix}-1&-1&0\\ 2&1&0\\ 0&0&1\end{bmatrix},\begin{bmatrix}-1&0&0\\ 0&-1&0\\ 2&2&1\end{bmatrix},\begin{bmatrix}1&1&0\\ -2&-1&0\\ 2&1&1\end{bmatrix}.\] One also sees that the matrices corresponding to shifting the flag to the square above and to the right for each of these three other flags in the same square are of the same form as \(R\) and \(U\). One deduces from this that the only transformations equivalent to the identity are those which move the original flag to the same flag a multiple of \(n/2\) squares horizontally and vertically for \(n\) even, and a multiple of \(n\) squares for \(n\) odd. The elliptic curve associated to this dessin is well-known to be the one with \(j\)-invariant \(1728\) and lattice given by \(\langle 1,i\rangle\); see e.g. [3]. In the case of a 1 by 1 square, an explicit Belyi function is given by \(f(z)=\frac{c}{\wp_{i}^{2}(z)}\), where \(c\) is a constant used to ensure that the function is 1 at the midpoints of the edges and \(\wp_{i}\) is the Weierstrass \(\wp\)-function on the lattice \(\langle 1,i\rangle\). In the case of a \(k\) by \(k\) square, we can simply replace this function by \(f(z)=\frac{c}{\wp_{i}^{2}(kz)}\) _Remark_.: In this example, if we restrict \(n\) to being equal to a prime \(p\) we clearly do not obtain every possible map because the degree of each map will then be of the form \(p^{2}\). However, order considerations do not rule out the possibility that we can obtain them using arbitrary regular polyhedra over finite fields. Indeed, since the order of \(GL(3,\mathbb{F}_{q})\) is divisible by any \(n\) for an appropriate choice of \(q\), there is no obvious restriction on what order regular polyhedra obtained in such a way may have. For an example of a map that can be obtained as a quotient of \(G_{4,4}\) not of such a simple form, see [4], Figure 3.7. One can use a similar approach to reducing \(G_{6,3}\) modulo \(n\) and obtain a triangular grid rather than a square grid. 
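In the same spirit, the matrix facts underlying the proof of Proposition 3.2 can be checked mechanically. The following sketch is our own illustration (not part of [1]): it verifies that the displayed \(\rho_{v}\) and \(\rho_{f}\) have order 4 and that the translation parts \(R\) and \(U\) square to zero and commute, so that \((I_{3}+R)^{k}=I_{3}+kR\); reducing modulo \(n\) therefore kills this translation after \(n/2\) steps when \(n\) is even and after \(n\) steps when \(n\) is odd, matching the grid sizes in the proposition.

```python
# Checks on the matrices appearing in the proof of Proposition 3.2.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Z3 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

rho_v = [[1, 1, 2], [0, -1, -2], [0, 1, 1]]      # x = 0, y = -1
rho_f = [[-1, -1, 0], [2, 1, 0], [0, 1, 1]]
R = [[-2, -2, -2], [2, 2, 2], [0, 0, 0]]
U = [[0, 0, 0], [-2, -2, -2], [2, 2, 2]]

assert mul(mul(rho_v, rho_v), mul(rho_v, rho_v)) == I3   # rho_v^4 = 1
assert mul(mul(rho_f, rho_f), mul(rho_f, rho_f)) == I3   # rho_f^4 = 1
assert mul(R, R) == Z3 and mul(U, U) == Z3               # R^2 = U^2 = 0
assert mul(R, U) == Z3 and mul(U, R) == Z3               # the translations commute
```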
For the triangular grid obtained from \(G_{6,3}\), one can use the meromorphic function \(f(z)=\frac{c}{\wp_{e^{i\pi/3}}(z)^{3}}\), scaled appropriately, to exhibit an explicit Belyi function on the elliptic curve with \(j\)-invariant \(0\) and lattice given by \(\langle 1,e^{i\pi/3}\rangle\). ### Additional remarks Under the interpretation we have taken, one may try to complete the classification of which regular maps and algebraic curves can be obtained over finite fields in genus \(1\). In higher genus one can ask the same question, which seems to be more difficult. However, another line of work involves fully understanding the observations of Grothendieck in Section 4 of [1]. Indeed, in this note we have pointed out some instances where our interpretation does not account for everything outlined by Grothendieck. In particular, we highlight again the fact that our universal formulas defining the reflections and rotations are defined over \(\mathbb{Z}[\cos\theta,\cos\gamma]\), rather than using the doubles of the cosines as Grothendieck states. For an octahedron we have \(\cos\theta=\frac{1}{2}\), so under our conventions it is not clear how to consider the octahedron over \(\mathbb{F}_{2}\). This point seems significant because Grothendieck describes the octahedron over \(\mathbb{F}_{2}\) as part of an interesting phenomenon, which he describes as follows. "Thus, examining the Pythagorean polyhedra one after the other, I saw that the same small miracle was repeated each time, which I called the combinatorial paradigm of the polyhedra under consideration. Roughly speaking, it can be described by saying that when we consider the specialisation of the polyhedra in the or one of the most singular characteristic(s) (namely characteristics \(2\) and \(5\) for the icosahedron, characteristic \(2\) for the octahedron), we read off from the geometric regular polyhedron over the finite field (\(\mathbb{F}_{4}\) and \(\mathbb{F}_{5}\) for the icosahedron, \(\mathbb{F}_{2}\) for the octahedron) a particularly elegant (and unexpected) description of the combinatorics of the polyhedron." One possible approach may be to use a choice of basis different from the canonical one given by a flag. This may allow, say, the octahedron to be reduced to \(\mathbb{F}_{2}\), though it appears that the application of this method would be specific to the polyhedron in question. Another approach may be to consider the polyhedron in projective space, which Grothendieck indeed mentioned in [1] as a superior alternative to considering them in affine space. We conclude that though the approach described in this note explains some of Grothendieck's ideas on regular polyhedra, some variations will likely be necessary to realize them fully.
2303.17397
Efficient Thermal Transport across Molecular Chains in Hybrid 2D Lead Bromide Perovskites
We report measurements of the heat capacity and cross-plane thermal conductivity of 2D (CxH2x+1NH3)2[MAPbBr3]n-1PbBr4 (MA = methylammonium) lead bromide perovskites (2D LHPs) at room temperature as a function of both the octahedral layer thickness (n = 1,2,3) and the organic spacer chain length (x=4,5,6,7,8) using differential scanning calorimetry (DSC) and frequency domain thermoreflectance (FDTR) respectively. We observe ultralow thermal conductivities (0.18-0.51 W/m K) for all 2D LHPs studied, but surprisingly minimal suppression of thermal conductivity with respect to bulk MAPbBr3 (0.5 W/m K). Cross-plane thermal conductivity is found to increase monotonically as a function of both the octahedral layer thickness (0.18-0.26 W/m K for n=1-3) and the organic chain length (0.18-0.51 W/m K for x=4-8). Additionally, we measure heat capacities that are well described by composite theory, suggesting bulk-like phonon density-of-states within the separate organic and inorganic subphases of the layered structure. The striking observation of increasing thermal conductivity with increasing organic phase fraction (i.e. increasing organic chain length) indicates efficient thermal transport along the ordered alkyl chain backbone. Our experimental results agree most closely with a predictive model of ballistic phonon transport with diffuse interface scattering - rather than normal thermal conduction within each phase. This study indicates the potential for synthesizing 2D LHPs with thermal conductivity that exceeds the bulk perovskite phase, while also shedding light on relevant phonon transport pathways in 2D LHPs.
Nabeel S. Dahod, Watcharaphol Paritmongkol, William A. Tisdale
2023-03-30T14:10:10Z
http://arxiv.org/abs/2303.17397v1
# Efficient Thermal Transport across Molecular Chains in Hybrid 2D Lead Bromide Perovskites ###### Abstract We report measurements of the heat capacity and cross-plane thermal conductivity of 2D (C\({}_{\textrm{x}}\)H\({}_{\textrm{2x+1}}\)NH\({}_{\textrm{3}}\))\({}_{\textrm{2}}\)[MAPbBr\({}_{\textrm{3}}\)]\({}_{\textrm{n-1}}\)PbBr\({}_{\textrm{4}}\) (MA = methylammonium) lead bromide perovskites (2D LHPs) at room temperature as a function of both the octahedral layer thickness (n = 1,2,3) and the organic spacer chain length (x=4,5,6,7,8) using differential scanning calorimetry (DSC) and frequency domain thermoreflectance (FDTR) respectively. We observe ultralow thermal conductivities (0.18 - 0.51 W/m\(\cdot\)K) for all 2D LHPs studied, but surprisingly minimal suppression of thermal conductivity with respect to bulk MAPbBr\({}_{\textrm{3}}\) (0.5 W/m\(\cdot\)K). Cross-plane thermal conductivity is found to increase monotonically as a function of both the octahedral layer thickness (0.18-0.26 W/m\(\cdot\)K for n=1-3) and the organic chain length (0.18-0.51 W/m\(\cdot\)K for x=4-8). Additionally, we measure heat capacities that are well described by composite theory, suggesting bulk-like phonon density-of-states within the separate organic and inorganic subphases of the layered structure. The striking observation of increasing thermal conductivity with increasing organic phase fraction (i.e. increasing organic chain length) indicates efficient thermal transport along the ordered alkyl chain backbone. Our experimental results agree most closely with a predictive model of ballistic phonon transport with diffuse interface scattering - rather than normal thermal conduction within each phase. This study indicates the potential for synthesizing 2D LHPs with thermal conductivity that exceeds the bulk perovskite phase, while also shedding light on relevant phonon transport pathways in 2D LHPs. 2D, Ruddlesden-Popper, perovskite, crystal, thermal transport, nanoscale, ballistic, superlattice, layered, vibrational, thermal conductivity, diffuse, interface, hybrid, heat capacity, specific heat, molecular junction, conduction, heat **Introduction:** Hybrid organic-inorganic perovskites (HOIPs) have emerged over the past decade as a potential candidate to supplant conventional semiconductors for use in a variety of device applications, including efficiency-competitive solar cells[1, 2, 3], color tunable light emitting diodes (LEDs)[4, 5], sensitive photodetectors[6], and lasers[7]. These materials are denoted via the chemical formula ABX\({}_{3}\), and exhibit a crystal structure in which a small organic cation (A), such as methylammonium (MA), is enclosed within metal (B) halide (X) octahedra. HOIPs exhibit high optical absorption coefficients, small exciton binding energies, and long/balanced electron-hole diffusion lengths[8, 9]. In addition to these 3D semiconductors, a variety of nanostructured HOIPs have been developed as alternative material architectures. One increasingly relevant nanostructured perovskite is the 2D Ruddlesden-Popper organic-inorganic lead halide perovskite (2D LHP), in which atomically thin sheets of the traditional perovskite structure are separated by bilayers of larger organic cations.[10] 2D LHPs have shown improved stability and promising performance metrics in solar cells and LEDs in particular.[11, 12, 13, 14, 15, 16] These periodic layered nanostructures can be grown as macroscopically large crystals, with long-range order and minimal grain boundaries.
Additionally, since the perovskite sheets are thin enough to exhibit quantum confinement, the size of this phase can be controlled independently for further tunability of the optical and electronic properties.[17, 18] Finally, the organic subphase can be manipulated to potentially improve performance, particularly in terms of the exciton binding energy and photoluminescence quantum yield.[19, 20, 21] In addition to optical and electronic properties, thermal and vibrational phenomena in bulk (3D) and 2D LHPs are of emerging interest[22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. Thermal management is of critical concern in the development of any material for widespread use in optoelectronic device applications, particularly when heating can degrade material performance.[37, 38] In addition to mitigating device-heating, engineering electron-crystal phonon-glass materials that dissipate heat slowly while efficiently conducting charge opens the door for further applications such as thermoelectric devices.[39] Unlike conventional semiconductors, bulk HOIPs are characterized by ultralow thermal conductivities (0.34-0.73 W/mK), which are up to two orders of magnitude lower than those for traditional inorganic semiconductors.[22, 23, 24, 25, 26, 27, 28, 29, 30] Unlike metals and highly-doped semiconductors, where charge carriers contribute significantly to overall thermal transport, phonons are believed to be exclusively responsible for thermal transport in HOIPs.[22] From kinetic theory, the phonon thermal conductivity can be expressed in terms of the product of the mean-free path (\(\Lambda\), MFP), group velocity (\(\nu\)), and heat capacity (C) of the phonons responsible for heat transport as \(k=\frac{1}{3}C\nu\Lambda\). Preliminary reports have identified that the ultralow measured thermal conductivities likely stem from a combination of factors such as low phonon group velocities and high anharmonic scattering between phonon modes associated with the lead-halide lattice and the small organic cation enclosed within it.
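As a rough illustration of the kinetic-theory expression quoted above, one can invert \(k=\frac{1}{3}C\nu\Lambda\) to estimate an effective phonon mean-free path. The numbers below are order-of-magnitude placeholders for a bulk hybrid perovskite, chosen by us purely for illustration and not values reported in this work.

```python
# Back-of-the-envelope mean-free-path estimate from k = (1/3) * C * v * L.
k = 0.5        # W/(m K), thermal conductivity (representative placeholder)
C = 1.2e6      # J/(m^3 K), volumetric heat capacity (placeholder)
v = 1.5e3      # m/s, sound velocity used as the group velocity (placeholder)

L = 3 * k / (C * v)
print(f"effective mean-free path ~ {L * 1e9:.2f} nm")   # ~0.83 nm
```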
Raman signatures that may be assigned to longitudinal acoustic phonons have been identified at frequencies below 30 cm\({}^{-1}\). These results suggest the possibility that superlattice modes may contribute to thermal conductivity in much the same way as observed in inorganic superlattices. However, significant anharmonic interaction between the organic and inorganic subphases could diminish the importance of this contribution, a crossover well established for a variety of inorganic superlattices.[44, 45, 46, 47, 48, 49, 50] Giri _et al._ measured thermal conductivity in selected 2D iodide and bromide perovskites, finding thermal conductivities in the range 0.10-0.19 W/m\(\cdot\)K, which is lower than the corresponding value in bulk MAPbBr\({}_{3}\) (0.51 \(\pm\) 0.12 W/m\(\cdot\)K) or MAPbI\({}_{3}\) (0.34 \(\pm\) 0.08 W/m\(\cdot\)K)[23]. In 2D alkylammonium lead iodide perovskites, specifically, cross-plane thermal conductivity has been studied as a function of octahedral layer thickness[35] and alkyl chain length[46]. In both studies, 2D lead iodide perovskites exhibited ultralow thermal conductivities (0.06-0.37 W/m\(\cdot\)K) that were, in some cases, heavily suppressed relative to the corresponding bulk lead iodide perovskite. Here, we investigate thermal transport in a series of systematically varied 2D alkylammonium lead bromide perovskite crystals.
We report measurements of the heat capacity and thermal conductivity of 2D (C\({}_{x}\)H\({}_{2x+1}\)NH\({}_{3}\))\({}_{2}\)[MAPbBr\({}_{3}\)]\({}_{n-1}\)PbBr\({}_{4}\) perovskites at room temperature as a function of both the octahedral layer thickness of the perovskite layer (n = 1,2,3) and the organic spacer chain length (x=4,5,6,7,8) using differential scanning calorimetry (DSC) and frequency domain thermoreflectance (FDTR), respectively. We observe ultralow thermal conductivities (0.18 - 0.51 W/mK) for all 2D LHPs studied, which in some cases were only minimally suppressed relative to the bulk. Significantly, the thermal conductivity was found to increase monotonically as a function of both the octahedral layer thickness and the organic chain length. We find good agreement between experimental observations and a predictive model for ballistic phonon transport with diffuse interface scattering[44]. We discuss the implications of these findings for tuning composite thermal conductivity in 2D LHPs and contrast our observations to other reports. **Results & Discussion:** _Frequency Domain Thermoreflectance (FDTR)_ FDTR is a widely utilized optical metrology technique for monitoring thermal transport in solids, small crystals, thin films, and across molecular junctions such as self-assembled monolayers.
We implemented a home-built FDTR system (Figure 1; see Supporting Information for further details) to analyze macroscopic 2D LHP crystals synthesized using a cooling-induced crystallization process [55]. All crystals were adhered to a glass substrate and coated with a thermally evaporated Au film (100 nm). Two coaxially aligned continuous-wave lasers were focused on the sample of interest through an objective lens, one of which ("pump", \(\lambda\)=488 nm) was intensity modulated at kHz to MHz frequencies while the other ("probe", \(\lambda\)=532 nm) was not. The Au film transducer exhibits linear thermoreflectance at the laser wavelengths/powers employed, leading to modulation of the reflected probe beam intensity at the pump heating frequency, but phase-shifted by an amount that depends on the thermal transport characteristics of the underlying perovskite sample. The reflected probe light is separated from the pump and its phase lag across a range of pump frequencies is catalogued using a lock-in detection scheme. This frequency response is fit to a continuum model of heat transport based on the diffusion equation for the system geometry in order to extract the effective thermal conductivity of the sample [56]. Since we operate using heater spot sizes (~2 \(\upmu\)m FWHM) larger than the penetration depths of the measurement (~200-1000 nm), this extracted value specifically corresponds to an effective cross-plane thermal conductivity of the macroscopic crystal in the surface-normal direction. In order to obtain reliable measurements of the thermal conductivity, the transducer thickness, laser spot size, and heat capacity are measured separately using profilometry, a CCD camera, and DSC respectively. Further details on the FDTR technique, sample preparation, and crystal synthesis are provided in the Supplementary Information. Figure 2: _Left_: Schematic of 2D LHP and approximation of it as a periodic multilayered structure. Parameters used to approximate each layer are shown within their respective shaded region, and correspond to the sub-layer thickness (d), volumetric heat capacity (C) and sound velocity (\(\nu\)). Black arrows show phonon transport as represented by model in Eq. 1.
Phonons move ballistically through each layer and scatter diffusively (annihilatively) at the interfaces. _Right_: Measured thermal conductivity using FDTR for n = 1 BAPbBr and n = 2,3 BA:MAPbBr 2D LHPs at room temperature (black) and comparison to the theoretical model in Eq. 1 (red). At room temperature, the experimentally measured thermal conductivity of PbBr 2D LHPs containing butylamine as the organic spacer (n=1 C4:PbBr and n=2,3 C4:MAPbBr) increases monotonically with octahedral layer thickness from \(k=0.19\pm 0.07\) for n=1 to \(k=0.28\pm 0.08\) W/m\(\cdot\)K for n=3 (Figure 2). These values represent an approximately 50% reduction in the thermal conductivity compared to the bulk MAPbBr3 value, which was measured to be \(k=0.49\pm 0.1\) W/m\(\cdot\)K on the same apparatus - consistent with previous reports [23]. While suppression of phonon transport is well documented in nanostructured semiconductors [57], the magnitude of the reduction that we observe is surprisingly small. For example, colloidal semiconductor nanocrystal solids show a thermal conductivity reduction of two orders of magnitude relative to their bulk values, due largely to impedance mismatch at the organic-inorganic interfaces between the semiconductor crystallites and their surface-bound molecular ligands [53]. For the 2D LHPs studied here, even if no reduction in the thermal conductivity within each subphase is assumed and a simple series-resistance model \(\big{(}\frac{L_{inorganic}}{k_{inorganic}}+\frac{L_{organic}}{k_{organic}}=\frac{L_{ 2DLHP}}{k_{2DLHP}}\big{)}\) is used to estimate the "upper limit" of the thermal conductivity for the 2D LHP in the diffusive transport regime (plotted in orange in Figure 2), the low thermal conductivity of the organic subphase (\(\sim\)0.1 W/mK for most short-chain alkanes in their pure phase) actually predicts a lower thermal conductivity than measured (\(\sim\)0.15-0.21 W/mK for n=1-3 C4 PbBr 2D LHPs). This is especially surprising given that this representation does not account for any phonon scattering at the organic-inorganic interfaces between layers. The relatively modest reduction in thermal conductivity of 2D LHPs relative to their bulk perovskite counterparts suggests that the average MFP of heat carrying phonons in 2D LHP crystals is not appreciably larger than the thickness of the inorganic layers (1-2 nm). This is largely in agreement with several recent computational and experimental studies, and reinforces the prevailing belief that thermal transport in bulk HOIPs is limited by short MFP phonons [30, 32, 33]. Additionally, and more importantly, since our measured thermal conductivity in 2D LHPs is actually higher than that predicted from traditional composite theory (even when neglecting interface resistances), the dominant phonon scattering mechanism within 2D LHPs is fundamentally different than in the bulk organic and inorganic reference phase. In order to understand the monotonic increase observed as a function of the octahedral layer thickness, we utilize an expression for the cross-plane thermal conductivity, \(k\), of periodic multilayer solids derived from the Boltzmann transport equation (BTE) by Chen _et al._ for use with all-inorganic superlattices (Eq. 
1), \[k=\frac{1}{2}\left(\frac{1}{C_{1}v_{1}}+\frac{1}{C_{2}v_{2}}\right)^{-1}\frac{(d_{1}+d_{2})}{2},\qquad\text{(Eq. 1)}\] where the subscripts denote different sub-phases of the layered material, \(C_{i}\) is the volumetric heat capacity, \(v_{i}\) is the sound velocity, and \(d_{i}\) is the layer thickness.[44] This expression is obtained for a periodic structure under the relaxation time approximation, and examines the limiting case in which layers are very thin (with periodicity comparable to the MFP of phonons within either film) and exhibit infinite long-range order with minimal defects or heterogeneities in the layers. As a result, the transport within each layer is ballistic and scattering only occurs at the interfaces, hence the series representation of the resistances \(\left(\frac{1}{Cv}\right)\) in each layer. This representation implies fully diffuse inelastic phonon scattering, in which phonons scattered at an interface share no phase relationship with their previous state. Accordingly, this assumption reflects the lowest predicted thermal conductivities for a model invoking ballistic transport and appropriately fits inorganic superlattices in which diffuse scattering is dominant at interlayer interfaces.[44] Each of these assumptions, while limiting for most inorganic superlattices, is actually rather appropriate for 2D LHPs as described above. The assumption of ballistic transport within each perovskite layer is reasonable given that the octahedral layer thicknesses studied here (<2 nm) are comparable to or less than the phonon MFP of bulk HOIPs.[39, 33, 34] (Although, it is worth noting that scattering within the perovskite layers - as quantified through the addition of a series resistance \(\left(\frac{d_{inorganic}}{k_{MAPbBr,bulk}}\right)\) corresponding to diffusive transport in the octahedral layer - would minimally impact the predicted conductivity.) More significant is the assumption of ballistic transport through the butylammonium organic spacer layer. This assumption calls for replacing the ultralow thermal conductivities of liquid alkanes with significantly faster ballistic thermal transport across the molecular junctions that comprise the organic subphase. Though initially surprising, such behavior has been observed elsewhere as ballistic heat transport across the molecular junctions within self-assembled monolayers[58, 59]. We find reasonable agreement between the diffuse inelastic superlattice (DIS) model represented by Eq. 1 and our experimental data when using bulk values for the heat capacity and sound speed within the perovskite layers, sound speeds for tethered aliphatic chains extracted from previous studies on SAMs[60] and colloidal nanocrystals[61], and heat capacities for liquid alkanes (which have comparable densities to the organic subphase in 2D LHPs). While it is routine to approximate the phonon group velocity as the sound speed, using the volumetric heat capacity can become tenuous when optical phonons that contribute significantly to the heat capacity do not contribute to thermal transport. This, however, does not seem to be the case for HOIPs since optical phonons contribute heavily to thermal conductivity[30]. ### Heat Capacity of 2D LHPs We experimentally measured the heat capacity of the 2D LHP crystals used in this study by differential scanning calorimetry (DSC) and found the values to be consistent with those estimated from mass-weighted averages of the bulk values (Figure 3).
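The mass-weighted estimate referred to here is a simple lever rule; the following one-function sketch (with placeholder numbers chosen by us for illustration, not the values measured in this work) shows the arithmetic.

```python
# Lever-rule estimate of the composite specific heat from the two subphases.
def composite_heat_capacity(f, c_perovskite, c_alkane):
    """Specific heat in J/(g K); f is the mass fraction of the perovskite subphase."""
    return f * c_perovskite + (1 - f) * c_alkane

# Illustrative placeholder values only (not fitted or measured values from this work).
print(composite_heat_capacity(f=0.7, c_perovskite=0.30, c_alkane=2.2))   # ~0.87 J/(g K)
```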
This finding further reinforces the view that each nanoscale subphase retains its bulk heat capacity and sound velocity. Similar conclusions have been reached for other hybrid organic-inorganic nanomaterials such as colloidal quantum dot solids, wherein the vibrational density of states (vDOS) of the nanomaterial is simply the sum of the vDOS of the organic (ligand) and inorganic components[53]. Figure 3: Experimentally measured (purple) specific heat of 2D LHPs with various organic spacer molecules (chain length C) and octahedral layer thicknesses (n) using DSC agrees with predictions (red) using a mass-weighted average of the bulk heat capacities (\(C_{2DLHP}=fC_{MAPbBr,bulk}+(1-f)C_{alkane,bulk}\)), where \(f\) is the mass fraction of the perovskite subphase. Increasing the octahedral layer thickness of 2D LHPs, using the DIS model, should lead to an increase in the thermal conductivity solely from the increase in the period thickness, as we observe (i.e. there are fewer scattering interfaces per unit length). Within this model framework, the interfaces provide the same thermal resistance regardless of the octahedral layer thickness, and the phonon transport within the perovskite layer is ballistic. This also potentially explains why the measured increase in thermal conductivity moving from n=2 to n=3 for C4:MAPbBr is less than that predicted by the DIS model (Figure 2). As the octahedral layer thickness approaches the phonon MFP, the assumption of purely ballistic transport breaks down. The decrease in measured thermal conductivity for n=3 C4:MAPbBr with respect to theory may be due to phonon-phonon scattering within the perovskite layers. ### Dependence on Organic Chain Length To probe the limits of ballistic transport within the organic subphase, we measured room temperature thermal conductivities for n=1 PbBr 2D LHPs with varying organic spacer chain lengths (butylammonium-octylammonium, C4-C8). Strikingly, the cross-plane thermal conductivity of the 2D LHP crystals increased monotonically with increasing organic spacer chain length from 0.19 \(\pm\) 0.07 W/m\(\cdot\)K for n=1 butylammonium:PbBr to 0.51 \(\pm\) 0.13 W/m\(\cdot\)K for n=1 octylammonium:PbBr (Figure 4a). As before, the measured thermal conductivity is well-approximated by the DIS model, within instrumental error. Further analysis of the model predictions reveals that varying the length of the organic carbon chain separating the perovskite layers actually influences the thermal conductivity through multiple channels. Firstly, the period thickness of the organic layer increases with the length of the carbon chain, reducing the spatial frequency of scattering interfaces. Secondly, the sound speed through the bound organic molecules increases by a factor of 3 moving from butylammonium to octylammonium, while the heat capacity decreases only modestly (<10%) over the same range. Thus, the thermal transmittance (\(C_{2}v_{2}\), the product of heat capacity and sound velocity) and the period thickness both increase with the length of the organic spacer molecule, without affecting the perovskite layer (Figure 4b). It is particularly noteworthy that the measured thermal conductivity of n=1 octylammonium:PbBr actually meets that of bulk MAPbBr. While striking, this observation is consistent with expectations from the DIS model.
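To make the use of Eq. 1 concrete, the short sketch below evaluates the DIS expression alongside the classical series-resistance estimate discussed earlier. The function names and all input numbers are ours: they are illustrative placeholders in SI units, not the parameters used in our fits.

```python
# Eq. 1 (ballistic layers, diffuse interfaces) versus a diffusive series-resistance model.
def k_dis(C1, v1, d1, C2, v2, d2):
    """Cross-plane conductivity from Eq. 1; C in J/(m^3 K), v in m/s, d in m."""
    return 0.5 * (1.0 / (C1 * v1) + 1.0 / (C2 * v2)) ** -1 * (d1 + d2) / 2.0

def k_series(k1, d1, k2, d2):
    """Classical series-resistance composite conductivity (fully diffusive limit)."""
    return (d1 + d2) / (d1 / k1 + d2 / k2)

# Subscript 1: inorganic (perovskite) layer; subscript 2: organic spacer layer.
print(k_dis(C1=1.2e6, v1=1.5e3, d1=0.6e-9, C2=1.7e6, v2=1.0e3, d2=0.8e-9))   # ~0.31 W/(m K)
print(k_series(k1=0.5, d1=0.6e-9, k2=0.1, d2=0.8e-9))                        # ~0.15 W/(m K)
```

With numbers of this magnitude, the DIS estimate sits above the series-resistance one, mirroring the qualitative comparison in Figure 2.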
In fact, this behavior highlights an interesting consequence of the DIS model: that the thermal conductivity of 2D LHPs is, in principle, not limited by the same mechanisms that lead to ultralow thermal conductivity of bulk perovskites. The primary driver for this behavior appears to be the ability of the molecular chains to sustain ballistic transport across greater distances than the perovskite phase. As long as the transport within both sub-phases remains ballistic, the only determinants for thermal conductivity are the thermal transmittance ratio \(\left(\frac{C_{2}v_{2}}{C_{1}v_{1}}\right)\) and the period thickness. Thus, for relatively efficient ballistic transport (large thermal transmittance) of the organic sub-phase the thermal conductivity will reach a theoretical maximum dependent solely on the period thickness and the thermal transmittance of the perovskite layer. It must be stressed that such a maximum is actually far above what we probe herein, but could significantly exceed the thermal conductivity of bulk MAPbBr. Cross-plane thermal conductivity in other 2D LHP perovskites was studied recently using FDTR[35] and the related technique, time-domain thermoreflectance (TDTR)[26]. Similar to the 2D lead bromide results presented here, the 2D lead iodides also exhibited ultralow thermal conductivity (0.06-0.37 W/m\(\cdot\)K) that was suppressed relative to the corresponding bulk value. However, unlike the 2D lead bromide results presented here, the 2D lead iodides exhibited _decreasing_ thermal conductivity with increasing octahedral layer thickness[35] and increasing organic chain length[26]. It is unclear whether the opposite trends observed for bromides and iodides arise from true chemical differences in intrinsic material behavior, or from extrinsic effects such as sample quality or measurement methodology. (We note, however, that the thermal conductivities of bulk MAPbBr and BA\({}_{2}\)PbBr\({}_{4}\) measured on our instrument are identical, within error, to previous reports [3, 4, 5, 6, 7] - suggesting that measurement methodology is not the origin of different trends.) Figure 4: a) Measured thermal conductivity using FDTR for n = 1 PbBr 2D LHPs at room temperature (black) and comparison to different theoretical models as a function of the aliphatic hydrocarbon chain length of the organic spacer subphase. b) Calculated relative thermal transmittance ratio \(\left(\frac{C_{2}v_{2}}{C_{1}v_{1}}\right)\) for the organic spacers used in (a) based on bulk heat capacities and sound speeds measured from SAMs & alkyl ligands bound to quantum dot surfaces. Unfortunately, attempts to measure the thermal conductivity of 2D lead iodide perovskites using our FDTR instrument were unsuccessful. 2D lead iodide samples readily degraded during FDTR measurement in our lab, presumably due to strong resonant excitation of the iodide band gap by the 488 nm pump or 532 nm probe lasers - which is not a problem for the blue/UV-absorbing bromide perovskites. ## Conclusions: In summary, we have measured the thermal conductivity of a series of alkylammonium lead bromide 2D LHPs. We have observed ultralow thermal conductivities ranging from 0.18 - 0.51 W/mK, revealing minimal thermal conductivity suppression relative to bulk MAPbBr perovskites. The thermal conductivity was found to increase monotonically as a function of both the octahedral layer thickness and the organic chain length.
Using a thermal conductivity model derived using the Boltzmann transport equation for multilayer structures that exhibit ballistic transport with diffuse interface scattering to understand the origins of these trends, we suggest that thermal transport in the 2D bromide perovskites studied is determined by interface scattering and ballistic transport across the organic subphase rather than phonon scattering within either bulk phase. The efficacy of this model suggests that the unexpectedly high thermal conductivities of 2D bromide perovskites relative to other hybrids and nanomaterials stem from several unique facets of this material. First, the short MFPs of bulk MAPbBr\({}_{3}\) perovskites allow for atomically thin inorganic layers that do not exhibit significant suppression of phonon transport. Second, the use of molecular junctions as organic spacers incorporates a subphase that, owing to the unique ballistic transport across these junctions, is actually _more_ conductive than the perovskite layer. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & n=1 & n=1 & n=1 & n=1 & n=1 & n=2 & n=3 \\ & C8:PbBr & C7:PbBr & C6:PbBr & C5:PbBr & C4:PbBr & C4:MAPbBr & C4:MAPbBr \\ \hline Thermal Conductivity (W/m\(\cdot\)K) & 0.51 \(\pm\) 0.13 & 0.40 \(\pm\) 0.12 & 0.29 \(\pm\) 0.1 & 0.29 \(\pm\) 0.1 & 0.19 \(\pm\) 0.07 & 0.27 \(\pm\) 0.07 & 0.28 \(\pm\) 0.08 \\ \hline Heat Capacity (J/g\(\cdot\)K) & 1.03 \(\pm\) 0.3 & 0.85 \(\pm\) 0.26 & 0.72 \(\pm\) 0.22 & 0.72 \(\pm\) 0.22 & 0.52 \(\pm\) 0.16 & 0.63 \(\pm\) 0.19 & 0.39 \(\pm\) 0.12 \\ \hline \end{tabular} \end{table} Table 1: Summary of Thermal Transport Properties of 2D LHPs at Room Temperature Thus, 2D LHPs discard the phonon-scattering pathways that limit bulk perovskites in exchange for interface scattering that can be dexterously controlled via the chemistry of the organic subphase. As a result, 2D LHPs can in principle support thermal conductivities an order of magnitude higher than bulk perovskites. Or, since they may be designed to have bulk-like thermal properties as nanocomposites where advantageous and deviant behavior where bulk properties would be deleterious, these materials can be engineered to have even lower thermal conductivities (as for thermoelectric devices). Most critically, since the thermal transport in these materials is driven by ballistic phonon conduction across the organic subphase, the inorganic subphase, and thus the prodigious optoelectronic properties of 2D LHPs, will be largely uncoupled from these pursuits in thermal engineering. **Methods:** _Synthesis of 2D LHP crystals._ 2D lead halide perovskite (2D LHP) crystals were made _via_ a previously reported cooling-induced crystallization method [55]. Briefly, a solution of PbBr\({}_{2}\) was prepared by dissolving PbO (99.9+%, (trace metal basis) <10 microns, powder, ACROS Organic) in concentrated aqueous HBr solution (ACS reagent, 48%, MilliporeSigma) under reflux at 130 \({}^{\circ}\)C for 15 minutes. The solution was then allowed to cool to room temperature before a small volume of organic spacer (alkylamine, L) was added and a white precipitate of \(n=1\) L:PbBr formed. For the syntheses of \(n\) = 1 2D LHPs, this solution containing the white precipitate of \(n=1\) L:PbBr was heated on a hot plate set at 130 \({}^{\circ}\)C until clear. After that, the clear solution was allowed to cool slowly inside a thermos filled with hot sand at 10 \({}^{\circ}\)C to induce crystallization.
After a day, crystals of bromide 2D LHPs were collected by suction filtration and dried under reduced pressure for at least 12 hours. For the syntheses of \(n=2\) and 3 2D LHPs, the solution containing the white precipitate of \(n=1\) L:PbBr was mixed with a solution of MABr in concentrated aqueous HBr before the final heating step. Further information on the synthesis and confirmatory material characterization is available in the supporting information. _Frequency Domain Thermoreflectance (FDTR)._ Two continuous wave lasers (Coherent Sapphire 200 CDRH) are used to heat the sample and measure its thermal response: a 488 nm pump (which is intensity-modulated via a Conoptics electro-optic modulator) and a 532 nm probe. The former periodically heats the gold surface while the latter continuously monitors the resultant surface temperature modulation at the surface of the transducer via the thermoreflectance of the gold film. The pump is modulated sinusoidally from 100 kHz to 1 MHz. The thermal response is measured using a lock-in amplifier (Zurich Instruments HF2LI), which records the relative phase lag of the measured probe (temperature-induced reflectivity) oscillation as compared to the pump (heat flux) phase. This frequency response is related to the thermal properties of the sample, and the frequency-domain data were fit using a heat transfer model to determine the effective thermal conductivity of the nanostructured macroscopic crystal.[55, 56] Laser spot sizes were imaged using a CCD camera prior to measurement. Measurements were done at several locations to ensure no variation in the signal, but since the spot size varies from acquisition to acquisition only one scan was used for the fitting. The resulting thermal conductivity has an uncertainty of ~10% due to input parameter uncertainties (chiefly the spot size of the lasers). Further details on the experimental method employed are available in the supporting information. _Differential Scanning Calorimetry (DSC)._ DSC was implemented using a TA Instruments Discovery DSC. Scans were run using ~5-10 mg of material sealed in Tzero aluminum pans. Heating and cooling scans were run at 10 \({}^{\circ}\)C/min ramp/cool rates between 25-60 \({}^{\circ}\)C, with 1 minute isothermals in between scans. The heat capacity of a given sample was measured using a reference material (bulk MAPbBr\({}_{3}\)) with a known heat capacity. Uncertainties reported correspond to the instrument uncertainty as determined by measurement of a second standard (fused silica) using the same measurement scheme. Scan to scan uncertainty was negligible. The lever rule - (\(C_{2DLHP}=fC_{MAPbBr,bulk}+(1-f)C_{Alkane,bulk}\)), where \(f\) is the mass fraction of the perovskite subphase - was used to estimate composite heat capacities using previously measured bulk MAPbBr\({}_{3}\) heat capacities and bulk liquid alkane heat capacities.[62] Relative mass fractions were determined using the chemical formula of the nanomaterials. Online Supporting Information: Supporting information available online: further details of 2D LHP synthesis; characterization by photoluminescence and powder X-ray diffraction (XRD); description and characterization of FDTR instrument, data acquisition, and analysis. **Acknowledgements:** We are grateful to Sam Huberman and Liza Lee for assistance with implementation of the FDTR computational model. We thank Sam Winslow for assistance with DSC measurements, and Mahesh Gangishetty and Dan Congreve for use of their thermal evaporation chamber.
Synthesis of 2D perovskite materials was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under award number DE-SCoo19345. Thermal transport measurements were supported by the U.S. National Science Foundation under award 1452857. N.D. was supported by the MIT Energy Initiative Society of Energy Fellows. Profilometry and powder XRD measurements were performed at the MRSEC Shared Experimental Facilities at MIT, supported by the National Science Foundation under award number DMR-1419807. DSC measurements were performed at the MIT Institute for Soldier Nanotechnologies. The ISN is supported and administered for the US Army by the US Army Research Office (ARO), a part of the Army Research Laboratory (ARL), which is itself a part of the US Army Research Development and Engineering Command (RDECOM). The ISN also receives guidance and oversight from the Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) through the Deputy Assistant Secretary of the Army for Research and Technology (DASA(R&T)). ## References * Green et al. 2014 Green, M. A.; Ho-Baillie, A.; Snaith, H. J. The Emergence of Perovskite Solar Cells. _Nat. Photonics_**2014,**8 (7), 506-514. [https://doi.org/10.1038/nphoton.2014.134](https://doi.org/10.1038/nphoton.2014.134). * Tan et al. 2017 Tan, H.; Jain, A.; Voznyy, O.; Lan, X.; Garcia de Arquer, F. P.; Fan, J. Z.; Quintero-Bermudez, R.; Yuan, M.; Zhang, B.; Zhao, Y.; Fan, F.; Li, P.; Quan, L. N.; Zhao, Y.; Lu, Z.-H.; Yang, Z.; Hoogland, S.; Sargent, E. H. Efficient and Stable Solution-Processed Planar Perovskite Solar Cells via Contact Passivation. _Science (80-. )._**2017,**355 (6326), 722-726. [https://doi.org/10.126/science.aai9o81](https://doi.org/10.126/science.aai9o81). * Chen et al. 2014 Chen, Q.; Zhou, H.; Hong, Z.; Luo, S.; Duan, H.-S.; Wang, H.-H.; Liu, Y.; Li, G.; Yang, Y. Planar Heterojunction Perovskite Solar Cells via Vapor-Assisted Solution Process. _J. Am. Chem. Soc._**2014,**136 (2), 622-625. [https://doi.org/10.1021/ja41509g](https://doi.org/10.1021/ja41509g). * Kim et al. 2015 Kim, Y. H.; Cho, H.; Heo, J. H.; Kim, T. S.; Myoung, N. S.; Lee, C. L.; Im, S. H.; Lee, T. W. Multicolored Organic/Inorganic Hybrid Perovskite Light-Emitting Diodes. _Adv. Mater._**2015,**27 (7), 1248-1254. [https://doi.org/10.1002/adma.201403751](https://doi.org/10.1002/adma.201403751). * Yuan et al. 2016 Yuan, M.; Quan, L. N.; Comin, R.; Walters, G.; Sabatini, R.; Voznyy, O.; Hoogland, S.; Zhao, Y.; Beauregard, E. M.; Kanjanaboos, P.; Lu, Z.; Kim, D. H.; Sargent, E. H. Perovskite Energy Funnels for Efficient Light-Emitting Diodes. _Nat. Nanotechnol._**2016,**1 (10), 872-877. [https://doi.org/10.1038/nnano.2016.10](https://doi.org/10.1038/nnano.2016.10). * Zhang et al. 2018 Zhang, M.; Zhang, F.; Wang, Y.; Zhu, L.; Hu, Y.; Lou, Z.; Hou, Y.; Teng, F. High-Performance Photodiode-Type Photodetectors Based on Polycrystalline Formamidinium Lead Iodide Perovskite Thin Films. _Sci. Rep._**2018,**8 (1), 1157. [https://doi.org/10.1038/s41598-018-29147-6](https://doi.org/10.1038/s41598-018-29147-6). * Jia et al. 2017 Jia, Y.; Kerner, R. A.; Grede, A. J.; Rand, B. P.; Giebink, N. C. Continuous-Wave Lasing in an Organic-Inorganic Lead Halide Perovskite Semiconductor. _Nat. Photonics_**2017,**1 (12), 784-788. [https://doi.org/10.1038/s41566-017-0047-6](https://doi.org/10.1038/s41566-017-0047-6). * Mei et al. 2014 Mei, A.; Li, X.; Liu, L.; Ku, Z.; Liu, T.; Rong, Y.; Xu, M.; Hu, M.; Chen, J.; Yang, Y.; Gratzel, M.; Han, H. 
A Hole-Conductor-Free, Fully Printable Mesoscopic Perovskite Solar Cell with High Stability. _Science (80-. )._**2014,**345 (6194), 295-298. [https://doi.org/10.1126/science.1254763](https://doi.org/10.1126/science.1254763). * Miyata et al. 2015 Miyata, A.; Mitioglu, A.; Plochocka, P.; Portugall, O.; Wang, J. T. W.; Stranks, S. D.; Snaith, H. J.; Nicholas, R. J. Direct Measurement of the Exciton Binding Energy and Effective Masses for Charge Carriers in Organic-Inorganic Tri-Halide Perovskites. _Nat. Phys._**2015**, \(n\) (7), 582-587. [https://doi.org/10.1038/nphys3357](https://doi.org/10.1038/nphys3357). * Stoumpos et al. 2016 Stoumpos, C. C.; Cao, D. H.; Clark, D. J.; Young, J.; Rondinelli, J. M.; Jang, J. I.; Hupp, J. T.; Kanatzidis, M. G. Ruddlesden-Popper Hybrid Lead Iodide Perovskite 2D Homologous Semiconductors. _Chem. Mater._**2016**, _28_ (8), 2852-2867. [https://doi.org/10.1021/acs.chemmater.6boo847](https://doi.org/10.1021/acs.chemmater.6boo847). * Yang et al. 2018 Yang, X.; Zhang, X.; Deng, J.; Chu, Z.; Jiang, Q.; Meng, J.; Wang, P.; Zhang, L.; Yin, Z.; You, J. Efficient Green Light-Emitting Diodes Based on Quasi-Two-Dimensional Composition and Phase Engineered Perovskite with Surface Passivation. _Nat. Commun._**2018**, \(9\) (i), 57o. [https://doi.org/10.1038/s41467-008-02978-7](https://doi.org/10.1038/s41467-008-02978-7). * Wang et al. 2017 Wang, Z.; Lin, Q.; Chmiel, F. P.; Sakai, N.; Herz, L. M.; Snaith, H. J. Efficient Ambient-Air-Stable Solar Cells with 2D-3D Heterostructured Butylammonium-Caesium-Formamidinium Lead Halide Perovskites. _Nat. Energy_**2017**, \(2\) (9), 1735. [https://doi.org/10.1038/nenergy.2017.135](https://doi.org/10.1038/nenergy.2017.135). * Tsai et al. 2016 Tsai, H.; Nie, W.; Blancon, J.-C.; Stoumpos, C. C.; Asadpour, R.; Harutyunyan, B.; Neukirch, A. J.; Verduzco, R.; Crochet, J. J.; Tretiak, S.; et al. High-Efficiency Two-Dimensional Ruddlesden-Popper Perovskite Solar Cells. _Nature_**2016**, _536_ (7616), 312-316. [https://doi.org/10.1038/nature18306](https://doi.org/10.1038/nature18306). * Liang et al. 2016 Liang, D.; Peng, Y.; Fu, Y.; Shearer, M. J.; Zhang, J.; Zhai, J.; Zhang, Y.; Hamers, R. J.; Andrew, T. L.; Jin, S. Color-Pure Violet-Light-Emitting Diodes Based on Layered Lead Halide Perovskite Nanoplates. _ACS Nano_**2016**, _10_ (7), 6897-6904. [https://doi.org/10.1021/acsnano.6boz683](https://doi.org/10.1021/acsnano.6boz683). * Grancini et al. 2017 Grancini, G.; Roldan-Carmona, C.; Zimmermann, I.; Mosconi, E.; Lee, X.; Martineau, D.; Narbey, S.; Oswald, F.; De Angelis, F.; Graetzel, M.; Nazeeruddin, M. K. One-Year Stable Perovskite Solar Cells by 2D/3D Interface Engineering. _Nat. Commun._**2017**, \(8\), 15684. [https://doi.org/10.1038/ncomms15684](https://doi.org/10.1038/ncomms15684). * Smith et al. 2017 Smith, I. C.; Hoke, E. T.; Solis-Ibarra, D.; McGehee, M. D.; Karunadasa, H. I. A Layered Hybrid Perovskite Solar-Cell Absorber with Enhanced Moisture Stability. _Angew. Chemie Int. Ed._**2014**, 53 (42), 11232-11235. [https://doi.org/10.1002/anie.201406466](https://doi.org/10.1002/anie.201406466). * Mao et al. (2018) Mao, L.; Stoumpos, C. C.; Kanatzidis, M. G. Two-Dimensional Hybrid Halide Perovskites: Principles and Promises. _J. Am. Chem. Soc._**2018**, 141 (3), 1171-1190. [https://doi.org/10.1021/jacs.8b10851](https://doi.org/10.1021/jacs.8b10851). * Saparov and Mitzi (2016) Saparov, B.; Mitzi, D. B. Organic-Inorganic Perovskites: Structural Versatility for Functional Materials Design. _Chem. Rev._**2016**, 106 (7), 4558-4596. 
[https://doi.org/10.1021/acs.chemrev.5boo715](https://doi.org/10.1021/acs.chemrev.5boo715). * Mauck and Tisdale (2019) Mauck, C. M.; Tisdale, W. A. Excitons in 2D Organic-Inorganic Halide Perovskites. _Trends Chem._**2019**, 1 (4), 380-393. [https://doi.org/10.1016/j.trechm.2019.04.003](https://doi.org/10.1016/j.trechm.2019.04.003). * Smith et al. (2019) Smith, M. D.; Connor, B. A.; Karunadasa, H. I. Tuning the Luminescence of Layered Halide Perovskites. _Chemical Reviews_. American Chemical Society March 13, 2019, pp 3104-3139. [https://doi.org/10.1021/acs.chemrev.8boo477](https://doi.org/10.1021/acs.chemrev.8boo477). * Manser et al. (2016) Manser, J. S.; Christians, J. A.; Kamat, P. V. Intriguing Optoelectronic Properties of Metal Halide Perovskites. _Chemical Reviews_. American Chemical Society November 9, 2016, pp 12956-13008. [https://doi.org/10.1021/acs.chemrev.6boo36](https://doi.org/10.1021/acs.chemrev.6boo36). * Pisoni et al. (2014) Pisoni, A.; Jacimovic, J.; Barisic, O. S.; Spina, M.; Gaal, R.; Forro, L.; Horvath, E. Ultra-Low Thermal Conductivity in Organic-Inorganic Hybrid Perovskite CH 3 NH 3 PbI 3. _J. Phys. Chem. Lett._**2014**, 5 (14), 2488-2492. [https://doi.org/10.1021/jz5012109](https://doi.org/10.1021/jz5012109). * Elbaz et al. (2017) Elbaz, G. A.; Ong, W.-L.; Doud, E. A.; Kim, P.; Paley, D. W.; Roy, X.; Malen, J. A. Phonon Speed, Not Scattering, Differentiates Thermal Transport in Lead Halide Perovskites. _Nano Lett._**2017**, 17 (9), 5734-5739. [https://doi.org/10.1021/acs.nanolett.7boz696](https://doi.org/10.1021/acs.nanolett.7boz696). * Dahod et al. (2020) Dahod, N. S.; France-Lanord, A.; Paritmongkol, W.; Grossman, J. C.; Tisdale, W. A. Low-Frequency Raman Spectrum of 2D Layered Perovskites: Local Atomistic Motion or Superlattice Modes? _J. Chem. Phys._**2020**, 153 (4), 044710. [https://doi.org/10.1063/5.0012763](https://doi.org/10.1063/5.0012763). * Christodoulides et al. (2019) Christodoulides, A. D.; Guo, P.; Dai, L.; Hoffman, J. M.; Li, X.; Zuo, X.; Rosenmann, D.; Brumberg, A.; Kanatzidis, M. G.; Schaller, R. D.; Malen, J. A. Signatures of Coherent Phonon Transport in Ultralow Thermal Conductivity Two-Dimensional Ruddlesden Popper Phase Perovskites. _ACS Nano_**2021**, _15_ (3), 4165-4172. [https://doi.org/10.1021/acsnano.0c03595](https://doi.org/10.1021/acsnano.0c03595). * Rasel et al. 2020 Rasel, M. A. J.; Giri, A.; Olson, D. H.; Ni, C.; Hopkins, P. E.; Feser, J. P. Chain-Length Dependence of Thermal Conductivity in 2D Alkylammonium Lead Iodide Single Crystals. _ACS Appl. Mater. Interfaces_**2020**, _12_ (48), 53705-5371. [https://doi.org/10.1021/acsami.oci0894](https://doi.org/10.1021/acsami.oci0894). * Giri et al. 2020 Giri, A.; Chen, A. Z.; Mattoni, A.; Aryana, K.; Zhang, D.; Hu, X.; Lee, S.-H.; Choi, J. J.; Hopkins, P. E. Ultralow Thermal Conductivity of Two-Dimensional Metal Halide Perovskites. _Nano Lett._**2020**, _20_ (5), 3331-3337. [https://doi.org/10.1021/acs.nanolett.0c00214](https://doi.org/10.1021/acs.nanolett.0c00214). * Lin et al. 2022 Lin, M.; Dhanabalan, B.; Biffi, G.; Leng, Y.; Kutkan, S.; Arciniegas, M. P.; Tan, P.; Krahne, R. Correlating Symmetries of Low-Frequency Vibrations and Self-Trapped Excitons in Layered Perovskites for Light Emission with Different Colors. _Small_**2022**, _18_ (15), 2106759. [https://doi.org/10.1002/smll.202106759](https://doi.org/10.1002/smll.202106759). * Heiderhoff et al. 2017 Heiderhoff, R.; Haeger, T.; Pourdavoud, N.; Hu, T.; Al-Khafaji, M.; Mayer, A.; Chen, Y.; Scheer, H.-C.; Riedl, T. 
Thermal Conductivity of Methylammonium Lead Halide Perovskite Single Crystals and Thin Films: A Comparative Study. _J. Phys. Chem. C_**2017**, _121_ (51), 28306-2831. [https://doi.org/10.1021/acs.jpcc.7bn495](https://doi.org/10.1021/acs.jpcc.7bn495). * Gold-Parker et al. 2018 Gold-Parker, A.; Gehring, P. M.; Skelton, J. M.; Smith, I. C.; Parshall, D.; Frost, J. M.; Karunadasa, H. I.; Walsh, A.; Toney, M. F. Acoustic Phonon Lifetimes Limit Thermal Transport in Methylammonium Lead Iodide. _Proc. Natl. Acad. Sci._**2018**, _15_ (47), 1905-1906. [https://doi.org/10.1073/pnas.181222715](https://doi.org/10.1073/pnas.181222715). * Ge et al. 2018 Ge, C.; Hu, M.; Wu, P.; Tan, Q.; Chen, Z.; Wang, Y.; Shi, J.; Feng, J. Ultralow Thermal Conductivity and Ultrahigh Thermal Expansion of Single-Crystal Organic-Inorganic Hybrid Perovskite CH 3 NH 3 PbX 3 (X = Cl, Br, I). _J. Phys. Chem. C_**2018**, _122_ (28), 15973-15978. [https://doi.org/10.1021/acs.jpcc.8b05919](https://doi.org/10.1021/acs.jpcc.8b05919). * Lee et al. 2017 Lee, W.; Li, H.; Wong, A. B.; Zhang, D.; Lai, M.; Yu, Y.; Kong, Q.; Lin, E.; Urban, J. J.; Grossman, J. C.; Yang, P. Ultralow Thermal Conductivity in All-Inorganic Halide Perovskites. _Proc. Natl. Acad. Sci._**2017**, _114_ (33), 8693-8697. [https://doi.org/10.1073/pnas.171744114](https://doi.org/10.1073/pnas.171744114). * Yue et al. 2016 Yue, S.-Y.; Zhang, X.; Qin, G.; Yang, J.; Hu, M. Insight into the Collective Vibrational Modes Driving Ultralow Thermal Conductivity of Perovskite Solar Cells. _Phys. Rev. B_**2016**, _94_ (n), 115427. [https://doi.org/10.103/PhysRevB.94.115427](https://doi.org/10.103/PhysRevB.94.115427). * Qian et al. 2016 Qian, X.; Gu, X.; Yang, R. Lattice Thermal Conductivity of Organic-Inorganic Hybrid Perovskite CH 3 NH 3 PbI 3. _Appl. Phys. Lett._**2016**, _108_ (6), 063902. [https://doi.org/10.1063/1.4941921](https://doi.org/10.1063/1.4941921). * Hata et al. 2016 Hata, T.; Giorgi, G.; Yamashita, K. The Effects of the Organic-Inorganic Interactions on the Thermal Transport Properties of CH 3 NH 3 PbI 3. _Nano Lett._**2016**, _16_ (4), 2749-2753. [https://doi.org/10.1021/acs.nanolett.6boo457](https://doi.org/10.1021/acs.nanolett.6boo457). * Dahod et al. 2019 Dahod, N. S.; Paritmongkol, W.; Stollmann, A.; Settens, C.; Zheng, S.-L.; Tisdale, W. A. Melting Transitions of the Organic Subphase in Layered Two-Dimensional Halide Perovskites. _J. Phys. Chem. Lett_**2019**, _10_. [https://doi.org/10.1021/acs.jpclett.9boo983](https://doi.org/10.1021/acs.jpclett.9boo983). * Nie et al. 2016 Nie, W.; Blancon, J. C.; Neukirch, A. J.; Appavoo, K.; Tsai, H.; Chhowalla, M.; Alam, M. A.; Sfeir, M. Y.; Katan, C.; Even, J.; Tretiak, S.; Crochet, J. J.; Gupta, G.; Mohite, A. D. Light-Activated Photocurrent Degradation and Self-Healing in Perovskite Solar Cells. _Nat. Commun._**2016**, \(7\), 11574. [https://doi.org/10.1038/ncommsn574](https://doi.org/10.1038/ncommsn574). * Huang et al. 2016 Huang, W.; Manser, J. S.; Kamat, P. V.; Ptasinska, S. Evolution of Chemical Composition, Morphology, and Photovoltaic Efficiency of CH3NH3PbI3 Perovskite under Ambient Conditions. _Chem. Mater._**2016**, _28_ (1), 303-311. [https://doi.org/10.1021/acs.chemmater.5b04122](https://doi.org/10.1021/acs.chemmater.5b04122). * Miyata et al. 2017 Miyata, K.; Atallah, T. L.; Zhu, X.-Y. Lead Halide Perovskites: Crystal-Liquid Duality, Phonon Glass Electron Crystals, and Large Polaron Formation. _Sci. Adv._**2017**, \(3\) (10), 15701469. 
[https://doi.org/10.1126/sciadv.1701469](https://doi.org/10.1126/sciadv.1701469). * Ferreira et al. 2018 Ferreira, A. C.; Letoublon, A.; Paofai, S.; Raymond, S.; Ecolivet, C.; Ruffle, B.; Cordier, S.; Katan, C.; Saidaminov, M. I.; Zhumekenov, A. A.; Bakr, O. M.; Even, J.; Bourges, P. Elastic Softness of Hybrid Lead Halide Perovskites. _Phys. Rev. Lett._**2018**, _121_ (8), 085502. [https://doi.org/10.103/PhysRevLett.121.085502](https://doi.org/10.103/PhysRevLett.121.085502). * Chen and Neagu 1997 Chen, G.; Neagu, M. Thermal Conductivity and Heat Transfer in Superlattices. _Appl. Phys. Lett._**1997**, _71_ (19), 2761-2763. [https://doi.org/10.1063/1.120126](https://doi.org/10.1063/1.120126). * Chen and Size 1997 Chen, G. Size and Interface Effects on Thermal Conductivity of Superlattices and Periodic Thin-Film Structures. _J. Heat Transfer_**1997**, _19_ (2), 220. [https://doi.org/10.115/1.2824212](https://doi.org/10.115/1.2824212). * Chen 1998 Chen, G. Thermal Conductivity and Ballistic-Phonon Transport in the Cross-Plane Direction of Superlattices. _Phys. Rev. B_**1998**, _57_ (23), 14958-14973-[https://doi.org/10.103/PhysRevB.57.14958](https://doi.org/10.103/PhysRevB.57.14958). * Simkin and Mahan 2000 Simkin, M. V; Mahan, G. D. Minimum Thermal Conductivity of Superlattices. _Phys. Rev. Lett._**2000**, _84_ (5), 927-930. [https://doi.org/10.103/PhysRevLett.84.927](https://doi.org/10.103/PhysRevLett.84.927). * Maldovan 2015 Maldovan, M. Phonon Wave Interference and Thermal Bandgap Materials. _Nat. Mater._**2015**, _14_ (7), 667-674. [https://doi.org/10.103/s0mat43o8](https://doi.org/10.103/s0mat43o8). * Giri et al. 2016 Giri, A.; Braun, J. L.; Hopkins, P. E. Effect of Crystalline/Amorphous Interfaces on Thermal Transport across Confined Thin Films and Superlattices. _J. Appl. Phys._**2016**, _19_ (23), 235305. [https://doi.org/10.1063/1.4953683](https://doi.org/10.1063/1.4953683). * Ravichandran et al. 2014 Ravichandran, J.; Yadav, A. K.; Cheaito, R.; Rossen, P. B.; Soukiassian, A.; Suresha, S. J.; Duda, J. C.; Foley, B. M.; Lee, C.-H.; Zhu, Y.; Lichtenberger, A. W.; Moore, J. E.; Muller, D. A.; Schlom, D. G.; Hopkins, P. E.; Majumdar, A.; Ramesh, R.; Zurbuchen, M. A. Crossover from Incoherent to Coherent Phonon Scattering in Epitaxial Oxide Superlattices. _Nat. Mater._**2014**, _13_ (2), 168-172. [https://doi.org/10.103/s0mat3826](https://doi.org/10.103/s0mat3826). * Guo et al. 2014 Guo, P.; Stoumpos, C. C.; Mao, L.; Sadasivam, S.; Ketterson, J. B.; Darancet, P.; Kanatzidis, M. G.; Schaller, R. D. Cross-Plane Coherent Acoustic Phonons in Two-Dimensional Organic-Inorganic Hybrid Perovskites. [https://doi.org/10.103/s441467-018-04429-9](https://doi.org/10.103/s441467-018-04429-9). * Luckyanova 2012 Luckyanova, M. N. Coherent Phonon Heat Conduction in Superlattices. _Science (8o-. )_. **2012**, 936-939. [https://doi.org/10.126/science.1225549](https://doi.org/10.126/science.1225549). * Yang and Chen 2003 Yang, B.; Chen, G. Partially Coherent Phonon Heat Conduction in Superlattices. _Phys. Rev. B_**2003**, _67_ (19), 19531. [https://doi.org/10.103/PhysRevB.67.19531](https://doi.org/10.103/PhysRevB.67.19531). * Schmidt et al. 2009 Schmidt, A. J.; Cheaito, R.; Chiesa, M. A Frequency-Domain Thermoreflectance Method for the Characterization of Thermal Properties. _Rev. Sci. Instrum._**2009**, _80_, 3-8. [https://doi.org/10.1063/1.3212673](https://doi.org/10.1063/1.3212673). * Majumdar et al. 2014 Majumdar, S.; Sierra-Suarez, J. a.; Schiffres, S. N.; Ong, W.-L.; Higgs, C. F.; McGaughey, A. J. 
H.; Malen, J. a. Vibrational Mismatch of Metal Leads Controls Thermal Conductance of Self-Assembled Monolayer Junctions. _Nano Lett._**2015**, _15_ (5), 2985-2991. [https://doi.org/10.1021/nl504844d](https://doi.org/10.1021/nl504844d). * Ong et al. 2013 Ong, W.; Rupich, S. M.; Talapin, D. V; McGaughey, A. J. H.; Malen, J. a. Surface Chemistry Mediates Thermal Transport in Three-Dimensional Nanocrystal Arrays. _Nat. Mater._**2013**, _12_ (5), 410-415. [https://doi.org/10.1038/nmat3596](https://doi.org/10.1038/nmat3596). * Losego et al. 2012 Losego, M. D.; Grady, M. E.; Sottos, N. R.; Cahill, D. G.; Braun, P. V. Effects of Chemical Bonding on Heat Transport across Interfaces. _Nat. Mater._**2012**, \(1\) (6), 502-506. [https://doi.org/10.1038/nmat3303](https://doi.org/10.1038/nmat3303). * Paritmongkol et al. 2019 Paritmongkol, W.; Dahod, N. S.; Stollmann, A.; Mao, N.; Settens, C.; Zheng, S. L.; Tisdale, W. A. Synthetic Variation and Structural Trends in Layered Two-Dimensional Alkylammonium Lead Halide Perovskites. _Chem. Mater._**2019**, _31_ (15), 5592-5607. [https://doi.org/10.1021/acs.chemmater.gbo1318](https://doi.org/10.1021/acs.chemmater.gbo1318). * Cahill 2004 Cahill, D. G. Analysis of Heat Flow in Layered Structures for Time-Domain Thermoreflectance. _Rev. Sci. Instrum._**2004**, _75_ (12), 519-5122. [https://doi.org/10.1063/1.1819431](https://doi.org/10.1063/1.1819431). * Johnson et al. 2006 Johnson, J. A.; Maznev, A. A.; Cuffe, J.; Eliason, J. K.; Minnich, A. J.; Kehoe, T.; Sotomayor Torres, C. M.; Chen, G.; Nelson, K. A. Direct Measurement of Room-Temperature Nondiffusive Thermal Transport Over Micron Distances in a Silicon Membrane. [https://doi.org/10.103/PhysRevLett.10.025901](https://doi.org/10.103/PhysRevLett.10.025901). * Wang et al. 2006 Wang, R. Y.; Segalman, R. A.; Majumdar, A. Room Temperature Thermal Conductance of Alkanedithiol Self-Assembled Monolayers. _Appl. Phys. Lett._**2006**, _89_ (17), 17313. [https://doi.org/10.1063/1.2358856](https://doi.org/10.1063/1.2358856). * Wang et al. 2007 Wang, Z.; Carter, J. A.; Lagutchev, A.; Koh, Y. K.; Seong, N.-H.; Cahill, D. G.; Dlott, D. D. Ultrafast Flash Thermal Conductance of Molecular Chains. _Science (80-. )_. **2007**, _317_ (5839), 787-790. [https://doi.org/10.1126/science.145220](https://doi.org/10.1126/science.145220). * DelRio et al. 2009 DelRio, F. W.; Jaye, C.; Fischer, D. A.; Cook, R. F. Elastic and Adhesive Properties of Alkanethiol Self-Assembled Monolayers on Gold. _Appl. Phys. Lett._**2009**, _94_ (13), 131909. [https://doi.org/10.1063/1.311440](https://doi.org/10.1063/1.311440). * Lee et al. 2017 Lee, E. M. Y.; Mork, A. J.; Willard, A. P.; Tisdale, W. A. Including Surface Ligand Effects in Continuum Elastic Models of Nanocrystal Vibrations. _J. Chem. Phys._**2017**, _147_ (4), 04471. [https://doi.org/10.1063/1.4995439](https://doi.org/10.1063/1.4995439). * Knop et al. 1990 Knop, O.; Wasylishen, R. E.; White, M. A.; Cameron, T. S.; Oort, M. J. M. Van. Alkylammonium Lead Halides. Part 2. CH 3 NH 3 PbX 3 (X = Cl, Br, I) Perovskites: Cuboctahedral Halide Cages with Isotropic Cation Reorientation. _Can. J. Chem._**1990**, _68_ (3), 412-422. [https://doi.org/10.139/v90-063](https://doi.org/10.139/v90-063). Supplementary Information for: **Efficient Thermal Transport across Molecular Chains in Hybrid 2D Lead Bromide Perovskites** Nabeel S. Dahod\({}^{1}\), Watcharaphol Paritmongkol\({}^{1,2}\), William A. 
Tisdale\({}^{*}\) \({}^{1}\)Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA. \({}^{2}\)Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA. *Correspondence to: [email protected]_ ###### Contents * 1 PbBr 2D LHP synthesis * 2 FDTR experimental detail * 3 Estimation of material density * 4 Parameters used for composite models * 5 Alternative models for n-series conductivity trends **1. PbBr 2D LHP Synthesis & Preliminary Characterization** _i. PbBr 2D LHP Synthesis_ Crystals of bromide 2D LHPs were synthesized using a previously published cooling-induced crystallization method. Briefly, a solution of PbBr\({}_{2}\) was prepared by dissolving PbO (99.9+%, (trace metal basis) <10 microns, powder, ACROS Organic) in concentrated aqueous HBr solution (ACS reagent, 48%, MilliporeSigma) under reflux at 130 \({}^{\circ}\)C for 15 minutes. The solution was then allowed to cool to room temperature before a small volume of organic spacer (alkaneamine, L) was added and a white precipitate of \(n\) = 1 L:PbBr formed. For the syntheses of \(n\) = 1 L:PbBr was heated on a hot plate set at 130 \({}^{\circ}\)C until clear. After that, the clear solution was allowed to cool slowly inside a thermos filled with hot sand at 110 \({}^{\circ}\)C to induce crystallization. After a day, crystals of bromide 2D LHPs were collected by suction filtration and dried under reduced pressure for at least 12 hours. For the syntheses of \(n\) = 2 and 3 L:LHPs, the solution containing the white precipitate of \(n\) = 1 L:PbBr was mixed with a solution of MABr in concentrated aqueous HBr before the final heating step. _ii. Photoluminescence Measurements_ In order to confirm that the crystals grown were indeed composed of periodic atomically thin layers of perovskite and organic molecular junctions, photoluminescence spectroscopy was utilized to confirm appropriate quantum confinement given the perovskite quantum well thickness via appearance of an excitonic emission peak (Figure S1). For n=1 PbBr 2D LHPs, this peak is located at \(\sim\)410 nm, whereas for n=2,3 PbBr 2D LHPs the thicker perovskite layers correspond to weaker quantum confinement and excitonic emission at \(\sim\)445 nm and \(\sim\)465 nm, respectively. The inflated linewidth of the n=1 L:2D LHPs in Figure S1b are likely due to reabsorption of the excitonic emission. The re-absorbed light is emitted at lower energies (higher wavelengths), leading to the broader observed linewidth of the 2D LHPs. This is facilitated by the very small Stokes shift in n=1 2D LHPs and has been reported on elsewhere [2]. Photoluminescence spectra of bromide 2D LHPs were collected by a home-built photoluminescence set up. The output of a 365 nm fiber-coupled LED (Thorlabs, M365FP1) was used as an excitation source and was cleaned by a 39o nm short-pass filter (Semrock, FFoi-39o/SP-25) before exciting a sample. The emission from a sample was filtered by a 400 nm long-pass filter (Thorlabs, FELo400) to remove any residual excitation light before being measured by a fiber-coupled spectrometer (Avantes, AvaSpec-UL2o48XL). _iii. Powder X-Ray Diffraction Measurements_ The long-range order present in the crystals, as well as the retention of the perovskite unit cell within the inorganic sublayer was confirmed with powder X-ray diffraction. 
These results are readily compared with those corresponding to solved crystal structures of other Ruddlesden-Popper 2D LHPs in the literature, confirming synthesis of pure material and Figure **S1**. Summary of representative PL for PbBr 2D LHPs. a) PL for 2D LHPs at various inorganic sheet (octahedral layer) thicknesses. b) PL for n= 1 2D LHPs with various chain length organic spacer subphases (Cx alkylammonium spacer ions). allowing for analysis of structural parameters such as the material density and the organic/inorganic period thicknesses, both of which are readily obtained from analysis of a proposed crystal structure [1, 3, 4]. Representative powder XRD patterns are shown below in Figure s2. Powder X-ray diffraction data was taken using a PANalytical X'Pert Pro MPD X-ray diffractometer (Cu K\(\alpha\) radiation, \(\lambda\) = 1.54184 A) with High-Speed Bragg-Brentano Optics. A o.o4 rad soller slit, a 1\({}^{\circ}\) anti-scatter slit, a 10 mm mask and a programmable divergence slit with an illuminated length of 6 mm were used in the incident beam path. The diffracted beam optics included a o.o4 rad soller slit, a Ni Filter and an automatic receiving slit. The detector was an ultrafast X'Celerator RTMS detector. The angular step in 2\(\theta\) was o.o4\({}^{\circ}\). ## 2 Frequency Domain Thermoreflectance (FDTR) Experimental Detail ### Optical Layout The FDTR experimental setup is illustrated schematically in Figure S3. Low noise continuous wave 488 nm and 532 nm diode lasers (both Coherent Sapphire 200 CDRH laser systems) are employed as the pump and probe, respectively. The latter was chosen to optimally leverage the high coefficient of thermoreflectance of the gold transducer at 532 nm. Wavelength-specific optical isolators are used to prevent backscattered light from entering the laser cavities. The intensity modulation of the pump beam required for the experiment is achieved using an electro-optic modulator (EOM), which is driven by an electronic signal amplified from an arbitrary waveform generator. For experimental simplicity and minimal harmonic noise, sine waves with 1 Vpp are fed via the signal generator to the EOM driver. The now-modulated pump beam is then split via a 10:90 non-polarizing cube beam splitter. The lesser portion of the split pump is sent along a path including a micrometer driven translation stage to a photodetector. This signal is sent to the lock-in amplifier as the reference waveform. The larger portion of the split pump is reflected off of a dichroic and sent through the microscope objective to be reflected off of the sample (coated with a Au transducer). The laser spot is spatially Gaussian and oscillates temporally in intensity according to the waveform driving the EOM, successfully implementing the periodic heat flux necessary for the technique. The probe light is sent through a half waveplate into a polarizing beam splitter (PBS). The reflected light is sent through a quarter waveplate before passing through the dichroic. The use of the PBS and the waveplates allows for efficient separation of the modulated and unmodulated probe light, since light that has reflected off of the sample surface will have a different polarization from that just downstream of the optical isolator. The waveplates can be fine adjusted to optimize reflection of light heading to the sample and signal passed through the PBS post-sample. 
The laser spots are aligned collinearly following the dichroic, such that the probe monitors the thermoreflectance of roughly the same area that is heated by the pump. After reflection from the sample, the now modulated probe light is collected by the objective and sent back down its original laser path to the PBS. Due to the polarization shift mentioned above the modulated probe light now passes through the PBS before being aligned into another photodetector. The signal measured by this photodetector is sent to the signal input on the lock-in amplifier, which then determines the relative phase lag of the signal modulation from the reference. The micrometer-driven optical delay stage in the 488 nm reference beamline is used to adjust the path length of the reference arm to eliminate absolute phase lag between the reflected probe and reference waveforms. **Figure S3.** Schematic of the optical table layout for FDTR. Colored lines indicate the path of the laser light, with the colors corresponding to those of the pump (blue) and probe (green). When dashed, the lines represent intensity modulated light. Isolators are wavelength specific optical isolators. EOM is a Conoptics model 35o EOM that modulates the intensity of the pump laser light. Amplifier is an analog Conoptics model 25A EOM driver that amplifies a 1 Vpp signal from the signal generator and sends it to the EOM to drive operation. PBS = polarizing beam splitter cube. \(\lambda/2\) and \(\lambda/4\) are half and quarter wave plates. 488 nm filter selectively blocks stray pump laser light. Dichroic passes 532 nm laser light and reflects 488 nm light, 4ox objective (Nikon 4ox S Plan Fluor) focuses and collects collinear pump/probe laser light to/from sample in backscattering geometry. Si fixed gain detectors measure path-matched reference and signal modulations, which are detected via Lock-in detection and sent to a personal computer via USB. ### _ii. Data Acquisition_ Although the operating power for the lasers used is relatively high (200 mW), the surface temperature oscillation required for reliable/physically meaningful data must be relatively small to remain in the linear response regime. Additionally, the steady state temperature rise of the sample should be minimized as much as possible to avoid beam damage and maintain experimental integrity. To this end, a variety of neutral density filters are used to control the power of the reference, pump, and probe beam lines. Typical laser powers (as measured by a power meter at the sample stage) are \(\sim\)250-300 \(\mu\)W, but higher powers can and typically must be used for more conductive samples (\(>\)1 W/mK). Similarly, the spot sizes of the pump/probe lasers must be carefully controlled to maximize signal while also probing the dynamics of interest. The spot sizes represent the relevant length scale for the radial component of heat transfer during an FDTR measurement; variation of the spot size can be used to sensitively probe axial, radial, or mixed transport. For instance, if the spot size is larger than the axial penetration depth at a given frequency (as is the case here), then the pump laser can be treated as a 1D plane heating source and the axial component can be ignored. Conversely, when the radial length scale is much smaller than the axial penetration depth, the radial component of heat transport plays a significant role and cannot be ignored. 
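The regime argument above can be made concrete by comparing the thermal penetration depth for periodic surface heating, \(L_{p}=\sqrt{\alpha/(\pi f)}\), with the laser spot radius over the modulation range used in these measurements (100 kHz to 1 MHz, see below). The short sketch that follows uses assumed, illustrative film properties; the conductivity and volumetric heat capacity are placeholders, not results from this work.

```python
import numpy as np

# Assumed, illustrative properties of a low-conductivity 2D perovskite film
# (placeholders only; not measured values from this work).
k_film = 0.2                       # W m^-1 K^-1, cross-plane thermal conductivity
c_vol = 1.5e6                      # J m^-3 K^-1, volumetric heat capacity
alpha = k_film / c_vol             # thermal diffusivity, m^2 s^-1

spot_radius = 1.2e-6               # m, ~1.2 um pump/probe spot radius

# Thermal penetration depth for periodic surface heating at frequency f.
freqs = np.array([1e5, 2e5, 5e5, 1e6])          # Hz, spanning 100 kHz - 1 MHz
L_p = np.sqrt(alpha / (np.pi * freqs))

for f, lp in zip(freqs, L_p):
    regime = "1D axial (spot >> L_p)" if spot_radius > lp else "mixed axial/radial"
    print(f"f = {f/1e3:6.0f} kHz   L_p = {lp*1e9:5.0f} nm   -> {regime}")
```

With sub-micron penetration depths across the whole frequency range and a spot radius above a micron, the pump can indeed be treated as a one-dimensional plane heat source, consistent with the statement above.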
For our experiments, typical spot sizes of \(\sim\)1.2 \(\mu\)m were found to maximize signal at the above laser powers while remaining within the axial (iD) transport regime. In practice, the spot size must be measured during each individual data acquisition run, which is accomplished through the use of a CCD camera. The phase response of the sample was measured using a Zurich HFzLI lock-in amplifier. Operation was frequency locked using the output from the reference photodetector, but not phase-locked of course as this is the parameter we expect to monitor. In a typical instance of data acquisition, the phase response is measured by sweeping the modulation frequency fed to the EOM from 100 kHz - 1 MHz and cataloging the measured phase lag at each frequency. This frequency range was chosen due to both the lock-in/EOM operating windows and a relevant span of penetration depths for sensitive thermal property measurement (submicron penetration depths for all perovskite samples). ### _iv. Data Fitting and Parameter Extraction_ For consistency with previously reported material thermal conductivities, we utilize the same heat transfer model for analysis posed originally by Cahill, and used ubiquitously since then with respect to FDTR measurements of effective thermal conductivities[5, 6, 7, 8]. Many other models have been posed as potentially superior in directly capturing nonequilibrium transport behavior evident in FDTR experiments without the use of an effective thermal conductivity, but such models typically apply to much higher heating frequencies and with respect to far more conductive samples, and so fall outside the scope of this particular report.[9, 10] In order to extract an effective thermal conductivity of a macroscopic solid such as a 2D lead halide perovskite crystal, Fourier's law and thus the Cahill model remain sufficient. This model was fit to the experimentally measured phase response (phase lag vs. frequency) using a nonlinear least squares Levenberg-Marquardt algorithm. During fitting, the laser spot size was allowed to vary within experimental error and the thermal conductivity was allowed to vary by an order of magnitude from an initial guess. The uncertainty of the measurement was determined by evaluating the absolute change in measured thermal conductivity for the maximum variance of the spot size within the resolution of the CCD. Typical uncertainties are \(\sim\) 10 %, in line with other FDTR measurements previously reported.[11, 12] Several representative measurements/fits are shown below in Figures S4. _iv. Measurement of Relevant System Parameters_ Since an FDTR measurement is sensitive to a variety of system parameters and materials properties, it is important to separately measure all of these parameters except for the parameter of interest, the sample thermal conductivity. With respect to the spot size and transducer thickness/properties, the following approach was used: To measure the spot size, a CCD camera was used to image the laser spots reflected off of the transducer during a measurement. This image was calibrated using a ruler with o.l micrometer gradations, and since the acquired spots are isotropic a line scan through the center of the beam was fit with a Gaussian lineshape to extract the beam waist (representative results shown in Figure S5). Given the high quality of the fit, the primary uncertainty in the spot size comes from the resolution of the image. 
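The spot-size calibration just described (a Gaussian fit to a calibrated CCD line scan through the beam center) can be illustrated in a few lines. The snippet below is a minimal sketch on synthetic data; the \(1/e^{2}\)-waist parameterization and all numerical values are assumptions for illustration, not values taken from Figure S5.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_spot(x, amp, x0, w, offset):
    # Gaussian lineshape with 1/e^2 beam-waist radius w (micrometers).
    return amp * np.exp(-2.0 * (x - x0) ** 2 / w ** 2) + offset

# Synthetic line scan through the beam center (stand-in for a calibrated CCD row).
x_um = np.linspace(-5.0, 5.0, 201)
rng = np.random.default_rng(0)
counts = gaussian_spot(x_um, 1000.0, 0.0, 1.2, 50.0) + rng.normal(0.0, 10.0, x_um.size)

# Least-squares fit; the waist and its covariance-based uncertainty are extracted.
p0 = [counts.max() - counts.min(), 0.0, 1.0, counts.min()]
popt, pcov = curve_fit(gaussian_spot, x_um, counts, p0=p0)
waist, waist_err = abs(popt[2]), np.sqrt(pcov[2, 2])
print(f"fitted 1/e^2 beam waist: {waist:.2f} +/- {waist_err:.2f} um")
```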
The thermal conductivity and volumetric heat capacity of the Au transducer were 314 W/mK and 2500 kJ/m\({}^{3}\), as reported elsewhere.[12, 11] To measure the thickness of the gold transducer, stylus profilometry was used. First, a reference gold-coated glass film from the same deposition as the samples of interest was scratched using a razor. Then, a Bruker DXT stylus profilometer was used to measure the thickness of the film via a line-scan across this scratch. A representative line-scan is shown in Figure S6. ## 3 Estimation of the Mass Density of 2D LHPs Although differential scanning calorimetry measurements provide reliable measurements of the specific heat capacity, the heat transfer model for FDTR requires knowledge of the volumetric heat capacity (product of the specific heat capacity and the mass density) of a sample. For 2D LHPs, since the generic crystal structure is fairly well established this is simple to estimate even when the precise structural solution is not available. Within the inorganic (perovskite) subphase, the bond lengths and angles are well approximated by that of the pseudocubic bulk unit cell. Thus, the density within this layer is effectively that of the bulk MAPbBr perovskite. This is the case even for n=1 perovskites without a proper methylammonium cation as the spacer binding group is still an ammonium ion. This estimation is consistent with the assumption that the structure/vibrational spectrum of the inorganic subphase is unchanged from that established for bulk MAPbBr. Within the organic subphase, the period thickness is easily extracted by substracting the periodicity as determined via x-ray diffraction measurements by the inorganic subphase thickness. As has been established elsewhere, these thicknesses are consistent across 2D LHPs with different compositions of the inorganic subphase [1, 3]. As a result, we can confirm our measurement of the organic subphase with those made for n=1 PbI 2D LHPs precisely measured via single-crystal XRD previously by Billing et al. [3, 44]. Knowing the cubic unit cell of the inorganic subphase, octahedral layer thickness, and organic subphase period thickness, the orthorhombic unit cell of PbBr 2D LHPs is readily approximated. Dividing the molecular weight of the 2D LHP as expected from its chemical formula from the volume of this unit cell then directly approximates the mass density of the perovskite. Figure S7 below lists the results of these estimates for the PbBr 2D LHPs studied herein. ## 4 Parameters Used for Composite Models In order to determine the thermal conductivity of the PbBr 2D LHPs using the models proposed in the main text, several materials parameters need to be known. Specifically, the period thicknesses, component heat capacities, and component sound velocities are the necessary inputs. The first of these is measured/estimated using XRD as described in the previous section. 
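Before turning to the remaining inputs, the density estimate described in the previous section can be written out explicitly: approximate the cell as the pseudocubic in-plane lattice times the measured stacking period and divide the formula weight by the cell volume. In the sketch below the lattice constant, period, and spacer composition (a butylammonium n = 1 crystal) are assumed, illustrative numbers, not the values tabulated in Figure S7.

```python
N_A = 6.02214076e23      # Avogadro's number, 1/mol

# Assumed, illustrative structural inputs for an n = 1 (BA)2PbBr4 crystal.
a_inplane = 5.9e-8       # cm, pseudocubic in-plane lattice constant (~5.9 A, assumed)
period = 13.8e-8         # cm, organic + inorganic repeat along the stacking axis (assumed)
z = 1                    # formula units per (a x a x period) cell (assumed)

# Formula weight of (C4H9NH3)2PbBr4 from standard atomic masses (g/mol).
m = {"C": 12.011, "H": 1.008, "N": 14.007, "Pb": 207.2, "Br": 79.904}
fw = 2 * (4 * m["C"] + 12 * m["H"] + m["N"]) + m["Pb"] + 4 * m["Br"]

volume = a_inplane ** 2 * period          # cm^3 per cell
rho = z * fw / (N_A * volume)             # g/cm^3
print(f"formula weight ~ {fw:.1f} g/mol, estimated density ~ {rho:.2f} g/cm^3")
```

The product of a density estimated in this way with the DSC-measured specific heat then gives the volumetric heat capacity required by the FDTR heat-transfer model.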
The heat capacity and sound speed of bulk MAPbBr\({}_{3}\) perovskite (38o J/kgK and 177 m/s) have been measured elsewhere and herein was used for the inorganic subphase.[15] The heat capacity of bulk alkanes were used to approximate the heat capacity within the organic subphase, a reasonable assumption because the organic molecules are known to exhibit significant dynamic disorder (liquid-like motions) at measurement temperatures and have densities comparable to their bulk liquid counterparts.[1, 4] The sound speed for ballistic phonon transport across the molecular junctions that comprise the organic subphase has been studied extensively for surface-bound alkyl chains, in systems such as self-assembled monolayers and semiconductor nanocrystals.[16, 17, 18] Across all of these measurements, the values for the sound velocity of such phonons are consistent and show essentially linearly proportional sound velocity with increasing alkyl chain length. We use a linear interpolation of the measurements made by Li et al. as an input for calculation of the sound velocity within the organic subphase.[16] ## 5 Alternative models for conductivity trends
2306.12431
On the sum of two powered numbers
Fix a positive real number $\theta$. The natural numbers $m$ with largest square-free divisor not exceeding $m^\theta$ form a set $\mathscr{A}$, say. It is shown that whenever $\theta>1/2$ then all large natural numbers $n$ are the sum of two elements of $\mathscr{A}$. This is nearly best possible.
Jörg Brüdern, Olivier Robert
2023-06-09T15:59:12Z
http://arxiv.org/abs/2306.12431v1
# On the sum of two powered numbers

###### Abstract

Fix a positive real number \(\theta\). The natural numbers \(m\) with largest square-free divisor not exceeding \(m^{\theta}\) form a set \(\mathscr{A}\), say. It is shown that whenever \(\theta>1/2\), all large natural numbers \(n\) are the sum of two elements of \(\mathscr{A}\). This is nearly best possible.

*[The introduction and the opening of the proof are not recoverable from the extracted source; the text resumes mid-argument.]*

By (5) and the range for \(w\), we see \[3^{2-b}w<\frac{2^{a}}{3^{b-2}}<\frac{\sqrt{n}/A}{A\sqrt{n}/27}\leq\frac{27}{A^{2}}.\] This yields \[k(v)<\frac{\sqrt{27}}{A}\sqrt{v}.\] The treatment of \(u\) is similar. We use \(u=2^{a}(U-W)=n-v\). Then \[k(u)\leq 2k(U-W)\leq 2(U-W),\] and hence \[k(u)\leq 2^{1-a}u=\sqrt{u}\,\sqrt{n-v}\,2^{1-a}.\] But \(2^{1-a}\leq 4A/\sqrt{n}\), and then \[k(u)\leq 4A\sqrt{u}.\] Now choose \(m_{1}=u\) and \(m_{2}=v\) to confirm (2) whenever \(n\geq 7\). For \(4\leq n\leq 6\) one may use \(m_{1}=2\) and \(m_{2}=n-2\).
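The statement can also be explored numerically. The following sketch is not part of the paper: it computes the largest square-free divisor \(k(m)\) (the radical of \(m\)) by trial division and searches for decompositions \(n=m_{1}+m_{2}\) in which both parts satisfy a square-root-type bound \(k(m_{i})\leq c\sqrt{m_{i}}\), in the spirit of the bounds \(k(v)<(\sqrt{27}/A)\sqrt{v}\) and \(k(u)\leq 4A\sqrt{u}\) obtained above; the constant \(c=4\) is an arbitrary illustrative choice.

```python
def radical(m: int) -> int:
    """Largest square-free divisor k(m) of m, by trial division."""
    r, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            r *= d
            while m % d == 0:
                m //= d
        d += 1
    return r * m if m > 1 else r

def powered_split(n: int, c: float = 4.0):
    """Return (m1, m2) with m1 + m2 = n and k(m_i)^2 <= c^2 * m_i, or None."""
    for m1 in range(2, n - 1):
        m2 = n - m1
        if radical(m1) ** 2 <= c * c * m1 and radical(m2) ** 2 <= c * c * m2:
            return m1, m2
    return None

# Every n in a modest range appears to admit such a decomposition with c = 4.
misses = [n for n in range(4, 200) if powered_split(n) is None]
print("n in [4, 200) without a decomposition:", misses if misses else "none")
print("example for n = 197:", powered_split(197))
```

Such experiments of course only probe small \(n\); the content of the theorem is that, for any fixed \(\theta>1/2\), decompositions of this kind exist for all sufficiently large \(n\).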
2307.12718
CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components
Neural Radiance Fields (NeRFs) have gained widespread recognition as a highly effective technique for representing 3D reconstructions of objects and scenes derived from sets of images. Despite their efficiency, NeRF models can pose challenges in certain scenarios such as vehicle inspection, where the lack of sufficient data or the presence of challenging elements (e.g. reflections) strongly impact the accuracy of the reconstruction. To this aim, we introduce CarPatch, a novel synthetic benchmark of vehicles. In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view. Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques. The dataset is publicly released at https://aimagelab.ing.unimore.it/go/carpatch and can be used as an evaluation guide and as a baseline for future work on this challenging topic.
Davide Di Nucci, Alessandro Simoni, Matteo Tomei, Luca Ciuffreda, Roberto Vezzani, Rita Cucchiara
2023-07-24T11:59:07Z
http://arxiv.org/abs/2307.12718v1
# _CarPatch_: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components ###### Abstract Neural Radiance Fields (NeRFs) have gained widespread recognition as a highly effective technique for representing 3D reconstructions of objects and scenes derived from sets of images. Despite their efficiency, NeRF models can pose challenges in certain scenarios such as vehicle inspection, where the lack of sufficient data or the presence of challenging elements (_e.g._ reflections) strongly impact the accuracy of the reconstruction. To this aim, we introduce _CarPatch_, a novel synthetic benchmark of vehicles. In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view. Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques. The dataset is publicly released at [https://aimagelab.ing.unimore.it/go/carpatch](https://aimagelab.ing.unimore.it/go/carpatch) and can be used as an evaluation guide and as a baseline for future work on this challenging topic. Keywords:Synthetic vehicle dataset 3D Reconstruction Neural radiance fields Volumetric rendering RGB-D. Figure 1: A visualization of the _CarPatch_ data: RGB images (left), depth images (center), and semantic segmentation of vehicle components (right). ## 1 Introduction Recent advances in Neural Radiance Fields (NeRFs) [11] strongly improved the fidelity of generated novel views by fitting a neural network to predict the volume density and the emitted radiance of each 3D point in a scene. The differentiable volume rendering step allows having a set of images, with known camera poses, as the only input for model fitting. Moreover, the limited amount of data, _i.e._ (image, camera pose) pairs, needed to train a NeRF model, facilitates its adoption and drives the increasing range of its possible applications. Among these, view synthesis recently emerged for street view reconstruction [12, 19] in the context of AR/VR applications, robotics, and autonomous driving, with considerable efforts towards vehicle novel view generation. However, these attempts focus on images representing large-scale unbounded scenes, such as those from KITTI [9], and usually fail to achieve high-quality 3D vehicle reconstruction. In this paper, we introduce an additional use case for neural radiance fields, _i.e. vehicle inspection_, where the goal is to represent an individual high-quality instance of a given car. The availability of a high-fidelity 3D vehicle representation could be beneficial whenever the car body has to be analyzed in detail. For instance, insurance companies or body shops could rely on NeRF-generated views to assess possible external damages after a road accident and estimate their repair cost. Moreover, rental companies could compare two NeRF models trained before and after each rental, respectively, to assign responsibility for any new damages. This would avoid expert on-site inspection or a rough evaluation based on a limited number of captures. For this purpose, we provide an experimental overview of the state-of-the-art NeRF methods, suitable for vehicle reconstruction. To make the experimental setting reproducible and to provide a basis for new experimentation, we propose _CarPatch_, a new benchmark to assess neural radiance field methods on the _vehicle inspection_ task. 
Specifically, we generate a novel dataset consisting of 8 different synthetic scenes, corresponding to as many high-quality 3D car meshes with realistic details and challenging light conditions. As depicted in Fig. 1, we provide not only RGB images with camera poses, but also binary masks of different car components to validate the reconstruction quality of specific vehicle parts (_e.g._ wheels or windows). Moreover, for each camera position, we generate the ground truth depth map with the double goal of examining the ability of NeRF architectures to correctly predict volume density and, at the same time, enable future works based on RGB-D inputs. We evaluate the novel view generation and depth estimation performance of several methods under diverse settings (both global and component-level). Finally, since the process of image collection for fitting neural radiance fields could be time consuming in real scenarios, we provide the same scenes by varying the number of training images, in order to determine the robustness to the amount of training data. After an overview of the main related works in Sec. 2, we thoroughly describe the process of 3D mesh gathering, scene setup, and dataset generation in Sec. 3. The evaluation of existing NeRF architectures on _CarPatch_ is presented in Sec. 4. ## 2 Related work We provide a brief overview of the latest updates in neural radiance field, including its significant extensions and applications that have influenced our work. NeRF limitations have been tackled by different works, trying to reduce its complexity, increase the reconstruction quality, and develop more challenging benchmarks. Neural scene reconstruction.The handling of aliasing artifacts is a well-known issue in rendering algorithms. Mip-NeRF [1, 2] and Zip-NeRF [3] have tackled the aliasing issue by reasoning on volumetric frustums along a cone. These approaches have inspired works such as Able-NeRF [16], which replaces the MLP of the original implementation with a transformer-based architecture. In addition to other sources of aliasing, reflections can pose a challenge for NeRF. Several works have attempted to address the issue of aliasing in reflections by taking into account the reflectance of the scene [4, 5, 17]. Moreover, computation is a widely recognized concern. Various works in the literature have demonstrated that it is possible to achieve high-fidelity reconstructions while reducing the overall training time. Two notable works in this direction include NSVF [10], which uses a voxel-based representation for more efficient rendering of large scenes, and Instant-NGP [13], which proposes a multi-resolution hash table combined with a light MLP to achieve faster training times. Other approaches such as DVGO [15] and Plenoxels [8] optimize voxel grids of features to enable fast radiance field reconstruction. TensoRF [7] combines the traditional CP decomposition [7] with a new vector-matrix decomposition method [6] leading to faster training and higher-quality reconstruction. In this work, in order to satisfy real-time performances for vehicle inspection, we select a set of architectures that strike a balance between training time and the quality of the reconstruction. Scene representation benchmarks.One of the most widely used benchmarks for evaluating NeRF is the Nerf Synthetic Blender dataset [11]. 
This dataset \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & \multicolumn{3}{c}{Scenes Images/scene Depth Segmentation} \\ \hline Blender [11] & 8 & 300 & ✓ & ✗ \\ Shiny Blender [17] & 6 & 300 & ✓ & ✗ \\ BlendedMVG [20] & 508 & 200-4000 & ✗ & ✗ \\ \hline \(\mathit{CarPatch}_{40}\) & 8 & 240 & ✓ & ✓ \\ \(\mathit{CarPatch}_{60}\) & 8 & 260 & ✓ & ✓ \\ \(\mathit{CarPatch}_{80}\) & 8 & 280 & ✓ & ✓ \\ \(\mathit{CarPatch}_{100}\) & 8 & 300 & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between existing datasets used as benchmarks for neural radiance field evaluation and \(\mathit{CarPatch}\). We provide the same scene by varying the amount of training data (40, 60, 80, and 100 images), allowing users to test the robustness of their architectures. We also release depth and segmentation data for all the images. consists of 8 different scenes generated using Blender1, each with 100 training images and 200 test images. Other synthetic datasets include the Shiny Blender dataset [17], which mostly contains singular objects with simple geometries, and Blend DMVS [20], which provides various scenes to test NeRF implementations at different scales. These works do not provide ground truth information about the semantic meaning of the images. This limitation makes it difficult to study the ability of NeRF to reconstruct certain surfaces compared to others. In our _CarPatch_ dataset, we provide ground truth segmentation of vehicle components in the scene, allowing for the evaluation of architectures on specific parts. Table 1 presents a comparison between the most common datasets used as benchmarks and our proposed dataset. Footnote 1: [http://www.blender.org](http://www.blender.org) ## 3 The _CarPatch_ dataset In this section, we detail the source data and the procedure exploited for generating our _CarPatch_ dataset. In particular, we describe how we gathered 3D models, set up the blender scenes, and designed the image capture process. Figure 2: Sample RGB images (left), depth data (center), and segmentation masks (right) from _CarPatch_, for different car models. ### Synthetic 3D models and scene setup All the 3D models included in _CarPatch_ scenes have been downloaded from Sketchfab2, a large collection of free 3D objects for research use. Table 2 provides a detailed list of all the starting models used. Each of them has been edited in Blender to enhance its realism; specifically, we improved the materials, colors, and lighting in each scene to create a more challenging environment. Footnote 2: [https://sketchfab.com](https://sketchfab.com) The scenes have been set up accordingly to the Google Blender dataset [11]. The lighting conditions and rendering settings were customized to create a more realistic environment. The vehicle was placed at the center of the scene at position (0,0,0), with nine lights distributed around the car and varying emission strengths to create shadows and enhance reflections on the materials' surfaces. To improve realism, we resized objects to match their real-world size. The camera and lights were placed in order to provide an accurate representation of the environment, making the scenes similar to real-world scenarios. ### Dataset building The dataset was built using the Python interface provided in Blender, allowing us to control objects in the environment. 
For each rendered image, we captured not only the RGB color values but also the corresponding depth map, as well as the pixel-wise semantic segmentation masks for eight vehicle components: bumpers, lights, mirrors, hoods/trunks, fenders, doors, wheels, and windows. Examples of these segmentation masks can be seen in Fig. 1. Please note that all the pixels belonging to a component (_e.g._ doors) are grouped into the same class, regardless of the specific component location (_e.g._ front/rear/right/left door). The byycv3 utility has been used for collecting additional metadata, enabling us to evaluate NeRF models on the RGB reconstruction and depth estimation of the overall vehicle as well as each of its subparts. Footnote 3: [https://github.com/DIFer22/bpycv](https://github.com/DIFer22/bpycv) \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model name & Acronym & \#Triangles & \#Vertices & \#Textures & \#Materials \\ \hline Tesla Model & Tesla & 684.3k & 364.4k & 22 & 58 \\ Smart & Smart & 42.8k & 26.4k & 0 & 31 \\ Ford Raptor & Ford & 257.1k & 156.5k & 12 & 50 \\ BMW M3 E46 & Bmw & 846.9k & 442.4k & 7 & 39 \\ Mercedes GLK & Mbz\({}_{1}\) & 1.3M & 741.4k & 0 & 15 \\ Mercedes CLS & Mbz\({}_{2}\) & 1.0M & 667k & 0 & 18 \\ Volvo S90 & Volvo & 3.3M & 1.7M & 56 & 44 \\ Jeep Compass & Jeep & 334.7k & 189.6k & 7 & 39 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the source 3D models from which our dataset has been generated, including their key features. For the rendering of training images, the camera randomly moved on the hemisphere centered in (0,0,0) and above the ground. The camera rotation angle was sampled from a uniform distribution before each new capture. For building the test set, the position of the camera was kept at a fixed distance from the ground and rotated around the Z-axis with a fixed angle equal to \(\frac{2\pi}{\#test\_views}\) radians before each new capture. In order to guarantee the fairness of the current and future comparisons, we explicitly provide four different versions of each scene, by varying the number of training images (40, 60, 80, and 100 images, respectively). Different versions of the same scene have no overlap in training camera poses, while the test set is always the same and contains 200 images for each scene. We release the code for dataset creation and metrics evaluation at [https://github.com/davidedinuc/carpatch](https://github.com/davidedinuc/carpatch). ## 4 Benchmark This section presents the selection and testing of various recent NeRF-based methods [13, 7, 15] on the presented _CarPatch_ dataset, with a detailed description of the experimental setting for each baseline. Additionally, we assess the quality of the reconstructed vehicles in terms of their appearance and 3D surface reconstruction, utilizing depth maps generated during volume rendering. ### Compared methods To overcome challenges related to illumination and reflective surfaces during the process of reconstructing vehicles, it is crucial to choose an appropriate neural rendering approach. We tested selected approaches on _CarPatch_ without modifying the implementation details available in the original repositories, whenever possible. However, some parameters had to be adjusted in order to fit our models (which are larger compared to reference dataset meshes) to the scene. All tests were performed on a GeForce GTX 1080 Ti. 
After considering various NeRF systems, we have selected the following baselines: * **Instant-NGP [13].** Since the original implementation of Instant-NGP is in CUDA, we decided to use an available PyTorch implementation4 of this approach in order to have a fair comparison with the other approaches. In our experiments, a batch size of 8192 was maintained, with a scene scale of 0.5 and a total of 30,000 iteration steps. Footnote 4: [https://github.com/kwea123/ngp_pl](https://github.com/kwea123/ngp_pl) * **TensoRF [7].** In our setting, a batch of 4096 rays was used. Additionally, we increased the overall scale of the scene from 1 to 3.5. These adjustments were made after experimentation and careful consideration of the resulting reconstructions. Training lasts 30,000 iterations. * **DVGO [15].** In this work, the training process consists of two phases: a coarse training phase of 5,000 iterations, followed by a fine training phase of 20,000 iterations that aims to improve the model's ability to learn intricate details of the scene. In our experiments, we applied a batch size of 8192 while maintaining the default scene size. ### Metrics The effectiveness of the chosen methods has been assessed thanks to the typical perceptual metrics used in NeRF-based reconstruction tasks, namely PSNR, SSIM [18], and LPIPS [21]. However, the appearance-based metrics are strongly related to the emitted radiance besides the learned volume density. We suggest two supplementary depth-based metrics for the sole purpose of assessing the volume density. Since it is not feasible to obtain ground truth 3D models of the vehicles in real-world scenarios, we utilize the depth map as our knowledge of the 3D surface of the objects. Specifically, we define a depth map as a matrix \[D=\{d_{ij}\},d_{ij}\in[0,R] \tag{1}\] in which each value \(d_{ij}\) ranges from 0 to the maximum depth value \(R\). Furthermore, we estimate the surface normals from the depth maps [14]. 
Initially, we establish the orientation of a surface normal as: \[\mathbf{d}=\langle d_{x},d_{y},d_{z}\rangle=\left(-\frac{\partial d_{ij}}{ \partial i},-\frac{\partial d_{ij}}{\partial j},1\right)\approx\left(d_{(i+1) j}-d_{ij},d_{i(j+1)}-d_{ij},1\right) \tag{2}\] \begin{table} \begin{tabular}{l c c c c c c c c c} \hline Method & Metric & \multicolumn{3}{c}{Bmw Tesla} & Smart & Mbz\({}_{1}\) & Mbz\({}_{2}\) & Ford & Jeep & Volvo & _Avg_ \\ \hline iNGP [13] & & 39.48 & 39.46 & 39.57 & 36.87 & 39.15 & 33.67 & 35.00 & 35.93 & 37.39 \\ DVGO [15] & PSNR\(\uparrow\) & 39.91 & 39.89 & 40.34 & 37.45 & 39.37 & 33.82 & 35.32 & 36.28 & 37.80 \\ TensoRF [7] & & 40.68 & 39.92 & 40.38 & 38.07 & 40.84 & 34.33 & 34.87 & 36.77 & **38.23** \\ \hline iNGP [13] & & 0.985 & 0.987 & 0.988 & 0.985 & 0.987 & 0.959 & 0.978 & 0.979 & 0.981 \\ DVGO [15] & SSIM\(\uparrow\) & 0.987 & 0.988 & 0.990 & 0.987 & 0.988 & 0.964 & 0.980 & 0.981 & 0.983 \\ TensoRF [7] & & 0.989 & 0.987 & 0.99 & 0.989 & 0.991 & 0.966 & 0.975 & 0.982 & **0.984** \\ \hline iNGP [13] & & 0.029 & 0.029 & 0.02 & 0.028 & 0.024 & 0.062 & 0.036 & 0.032 & 0.032 \\ DVGO [15] & LPIPS\(\downarrow\) & 0.022 & 0.022 & 0.014 & 0.019 & 0.020 & 0.051 & 0.029 & 0.022 & **0.025** \\ TensoRF [7] & & 0.023 & 0.026 & 0.017 & 0.02 & 0.017 & 0.051 & 0.039 & 0.027 & 0.028 \\ \hline iNGP [13] & & 0.640 & 0.369 & 0.377 & 0.496 & 0.500 & 0.406 & 0.558 & 0.674 & 0.503 \\ DVGO [15] & D-RMSE\(\downarrow\) & 0.561 & 0.353 & 0.305 & 0.437 & 0.454 & 0.339 & 0.469 & 0.561 & **0.435** \\ TensoRF [7] & & 0.590 & 0.357 & 0.335 & 0.467 & 0.482 & 0.375 & 0.536 & 0.626 & 0.471 \\ \hline iNGP [13] & & 4.24 & 3.38 & 3.41 & 4.26 & 4.13 & 5.15 & 4.60 & 4.67 & 4.23 \\ DVGO [15] & SN-RMSE\(\downarrow\) & 4.27 & 3.48 & 3.20 & 4.19 & 4.24 & 5.04 & 4.67 & 4.71 & 4.22 \\ TensoRF [7] & & 3.96 & 3.24 & 3.10 & 4.00 & 3.91 & 4.91 & 4.41 & 4.48 & **4.00** \\ \hline \end{tabular} \end{table} Table 3: Quantitative results on the _CarPatch_ test set for each vehicle model. where the first two elements represent the depth gradients in the \(i\) and \(j\) directions, respectively. Afterward, we normalize the normal vector to obtain a unit-length vector \(\mathbf{n}(d_{ij})=\frac{\mathbf{d}}{\|\mathbf{d}\|}\). We assess the 3D reconstruction's quality through the following metrics: * **Depth Root Mean Squared Error (D-RMSE).** This metric measures the average difference in meters between the ground truth and predicted depth maps. 
\[\text{D-RMSE}=\sqrt{\frac{\sum_{i=0}^{M}\sum_{j=0}^{N}(\hat{d}_{ij}-d_{ij})^{2} }{M\cdot N}}\] (3) * **Surface Normal Root Mean Squared Error (SN-RMSE).** This metric measures the average angular error in degrees between the angle direction of \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Component & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & D-RMSE\(\downarrow\) & SN-RMSE\(\downarrow\) \\ \hline iNGP [13] & & 33.05 & 0.986 & 0.019 & 0.281 & 0.79 \\ DVGO [15] & _bumper_ & 34.41 & 0.989 & 0.011 & **0.236** & 0.72 \\ TensoRF [7] & & **35.49** & **0.991** & **0.010** & 0.311 & **0.68** \\ \hline iNGP [13] & & 28.71 & 0.993 & 0.009 & 0.421 & 0.48 \\ DVGO [15] & _light_ & 29.10 & 0.995 & **0.006** & **0.384** & 0.43 \\ TensoRF [7] & & **29.68** & **0.996** & **0.006** & 0.438 & **0.38** \\ \hline iNGP [13] & & 29.60 & 0.994 & 0.011 & 0.427 & 0.43 \\ DVGO [15] & _mirror_ & 31.16 & **0.996** & **0.007** & **0.345** & **0.38** \\ TensoRF [7] & & **31.68** & **0.996** & 0.008 & 0.372 & 0.39 \\ \hline iNGP [13] & & 32.28 & 0.977 & 0.052 & 0.260 & 1.33 \\ DVGO [15] & _hood/trunk_ & 32.68 & 0.981 & **0.038** & **0.259** & 1.35 \\ TensoRF [7] & & **33.75** & **0.983** & 0.040 & 0.302 & **1.24** \\ \hline iNGP [13] & & 32.44 & 0.990 & 0.021 & 0.253 & 0.87 \\ DVGO [15] & _fender_ & 33.55 & **0.993** & **0.013** & **0.223** & 0.85 \\ TensoRF [7] & & **34.36** & **0.993** & 0.015 & 0.267 & **0.77** \\ \hline iNGP [13] & & 34.19 & 0.969 & 0.079 & 0.182 & 0.67 \\ DVGO [15] & _door_ & 35.48 & 0.977 & **0.042** & **0.173** & 0.74 \\ TensoRF [7] & & **36.25** & **0.979** & 0.051 & 0.191 & **0.62** \\ \hline iNGP [13] & & 33.12 & 0.995 & 0.008 & 0.391 & 0.87 \\ DVGO [15] & _wheel_ & 33.65 & 0.995 & 0.006 & **0.267** & **0.79** \\ TensoRF [7] & & **34.55** & **0.996** & **0.005** & 0.334 & **0.79** \\ \hline iNGP [13] & & 26.44 & 0.897 & 0.166 & 0.879 & 2.52 \\ DVGO [15] & _window_ & 26.54 & **0.899** & **0.147** & **0.779** & 2.57 \\ TensoRF [7] & & **26.74** & 0.896 & 0.160 & 0.834 & **2.38** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative results on the _CarPatch_ test set for each vehicle component averaged over the vehicle models. the ground truth and predicted surface normals. \[\text{SN-RMSE}=\sqrt{\frac{\sum_{i=0}^{M}\sum_{j=0}^{N}(\arccos(\mathbf{n}(\hat{d}_ {ij}))-\arccos(\mathbf{n}(d_{ij})))^{2}}{M\cdot N}} \tag{4}\] D-RMSE and SN-RMSE are computed only for those pixels with a positive depth value in both GT and predicted depth maps. This avoids computing depth estimation errors on background pixels (which have a fixed depth value of 0). ### Results The following section presents both quantitative and qualitative results obtained from the selected NeRF baselines. We will discuss their performance on the _CarPatch_ dataset, by analyzing the impact of viewing camera angle and the number of training images. According to Table 3, all the selected NeRF approaches obtain satisfying results. Although the baselines demonstrate similar performances in terms of appearance scores (PSNR, SSIM, and LPIPS), our evaluation using depth-based metrics (D-RMSE and SN-RMSE) reveals significant differences in the 3D reconstruction of the vehicles. DVGO outperforms its competitors by achieving better depth estimation, resulting in a \(+13.5\%\) improvement compared to iNGP and a \(+7.6\%\) improvement compared to TensoRF. In contrast, TensoRF predicts a more accurate 3D surface with the lowest angular error on the surface normals. 
Since our use case is related to vehicle inspection, in Table 4 we report results computed on each car component. For this purpose, we mask both GT and predictions using a specific component mask before computing the metrics. However, this would lead to an unbalanced ratio between background and foreground pixels, due to the limited components' area, and finally to a biased metric value. By computing D-RMSE and SN-RMSE only on foreground pixels (see Sec. 4.2), depth-based metrics are not affected by this issue. For PSNR, SSIM, and LPIPS, instead, we compute component-level metrics over the image crop delimited by the bounding boxes around each mask. As expected, it is Figure 3: Performance by varying the number of training images, in terms of PSNR, SSIM, LPIPS, D-RMSE, and SN-RMSE. Despite its lower overall performance, Instant-NGP [13] exhibits low variance with respect to the amount of training data. worth noting that NeRF struggles to reconstruct transparent objects (_e.g._ mirrors, lights, and windows) obtaining the highest errors in terms of depth and normal estimation. However, over the single components, TensoRF outperforms the competitors in most of the metrics and in particular on the surface normal estimation. The errors in the reconstruction of specific components' surfaces can also be appreciated in the qualitative results of Fig. 5. Moreover, we analyze the performances of each method in terms of the number of training images. We trained the baselines on every version of the _CarPatch_ dataset and report the results in Fig. 3. It is worth noting that reducing the number of training images has a significant impact on all the metrics independently of the method. However, Instant-NGP demonstrates to be more robust to the number of camera viewpoints having a smoother drop, especially in terms of LPIPS, D-RMSE, and SN-RMSE. Finally, we discuss how the training camera viewpoints' distribution around the vehicle may affect the performance of each method from certain camera angles. In particular, as depicted in Fig. 4, it is evident how between \(180^{\circ}\) and \(270^{\circ}\) and between \(0^{\circ}\) and \(45^{\circ}\) there are considerable variations in the metrics. Indeed, in these areas the datasets contain more sparsity in terms of camera viewpoints and, as expected, all the methods are affected. Figure 4: Performance by camera viewing angle, in terms of PSNR, SSIM, LPIPS, D-RMSE, and SN-RMSE. Depending on the training camera distribution, all the methods struggle wherever the viewpoints are more sparse (_e.g._ between \(225^{\circ}\) and \(270^{\circ}\)). The red arrow represents where the front of the vehicle is facing. ## 5 Conclusion In this article, we have proposed a new benchmark for the evaluation and comparison of NeRF-based techniques. Focusing on one of the many concrete applications of this recent technology, i.e. _vehicle inspection_, a new synthetic dataset including renderings of 8 vehicles was first created. In addition to the set of RGB views annotated with the camera pose, the dataset is enriched by semantic segmentation masks as well as depth maps to further analyze the results and compare the methods. The presence of reflective surfaces and transparent parts makes the task of vehicle reconstruction still challenging. Proposed additional metrics, as well as new graphical ways of displaying the results, are proposed to make these limitations more evident. 
We are confident that _CarPatch_ can be of great help as a basis for research on NeRF models in general and, more specifically, in their application to the field of vehicle reconstruction. **Acknowledgements.** The work is partially supported by the Department of Engineering Enzo Ferrari, under the project FAR-Dip-DIEF 2022 "AI platform with digital twins of interacting robots and people".
2304.13156
HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos
We present a no-reference video quality model and algorithm that delivers standout performance for High Dynamic Range (HDR) videos, which we call HDR-ChipQA. HDR videos represent wider ranges of luminances, details, and colors than Standard Dynamic Range (SDR) videos. The growing adoption of HDR in massively scaled video networks has driven the need for video quality assessment (VQA) algorithms that better account for distortions on HDR content. In particular, standard VQA models may fail to capture conspicuous distortions at the extreme ends of the dynamic range, because the features that drive them may be dominated by distortions {that pervade the mid-ranges of the signal}. We introduce a new approach whereby a local expansive nonlinearity emphasizes distortions occurring at the higher and lower ends of the {local} luma range, allowing for the definition of additional quality-aware features that are computed along a separate path. These features are not HDR-specific, and also improve VQA on SDR video contents, albeit to a reduced degree. We show that this preprocessing step significantly boosts the power of distortion-sensitive natural video statistics (NVS) features when used to predict the quality of HDR content. In similar manner, we separately compute novel wide-gamut color features using the same nonlinear processing steps. We have found that our model significantly outperforms SDR VQA algorithms on the only publicly available, comprehensive HDR database, while also attaining state-of-the-art performance on SDR content.
Joshua P. Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram Sethuraman, Alan C. Bovik
2023-04-25T21:25:02Z
http://arxiv.org/abs/2304.13156v1
# HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos ###### Abstract We present a no-reference video quality model and algorithm that delivers standout performance for High Dynamic Range (HDR) videos, which we call HDR-ChipQA. HDR videos represent wider ranges of luminances, details, and colors than Standard Dynamic Range (SDR) videos. The growing adoption of HDR in massively scaled video networks has driven the need for video quality assessment (VQA) algorithms that better account for distortions on HDR content. In particular, standard VQA models may fail to capture conspicuous distortions at the extreme ends of the dynamic range, because the features that drive them may be dominated by distortions that pervade the mid-ranges of the signal. We introduce a new approach whereby a local expansive nonlinearity emphasizes distortions occurring at the higher and lower ends of the local luma range, allowing for the definition of additional quality-aware features that are computed along a separate path. These features are not HDR-specific, and also improve VQA on SDR video contents, albeit to a reduced degree. We show that this pre-processing step significantly boosts the power of distortion-sensitive natural video statistics (NVS) features when used to predict the quality of HDR content. In a similar manner, we separately compute novel wide-gamut color features using the same nonlinear processing steps. We have found that our model significantly outperforms SDR VQA algorithms on the only publicly available, comprehensive HDR database, while also attaining state-of-the-art performance on SDR content. ## 1 Introduction High Dynamic Range (HDR) video has the potential to greatly improve the quality of experience (QoE) of millions of viewers. However, the increased data volumes implied by HDR, along with increases of other video dimensions such as resolution and frame rate, and the growing popularity of video, stress global networks and require increased compression and other processing protocols. Video quality assessment (VQA) models that reliably predict HDR video quality as videos are processed are necessary for monitoring, mediating, and controlling high-volume HDR video traffic. Effective perceptual VQA tools can significantly impact both bandwidth consumption and customer satisfaction. Today, VQA models are used to automatically perceptually optimize per-frame, per-shot, and per-title bitrate ladder decisions at very large scales of commercial deployment. However, advancement on true HDR video quality modeling and algorithm design remains quite limited. The gold standard of video quality measurement is the subjective opinion of a suitable population of viewers. Of course, collecting such subjective scores for videos is an expensive and time-consuming task and cannot be done at the scale of video streaming services. However, if a large enough set of representative HDR videos were collected, with representative distortions, and scored by a large enough group of viewers, then the collected scores could be used to train objective VQA algorithms suitable for deployment at scale. VQA algorithms are categorized as Full-Reference (FR) or No-Reference (NR). NR VQA algorithms do not use any information from the pristine reference versions of the distorted videos, unlike FR VQA algorithms, and instead attempt to predict the quality of distorted videos without additional input. NR VQA is a much harder problem than FR VQA, but is one that needs a solution since quite commonly no reference video is available.
While effective FR and NR VQA models exist for SDR content, naive application of these to HDR content may not yield the best results, as we will show. Here we show how perceptually-driven changes can be made to an existing high-performance SDR NR VQA algorithm to improve its quality prediction power on HDR video content. ## 2 Related Work Most existing NR VQA models implicitly assume that their inputs are luma, and not luminance, since images and videos are digitally stored as luma and color-difference channels and not luminance. We will not process luminance values in our proposed models, and wherever we refer to HDR luma, color-difference channels, or R\({}^{\prime}\)G\({}^{\prime}\)B\({}^{\prime}\), we mean RGB light intensity values that have been passed through the PQ OETF and then converted to luma and color-difference channels through the linear transformation defined in BT 2020. To the best of our knowledge, there do not exist any NR VQA models that have been shown to yield enhanced performance on true HDR. Current NR Image Quality Assessment (IQA) algorithms, which do not measure temporal aspects of videos, can be applied on a frame-by-frame basis. Frame-based methods are effective enough in the absence of temporal distortions, high motions, or quality variations over time. It has been shown [7, 8] that this is often the case for user-generated content, on which NR IQA frame algorithms have been found to deliver state-of-the-art predictive power. BRISQUE [9] is an NR IQA algorithm that is based on natural scene statistics (NSS) models, exploiting the fact that images and videos reliably obey statistical regularities in the absence of distortions [10], and that the visual brain has adapted to these statistics to process visual stimuli in an efficient way. For example, the statistics of Mean-Subtracted Contrast-Normalized (MSCN) coefficients of luma are expressed using NSE models to quantify image quality in BRISQUE, and in a related unsupervised model called NIQE [11]. HIGRADE [12], which was originally designed for tonemapped (8-bit) picture IQA, models the statistics of the gradient and log-derivative of each channel in the CIELAB [13] color space. V-BLIINDS [14] is an NR VQA algorithm that models the statistics of the discrete cosine transforms of frame differences, and is an early example of using natural video statistics (NVS) models as the basis for conducting video quality prediction. It also contains features that capture various aspects of motion. ChipQA [15] is a more recent VQA algorithm that models the statistics of space-time chips, which are highly localized space-time slices of MSCN frames. An optimal space-time direction along which to extract quality-aware features is chosen by finding an oriented chip having minimum excess kurtosis. ChipQA also contains color and gradient features. RAPIQUE [16] is an NR VQA algorithm designed to conduct quality analysis on SDR User Generated Content (UGC). RAPIQUE contains various NSS features and features extracted using a Convolutional Neural Network (CNN). All of the features are pooled over time and used to train a support vector regressor. HDR-BVQM [17] consists of BRISQUE features, the log-derivative features defined from HIGRADE, and the temporal features from VBLIINDS. All of the HDR-BVQM features were designed for SDR videos, with no adjustment being made to specify them for application to HDR. Hence, it is really an SDR NR VQA algorithm, although it has been tested on HDR content. 
VSFA [18] is an NR VQA algorithm that consists of a pre-trained CNN, a Gated Recurrent Unit (GRU) network, and a temporal pooling layer. VSFA achieves state-of-the-art NR quality prediction performance on UGC datasets. NorVDPNet [19] is an NR IQA method intended for HDR. It utilizes a 2D CNN network that is trained on HDR VDP [20] scores between reference and distorted image pairs. NorVDPNet treats HDR VDP scores as a proxy for quality scores. TLVQM [21] is an NR algorithm that uses a large number of hand-designed, low complexity features that are specific to commonly occurring distortions like blockiness, blurring, motion artifacts, jerkiness, interlacing, and so on. Other 'high-complexity features' are sampled at 1 Hz to capture sharpness, blockiness, noise, color, and contrast. The HCF set is defined on the CIELAB space, which was designed for SDR. All of the models ChipQA, HIGRADE, and TLVQM are defined using the CIELAB color space, which was designed for SDR content. Later, when we conduct experimental studies of these and other models on HDR content, we will implement them using the HDR-CIELAB [22] space instead of the CIELAB space to improve their performance for a fairer comparison. ## 3 Proposed Video Quality Assessment Our new HDR VQA algorithm has 3 parts: a spatial luma component, a spatio-temporal luma component, and a color component. We designed the model to extend the SDR BRISQUE and ChipQA algorithms by redefining their feature sets to produce enhanced performance on the HDR VQA problem. A key ingredient of our design is the introduction of a non-linear processing stage that sensitizes the model to distortions affecting the high and low ends of the dynamic range. ### Spatial Luma Component We begin by explaining the way quality-aware spatial features are defined and computed on each frame of a video to be analyzed. #### 3.1.1 Statistical features The MSCN coefficients \(\hat{V}[i,j,k_{0}]\) of a luma or R\({}^{\prime}\)G\({}^{\prime}\)B\({}^{\prime}\) channel of a video frame \(V[i,j,k_{0}]\) are defined as: \[\hat{V}[i,j,k_{0}]=\frac{V[i,j,k_{0}]-\mu[i,j,k_{0}]}{\sigma[i,j,k_{0}]+C}, \tag{1}\] where \(i=1,2,\ldots,M\), \(j=1,2,\ldots,N\) are spatial indices, \(k_{0}\) is the frame index, \(M\) and \(N\) are the frame height and width, respectively, the constant \(C\) imparts numerical stability, and where \[\mu[i,j,k_{0}]=\sum_{m=-K}^{K}\sum_{l=-L}^{L}w[m,l]V[i+m,j+l,k_{0}] \tag{2}\] and \[\sigma[i,j,k_{0}]=\Big(\sum_{m=-K}^{K}\sum_{l=-L}^{L}w[m,l](V[i+m,j+l,k_{0}]-\mu[i,j,k_{0}])^{2}\Big)^{\frac{1}{2}}\] are the local weighted spatial mean and standard deviation of luma (rms contrast), respectively. The weights \(w=\{w[m,l]|m=-K,...,K,l=-L,...,L\}\) are a 2D circularly-symmetric Gaussian weighting function sampled out to 3 standard deviations and rescaled to unit volume, and \(K=L=3\). The MSCN coefficients (1) of pristine SDR videos have been observed to reliably follow a Gaussian distribution [10, 9, 11]. The MSCN coefficients of real-world distorted SDR videos predictably depart from Gaussianity, but can often be effectively modelled as following Generalized Gaussian Distributions (GGD), defined as \[g_{1}(x;\alpha,\beta)=\frac{\alpha}{2\beta\Gamma(\frac{1}{\alpha})}\exp[-( \frac{|x|}{\beta})^{\alpha}] \tag{3}\] where \(\Gamma(.)\) is the gamma function: \[\Gamma(\alpha)=\int_{0}^{\infty}t^{\alpha-1}\exp(-t)dt.
\tag{4}\] The estimated shape parameter \(\alpha\) and variance \(\beta\) are estimated and used as features for quality assessment. Statistical regularities of distorted and naturalistic pictures and videos have been observed on SDR content, and it is reasonable to suggest that they should be applicable to HDR videos as well, albeit, with suitable modifications and/or additional processing steps. HDR10 has been designed to produce more accurate and perceptually correct representations of the natural world than SDR content, and thus might be expected to reliably obey appropriate NSS and NVS models. As expected, we have found that the MSCN coefficients of pristine HDR luma frames can be reliably modeled as approximately following a Gaussian probability law. To show this, following [23], we computed the ratio of the difference between the entropy of the fitted Gaussian distribution and the entropy of the empirical distribution (denoted by \(\Delta H\)), to the entropy of the empirical distribution (denoted by \(H\)) on each frame of each reference video in our dataset. We found that the average value of \(\frac{\Delta H}{H}\) was 0.0467, indicating that \(\Delta H\) is a small fraction of \(H\) for nearly all the pristine reference videos. Likewise, the statistics of HDR content subjected to distortion tend to depart from Gaussianity. The average value of \(\frac{\Delta H}{H}\) on all frames of the videos that were downsized to 1080p and compressed at 1 Mbps, for example, was 0.2566. By way of example, consider the histograms of the MSCN coefficients of randomly selected pristine PQ coded HDR video frames, computed from (1) with \(C=4\), as shown in Fig. 1, along with their GGD fits, where the values of the best-fit shape parameters are given in the legend. As may be observed, the shape parameters of the GGD fits are close to 2, indicating that they near Gaussianity. The value of \(C\) that is used to compute the MSCN coefficients of SDR content (which have a digital range of 0-255) is typically taken to be 1 [9, 11]. However, HDR content has a digital range of 0-1023, a four-fold increase of the digital range, hence we used \(C=4\) when computing the MSCN coefficients of HDR content. To capture the correlation structure of spatial frames, we compute the products of neighboring pairs of pixels, as in BRISQUE, and these are modelled as following an Asymmetric Generalized Gaussian Distribution (AGGD). The parameters of the best AGGD fit to the histograms of each of the pairwise products are extracted and used as quality-aware features. #### 3.1.2 Nonlinear expansion of luma/color extremes In order to sensitize our model to distortions that are particular to HDR and wide color gamut (WCG) content, we developed a method that applies an expansive nonlinearity to enhance the ends of the brightness and color scales. It turns out that this approach is remarkably effective. Due to its higher bit depth, HDR excels in representing local contrasts and details without causing saturation of luma or color. For example, as shown in Fig. 2, a video of a bright sky may appear saturated and overexposed and will therefore lack details in SDR, while the greater dynamic range of HDR may allow the delineation of clouds and other details in the same sky. Likewise, distortions occurring on very bright and/or very dark regions (of HDR content) which might not be perceivable in SDR, can become obvious on HDR. 
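Before moving to the HDR-specific processing, the spatial feature extraction described above (the MSCN transform of Eqs. (1)-(2) and the moment-matching GGD fit of Eq. (3)) can be summarized in a short NumPy/SciPy sketch. This is only an illustration under stated assumptions: the frame is a floating-point 10-bit luma array, the Gaussian window width is chosen so that a 7x7 support spans roughly three standard deviations, and the helper names are not from the released HDR-ChipQA code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(frame, C=4.0, sigma=7.0 / 6.0):
    # Local Gaussian-weighted mean and rms contrast (Eqs. 1-2); C=4 for 10-bit HDR luma.
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame * frame, sigma) - mu * mu
    return (frame - mu) / (np.sqrt(np.maximum(var, 0.0)) + C)

def fit_ggd(coeffs):
    # Moment-matching estimate of the GGD shape and standard deviation,
    # in the spirit of the BRISQUE parameter estimator.
    x = coeffs.ravel()
    rho = np.mean(x ** 2) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    shapes = np.arange(0.2, 10.0, 0.001)
    ratios = gamma(1.0 / shapes) * gamma(3.0 / shapes) / gamma(2.0 / shapes) ** 2
    alpha = shapes[np.argmin(np.abs(ratios - rho))]
    return alpha, np.sqrt(np.mean(x ** 2))

# Pairwise products of horizontally, vertically, and diagonally adjacent MSCN
# coefficients would then be fit with an AGGD to obtain the remaining features.
```

The same two steps are reused for each R'G'B' channel and, as introduced next, for the nonlinearly transformed channels along the parallel path.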
We have observed that when true HDR/WCG video signals are distorted by common artifacts arising from compression and transmission, they are still effectively handled by existing VQA models (those designed on SDR). However, the feature responses to HDR-enabled details in bright and dark regions could be masked by the feature responses to regions of the frame that can be represented in both SDR and HDR. Because of this, while distortions on HDR-enabled regions of video frames can be singularly annoying, existing VQA features computed on them may be highly diluted by those computed on other spatial areas of HDR video frames. This may be the principal reason why standard VQA models originally designed for SDR contents do not perform very well on HDR video signals. Our innovation in this regard is to introduce a separate parallel, feature computation pathway, which deploys a simple expansive point nonlinearity prior to the computation of quality-aware statistical features. This is different from the usual forward modeling of the flow of visual information along the visual pathway that we, and others, often have used to define and compute quality-sensitive video features, but prior approaches have failed to adequately access the perceptual effects of HDR distortions, since they are dominated by distortions on regions not expressing brightnesses and colors at the extremes, but usually covering much larger spatial image geographies. By introducing an expansive nonlinearity, the extreme ends of the dynamic ranges of brightness and color are spread, while the middle ranges that dominate standard (SDR-like) feature computations are compressed. Statistical aspects of distortions on regions of extreme local luma/color that would have not been accounted for because they generally occupy much smaller spatial regions, become dominant by preferentially enhancing them in a separate, non-linearly transformed feature channel. The expansive nonlinearity is applied as follows. First, at every spatial frame coordinate, define a window or patch of size \(W\times W\), within which the sample values (luma or color) are linearly mapped to \([-1,1]\). While the exact range of the linear mapping applied prior to the nonlinearity is not significant, using \([-1,1]\) produces symmetric results that are conceptually and algorithmically simpler. Note that the \(W\times W\) windows are heavily overlapping. Later, we will explain the effects of varying the patch dimension \(W\) and the relative merits of mapping entire frames to \([-1,1]\), rather than patches. Then pass the luma or \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) values of each locally-mapped frame through the expansive nonlinearity \[f(x;\delta)=\begin{cases}\exp(\delta x)-1&x>0\\ 1-\exp((-\delta x))&x<0\end{cases} \tag{5}\] which is piecewise monotonic, and plotted in Fig. 3 for \(\delta=1,2,3,4,\) and \(5\). This simple nonlinearity has the effect of expanding the extreme ends of the ranges of luma or color, while compressing these ranges away from the extremes. As we will show, extracting statistical features on videos that have been "HDR sensitized" by applying (5) produces much higher correlations against subjective quality judgments. Of Figure 1: Empirical distributions of MSCN coefficients of pristine HDR frames (blue) with best GGD fits superimposed (orange). Figure 2: Original frames (left) and nonlinearly transformed frames (right) of “Cloud” video using (5) with \(\delta=4\). 
course, (5) is applied on the rescaled luma and R\({}^{\prime}\)G\({}^{\prime}\)B\({}^{\prime}\) values, which are themselves obtained by applying the PQ OETF on RGB light intensity values. Figs. (b)b and (d)d show the result of \(f(x)\) applied on HDR and SDR frames. \(f(x)\) enhances the local contrast and details that the HDR version is able to represent in the cloudy region in the top left of the frame, which the SDR region is unable to represent due to its limited bit-depth. This enables feature responses to be more attuned to such regions and prevents masking from the feature responses to other regions. Figs. 4 and 5 depict examples of HDR luma frames (tonemapped for viewing in this paper) before and after applying \(f(x;4)\) (using \(\delta=4\) in (5)) on windows of linear dimensions \(W=31\). As may be seen, details near the extremes of the local luma range that may not have been as visible in SDR content due to its limited ability to represent details in bright or dark regions, are amplified, while those in the mid-range of local luma are suppressed. In particular, in Fig (a)a, details are visible in the sun's reflection on the water in the pristine version, and \(f(x)\) enhances the local contrast in that region and hence highlights it in Fig. (b)b. In the distorted version shown in Fig (c)c, local contrast and details are suppressed due to compression and downscaling. \(f(x)\) highlights this in Fig. (d)d, where the sun's reflection on the water does not stand out as much as it does in Fig. (b)b. The statistics and feature responses to these two images will accordingly differ. Similarly, Fig. (c)c shows a compressed and downscaled version of Fig. (a)a. Blocking and downscaling artifacts in dark regions of the distorted version, such as on the T-shirts and coats, become more visible after \(f(x)\) is applied, as can be seen by comparing Fig. (b)b and Fig. (d)d. Feature responses to distortions in such dark regions would otherwise be masked by feature responses to other areas. Thus, (5) functions to highlight distortions that occur in ranges of luma and color regions that are made possible by HDR. Increasing the value of \(\delta\) in (5) crushes larger ranges of pixel values while locally expanding narrower, more extreme ranges of the luma and color ranges. As the ranges of expansion are narrowed, the degrees of expansion are increased. However, very high values of \(\delta\) should be avoided, since then the ranges of expansion near the endpoints of \([-1,1]\) become extremely narrow and excessively amplified, reducing information relevant to HDR. On the other hand, very low values of \(\delta\) may not have a significant effect on the video and fail to highlight distortions at extreme ends of the luma range. After (5) is applied, MSCN coefficients are computed on the nonlinearly processed luma and color channels of each processed video frame. We have found that the MSCN coefficients of the nonlinearly processed luma and R\({}^{\prime}\)G\({}^{\prime}\)B\({}^{\prime}\) values also reliably follow suitably fitted GGD models. The MSCN coefficients are computed using (1), where we fixed \(C=0.001\), since the MSCN coefficients of pixel values processed by (5) tend to tightly cluster about the origin. A few plots of the MSCN coefficients of the nonlinearly mapped luma values from pristine frames, with their best GGD fits superimposed on them are shown in Fig. 
6, while MSCN plots of both pristine and distorted frames from the videos "Bonfire," "Football," "Golf," and "Fireworks," before and after they were subjected to (5), are shown in Fig. 7. The GGDs fits are apparently quite good, and it may be observed that the statistics of the distorted HDR videos noticeably deviate from those of pristine HDR videos. As may be seen from Fig. 7, the application of (5) causes the MSCN coefficients of the distorted and pristine video frames to become flatter and less concentrated about the mean. The distributions are more spread out, and have larger variances and flatter tops due to the nonlinear operation. The subsequent Figure 4: Original frame (left) and nonlinearly transformed frame (right) of “Oceana video using (5) with \(\delta=4\). Figure 5: Original frame (left) and nonlinearly transformed frame (right) of “Reachera” video using (5) with \(\delta=4\). Figure 3: Plot of expansive nonlinearity in (5) for five values of the expansion parameter \(\delta\) feature responses therefore capture deviations lying further from the local mean, which tend to be more pronounced on the greater dynamic range of HDR. We have also found that the products of spatially adjacent MSCN coefficients of nonlinearly-processed HDR frames are well described as AGGD. The shape parameter and variance of the best GGD fits to the MSCN coefficients, and the four parameters of the AGGD fits to the histograms of each of the four products of spatially adjacent MSCN coefficients, are all extracted and used as quality-aware features. As in BRISQUE, all of these features are also computed at a reduced scale by downsampling gaussian-smoothed frames by a factor of 2. This allows the trained prediction model to capture naturally multiscale attributes of videos, and of distorted versions of them. ### Color Component The BT 2020 color gamut, which is the color gamut that the HDR10 format uses, spans more than twice the range of the SDR color gamut BT 709. While a WCG provides more diverse, brilliant, and realistic colors, it can also increase the visibility of color distortions relative to SDR. For example, color bleeding is an important category of distortions that affect HDR content. It manifests as the smearing of color between areas of strongly contrasting chrominance, with the result that luminance and chrominance edges may not necessarily coincide [24, 25]. Color bleeding is caused by subsampling of the chromatic channels, and quantization of higher frequency components [26], and is more prominent in HDR than in SDR content due to the wider range of colors represented in HDR [27]. A WCG also poses challenges to compression and quantization algorithms, because blocking and ringing artifacts can become more visible on brighter colors. SDR VQA algorithms such as ChipQA [15] and RAPIQUE [16] make use of chrominance components and relative saturation defined using opponent color spaces such as CIELAB. Color spaces that have been designed for HDR content and are intended to be more physiologically plausible than CIELAB include HDR-CIELAB, HDR-IPT [22], and \(J_{2}a_{2}b_{2}\)[28]. The chrominance components in these opponent color spaces are Figure 6: Empirical distributions of MSCN coefficients of pristine HDR frames (blue) after \(f(x;4)\) ((5) with \(\delta=4\)) was applied along with their best GGD fits (orange). Figure 7: Empirical distributions of MSCN coefficients of pristine (blue) and distorted (orange) HDR frames. 
The first row shows distributions of the MSCN coefficients of luma frames. The second row shows distributions of the MSCN coefficients after \(f(x;4)\) ((5) with \(\delta=4\)) was applied to non-overlapping patches of the corresponding frames in the first row. defined as difference signals whose extremes may not correspond to the extremes of HDR color content. Relative saturation, defined as the magnitudes of color-opponent channels, and referred to as "chroma" in CIELAB, decouples color from brightness, which is not desirable when studying HDR color artifacts. Amplifying the extremes of chromatic difference signals or relative saturation therefore does not effectively isolate or enhance distortions that arise in areas where the extra bits allocated to HDR produce more extreme bribights, colors, and darks that are affected differently by distortions. Instead, we have found that statistical features extracted from perceptually-uniform \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) color intensities are remarkably predictive of HDR video quality. Color intensity values captured by camera sensors undergo transformation by the PQ OETF, yielding perceptually uniform \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\). Rather than applying a nonlinear transformation on color difference signals, we directly analyze the statistics of \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) values that have been locally mapped on \(W\times W\) windows to \([-1,1]\), then subjected to (5). As before, assume that the distributions of the MSCN coefficients of the nonlinearly processed \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) values obey GGD models and after finding the best fits, extract their shape and variance as quality-aware features. Exemplar histograms from different color channel of different randomly selected frames of pristine and distorted HDR content are shown in Fig. 8. We again model the products of pairs of adjacent MSCN coefficients of each nonlinearly transformed color channel as obeying an AGGD law, and extract the same quality-sensitive features. In order to access "non-HDR" color distortions, we also process each original \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) plane, compute their MSCN coefficients, assume their first-order distributions follow GGD models, and that the products of adjacent coefficients follow AGGD models. ### Spatio-temporal Components #### 3.3.1 Measuring temporal quality variations The 'quality-aware' features described in Sections 3.1 and 3.2 are extracted on every frame of each video. Specifically, 36 features are extracted on luma, 36 on nonlinearly processed luma (using (5) with \(\delta=4\)), 36 on each \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) channel, and 36 on each nonlinearly processed \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) channel, yielding a total of 288 features per frame. We also compute the standard deviation of these features over every non-overlapping five-frame interval, and average them over the video, yielding an additional single descriptor of the variations of quality that may occur. #### 3.3.2 ST Gradient Chips Visual signals incident on the retina are subjected to a form of spatial bandpass processing and adaptive gain control [29], which are efficiently modelled by the MSCN processing described in (1). Subsequent stages along the visual pathway perform temporal entropy reduction [30, 31, 32]. 
The visual signal then passes to area V1, where neurons are sensitive to specific space-time directions. Motivated by the localized spatio-temporal bandpass processing that occurs along the later stages of the retina-cortical pathway, we deploy the ST chip concept to capture the natural statistics of videos being quality-analyzed, along motion-sensitized space-time orientations. Gradients contain important information about edges and amplify distortions that manifest as variations of contrast. In HDR-ChipQA, the gradient magnitudes of the PQ luma frames are first computed using a Sobel operator. Following [15], we then apply a discrete temporal filter to the MSCN coefficients of the gradient magnitudes to temporally decorrelate them. The ST chips of the temporally-processed gradient MSCN coefficients are extracted along the directions of minimum excess kurtosis. The histograms of ST Gradient chips of pristine and distorted HDR videos are plotted together in Fig. 9. We also computed the paired products of ST Gradient chips and find that they are well modeled as following AGGD probability laws. As before, parameters of the best AGGD fits are used as predictive quality features in the HDR-ChipQA model. For more details, we refer the reader to [15]. ### Quality Assessment A full list and description of the features that define HDR-ChipQA is given in Table 1. We trained an SVR on all of these features against subjective quality scores, as will be explained. Figure 8: Empirical distributions of MSCN coefficients of nonlinearly processed \(\mathrm{R}^{\prime}\mathrm{G}^{\prime}\mathrm{B}^{\prime}\) channels of randomly chosen frames of pristine HDR videos (blue) and distorted HDR videos (orange), following application of (5) with \(\delta=4\). ## 4 Experiments and Results ### Performance Metrics We computed Spearman's Rank Ordered Correlation Coefficient (SROCC) between the scores predicted by HDR-ChipQA and ground truth mean opinion scores (MOS). We also fit the predicted scores to the MOS using a logistic function \[l(s)=\frac{\beta_{1}-\beta_{2}}{1+\exp(-\frac{(x-\beta_{2})}{\beta_{4}})+\beta_{5}} \tag{6}\] and then computed Pearson's Linear Correlation Coefficient (LCC) and the Root Mean Square Error (RMSE) between the fitted scores and the MOS, following standard practice [33]. ### Databases To the best of our knowledge, there currently exists only one publicly-available HDR10 database: the LIVE-APV HDR ([34]) database. The DML-HDR database [35] contains 10 bit videos, but they are expressed in the BT709 color space and are hence not compliant with modern HDR standards such as HDR10, HDR10+, or Dolby Vision. In addition, the scores for this database are not publicly available. The HDR-HDM-2014 database [36] has been used to evaluate tone-mapping operators but no subjective study has been conducted using this dataset and thus it has no quality scores. We evaluated HDR-ChipQA against other models on the LIVE-APV HDR database. LIVE-APV HDR contains 310 videos that were viewed by human subjects under two ambient conditions. The videos were created by applying 9 different combinations of compression and downsampling on 31 source videos. The ambient conditions were a dark setting with an incident luminance of \(<10\) lux, and a bright setting with an incident luminance of 200 lux. The scores that were recorded under the two ambient settings were found to be statistically indistinguishable from each other. We separately evaluated HDR-ChipQA on both sets of scores. 
The scores were retrieved from raw opinion scores using the Sureal method of maximum likelihood estimation [37]. In order to evaluate its robustness against dynamic range, we also evaluated it on the 10-bit SDR LIVE ETRI [38] database and the 8-bit SDR LIVE Livestream [39] database, and these results are reported in Section IV.G. ### Evaluation Protocol We trained a Support Vector Regressor (SVR) with a linear kernel on the features to predict the MOS of the videos. The training process was as follows: We partitioned the database a training set and a test set with an 80:20 ratio so that all videos of a same content appear in the same set. This content separation was done to prevent the SVR from learning any content-specific cues from the features, which could artificially inflate performance. We performed 5-fold cross validation on the training set to choose the best value of the regularization parameter \(C\) of the SVR. No hyperparameter tuning was done on the test set. This procedure was repeated 100 times and the median metrics are reported for each database. ### Results on LIVE HDR Database We evaluated HIGRADE, TLVQM, V-BLINDS, HDR-BVQM, VSFA, RAPIQUE, (original) ChipQA, and BRISQUE. \begin{table} \begin{tabular}{|l|l|l|} \hline Domain & Description & Feature index \\ \hline \hline Luma & GGD and AGGD features from HDR Luma. & \(f_{1}-f_{8}\) \\ \hline Luma after \(f(x)\) & GGD and AGGD features from HDR Luma after nonlinearity & \(f_{70}-f_{71}\) \\ \hline R’G’B’ & GGD and AGGD features from each channel of R’G’B’. & \(f_{70}-f_{710}\) \\ \hline R’G’G’B’ after \(f(x)\) & GGD and AGGD features from each channel of R’G’B’ after nonlinearity. & \(f_{81}-f_{720}\) \\ \hline Luma & Standard deviation every 5 frames of GGD and AGGD features from HDR Luma. & \(f_{82}-f_{720}\) \\ \hline Luma after \(f(x)\) & Standard deviation every 5 frames of GGD and AGGD features from HDR Luma after nonlinearity. & \(f_{82}-f_{80}\) \\ \hline R’G’B’ & Standard deviation every 5 frames of GGD and AGGD features from each channel of R’G’B’. & \(f_{84}-f_{80}\) \\ \hline R’G’B’ after \(f(x)\) & Standard deviation every 5 frames of GGD and AGGD features from each channel of R’G’B’ after nonlinearity. & \(f_{80}-f_{710}\) \\ \hline ST Gradient Chips & GGD and AGGD features from ST Chips of the gradient magnitude of luma. & \(f_{87}-f_{812}\) \\ \hline \end{tabular} \end{table} Table 1: Descriptions of features in HDR-ChipQA. Figure 9: Empirical distributions of ST Gradient chips of randomly chosen pristine HDR videos (blue) and distorted HDR videos (orange). All of these models were trained on the LIVE-HDR database for a fair comparison. We also attempted to train the NorVDP-Net network on the LIVEHDR database but it failed to converge. We hence evaluated NorVDPNet using weights obtained by pretraining the network on HDR images having JPEG-XT distortions [40]. Note that VSFA is also a CNN-based metric, and RAPIQUE and VIDEVAL are hybrids of CNN models with feature-based models. HIGRADE, TLVQM, and ChipQA were originally defined on features computed from the CIELAB colorspace. For fairer comparison, we instead trained all models using the HDR-CIELAB colorspace. The performance results are presented in Table 2 for scores gathered under the dark viewing condition, and in Table 3 for scores gathered under the bright viewing condition. HDR-ChipQA outperformed all of the other algorithms by a wide margin, obtaining a nearly 10% increase over the next-best performing algorithm. 
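For reference, the figures of merit reported in these tables can be computed as in the sketch below: SROCC is computed directly on the raw predictions, while LCC and RMSE are computed after a logistic mapping of the predictions to the MOS scale. A standard four-parameter logistic is assumed here as one common choice for the fit of Eq. (6), and under the protocol above each reported number would be the median over 100 content-separated train-test splits.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic(s, b1, b2, b3, b4):
    # A commonly used 4-parameter logistic mapping from predictions to MOS.
    return (b1 - b2) / (1.0 + np.exp(-(s - b3) / b4)) + b2

def evaluate_split(pred, mos):
    srocc = spearmanr(pred, mos).correlation
    p0 = [np.max(mos), np.min(mos), np.median(pred), 1.0]
    params, _ = curve_fit(logistic, pred, mos, p0=p0, maxfev=20000)
    fitted = logistic(pred, *params)
    lcc = pearsonr(fitted, mos)[0]
    rmse = np.sqrt(np.mean((fitted - mos) ** 2))
    return srocc, lcc, rmse
```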
Scatter plots of the predictions made by all of the compared algorithms on the videos obtained under the dark viewing condition are shown in Fig. 10. This visualization clearly shows that HDR-ChipQA makes predictions that are more consistent with MOS than those produced by the other models. The scatter plots were created by plotting the MOS against the mean quality predictions produced by the NR VQA algorithms on each video in the test set over the 100 train-test splits, ensuring that each video appeared in the test split at least once. A boxplot of the SRCC values obtained over the 100 train-test splits is shown in Fig. 11, showing that HDR-ChipQA attained a higher median SRCC and a smaller spread of SRCC than the other algorithms. V-BLIINDS, BRISQUE, and ChipQA achieved similar performances on the LIVE HDR database. TLVQM is a state-of-the-art algorithm but performs very poorly on HDR. This might be due to the large number of parameters in TLVQM that are defined and tested by the authors of [21] on SDR content. RAPIQUE is a state-of-the-art SDR VQA algorithm but it also performed poorly on LIVE HDR. HDR-ChipQA, on the other hand, was able to achieve high performance against all performance metrics. We also performed a one-sided Welch's t-test with a 95% confidence interval on the SRCC values obtained by evaluating various algorithms on the scores collected under the dark viewing condition over 100 random train-test splits. We chose the Welch's t-test over the Student's t-test because the Welch's t-test does not assume that the variances of the distributions of SRCC values for each algorithm are equal, which Student's t-test assumes and is incorrect in this case [41]. The results are shown in Table 4. Although some of the al \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Method} & \multicolumn{2}{c|}{SROCC\(\uparrow\)} & \multicolumn{2}{c|}{LCC\(\uparrow\)} & \multicolumn{1}{c|}{RMSE\(\downarrow\)} \\ \cline{2-5} \multicolumn{2}{|c|}{} & RAPIQUE [16] & 0.4553 (0.2533) & 0.4864 (0.1171) & 15.7134 (1.7415) \\ \hline \multirow{3}{*}{Image Quality Metrics} & NoR-VDPNet[19] & 0.6248(0.0635) & 0.5605(0.0764) & 14.8226 (2.1892) \\ \cline{2-5} & HIGRADE [12] & 0.7088 (0.0827) & 0.6827 (0.0710) & 14.2545 (2.0780) \\ \cline{2-5} & BRISQUE[9] & 0.7251 (0.0955) & 0.7139 (0.0881) & 12.6404 (2.1651) \\ \hline \multirow{3}{*}{Video Quality Metrics} & TLVQM [21] & 0.5781 (0.1014) & 0.5552 (0.0919) & 14.999 (1.0908) \\ \cline{2-5} & HDR BVQM[17] & 0.6020 (0.0944) & 0.5844 (0.086) & 14.5930 (1.8276) \\ \cline{2-5} & VSFA[18] & 0.7127 (0.1079) & 0.6918 (0.1114) & 13.0511 (2.4003) \\ \cline{2-5} & V-BLIINDS[14] & 0.7483 (0.1446) & 0.7193 (0.1141) & 12.7794 (2.3715) \\ \cline{2-5} & ChiPQA [15] & 0.7435 (0.0895) & 0.7334 (0.0819) & 12.1549 (1.9106) \\ \cline{2-5} & HDR-ChipQA & **0.8250(0.0589)** & **0.8344(0.0562)** & **9.8038 (1.7334)** \\ \hline \end{tabular} \end{table} Table 2: Median SROCC, LCC, and RMSE on the LIVE HDR Database on scores collected under the dark ambient condition for NR algorithms. Standard deviations are shown in parentheses. The best performing algorithm is bold-faced. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Method} & \multicolumn{2}{c|}{SROCC\(\uparrow\)} & \multicolumn{1}{c|}{LCC\(\uparrow\)} & \multicolumn{1}{c|}{RMSE\(\downarrow\)} \\ \hline \multirow{3}{*}{Image Quality Metrics} & RAPIQUE [16] & 0.4470 (0.2171) & 0.4910(0.1393) & 15.6088(1.9382) \\ \cline{2-5} & NoR-VDPNet[19] & 0.5753 (0.0599) & 0.5383 (0.0616) & 14.5622 (1.8899) \\ \cline{2-5} & HIGRADE [12] & 0.6862 (0.0973) & 0.6664 (0.0808) & 13.7339(2.0078) \\ \cline{2-5} & BRISQUE[9] & 0.7133 (0.1004) & 0.7139(0.0885) & 12.6404(2.0428) \\ \hline \multirow{3}{*}{Video Quality Metrics} & HDR BVQM[17] & 0.5411 (0.1102) & 0.5436 (0.0986) & 15.4146 (1.8312) \\ \cline{2-5} & TLVQM [21] & 0.5549 (0.1162) & 0.5504 (0.1008) & 15.2480 (1.8562) \\ \cline{2-5} & V-BLIINDS[14] & 0.7248 (0.1304) & 0.7009 (0.1180) & 12.896 (2.3606) \\ \cline{2-5} & ChiPQA [15] & 0.7437 (0.0815) & 0.7312 (0.0864) & 12.3509 (1.843) \\ \cline{2-5} & HDR-ChipQA & **0.8316(0.0580)** & **0.8287 (0.0552)** & **10.1903 (1.6664)** \\ \hline \end{tabular} \end{table} Table 3: Median SROCC, LCC, and RMSE on the LIVE HDR Database on scores collected under the bright ambient condition for NR algorithms. Standard deviations are shown in parentheses. The best performing algorithm is bold-faced. gorithms attained SRCC values that were statistically indistinguishable from each other, the results again show the elevated performance and statistical superiority of HDR-ChipQA against all the other algorithms tested. ### Ablation Study We also conducted a systematic ablation study by removing each feature space from the algorithm, then evaluating the performance of the rest of the algorithm on the scores collected under the dark viewing condition. The results are presented in Table 5, with the feature spaces ranked by the drop of SRCC they caused when they were removed. We also evaluated the performance of each feature space separately and reported the median SRCC when each feature space was trained and tested to predict human scores gathered under the dark viewing condition in Table 6. These Figure 11: Boxplot of SRCC values attained by the compared models over 100 splits Figure 10: Scatter Plots of MOS vs model predictions, with parametric fits \(I(s)\) shown in orange. experiments indicate that the features drawn from the nonlinearly processed luma and R\({}^{\prime}\)G\({}^{\prime}\)B\({}^{\prime}\) spaces were understandably the most important for HDR quality assessment. ### Experiments on the Expansive Nonlinearity We conducted experiments on the patch size \(W\) for \(\delta\) = 4. Results of comparing the predictions to the scores collected under the dark viewing condition for \(W=9,17,31\) and 63 are reported in Table 7 The results were computed on the entire LIVE HDR dataset following the procedure described in section IV C. We intentionally chose values of \(W\) offset by a single pixel from multiples of 4 so they would not align with compression block boundaries. We found that very small patch sizes led to extreme values rendering the mapping to \([-1,1]\) almost meaningless and preventing the enhancement of contrast. We also tried applying the nonlinearity to entire frames instead of locally, obtaining the results labeled as "Global" in Table X. We found that \(W=17\) produced the best results, and that local mapping to \([-1,1]\) before application of (5) was more beneficial than global rescaling. 
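The two knobs examined in this subsection and the next, the window size \(W\) and the expansion parameter \(\delta\), enter the pipeline through the local rescaling and the nonlinearity of Eq. (5). The sketch below is a minimal illustration: the sliding min/max rescaling is one plausible dense realization of the heavily overlapping \(W\times W\) windows of Section 3.1.2, and the defaults correspond to the best-performing configuration reported here (\(W=17\), \(\delta=4\)).

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_rescale(frame, W=17):
    # Linearly map each overlapping WxW neighbourhood to [-1, 1]
    # using its local minimum and maximum.
    lo = minimum_filter(frame, size=W)
    hi = maximum_filter(frame, size=W)
    return 2.0 * (frame - lo) / np.maximum(hi - lo, 1e-6) - 1.0

def expansive_nonlinearity(x, delta=4.0):
    # Eq. (5): expands values near the extremes of [-1, 1] and
    # compresses the mid-range, highlighting HDR-specific detail.
    return np.where(x > 0, np.exp(delta * x) - 1.0, 1.0 - np.exp(-delta * x))
```

The transformed luma and R'G'B' planes are then passed through the same MSCN/GGD feature computation as the untransformed channels, forming the separate "HDR-sensitized" feature path.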
We also conducted experiments to find the best value of \(\delta\) in the nonlinearity \(f(x)\) in (5) for \(W=17\). As discussed in Section 3.1.2, we found that larger values of \(\delta\) caused the mid-range of luma values to be flattened and the ends of the luma range to be contrast-enhanced, but very large values of \(\delta\) caused the flattening of mid-ranges of luma that were still relevant to HDR, while emphasizing luma extremes to a greater degree than necessary. We computed median SRCC values on the entire dataset following the procedure described in Section IV.C for HDR-ChipQA using \(\delta=1,2,3,4,5\) and report the results in Table 8. As may be seen, choosing \(\delta=4\) delivered the best performance. It is to be noted that since these results are the medians and standard deviations of metrics computed over 100 splits, they are test-set agnostic. ### Results on SDR Databases We also evaluated HDR-ChipQA on SDR content by implementing R\({}^{\prime}\)G\({}^{\prime}\)B\({}^{\prime}\) features using the BT 709 gamut space, instead of the BT 2020 gamut, and implementing luma features using BT.709 gamma-corrected luma instead of PQ luma. As shown in Tables 9 and 10, we found that HDR-ChipQA achieved the best video quality prediction performances on both the LIVE Livestream dataset and the LIVE ETRI database, respectively. The state-of-the-art performance of HDR-ChipQA on both HDR and SDR databases suggests that it may be used agnostically with respect to dynamic range. The strong performance of the algorithm may be due to the fact that image contrast is strongly affected by distortions regardless of dynamic range, and therefore enhancing the local contrast of frames increases the predictive power of subsequent features. The LIVE HDR dataset contains compression and downsampling distortions. The LIVE Livestream dataset contains blur, noise, flicker, judder, interlacing, compression, and downsampling. The LIVE ETRI dataset contains videos with compression, downsampling, and temporal subsampling. Our proposed model achieves the best NR VQA performance on all of these datasets, showing that it can predict the quality of videos affected by any of these distortions as long as it is suitably trained. ## 5 Conclusion We have created the first NR VQA algorithm for HDR10 content. The algorithm is based on principles from natural scene statistics adapted for HDR. We found that an existing NR VQA algorithm designed for SDR content can be improved upon by making use of perceptually-motivated features specifically created for HDR. HDR-ChipQA is able to achieve very high correlations against human judgments of both HDR and SDR video quality, and we envision that it will be a useful tool for a wide variety of applications. The source code for HDR-ChipQA will be made available at [https://github.com/JoshuaEbenezer](https://github.com/JoshuaEbenezer). ## Acknowledgment This research was sponsored by a grant from Amazon.com, Inc., and by grant number 2019844 for the National Science Foundation AI Institute for Foundations of Machine Learning (IFML). The authors also thank the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported in this paper. URL: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu).
2304.05734
Few-shot Class-incremental Learning for Cross-domain Disease Classification
The ability to incrementally learn new classes from limited samples is crucial to the development of artificial intelligence systems for real clinical application. Although existing incremental learning techniques have attempted to address this issue, they still struggle with only few labeled data, particularly when the samples are from varied domains. In this paper, we explore the cross-domain few-shot incremental learning (CDFSCIL) problem. CDFSCIL requires models to learn new classes from very few labeled samples incrementally, and the new classes may be vastly different from the target space. To counteract this difficulty, we propose a cross-domain enhancement constraint and cross-domain data augmentation method. Experiments on MedMNIST show that the classification performance of this method is better than other similar incremental learning methods.
Hao Yang, Weijian Huang, Jiarun Liu, Cheng Li, Shanshan Wang
2023-04-12T09:43:39Z
http://arxiv.org/abs/2304.05734v1
# Few-shot Class-incremental Learning for Cross-domain Disease Classification ###### Abstract The ability to incrementally learn new classes from limited samples is crucial to the development of artificial intelligence systems for real clinical application. Although existing incremental learning techniques have attempted to address this issue, they still struggle with only few labeled data, particularly when the samples are from varied domains. In this paper, we explore the cross-domain few-shot incremental learning (CDFSCIL) problem. CDFSCIL requires models to learn new classes from very few labeled samples incrementally, and the new classes may be vastly different from the target space. To counteract this difficulty, we propose a cross-domain enhancement constraint and cross-domain data augmentation method. Experiments on MedMNIST show that the classification performance of this method is better than other similar incremental learning methods. Keywords: Incremental learning, Few-shot learning, Cross-domain. ## 1 Introduction With the increase of model scale and the number of training data, deep learning has achieved better and better performance in downstream tasks. However, it is well known that a well-trained deep neural network (DNN) with a large number of parameters is usually difficult to adapt to a new task by training on just a few examples, especially when the new samples come from different source domains. Further, the lack of ability to preserve previous knowledge limits the application of DNNs [12, 19]. Therefore, it is important to explore the few-shot learning and memorizing capability of deep learning models. However, existing methods may struggle when new classes arrive with only a few samples, which can lead to overfitting of the representation and thus destroy the pre-trained structure of the feature embedding [11]. As a result, forgetting becomes more severe. Researchers have tried to solve the few-shot incremental problem. Some methods update the backbone when receiving the new incremental tasks after training the network [10, 28], while other methods freeze it [26, 30]. Under the same experimental setting, compared to the methods that freeze the backbone, the methods that update it show a larger performance decrease because of the severe imbalance between the incoming data and the source data, especially when the data come from different domains. To this aim, we are the first to propose a challenging and practical novel scenario: cross-domain few-shot incremental learning (CDFSCIL). CDFSCIL requires the model to constantly adapt to new tasks by receiving data from different source domains, and to retain the previously learned tasks without additional cost. Considering clinical applications, CDFSCIL has the following attributes: 1) the model needs to perform equally well on all classes, even when there is a huge gap in the representation space between data from different domains; 2) in the extreme few-sample case, the model should still maintain stable performance. The strategy of backbone freezing decouples the learning of the representation and the classifier to avoid over-fitting and catastrophic forgetting in the representation [14, 21, 26, 29]. In addition, since the basic features of different objects are similar, it is meaningful to choose these types of features to identify new classes. Inspired by this decoupling strategy, our method reserves space for future cross-domain tasks by constructing pseudo-domains and pseudo-labels in advance, and hence leads to better class-incremental performance.
This article proposes a cross-domain few-shot class incremental learning method designed to learn more universal feature embeddings from a small amount of data. Specifically, we generate a large number of pseudo-data and train the model on it to adapt to these data. Through introducing a domain-constrained loss, we jointly optimize the pseudo-data and real data, extract invariant infor Figure 1: Scenario of cross-domain few shots incremental learning. Each scenario achieved by adding non-overlapping classes sequentially. Only a few with different domain images are used for training in the incremental session, while the target space may be very different from the base session. An ideal model should perform well in all categories after training. mation and enable the model to handle incremental sessions with few samples. Our contributions are as follow: 1. We are the first to propose the challenging senario of CDFSCIL; 2. We narrowed the cross-domain feature distribution to maximizes the distance between inter-class feature vectors via a novel constraint; 3. We propose a new cross-domain augmentation strategy to compact intra-class clustering and wide inter-class separation. ## 2 Method In this section, we introduce the propose method via: 1) a cross-domain enhancement constraint and 2) intra-/inter- class separation strategy for the challenges of FSCIL. The workflow of the method are as shown in Fig. 1. Figure 2: This is the workflow of the proposed method. We generate pseudo-incremental data on the base session with multiple domains, obtaining a universal feature embedding representation by making intra-class clustering more compact and inter-class margin of the same domain wider. a) The training process on the base session first randomly samples pseudo data to simulate the future incremental phase to be learned, using domain-enhanced loss to enhance the discriminability of domain-related categories. b) The incremental learning and inference process. The newly added classifier for each session is the average embedding of each class training data in that session. The logit is obtained by the cosine similarity between the data embedding and the classifier. ### Problem Formulation The goal of FSCIL is to learn new classes from extremely limited training samples while maintaining performance over old sessions. Typically, FSCIL has several sequential learning sessions \(\{S^{0},S^{1},...,S^{B}\}\). Each session contains \(N^{b}\) labeled sample pair \(S^{b}=\{x_{i},y_{i}\}_{i=1}^{N^{b}}\), where \(x_{i}\in R^{D}\) is input data and \(y\in Y^{b}\). \(Y^{b}\) is the label space of \(S^{b}\) and \(Y^{b}\) satisfying \(\forall b\neq b^{\prime},Y^{b}\cap Y^{b^{\prime}}=\). The training set of \(b\)-th session \(S^{i}_{train}\) contains data of current session only and will be inaccessible during other sessions. Meanwhile, the testing set of \(b\)-th session \(S^{b}_{test}\) includes both test sets of previous and current sessions, that is \(Y^{b}_{test}=\{Y^{0}\cap Y^{1},...,Y^{b}\}\). Usually, the training set \(S^{0}_{train}\) of the first session is a relatively large data set, which is also known as the basic training set. Instead, the data set is usually described as the N-way K-shot training set in all subsequent sessions. State-of-the-art continuous learning solutions impose constraints on input space and target space, based on the assumption that all tasks have the same target space [4, 16, 20] or all tasks originate from the same (partial) data set [4, 8]. 
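The inference rule summarized in the workflow above, where each new class contributes a prototype equal to the average embedding of its few training samples and logits are cosine similarities to these prototypes, can be sketched as follows. This is a minimal PyTorch illustration under the assumption of a frozen feature-extraction backbone; the class and method names are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    """Session-wise cosine classifier built from class-mean embeddings."""

    def __init__(self):
        self.prototypes = []  # one L2-normalized prototype per class, in class order

    @torch.no_grad()
    def add_session(self, backbone, images, labels):
        # The backbone stays frozen; each class in the session adds one prototype.
        feats = F.normalize(backbone(images), dim=-1)
        for c in labels.unique(sorted=True):
            self.prototypes.append(F.normalize(feats[labels == c].mean(0), dim=0))

    @torch.no_grad()
    def logits(self, backbone, images):
        feats = F.normalize(backbone(images), dim=-1)   # (B, d)
        protos = torch.stack(self.prototypes)           # (num_classes, d)
        return feats @ protos.t()                       # cosine similarities
```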
In practice, \(S\) may come from multiple different domains with large distribution discrepancies. Therefore, the goal of our CDFSCIL is continuous learning in different domains. Each domain is defined as a single data set of one class or multiple categories, and each S contains the data of multiple domains. ### Domain enhancement constraint In the case of cross-domain incremental, due to the large inter-domain differences, the learning based on cross-entropy loss may lead the model to converge only in the inter-domain that are easy to classify. The attention to the intra-domain fine category is thus weakened. Therefore, we hope to use a loss function, which can not only maintain the performance of intra-domain classification but also enhance inter-class attention. Many adaptive cross-entropy and cosine-similarity constraint have been used in classification problems [17, 18, 23]: The cross-entropy constraint classify different items by maximizing the posterior probability of the groundtruth. Given the input representation \(x_{i}\) with its corresponding label \(y_{i}\), the cross-entropy loss can be formulated as: \[L_{s}=\frac{1}{N}\sum_{i=1}^{N}(-logp_{i})=\frac{1}{N}\sum_{i=1}^{N}-log\frac{ e^{f_{y_{i}}}}{\sum_{j=1}^{C}e^{f_{j}}} \tag{1}\] where \(p_{i}\) indicates the posterior probability, \(N\),\(C\) is the number of samples and classes respectively. \(f_{j}\) is usually represented as the activation of the fully connected layer with the weight \(W_{j}\) and the bias \(b_{j}\). For simplicity, we set and fix the \(b_{j}=0\). \(f_{j}\) is then given by the following formula: \[f_{j}=W_{j}^{T}x=\|W_{j}\|\|x\|cos\theta_{j} \tag{2}\] where \(\theta_{j}\) is the angle between \(W_{j}\) and \(x\). Eq. 2 shows that both the norm and angle of the vector have contribution to the posterior probability. The constraint without feature normalization is to learns \(L2\) norm to minimize the total loss, resulting in relatively weak cosine constraint, which weakens the ability of classification based on cosine similarity. Inspired by cosFace [23], we normalized the weight vector. Let \(\|W_{j}\|=1\), and fixed \(\|x\|=r\). The representation vector is then mapping on a hypersphere, where \(r\) represent the radius and \(m\) is the cosine edge at the classification boundary: \[L_{A}=-\frac{1}{N}\sum_{j=1}^{N}\log(\frac{e^{r_{A}(\cos\theta_{j}-m_{A})}}{e^ {r_{A}(\cos\theta_{j}-m_{A})}+\sum_{i\neq j}e^{r_{A}\cos\theta_{i}}}) \tag{3}\] Based on Eq. 3, we introduce domain \(D\). In the same domain, features between classes become more widely separated, and the correction loss can be expressed as \[L_{D}=-\frac{1}{N_{D_{i}}}\sum_{j=1,j\in D_{i}}^{N_{D_{i}}}\log(\frac{e^{r_{D} (\cos\theta_{j}-m_{D})}}{e^{r_{D}(\cos\theta_{j}-m_{D})}+\sum_{i\neq j}e^{r_{D }\cos\theta_{i}}}) \tag{4}\] \(r_{A}\) and \(r_{D}\) can control the distribution of feature vectors in the hypersphere with different radius, which loosens the feasible space of different domains. \(m_{D}\) is used to control the interval between classes in the domain and adjust the contribution of \(L_{A}\) and \(L_{D}\). So the total loss \(L\) can be formulate as: \[L=\lambda L_{A}+(1-\lambda)L_{D} \tag{5}\] ### Intra-/Inter- class separation strategy The wide separation of intra-domain and inter-class representation means that more room can be left for the new potential classes. Moreover, learning with more diversity feature representation can help the network learning to be more diverse and transferable. 
To leave such room, we generate a large number of pseudo-class data to separate the latent representation space. We propose a simple and effective method to construct pseudo-classes and pseudo-domains. For images, we use image-mixing methods such as Mixup [27], Cutout [9], and CutMix [25] to randomly sample and fuse data of the base session, generating new and distinct pseudo-class samples; a minimal sketch of this construction follows Table 1 below. For a classification task with \(C\) classes and \(D\) domains, we fuse \(N_{C}\) new classes and \(N_{D}\) new domains by random combination. \(N_{C}\) and \(N_{D}\) can be calculated by: \[N_{C}=C\times(C-1)/2 \tag{6}\] \[N_{D}=D\times(D-1)/2 \tag{7}\] The generated pseudo-classes and pseudo-domains are trained together with the original data. In particular, when two samples from the same domain are merged, the newly generated data still belongs to the source domain. Therefore, the intra-domain category diversity of the source domain is enhanced. ## 3 Experiment ### Dataset of Multiple Diseases MedMNIST [24] is a large standardized biomedical image collection, including 12 2D datasets and 6 3D datasets. All images are pre-processed to 28 \(\times\) 28, with corresponding two-/multi-class labels. Our method is evaluated on 6 open-source medical disease classification datasets. Following the proposed CDFSCIL, we select PathMNIST [15], DermaMNIST [22], and OrganAMNIST [3] as the base session sets, while the others [1, 2, 6] serve as the incremental sessions. Details are shown in Table 1 and Table 2. ### Evaluation Criteria Following [5, 7], we report the average accuracy and the Performance Dropping rate (PD) for quantitative evaluation. The average accuracy indicates the accuracy of the model after the \(i\)-th session. It can be formulated as \(A_{t}=\frac{1}{t}\sum_{i=1}^{t}a_{t,i}\), where \(a_{t,i}\) is the classification accuracy of the model on task \(i\) when the training of task \(t\) is completed. PD quantifies the decline in model performance after each task. PD can be calculated as \(PD=A_{0}-A_{B}\), where \(A_{0}\) indicates the accuracy after the base session and \(A_{B}\) indicates the accuracy after the last incremental session. Methods with lower PD suffer less from forgetting. \begin{table} \begin{tabular}{c|c|c c} \hline \hline \multicolumn{2}{c|}{1-Way 1-Shot} & \multicolumn{2}{c}{Single-domain 1-shot} \\ \hline \multirow{2}{*}{Session} & Datasets & \multirow{2}{*}{Session} & Datasets \\ & (\# Classes) & & (\# Classes) \\ \hline 1\(\sim\)5 & RetinaMNIST(5) & 1 & RetinaMNIST(5) \\ \hline \multirow{2}{*}{6\(\sim\)10} & BreastMNIST(2) & \multirow{2}{*}{2} & BreastMNIST(2) \\ & BloodMNIST(3) & & \\ \hline 11\(\sim\)15 & BloodMNIST(5) & 3 & BloodMNIST(8) \\ \hline \hline \end{tabular} \end{table} Table 1: Details of the session settings. Two kinds of settings are considered: 1) 1-way 1-shot with 15 incremental sessions, where each session learns one category incrementally; 2) Single-domain 1-shot with 3 incremental sessions, where the categories of each session belong to the same domain.
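A minimal sketch of the pseudo-class/pseudo-domain construction of Section 2.4, assuming Mixup as the mixing operation (the function names and the simplified pseudo-domain bookkeeping are ours):

```python
import itertools
import torch

def num_pseudo(n):
    """Pairwise combinations of Eqs. (6)-(7): n * (n - 1) / 2."""
    return n * (n - 1) // 2

def mixup_pseudo_classes(x, y, d, num_classes, alpha=1.0):
    """Fuse pairs of samples from different classes into pseudo-class samples.
    x: images (B, C, H, W); y: class labels (B,); d: domain labels (B,).
    Pairs from the same domain keep the source domain; pairs from different
    domains are mapped to a single new pseudo-domain id (simplified: the paper
    assigns one pseudo-domain per domain pair)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    keep = y != y[perm]                                   # mix across different classes only
    x_new = lam * x[keep] + (1.0 - lam) * x[perm][keep]
    pair_id = {p: i for i, p in enumerate(itertools.combinations(range(num_classes), 2))}
    keys = [tuple(sorted((int(a), int(b)))) for a, b in zip(y[keep], y[perm][keep])]
    y_new = torch.tensor([num_classes + pair_id[k] for k in keys])  # one pseudo-class per class pair
    d_new = torch.where(d[keep] == d[perm][keep], d[keep], d.max() + 1)
    return x_new, y_new, d_new
```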
\begin{table} \begin{tabular}{c c c c c} \hline \hline Subset & Data Modality & \# Samples & \# Classes & Session \\ \hline PathMNIST [15] & Colon Pathology & 107,180 & 9 & Base \\ DermaMNIST [22] & Dermatoscope & 10,015 & 7 & Base \\ OrganAMNIST [3] & Abdominal CT & 58,850 & 11 & Base \\ RetinaMNIST [6] & Fundus Camera & 1,600 & 5 & Incremental \\ BreastMNIST [2] & Breast Ultrasound & 780 & 2 & Incremental \\ BloodMNIST [1] & Blood Cell Microscope & 17,092 & 8 & Incremental \\ \hline \hline \end{tabular} \end{table} Table 2: Details of the subsets. ### Implementation Details For our experiments, we used ResNet18 [13] as the backbone network. The FC head uses three fully-connected layers, with 2048 neurons in the first and second layers and an output dimension equal to the total number of categories calculated from Eq. 6. The loss-function parameters are set to \(\lambda=0.8\), \(r_{A}=r_{D}=30\), and \(m_{A}=m_{D}=0.4\). The method is implemented with the PyTorch library and optimized with SGD. The initial learning rate was set to 0.01, and the batch size is 1024. The LIMIT [29] and C-FSCIL [14] baselines both follow their open-source implementations. We performed data augmentation on the samples, including random flipping, cropping, and color jittering. In this experiment, mixup was used to generate pseudo-class images. For each task, we trained the model on an NVIDIA A100 GPU for 100 epochs and stopped early when overfitting occurred. Two different experimental configurations were set up to fully evaluate the FSCIL approach, as shown in Table 1. For 1-way 1-shot, we divide the incremental data into 15 sessions, each of which learns one class. Single-domain 1-shot is divided into three sessions, each of which is a separate domain. Both settings use only one training sample per class. \begin{table} \begin{tabular}{c c c c c c} \hline Session & 0 & 1 & 2 & 3 & PD \(\downarrow\) \\ \hline C-FSCIL & 89.46 & 87.76 & 84.00 & 79.21 & 10.25 \\ LIMIT & 90.77 & 89.69 & 87.97 & 82.40 & 8.37 \\ Ours (w/o \(L_{D}\)) & 90.92 & 89.60 & 88.14 & 83.14 & 7.79 \\ Ours & 91.05 & 89.85 & 88.39 & 83.71 & 7.33 \\ \hline \end{tabular} \end{table} Table 3: Average accuracy and PD under single-domain 1-shot. Figure 3: Average accuracy under the 1-way 1-shot FSCIL setting. Our method outperforms the baseline methods. ### Comparative Results As shown in Fig. 3 and Table 3, our proposed method maintains the best accuracy and PD under different settings and stages. In Fig. 3, as the incremental sessions progress, C-FSCIL (blue line) shows a downward trend after the 8th and 13th sessions, respectively. LIMIT (orange line) shows a similar trend to C-FSCIL. Compared to the baseline methods, our proposed method shows its superiority, especially under settings with many sessions. For the single-domain 1-shot setting, we obtain an average accuracy of 83.71% in the last session, which is 1.31% and 4.7% higher than the LIMIT and C-FSCIL methods, respectively. Notably, our proposed method remains better even in the absence of \(L_{D}\), which means it can learn diverse feature representations from the pseudo-classes. ## 4 Conclusion In order to better simulate future clinical scenarios, we first propose cross-domain few-shot class-incremental learning. In this scenario, in addition to a single medical modality, a complete diagnosis of the patient is required. Our cross-domain incremental learning setting assumes that tasks can come from different medical modalities and datasets.
Then, we propose a cross-domain few-shot class-incremental learning method, which improves the feature representation ability of the base-session model by constructing pseudo-samples combined with the domain-enhancement loss, provides a better embedded representation for the incremental categories, and reduces identification errors. Our results show that the performance of cross-domain few-shot class-incremental learning can be improved by properly adjusting the relationships between inter-domain and intra-domain categories in the base session, and that our method yields a substantial improvement over the comparison methods.
2301.09908
Cross-lingual German Biomedical Information Extraction: from Zero-shot to Human-in-the-Loop
This paper presents our project proposal for extracting biomedical information from German clinical narratives with limited amounts of annotations. We first describe the applied strategies in transfer learning and active learning for solving our problem. After that, we discuss the design of the user interface for both supplying model inspection and obtaining user annotations in the interactive environment.
Siting Liang, Mareike Hartmann, Daniel Sonntag
2023-01-24T10:35:28Z
http://arxiv.org/abs/2301.09908v1
# Cross-lingual German Biomedical Information Extraction: from ###### Abstract This paper presents our project proposal for extracting biomedical information from German clinical narratives with limited amounts of annotations. We first describe the applied strategies in transfer learning and active learning for solving our problem. After that, we discuss the design of the user interface for both supplying model inspection and obtaining user annotations in the interactive environment. ## 1 Introduction Medical information extraction from the large volume of unstructured medical records has the potential to facilitate clinical research and enhance personalized clinical care. Especially the narrative notes, such as radiology reports, discharge summaries and clinical notes provide a more detailed and personalized history and assessments, offering a better context for clinical decision making Chen et al. (2015); Spasic et al. (2020). Name Entity Recognition (NER) task from Natural Language Processing (NLP) studies, have attempted to accurately and automatically extract medical terms from clinical narratives Sonntag et al. (2016); Sonntag and Profitlich (2019); Miotto et al. (2018); Lerner et al. (2020); Wei et al. (2020); Kim and Meystre (2020) using annotated clinical text corpora Johnson et al. (2016); Henry et al. (2019); Miller et al. (2019); Lee et al. (2020); Alsentzer et al. (2019). The large data collection benefits the research community in developing AI applications in processing medical documents in English Spasic et al. (2020). However, there are several limitations in improving information extraction from medical records with machine learning methods in other languages, like German in our case: few German annotated datasets are publicly available, and research on non-English medical documents is scarce Starlinger et al. (2017); Kittner et al. (2021). In most cases, domain experts have higher priority commitments and no capacity to annotate large numbers of training examples for use in machine learning applications Yimam et al. (2015). Our proposed project for extracting medical terms from German clinical narratives with little annotated training data addresses this problem. Two of the most widely studied approaches to this challenge are transfer learning and active learning. In transfer learning, models transfer knowledge learned from data-rich languages or tasks to languages or tasks with less or no annotated data Wang et al. (2019); Lauscher et al. (2020); Xie et al. (2018); Yuan et al. (2019); Pires et al. (2019); Xie et al. (2018); Plank (2019). Active learning is an approach to maximize the utility of annotations while minimizing the annotation effort on the unlabeled target data Chen et al. (2015); Miller et al. (2019); Liu et al. (2020, 2022); Chaudhary et al. (2019); Shelmanov et al. (2019); Zhang et al. (2020); Lauscher et al. (2020). We train a German biomedical NER model building on these two approaches, addressing the following research questions: a) How to transfer knowledge from annotated English clinical narratives corpora to the German NER model? b) In active learning, 1) What is the minimum amount of annotated samples needed for retraining the model? 2) How to evaluate the effectiveness of the query strategies in real-time training (which human (annotator) factors do we have to consider in addition to model performance)? 
## 2 Approach We frame our research problem as NER task for German text in the biomedical domain, and combine transfer and active learning strategies in order to reduce the need for annotated data. Our proposed framework, which is shown in figure 1, is similar to the work of Chaudhary et al. (2019). First, a base NER model is pre-trained with English source data using transfer learning strategies (see section 3). Second, we fine-tune the model continuously with annotations in the target language using active learning involving human-in-the-loop (HITL) (see section 4). In contrast to Chaudhary et al. (2019), we also focus on designing a user interface to obtain human annotations by refining the predictions of the base NER model and pay more attention to the human factors in real-time training (see section 5). The NER model used in our framework is a BERT-CRF as in Liu et al. (2020), which consists of a BERT-encoder Devlin et al. (2018) and CRF classifier Lafferty et al. (2001). The hidden states output from the last layer of the BERT-encoder is fed into the CRF classifier as sequence input to predict the sequence of entity labels. Training and Test DataOur target task is to extract entities from the German _BRONCO_Kittner et al. (2021) dataset, which is a collection of discharge summaries with annotated medication terms, i.e. drugs, strength and duration etc, but also other important biomedical information, such as anatomies, diagnosis, and treatments. Two available corpora from the biomedical domain, _n2c2_Henry et al. (2019) and _muchmore_Widdows et al. (2002)1 have relevant context and useful annotations for our task. Hence, we apply these two datasets for pre-training our base model in order to transfer knowledge to the target task. Since neither _n2c2_ nor _muchmore_ have a matching entity label set for our task, we need to reorganize the training data and define a new joint entity label set. More information about the datasets and defining the entity label set is described in the Appendix A. Footnote 1: [http://muchmore.dfki.de](http://muchmore.dfki.de) ## 3 Transfer Learning Strategies We do not train the base NER model from scratch but pre-train it in this part of the work using transfer learning strategies. The aim of pre-training is to transfer the knowledge from annotated English clinical narratives data and German biomedical knowledge to the base NER model for processing the German discharge summaries. Task-adaptive Pre-trainingTo date, domain-specific language models such as BioBERT Lee et al. (2020), clinicalBERT Alsentzer et al. (2019), medBERT Rasmy et al. (2021) and BEHRT Li et al. (2020) that are pre-trained on large collections of PubMed abstracts, clinical documents, or electronic health records, are supposed to learn domain knowledge and can directly be applied to downstream tasks in the biomedical domain. However, domain-adaptive pre-training does not lead to much improvement in the downstream tasks over the general BERT model Gururangan et al. (2020); Laparra et al. (2021). Whereas domain-adaptive pre-training makes use of data from the target domain, task-adaptive pre-training directly uses unlabeled data from the target task to adapt a pre-trained language model using a language modeling objective, e.g. masked language modeling. Task-adaptive pre-training with text derived from the specific tasks can further benefit the in-domain LMs performing on these tasks in biomedical domain Laparra et al. (2021). In our work, Figure 1: Overview of our research project. 
We utilize a BERT-CRF architecture as the base NER model. There are three relevant datasets for training and testing which are described in section 2. The box on the left side of the figure illustrates the transfer learning strategies: task-adaptive and cross-lingual learning for preparing our base NER model, which are explained in section 3. Human-in-the-loop shown in the right box is the main part of our project, we detail it in section 4. The design of the user interface for human-in-the-loop is discussed in section 5. we use the available training data for task-adaptive pre-training of the in-domain LMs and find the best pre-trained LM to transfer domain knowledge to our base model. We compare the in-domain LMs and the general BERT model with two criteria: efficiency and accuracy. Zero- and Few-shot Cross-Lingual Learning Cross-lingual learning is a common approach to alleviate the problem of lacking in-language training data but rich annotated English data is available (Xie et al., 2018; Pires et al., 2019; Plank, 2019; Wang et al., 2019; Zhao et al., 2020; Lauscher et al., 2020). The zero-shot setup assumes that no annotated training data is available in the target language, and multilingual LMs (Pires et al., 2019; Lample and Conneau, 2019) have shown their cross-lingual generalization capabilities in different NLP tasks across ranges of non-English languages (Hu et al., 2020). However, the impact of linguistic properties of different languages in multilingual models is not yet thorough evaluated (Virtanen et al., 2019). Research in few-shot transfer learning (Zhang et al., 2020; Chaudhary et al., 2019; Lauscher et al., 2020) has the aim of increasing the performance of the cross-lingual model with only a handful of annotated samples in the target languages. We conduct experiments both in a zero- and few-shot setting to investigate the effectiveness of the multilingual LMs in our task compared to the monolingual in-domain LMs. Considering that a multilingual model has ten times larger size than the monolingual variants, we also evaluate its efficiency and computation cost both in training and testing time. ## 4 Active Learning with Human-in-the-Loop We apply transfer learning to prepare our base NER model with the source data: _n2c2_ and _muchmore_ corpora. To adapt the NER model to the _BRONCO_ data, we use active learning to query samples for which we obtain accurate human labels, and improve the accuracy on the target data by retraining the model with the human feedback on these samples. In this part of the work, we do not only analyse the query strategies suitable for the BERT-based deep learning architecture, but also consider the human factors that are expected to strongly affect the human-computer interaction in a real-time training scenario. Query Strategies from Active LearningIn active learning, we attempt to cope with the problem of little annotation resource by measuring how informative each unlabeled instance is and only labeling the most informative instances with the least effort. The representative query strategies for selecting the samples to label fall into two main categories: uncertainty-based sampling (Lewis and Cattett, 1994) and query-by-committee (Seung et al., 1992). When applying active learning to sequence labeling tasks, there are two main issues that we have to address: structured output space and variable-length input. 
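Both issues surface in the output layer of the base model: the CRF defines the structured output space, and an attention mask handles the variable-length input. For reference, a minimal sketch of the BERT-CRF tagger described in Section 2, assuming the `transformers` and `pytorch-crf` packages (the model name, tag-set size, and all identifiers are illustrative, not the project's actual code):

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF

class BertCrfTagger(nn.Module):
    """BERT encoder + linear emission layer + CRF classifier, as in Section 2."""
    def __init__(self, model_name="bert-base-german-cased", num_tags=9):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(hidden)
        mask = attention_mask.bool()
        if tags is not None:                       # training: negative log-likelihood
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)  # inference: best tag sequence per sentence
```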
According to results from previous research, sequence level measures are superior to aggregating token-level information for sequence-labeling with CRF models (Settles and Craven, 2008; Chen et al., 2015; Shen et al., 2017; Liu et al., 2020). We incorporate the following most representative query methods that are explored in prior work for NER tasks (Settles and Craven, 2008; Chen et al., 2015; Shen et al., 2017; Chen et al., 2017; Siddhant and Lipton, 2018; Shelmanov et al., 2019; Chaudhary et al., 2019; Griesshaber et al., 2020; Shui et al., 2020; Ren et al., 2021; Liu et al., 2020, 2022; Agrawal et al., 2021), in our experiments: * Lowest Token Probability (LTP) from Liu et al. (2020) as uncertainty-based sampling method; * Batch Bayesian Active Learning Disagreement (BatchBALD) (Houlsby et al., 2011; Kirsch et al., 2019) with Monta Carlo Dropout (MC) (Gal and Ghahramani, 2016); * Information Density (ID) (Settles and Craven, 2008; Shen et al., 2017) for addressing the outliers' problem. We detail the mathematical formulations of the query strategies in Appendix C. Human FactorsCollaboration between the human and the model in real-time training is challenging. Most of the previous work in deep active learning only experiment with the query strategies in a simulated scenario (Culotta and McCallum, 2005) without measuring the real-time labeling cost and the quality of annotations in practice (Haertel et al., 2008; Settles, 2011; Wallace et al., 2018; Qian et al., 2020; Lertvittayakumjorn and Toni, 2021; Wang et al., 2021; Ding et al., 2021; Wu et al., 2021). Due to the large number of model parameters, deep learning methods can be slow when retraining and force annotator to wait for the next query instance to be labeled (Settles, 2011; Arora et al., 2009; Zhang et al., 2019). The more uncertain the predicted labels of the queried instance is, the more corrections are required from the annotator and may lead to inconsistencies among annotators (Chaudhary et al., 2019). Thus, in addition to measuring the model performance, we need to consider the following human factors when evaluating the effectiveness of the query strategies in real-time training: i) _annotation workload of each query instance_; ii) _consistency between annotators_. ## 5 User Interface Design The user interface is critical to the success of the HITL collaboration, as it can affect both user experience as well as the human factors listed above (Gajos et al., 2008; Kangasraasio et al., 2015). Hence, we aim to implement a user interface informed by recommendations from the human-computer interaction literature, in particular by addressing the four central components of user interfaces for interactive machine learning systems identified by Dudley and Kristensson (2018). In the following, we describe how we plan to realize each of these components in our system, where possible building on existing interfaces for HITL NER. Sample reviewThe _sample review_ component allows the user to assess the state of the model on a given sample. Several available interfaces display the predictions of the current model on the sample to be labeled, with the goal to speed up the feedback assignment step (Yang et al., 2018; Lin et al., 2019; Lee et al., 2020; Trivedi et al., 2019). In contrast, our sample review component focuses on increasing the user's understanding of the state of the model, for example by providing explanations of model predictions along with the predicted labels (Stumpf et al., 2009; Amershi et al., 2014). 
To this end, we will experiment with applying gradient- and occlusion-based explainability methods previously studies for sequence classification tasks (Atanasova et al., 2020). Feedback assignmentThe _feedback assignment_ component allows the user to provide the model with feedback, which can take various forms and constrains the type of interface needed to efficiently collect it. The above mentioned works displaying current model predictions collect label-level feedback by recording the user's binary decision on the correctness of the models suggestions, and a drop-down menu displaying the available label set in case the model prediction is incorrect. Lin et al. (2020) allow the users to mark spans of the input sequence that serve as explanations for a specific prediction. Lee et al. (2020) additionally collect natural language explanations, using auto-completion to ensure the user provides phrases that can be handled by the system's semantic parser. They find that feedback in the form of important input spans is most efficient, and we plan to focus on this feedback format in combination with label-level annotations. Model inspectionThe _model inspection_ component provides the user with a compact summary of global model performance, e.g. by visualizing performance scores on the validation data. Erdmann et al. (2019) define two complementary evaluation frameworks for active learning models: _Exclusive_ evaluation measures model performance on held-out data and indicates how well the model will generalize to additional unlabeled data. _Inclusive_ evaluation measures annotation accuracy on the target corpus annotated by user and model jointly. We plan to implement the component such that the user can choose the appropriate metric for the task at hand. Task overviewThe _task overview_ component gives information about additional task-related decisions, e.g. termination criteria, that determine when to stop the annotation process for an optimal cost-benefit trade-off (Zhu and Hovy, 2007; Laws and Schutze, 2008). ## 6 Conclusion We have presented our current ongoing work on extracting German biomedical information with a limited number of training sources. To evaluate the applied strategies in an experimental setting, we define our target task based on the training and test data at hand and first engage non-expert annotators in the computer-human interaction. Our long-term goal in this project is to apply our findings and generalize the tool on more diverse types of clinical documents in German. In order to evaluate the effectiveness of HITL in our system from the perspective of other stakeholders, we will be working with physicians in the future. ## Acknowledgements The proposed research is funded by the pAItient project (BMG, 2520DAT0P2).
2303.16089
Teleparallel Minkowski Spacetime with Perturbative Approach for Teleparallel Gravity on a Proper Frame
A complete perturbation theory suitable for teleparallel gravity is developed. The proposed perturbation scheme takes into account perturbations of the coframe, the metric, and the spin-connection, while ensuring that the resulting perturbed system continues to describe a teleparallel gravity situation. The resulting perturbation scheme can be transformed to one in which perturbations all take place within the co-frame. A covariant definition of a teleparallel Minkowski geometry is proposed. We compute the perturbed field equations for $f(T)$ teleparallel gravity and discuss the stability of the teleparallel Minkowski geometry within $f(T)$ teleparallel gravity.
Alexandre Landry, Robert J. van den Hoogen
2023-03-28T15:56:51Z
http://arxiv.org/abs/2303.16089v2
Teleparallel Minkowski Spacetime with Perturbative Approach for Teleparallel Gravity on a Proper Frame ###### Abstract A complete perturbation theory suitable for teleparallel gravity is developed. The proposed perturbation scheme takes into account perturbations of the coframe, the metric, and the spin-connection, while ensuring that the resulting perturbed system continues to describe a teleparallel gravity situation. The resulting perturbation scheme can be transformed to one in which perturbations all take place within the co-frame. A covariant definition of a teleparallel Minkowski geometry is proposed. We compute the perturbed field equations for \(f(T)\) teleparallel gravity and discuss the stability of the teleparallel Minkowski geometry within \(f(T)\) teleparallel gravity. ###### Contents * I Introduction * II Teleparallel Theories of Gravity * II.1 Notation * II.2 Torsion-Based Theories * II.3 Geometrical Framework for Teleparallel Gravity * II.4 Linear Transformations and Gauge Choices * II.5 Gauge Choices and Teleparallel Gravity E. Action for \(f(T)\) Teleparallel Gravity F. Field Equations for \(f(T)\) Teleparallel Gravity III. Constant Torsion Spacetimes Null Torsion Scalar Spacetimes 1. Definition: Minkowski Geometry and Minkowski Spacetime IV. Perturbations in Teleparallel Geometries A. Proper Orthonormal Perturbation of the Co-Frame B. Perturbed \(f(T)\) Teleparallel Field Equations: General C. Perturbed \(f(T)\) Teleparallel Field Equations: Constant Torsion Scalar D. Perturbed \(f(T)\) Teleparallel Field Equations: Zero Torsion Scalar E. Perturbed \(f(T)\) Teleparallel Field Equations: The Zero Torsion Scalar Perturbation Limit F. Perturbed \(f(T)\) Teleparallel Field Equations: Minkowski V. Effects of Perturbations and the Minkowski Spacetime Symmetries Conditions for Stability A. Rotation/Boost Perturbation in a Minkowski Background B. General Linear Perturbation in a Minkowski Background C. Perturbations on Trivial Coframes by Each Part of the Perturbation 1. Trace 2. Full Symmetric Perturbation 3. Full Antisymmetric Perturbation 4. A Mixed Situation and Minkowski Spacetime VI. Discussion and Conclusions B. General Perturbed Torsion-Based Field Equation via Linearization C. The Derivation of Minkowski Spacetime Symmetries: Conditions for Stability 1. Rotation/Boost Perturbation 2. General Linear Perturbation ## I Introduction There are two major classes of theories for physical phenomena: gravitational theories and quantized theories [1; 2; 3; 4]. The first class of theories are used to explain phenomena at the astrophysical scale; for example, General Relativity (GR) has been very successful in explaining astrophysical phenomena [5; 6; 7; 8]. However, the second class of theories concerns phenomena occurring at the microscopic scale involving fundamental quantum particles. Attempts have been made to reconcile the two classes of theories in order to have a general, all-encompassing theory. A theory that is capable of dealing with very low-amplitude physical and geometrical quantities, as is the case for theories based on quantization, is desirable. Indeed, Quantum Mechanics (QM) as well as Quantum Field Theory (QFT) have well-established perturbative theories: a potential is perturbed, generating a correction of the eigenvalues of the energies, as well as corrections to the wave functions [1; 2; 3; 4]. 
QM and QFT are well established and have been used to describe the gravitational corrections of curved spacetimes of physical phenomena that can occur at the microscopic scale [9; 10; 11; 12]. Unfortunately, this perturbative approach to GR is problematic, primarily because one requires an identifiable background on which to perform the perturbations [13]. One can, of course, use gauge invariant variables to address this challenge. Recently, there has been a growing interest in the development of teleparallel gravity as an alternative theory to GR [14; 15; 16; 17; 18; 19; 20; 21]. Teleparallel gravity needs to be better understood and developed in order to address foundational, physical, and geometrical problems. Here, we will illuminate some of the challenges and nuances that are present within perturbative approaches to teleparallel gravity. Golovnev and Guzman [22] studied a class of perturbations within a geometry having a Minkowski metric. They applied perturbations to a particular boosted coframe in which the metric has the Minkowski form and the torsion scalar is zero, but where the torsion tensor is non-zero. One may argue that any geometry in which the torsion tensor is non-zero is inherently not a Minkowski geometry, but this is a matter of definition. In another paper, Jimenez et al. performed perturbations of Minkowski spacetime in \(f(T)\) teleparallel gravity by using a trivial tetrad and having the perturbations encoded in infinitesimal Lorentz transformations [23]. Their approach, while correct, is restrictive when working towards a general perturbation theory within teleparallel gravity. In ref [24], the authors develop a complete perturbation theory that can be employed for perturbation analysis in Minkowski and flat Robertson-Walker-type cosmological spacetimes. Our analysis provides a different perspective and can be used as a general framework, and therefore, it complements the work in ref [24]. Recently, within a cosmological setting, Bahamonde et al. [25] investigated perturbations occurring on a FLRW-type background. They defined a very specific form for the perturbation compatible with this background. They then obtain the perturbed field equations. In addition, they investigated the consequent effects of perturbations on the torsion and on different physical quantities. Most of the types of perturbations studied lead to the flat FLRW background case under some precise limits. On the other hand, some perturbation modes do not propagate, which maintains the strong coupling. This is the case of the scalar and the pseudo-scalar parts of the perturbations. Here, we still have work with a limited scope; hence, the need for a more general theory of perturbations in teleparallel gravity. Bamba and Cai's papers focus on Gravitational Waves (GWs) in teleparallel gravity [26; 27]. GWs are a class of wave-like perturbations of Minkowski spacetime. They are still dealing here with a specific case of perturbation. In Bamba [26], they place themselves in the Minkowski background to process the GWs in teleparallel gravity. In Cai [27], they place themselves in the FLRW background. They therefore have a generalization of Bamba's work for GWs that are compatible with the cosmological models. In addition, in [27], they add the effects of scalar fields in their perturbations. Not only are they still dealing with specific cases of perturbations, but they are moving from the Minkowski background to the FLRW background. 
However, they still do not have a general theory for the Minkowski background. Therefore, a more general and fundamental theory that is applicable for any perturbation and any co-frame in Minkowski spacetime in teleparallel gravity is needed. We begin this paper with a definition of Minkowski geometry and Minkowski spacetime within teleparallel gravity. Then, we will investigate the effects of perturbations in teleparallel gravity. After, we will study the stability of Minkowski spacetime by using the perturbed quantities and field equations. In teleparallel gravity, co-frames encode both the gravitational and inertial effects. Our goal is to explore the perturbations of gravity, and therefore, we shall carefully construct a perturbative theory that achieves this goal. If we transform initially to "proper" frames which encode only the gravitational effects and then perform perturbations on all physical quantities, consequently ensuring that the resulting perturbed theory is still within the class of teleparallel theories of gravity will yield the general allowable form for perturbations within teleparallel gravity. We will perturb the physical quantities which maintain the "proper frames", thus avoiding the challenge of interpreting the spurious inertial effects that may appear in "non-proper frames" [14; 15; 28; 29; 16]. We want to highlight the effects of perturbations in teleparallel gravity. For example, in an absolute vacuum, one can highlight the effects of perturbations modifying this same vacuum. For example, we will determine the gravitational Energy-Momentum associated with a perturbation. We will apply this theory of perturbations in teleparallel gravity to some examples and problems of Physics [30; 16; 31]. Particularly, we will study through these coframe perturbations the stability of the Minkowski background, and determine the required symmetry conditions to satisfy. This paper is divided as follows. In Section II, we present a summary of teleparallel gravity and propose a definition of Minkowski geometry within teleparallel gravity. In Section IV, we will define the perturbations maintaining the "proper frames", the orthonormal framework, and we will also provide the perturbed Field Equations (FEs). In Section V, we will explore some coframe perturbations to determine the stability criterions for Minkowski spacetime. We can also generalize these criterions to null and constant torsion spacetimes. ## II Teleparallel theories of gravity ### Notation Greek indices \(\left(\mu,\nu,\dots\right)\) are employed to represent the spacetime coordinate indices, while Latin indices \(\left(a,b,\dots\right)\), are employed to represent frame or tangent-space indices. As is standard notation, round parentheses surrounding indices represent symmetrization, while square brackets represent anti-symmetrization. Any quantity that is computed using a Levi-Civita connection \(\overset{\circ}{\omega}^{a}{}_{b\mu}\) will have a circle above the symbol. A comma will denote a partial derivative. The metric signature is assumed to be \(\left(-,+,+,+\right)\). ### Torsion-Based Theories Torsion-based theories of gravity are a subclass of Einstein-Cartan theories [15; 16; 32]. This superclass of theories contains theories based solely on the curvature, for example, General Relativity, or \(f\left(R\right)\) theories where \(R\) is Ricci curvature scalar. 
Einstein-Cartan theories of gravity also contain theories of gravity that are based solely on the torsion, for example, teleparallel theories of gravity, including New General Relativity [33] and \(f\left(T\right)\) theories where \(T\) is the torsion scalar. In addition, theories of gravity based on both the curvature and torsion scalars (\(f\left(R,T\right)\)-type) are also subclasses of the Einstein-Cartan theories of gravity. Recently, there has been an emergence of theories based on non-metricity (\(f\left(Q\right)\)-type), although they are less well known [16; 34; 35]. In this paper, we are interested in teleparallel gravity, and in particular, \(f(T)\) teleparallel gravity [14; 15; 16; 17; 18; 19; 20; 29]. ### Geometrical Framework for Teleparallel Gravity Let \(M\) be a 4-dimensional differentiable manifold with coordinates \(x^{\mu}\). Then, the geometry of the manifold is characterized by the three geometrical objects. * **The Co-frame:**\(h^{a}=h^{a}{}_{\mu}dx^{\mu}\). This quantity generally encodes both the gravitational and inertial effects in a gravitational system. The dual of the co-frame is defined as the vector field \(h_{a}=h_{a}^{\;\mu}\frac{\partial}{\partial x^{\mu}}\), such that \(h^{a}{}_{\mu}h_{b}^{\;\mu}=\delta^{a}_{b}\). * **The Gauge Metric:**\(g_{ab}\). This object expresses the "metric" of the tangent space, such that \(g_{ab}=g(h_{a},h_{b})\). Having a metric allows one to define the lengths and angles. * **The Spin-connection:**\(\omega^{a}_{\ b}=\omega^{a}_{\ b\mu}dx^{\mu}\). Having a connection allows one to "parallel transport',' or equivalently, it allows one to define a covariant differentiation. In teleparallel gravity, the co-frame, gauge metric, and spin connection are restricted and interdependent, characterized by the following two postulates [14; 15; 16]: * **Null Curvature:** \[R^{a}_{\ b\nu\mu}\equiv\omega^{a}_{\ b\mu,\nu}-\omega^{a}_{\ b\nu,\mu}+\omega^ {a}_{\ c\nu}\omega^{c}_{\ b\mu}-\omega^{a}_{\ c\mu}\omega^{c}_{\ b\nu}=0\] (1) * **Null Non-Metricity:** \[Q_{ab\mu}\equiv-g_{ab,\mu}+\omega^{c}_{\ a\mu}g_{cb}+\omega^{c}_{\ b\mu}g_{ ac}=0\] (2) In teleparallel gravity, the only remaining non-null field strength is the torsion defined as \[T^{a}_{\ \mu\nu}=h^{a}_{\ \nu,\mu}-h^{a}_{\ \mu,\nu}+\omega^{a}_{\ b\mu}h^{b}_{ \ \nu}-\omega^{a}_{\ b\nu}h^{b}_{\ \mu} \tag{3}\] It is now possible to construct a gravitational theory that depends only on the torsion. However, before proceeding, we illustrate the effects of gauge transformations on the geometry, and how we can judiciously choose a gauge to simplify our computations. ### Linear Transformations and Gauge Choices From the Principle of Relativity, we impose the requirement that the physical gravitational system under consideration be invariant under \(GL(4,\mathbb{R})\) local linear transformations of the frame. These types of transformations allow one to pass from one frame of reference to another frame of reference. For the fundamental geometrical quantities \(\{h^{a},g_{ab},\omega^{a}_{\ bc}\}\), we have the following transformation rules under a general linear transformation \(M^{a}_{\ b}\in GL(4,\mathbb{R})\): \[h^{\prime a}_{\ \mu} =M^{a}_{\ b}\,h^{b}_{\ \mu}, \tag{4}\] \[g^{\prime}_{ab} =M_{a}^{\ e}\,M_{b}^{\ f}\,g_{ef},\] (5) \[\omega^{\prime a}_{\ b\mu} =M^{a}_{\ e}\,\omega^{e}_{\ f\mu}\,M_{b}^{\ f}+M^{a}_{\ e}\, \partial_{\mu}\,M_{b}^{\ e}. \tag{6}\] where \(M_{b}^{\ a}=(M^{-1})^{a}_{\ b}\) represents the inverse matrix. 
Equation (6) shows that the Spin-connection transforms non-homogeneously under a general linear transformation. #### Gauge Choices and Teleparallel Gravity Physical phenomena must respect the principle of Gauge Invariance. The physical phenomenon must be explainable and valid, regardless of the gauge and its possible transformations. If this general principle is important for quantized theories, then this same principle is also important for teleparallel gravity. Generally, we have a tremendous choice of gauge, depending on the assumed symmetries of the physical system. However, once we have made a gauge choice, the consequent field equations describing the theory must transform covariantly (i.e., they are invariant) under any remaining gauge freedom. Proper Orthonormal FrameThe Null Curvature postulate guarantees that there exists an element \(M^{a}_{\ b}\in GL(4,\mathbb{R})\), such that \[\omega^{a}_{\ b\mu}\equiv(M^{-1})^{a}_{\ b}\partial_{\mu}(M^{b}_{\ c}) \tag{7}\] Since the connection transforms non-homogeneously under local linear transformations, we can always apply the linear transformation \(M^{a}_{\ b}\) to transform to a proper frame in which \(\omega^{a}_{\ b\mu}=0\). Further, within this proper frame, given the Null Non-Metricity postulate, it is then possible to apply a second constant linear transformation to bring the gauge metric to some desired form. For example, we can transform to a gauge in which the spin connection is null and the gauge metric is \(g_{ab}=\text{Diag}[-1,1,1,1]\), which we will call a "proper orthonormal frame". The only remaining gauge freedom in this case are global (constant) Lorentz transformations. Orthonormal FrameIf one prefers not to be restricted to a proper frame, then there is more flexibility. Since the gauge metric is symmetric, we can still always choose an "orthonormal frame" in which the gauge metric becomes \(g_{ab}=\text{Diag}[-1,1,1,1]\), but where the spin connection may be non-trivial. Assuming an orthonormal frame, the remaining gauge freedom is represented by proper orthochronous Lorentz transformations in the \(SO^{+}(1,3)\) subgroup of \(GL(4,\mathbb{R})\). Other gauge choices might include Complex-Null, Half-Null, Angular-Null, and others [17; 18; 19]. In the orthonormal frame, given the Null Curvature postulate, there exists a \(\Lambda^{a}_{\ b}\in SO^{+}(1,3)\), such that the spin connection is [36; 37]: \[\omega^{a}_{\ b\mu}\equiv(\Lambda^{-1})^{a}_{\ b}\partial_{\mu}(\Lambda^{b}_{ \ c}) \tag{8}\] and given the Null Non-Metricity postulate, we have the restriction \(\omega_{(ab)\mu}=0\). However, in either choice of gauge, we note that the spin connection, \(\omega^{a}_{\ b\mu}\), is not a true dynamical variable and that it only encodes inertial effects present in the choice of frame [14; 15; 16; 17; 18; 19; 20; 28; 29]. ### Action for \(f(T)\) Teleparallel Gravity In principle, one can construct a Lagrangian density from any of the scalars built from the torsion tensor. One such scalar is [14; 15; 16; 17; 18; 19; 20; 29]: \[T=\frac{1}{4}T^{a}_{\ bc}T^{\ bc}+\frac{1}{2}T^{a}_{\ bc}T^{cb}_{\ \ a}-T^{a}_{\ ca}T^{bc}_{\ b}, \tag{9}\] which we will call "the" torsion scalar \(T\). Another related scalar, for example, used in New General Relativity [33], is \[\widetilde{T}=c_{1}T^{a}_{\ bc}T^{\ bc}_{\ a}+c_{2}T^{a}_{\ bc}T^{cb}_{\ \ a}+c_{3}T^{a}_{\ ca}T^{bc}_{\ \ b} \tag{10}\] Other torsion scalars could be included, but these scalars are not invariant under \(SO^{+}(1,3)\), and they include parity violating terms [33]. 
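As a short consistency check on the gauge discussion above (our remark, not part of the original derivation), one can verify that a pure-gauge spin connection of the form (7), written in matrix notation as \(\omega=M^{-1}\,dM\), automatically satisfies the Null Curvature condition (1): \[R=d\omega+\omega\wedge\omega=d\left(M^{-1}\,dM\right)+\left(M^{-1}\,dM\right)\wedge\left(M^{-1}\,dM\right)=-M^{-1}\,dM\,M^{-1}\wedge dM+M^{-1}\,dM\,M^{-1}\wedge dM=0,\] using \(d\left(M^{-1}\right)=-M^{-1}\left(dM\right)M^{-1}\) and \(d^{2}M=0\). The same computation with \(\Lambda^{a}_{\ b}\in SO^{+}(1,3)\) covers the orthonormal-frame connection (8).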
Here, we are interested in a particular class of teleparallel gravity theories, \(f(T)\) teleparallel gravity. The action describing the \(f(T)\) teleparallel theory of gravity containing matter is [14; 15; 16; 17; 18; 19; 20; 29]: \[S_{f(T)}=\int\,d^{4}\,x\,\left[\frac{h}{2\,\kappa}\,f\left(T\right)+\mathcal{ L}_{Matter}\right]. \tag{11}\] where \(h=\text{Det}\left(h^{a}_{\ \mu}\right)\) is the determinant of the veilbein, the parameter \(\kappa\) is the gravitational coupling constant which contains the physical constants, and \(f\left(T\right)\) is an arbitrary function of the torsion scalar \(T\), given by Equation (9). ### Field Equations for \(f(T)\) Teleparallel Gravity From the action integral expressed by Equation (11), we determine the field equations by varying with respect to the coframe \(h^{a}_{\ \mu}\)[14; 15; 16; 17; 18; 19; 20; 29]: \[\kappa\,\Theta_{a}^{\ \ \mu}=\frac{f_{T}(T)}{h}\,\partial_{\nu}\,\left(h\,S_{a}^{ \ \ \mu\nu}\right)+f_{TT}(T)\,S_{a}^{\ \ \mu\nu}\,\partial_{\nu}T+\frac{f(T)}{2}\,h_{a}^{\ \ \mu}-f_{T}(T)\,\left(\omega^{b}_{\ The canonical Energy-Momentum is defined as [18]: \[h\,\Theta_{a}^{\ \mu}\equiv\frac{\delta\mathcal{L}_{Matter}}{\delta h_{\ \mu}^{a}}. \tag{14}\] Now, expressing the field equations (12) in terms of the tangent-space components allows one to split the field equations into symmetric and antisymmetric parts. The symmetric and antisymmetric parts of the \(f(T)\) teleparallel gravity FEs are respectively [17; 18; 19]: \[\kappa\Theta_{(ab)} = f_{TT}\left(T\right)\,S_{(ab)}^{\ where we define the re-scaled gravitational coupling constant \(\kappa_{eff}=\frac{\kappa}{f_{T}(T_{0})}\) and an effective cosmological constant \(\Lambda\left(T_{0}\right)\), both dependent on the value of \(T=T_{0}\). We observe that if \(T=T_{0}=\text{Const}\), then the \(f(T)\) teleparallel field equations reduce to those of GR, having a re-scaled gravitational coupling and a cosmological constant. Due to its importance in characterizing the Minkowski geometry, we carefully consider the case of \(T_{0}=0\) for further consideration. ### Null Torsion Scalar Spacetimes When \(T_{0}=0\), the field equations reduce to: \[\kappa_{eff}\Theta_{\left(ab\right)} =\overset{\circ}{G}_{ab}+g_{ab}\,\left[\frac{f\left(0\right)}{2 \,f_{T}\left(0\right)}\right],\] \[=\overset{\circ}{G}_{ab}+g_{ab}\,\Lambda\left(0\right). \tag{20}\] where \(\kappa_{eff}=\frac{\kappa}{f_{T}(0)}\) and \(\Lambda\left(0\right)=\frac{f(0)}{2\,f_{T}(0)}\). If \(f(0)\neq 0\), then the Cosmological Constant \(\Lambda(0)\neq 0\). #### iv.1.1 Definition: Minkowski Geometry and Minkowski Spacetime Before obtaining the field equations and introducing the perturbations on such, one must clearly define the true nature of the Minkowski spacetime in teleparallel gravity in a covariant way. This will make it possible to better understand the nature and origin of the equations involving the dominant quantities with respect to the perturbed quantities. This geometry is characterized as follows: * Maximally symmetric: The Minkowski geometry is invariant under a \(G_{10}\) group of transformations [18]. * Null Curvature: \(R^{a}_{\ b\mu\nu}=0\) * Null Torsion: \(T^{a}_{\ \mu\nu}=0\) * Null Non-Metricity: \(Q_{ab\mu}=0\) One of the consequences is that Minkowski geometry is everywhere a smooth geometry without singularity. This covariant definition of teleparallel Minkowski geometry has been proposed also by Beltran et al. [38]. 
We distinguish between Minkowski geometry and Minkowski spacetime in teleparallel gravity as follows. Minkowski geometry is defined independently of any field equations, while Minkowski spacetime is a Minkowski geometry that is a solution to the teleparallel gravity field equations where the matter source is a vacuum, \(\Theta_{ab}=0\). If the geometry is Minkowski, then the torsion scalar is identically zero. Note that the converse is not necessarily true. The Einstein tensor \(\overset{\circ}{G}_{ab}=0\), and since the matter source is a vacuum, \(\Theta_{ab}=0\), the field equations (20) reduce to \[0=\frac{f\left(0\right)}{2}\,g_{ab}. \tag{21}\] From the field equations (21), if the geometry is Minkowski and \(\Theta_{ab}=0\), then \(f(0)=0\). In this case, the solution is a Minkowski spacetime, a Minkowski geometry that satisfies the field equations in vacuum. Alternatively, if \(f(0)\neq 0\), then a solution to the field equations (21) necessarily requires a non-null \(\Theta_{ab}\), and consequently, this spacetime is not a Minkowski spacetime, even though the geometry is Minkowski. Of course, the non-trivial \(\Theta_{ab}\) can be interpreted as the energy density of the vacuum. Expressing the statement clearly, Minkowski geometry is a solution to the vacuum \(f(T)\) teleparallel gravity field equations only if \(f(0)=0\). ## IV Perturbations in Teleparallel Geometries ### Proper Orthonormal Perturbation of the Co-Frame As described earlier, a teleparallel geometry is characterized in general via the triplet of quantities, the co-frame one form \(h^{a}\), the spin connection one-form \(\omega^{a}_{\ b}\), and the metric tensor field \(g_{ab}\), with two constraints, Null Curvature and Null Non-Metricity. As argued earlier, assuming that the physical system is invariant under the \(GL(4,\mathbb{R})\) linear transformations (see also ref. [38]), this means that even before constructing a perturbative theory, one can always choose to begin in a "proper orthonormal frame" as our background without a loss of generality: \[h^{a}=h^{a}_{\ \mu}dx^{\mu},\qquad\omega^{a}_{\ b}=0,\qquad g_{ab}=\eta_{ab}= \text{Diag}[-1,1,1,1]. \tag{22}\] Now, we apply a perturbation to all three quantities, as follows: \[h^{\prime a}=h^{a}+\delta h^{a},\qquad\omega^{\prime a}_{\ b}=\delta\omega^{a }_{\ b},\qquad g^{\prime}_{ab}=\eta_{ab}+\delta g_{ab} \tag{23}\] The perturbed geometry is no longer expressed in a proper orthonormal frame. The perturbed system is only proper if \(\delta\omega^{a}_{\ b}=0\), and orthonormal if \(\delta g_{ab}=0\). However, we shall show that we can always transform to a proper orthonormal perturbation scheme. We note that the perturbed geometry given by the triplet \(\{h^{\prime a},\omega^{\prime a}_{\ b},g^{\prime}_{ab}\}\) must still satisfy the Null Curvature and Null Non-Metricity constraints or else one is moving outside of the theory of teleparallel gravity. In general, the perturbations \(\delta h^{a}\), \(\delta\omega^{a}_{\ b}\), and \(\delta g_{ab}\) are not all independent. The Null Curvature constraint for the perturbed connection \(\omega^{\prime a}_{\ b}\) implies that there exists some local linear transformation \(L^{a}_{\ b}\in GL(4,\mathbb{R})\), such that \[\delta\omega^{a}_{\ b}=(L^{-1})^{a}_{\ c}dL^{c}_{\ b} \tag{24}\] where \(d\) indicates the exterior derivative. 
This means that we can apply this general linear transformation to the perturbed system to express it in a perturbed proper frame \[\bar{h}^{\prime a}=L^{a}_{\ b}(h^{b}+\delta h^{b}),\qquad\bar{\omega}^{\prime a }_{\ b}=0,\qquad\bar{g}^{\prime}_{ab}=(L^{-1})^{c}_{\ a}(L^{-1})^{d}_{\ b}( \eta_{cd}+\delta g_{cd}) \tag{25}\] where we have used a bar to indicate that we are now in a proper frame. The Null Non-Metricity condition applied to this "perturbed proper frame" (25) means that \(\bar{g}^{\prime}_{ab}\) is a symmetric matrix of the constants which can diagonalized. That is, there exists a matrix \(P^{a}_{\ b}\in GL(4,\mathbb{R})\) of constants such that \(\bar{g}^{\prime}_{ab}=(P^{-1})^{c}_{\ a}(P^{-1})^{d}_{\ b}\eta_{cd}\). So, we can apply this constant transformation \(P^{a}_{\ b}\) to the "perturbed proper frame" (25) to obtain a "perturbed proper orthonormal frame" without a loss of generality. \[\hat{h}^{\prime a} =P^{a}_{\ b}\bar{h}^{\prime b}=P^{a}_{\ b}L^{b}_{\ c}(h^{c}+ \delta h^{c}), \tag{26a}\] \[\hat{\omega}^{\prime a}_{\ b} =0,\] (26b) \[\hat{g}^{\prime}_{ab} =\eta_{ab}. \tag{26c}\] We observe that we can investigate perturbations in teleparallel geometries by simply looking at the perturbations in a co-frame, using proper orthonormal frames. Doing so ensures that the Null Curvature and Null Non-Metricity constraints are respected. If we define the compositions of the two linear transformations as matrix \(M^{a}_{\ b}=P^{a}_{\ c}L^{c}_{\ b}\in GL(4,\mathbb{R})\), then the "perturbed proper orthonormal frame" becomes \[\hat{h}^{\prime a}=M^{a}_{\ b}\left(h^{b}+\delta h^{b}\right). \tag{27}\] which encodes all possible perturbations within a proper orthonormal framework. If \(M^{a}_{\ b}=\delta^{a}_{b}\), then the only perturbations are perturbations in the original proper orthonormal frame. The matrix \(M^{a}_{\ b}\) encodes the perturbations that took place originally in the spin connection and metric, but it ensures that the resulting perturbed system is teleparallel in nature. For completeness, the original perturbations can be expressed in terms of \(M^{a}_{\ b}\), as \[\delta\omega^{a}_{\ b}=(M^{-1})^{a}_{\ c}dM^{c}_{\ b},\qquad\delta g_{ab}=(M^{-1 })^{c}_{\ a}(M^{-1})^{d}_{\ b}\eta_{cd}-\eta_{ab} \tag{28}\] Now, in a perturbative approach, to the first order, we have that \[M^{a}_{\ b} \approx\delta^{a}_{b}+\mu^{a}_{\ b} \tag{29}\] \[\delta h^{a} \approx\nu^{a}_{\ b}h^{b} \tag{30}\] for some \(\mu^{a}_{\ b}\) and \(\nu^{a}_{\ b}\in\mathfrak{gl}(4,\mathbb{R})\). Therefore, putting it all together, we have to first order \[\hat{h}^{Ia} =h^{a}+(\mu^{a}_{\ b}+\nu^{a}_{\ b})h^{b}=h^{a}+\lambda^{a}_{\ b} h^{b}, \tag{31a}\] \[\hat{\omega}^{\prime a}_{\ b} =0,\] (31b) \[\hat{g}^{\prime}_{ab} =\eta_{ab}, \tag{31c}\] where \(\lambda^{a}_{\ b}\in M(4,\mathbb{R})\), the set of \(4\times 4\) real-valued matrices. Perturbations of the independent quantities in teleparallel geometry can always be transformed to the form (31). The matrix \(\lambda\) can be invariantly decomposed into trace, symmetric trace-free, and anti-symmetric parts. For the next section and in the appendix, we will apply the perturbations \[\delta h^{a}=\lambda^{a}_{\ b}h^{b},\qquad\delta\omega^{a}_{\ b}=0,\qquad \delta g_{ab}=0, \tag{32}\] to the \(f(T)\) teleparallel field equations in a proper orthonormal frame. In particular, we will look at perturbations of constant scalar torsion spacetimes. 
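As an illustration of this perturbation scheme, the following sketch (assuming SymPy; the index-placement conventions and all names reflect our reading of Eqs. (3) and (9), not code from the paper) evaluates the torsion scalar for a coframe in a proper orthonormal frame and checks symbolically that, around the trivial Minkowski coframe \(h^{a}_{\ \mu}=\delta^{a}_{\ \mu}\), a coframe perturbation of the form (32) produces no first-order change in \(T\), consistent with the behaviour discussed in Section V.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
s = [-1, 1, 1, 1]                        # diagonal entries of the gauge metric eta_ab

def torsion_scalar(h):
    """Torsion scalar of Eq. (9) for a coframe matrix h[a, mu] in a proper
    orthonormal frame (spin connection identically zero, Eq. (3))."""
    h = sp.Matrix(h)
    hinv = h.inv()                       # hinv[mu, a] = h_a^mu (dual frame)
    # coordinate components T^a_{mu nu} = d_mu h^a_nu - d_nu h^a_mu
    Tc = [[[sp.diff(h[a, nu], coords[mu]) - sp.diff(h[a, mu], coords[nu])
            for nu in range(4)] for mu in range(4)] for a in range(4)]
    # tangent-space components T^a_{bc} = T^a_{mu nu} h_b^mu h_c^nu
    T = [[[sum(Tc[a][mu][nu] * hinv[mu, b] * hinv[nu, c]
               for mu in range(4) for nu in range(4))
           for c in range(4)] for b in range(4)] for a in range(4)]
    scalar = 0
    for a in range(4):
        for b in range(4):
            for c in range(4):
                scalar += sp.Rational(1, 4) * s[a]*s[b]*s[c] * T[a][b][c]**2   # (1/4) T^a_bc T_a^bc
                scalar += sp.Rational(1, 2) * s[b] * T[a][b][c] * T[c][b][a]   # (1/2) T^a_bc T^cb_a
    for c in range(4):
        v = sum(T[a][c][a] for a in range(4))                                  # T^a_{ca}
        scalar -= s[c] * v**2                                                  # - T^a_{ca} T^bc_b
    return scalar

# Trivial Minkowski coframe: the torsion scalar vanishes identically.
assert torsion_scalar(sp.eye(4)) == 0

# Perturbation (32) of the trivial coframe with a few representative entries of lambda.
eps = sp.symbols('epsilon')
f1, f2, f3 = [sp.Function(n)(t, x, y, z) for n in ('f1', 'f2', 'f3')]
lam = sp.zeros(4, 4)
lam[0, 0], lam[0, 1], lam[1, 0], lam[2, 3] = f1, f2, -f2, f3   # one diagonal and three off-diagonal entries
T_pert = torsion_scalar(sp.eye(4) + eps * lam)
assert sp.simplify(sp.diff(T_pert, eps).subs(eps, 0)) == 0     # delta T vanishes at first order
```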
### Perturbed \(f(T)\) Teleparallel Field Equations: General Considering the perturbations of the field equations (15), we obtain \[\kappa\left[\Theta_{(ab)}+\delta\Theta_{(ab)}\right] =f_{TT}\left(T+\delta T\right)\,\left[S^{\ \ \ \ \mu}_{(ab)}+\delta S^{\ \ \ \ \mu}_{(ab)}\right]\left[\partial_{\mu}T+\partial_{\mu}\left(\delta T \right)\right]\] \[\quad+f_{T}\left(T+\delta T\right)\,\left[\overset{\circ}{G}_{ab }+\overset{\circ}{\delta G}_{ab}\right]\] \[\quad+\frac{g_{ab}}{2}\left[f\left(T+\delta T\right)-\left(T+ \delta T\right)\,f_{T}\left(T+\delta T\right)\right], \tag{33a}\] \[0 =f_{TT}\left(T+\delta T\right)\,\left[S^{\ \ \ \ \mu}_{[ab]}+\delta S^{\ \ \ \ \mu}_{[ab]}\right]\partial_{\mu}\left(T+\delta T\right), \tag{33b}\] which to the first order in the perturbations yields \[\kappa\,\delta\Theta_{(ab)} \approx\left[f_{TTT}\,S_{(ab)}^{\ \ \ \ \ \mu}\partial_{\mu}T+f_{TT}\,\left( \overset{\circ}{G}_{ab}-\frac{T}{2}\,g_{ab}\right)\right]\,\delta T+f_{T}\, \delta\overset{\circ}{G}_{ab}\] \[\qquad+f_{TT}\left[\delta S_{(ab)}^{\ \ \ \ \ \mu}\partial_{\mu}T+S_{(ab)}^{\ \ \ \ \ \mu}\,\partial_{\mu}\,(\delta T)\right]+O\left(| \delta h|^{2}\right),\] (34a) \[0 \approx f_{TTT}\,\left[S_{[ab]}^{\ \ \ \ \ \ \mu}\partial_{\mu}T\right]\, \delta T+f_{TT}\,\left[S_{[ab]}^{\ where \(\kappa_{eff}=\frac{\kappa}{f_{T}(0)}\). As before, in general, \(S_{ab}^{\ \ \mu}\neq 0\), and therefore, the perturbations in the torsion scalar are constant. These equations represent perturbations in non-Minkowski but zero torsion scalar spacetimes. However, they can reduce to perturbations of the \(f(T)\) teleparallel field equations with a teleparallel Minkowksi geometry when \(S_{ab}^{\ \ \mu}=0\) and \(\overset{\circ}{G}_{ab}=0\), which are the conditions that are compatible with a teleparallel Minkowski spacetime, as defined in Section III.1. ### Perturbed \(f(T)\) Teleparallel Field Equations: The Zero Torsion Scalar Perturbation Limit We are curious to know what happens in the restricted perturbation scheme in which \(\delta T\to 0\) only. Starting with Equation (34), we take the limit \(\delta T\to 0\), and these perturbed field equations become: \[\kappa\,\delta\Theta_{(ab)} \approx f_{T}\,\overset{\circ}{G}_{ab}+f_{TT}\left[\delta S_{(ab)}^{\ \ \mu}\,\partial_{\mu}T+S_{(ab)}^{\ \ \mu}\,\partial_{\mu}\left(\delta T\right) \right]+O\left(|\delta h|^{2}\right), \tag{37a}\] \[0 \approx f_{TT}\,\left[\delta S_{[ab]}^{\ \ \mu}\partial_{\mu}T+S_{[ab]}^{\ \ \mu}\, \partial_{\mu}\left(\delta T\right)\right]+O\left(|\delta h|^{2}\right). \tag{37b}\] Looking at Equation (37b), given that in general, \(S_{[ab]}^{\ \ \mu}\neq 0\) and \(\delta S_{[ab]}^{\ \ \ \mu}\neq 0\) (or equivalently, the torsion tensor and perturbations of the torsion tensor are non-trivial, respectively), we observe that if the torsion scalar is not constant, \(\partial_{\mu}T\neq 0\), and then the perturbations of the torsion scalar are also not constant, that is, \(\partial_{\mu}(\delta T)\neq 0\). Conversely, if \(\partial_{\mu}T=0\), then \(\partial_{\mu}(\delta T)=0\). ### Perturbed \(f(T)\) Teleparallel Field Equations: Minkowski For the Minkowski spacetimes, as defined in Section III.1, since the torsion tensor is zero by definition, the superpotential terms \(S_{(ab)}^{\ \ \ \mu}=S_{[ab]}^{\ \ \ \mu}=0\). 
Further, the Einstein tensor \(\overset{\circ}{G}_{ab}=0\), and as argued before, \(f(0)=0\), so that Equations (36a) and (36b) reduce as follows: \[\kappa_{eff}\,\delta\Theta_{(ab)} \approx\delta\overset{\circ}{G}_{ab}+O\left(|\delta h|^{2}\right), \tag{38a}\] \[0 \approx O\left(|\delta h|^{2}\right). \tag{38b}\] Equation (38b) for the antisymmetric part of the field equations is identically satisfied, while Equation (38a) shows that a variation \(\overset{\circ}{\delta G}_{ab}\) associated with a perturbation is directly related to a variation of the energy-momentum tensor \(\delta\Theta_{(ab)}\). This shows that the perturbations of Minkowski spacetime as defined in Section III.1 for \(f(T)\) teleparallel gravity follow the perturbative treatments of Minkowski spacetime in GR. ## V Effects of perturbations and the Minkowski spacetime symmetries conditions for stability ### Rotation/Boost Perturbation in a Minkowski Background We would like to know if orthonormal coframe perturbations as expressed by Equation (32) lead to the stability of a pure Minkowski spacetime background. To achieve this goal, we will first test the stability for the rotation/boost perturbations as described in Equation (32). Secondly, we will also test the stability and its impact for a translated form of this Equation (32). We will finish by studying the effects of the trace, symmetric, and anti-symmetric parts of perturbation, and their respective impacts on torsion and superpotential perturbations. In fact, Equation (32) for the orthonormal gauge is exactly the rotation/boost perturbation in Minkowski spacetime. The perturbation is described as follows: \[\delta h^{a}_{\ \mu}=\lambda^{a}_{\ b}\,h^{b}_{\ \mu}. \tag{39}\] By substituting Equation (18) inside Equation (38a), the field equation with the Equation (39) perturbation inside is exactly: \[\kappa_{eff}\,\delta\Theta_{(ab)} \approx\left(h_{a}^{\ \mu}h_{b}^{\ \nu}\right)\left[h_{k}^{\ \alpha}\,h_{\ \mu}^{m}\,\delta\overset{\ \ \(\delta T\) is expressed by Equation (C1) in Appendix C. This last equation can be summarized as: \[\delta T\to 0\qquad\quad\text{for}\,T^{a}_{\ \mu\nu}=\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\to 0. \tag{41}\] From here, we obtain that the condition for \(\delta T\to 0\) is described by the zero torsion tensor criteria \(T^{a}_{\ \mu\nu}=0\) relation as: \[\partial_{\mu}\left(h^{a}_{\ \nu}\right)\approx\partial_{\nu}\left(h^{a}_{\ \mu}\right) \tag{42}\] From Equation (A13), and by substituting Equation (39), the superpotential perturbation \(\delta S_{ab}^{\ \ \mu}\) is expressed by Equation (C2) in Appendix C. This equation can be summarized as: \[\delta S_{ab}^{\ \ \mu}\to 0\qquad\quad\text{for}\ \delta T^{a}_{\ \mu\nu}= \partial_{\mu}\ (\lambda^{a}_{\ c}\,h^{c}_{\ \nu})-\partial_{\nu}\ \left(\lambda^{a}_{\ c}\,h^{c}_{\ \mu} \right)\to 0. \tag{43}\] From this result, we obtain that the condition for \(\delta S_{ab}^{\ \ \mu}\to 0\) is also described by the zero perturbed torsion tensor criteria \(\delta T^{a}_{\ \ \mu\nu}=0\) relation as: \[\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}\right)\approx\partial_{ \nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}\right). \tag{44}\] Equation (44) (the zero perturbed torsion criteria) is complementary to Equation (42) (zero torsion criteria) for obtaining the limit \(\delta S_{ab}^{\ \ \mu}\to 0\). We apply Equation (42) before applying Equation (44). 
From here, the Equations (42) and (44) are the **two fundamental symmetry conditions for Minkowski spacetime stability**. If we set \(\delta T\to 0\) and \(\delta S_{ab}^{\ \ \mu}\to 0\) for Equations (36a) and (36b) for all zero torsion spacetimes, we still respect Equations (42) and (44), as for pure Minkowski spacetimes. Hence, the zero torsion tensor and zero perturbed torsion tensor criterions are still valid for all zero torsion spacetimes, Minkowski or not. Even for the constant torsion spacetimes, by always setting \(\delta T\to 0\) and \(\delta S_{ab}^{\ \ \mu}\to 0\) inside Equations (35a) and (35b), we respect again Equations (42) and (44), as for the zero torsion scalar spacetimes. This is another generalization of the Minkowski spacetime result to a most general class of spacetimes as the constant torsion ones. There are some other consequences for Minkowski spacetime on a proper frame. By applying the null covariant derivative criteria to Equation (39), we use Equation (C3) in the Appendix C result to obtain as a relation: \[\delta\Gamma^{\rho}_{\ \nu\mu}=h_{a}^{\ \rho}\left[\partial_{\mu}\left( \lambda^{a}_{\ b}\,h^{b}_{\ \nu}\right)-\left(h_{c}^{\ \sigma}\,\partial_{\mu}\,h^{c}_{\ \nu}\right)\left(\lambda^{a}_{\ b}\,h^{b}_{\ \sigma}\right)\right], \tag{45}\] where \(\Gamma^{\rho}_{\ \nu\mu}=h_{c}^{\ \rho}\,\partial_{\mu}\,h^{c}_{\ \nu}\) is the Weitzenbock connection for a proper frame. For trivial coframes as \(h^{a}_{\ \mu}=\delta^{a}_{\ \mu}=Diag[1,1,1,1]\), Equation (45) becomes: \[\delta\Gamma^{\rho}_{\ \nu\mu}=h^{\ \rho}_{a}\left[\partial_{\mu}\left( \lambda^{a}_{\ b}\,h^{b}_{\ \mu}\right)\right]=\delta_{a}^{\ \rho}\,\partial_{\mu}\left( \lambda^{a}_{\ b}\right)\,\delta^{b}_{\ \mu}. \tag{46}\] In the next subsection, we will study the effect of a translation applied to the perturbation described by Equation (39) on Equations (45) and (46). The goal is to know the effects of the perturbations on the Weitzenbock connection and its perturbation. We can now see by the Equations (40)-(46) the effect of the perturbation described by Equation (39), maintaining the proper frame and respecting the \(GL(4,\mathbb{R})\) invariance transformation. In addition, Equations (42) and (44) give the Minkowski spacetime stability conditions on proper frames for the perturbation described by Equation (39) [39; 40; 41; 42]. ### General Linear Perturbation in a Minkowski Background A more general perturbation scheme requires one to deal with the following general linear perturbation: \[\delta h^{a}_{\ \mu}=\lambda^{a}_{\ b}\,h^{b}_{\ \mu}+\epsilon^{a}_{\ \mu}, \tag{47}\] where \(|\lambda^{a}_{\ b}|,|\epsilon^{a}_{\ \mu}|\ll 1\). We have here the transformation described by Equation (39), superposed with a translation in Minkowski tangent space. For the Equation (47) perturbation, Equation (40) becomes as follows: \[\kappa_{eff}\,\delta\Theta_{(ab)} \approx (h_{a}^{\ \mu}h_{b}^{\ \nu})\left[h_{k}^{\ \alpha}\,h_{\ \mu}^{\ m}\,\delta\overset{k}{R}_{\ \ m\alpha\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\, \left[h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h^{e}_{\ \mu}\,h^{f}_{\ \nu}\right]\,h_{k}^{\ \alpha}\,h_{\ \sigma}^{m}\,\delta \overset{\circ k}{R}_{\ \ m\alpha\rho}\right] \tag{48}\] \[\ \ +O\left(|\delta h|^{2}\right),\] \[0 \approx O\left(|\delta h|^{2}\right).\] Here, again we obtain the perturbed FEs in terms of \(\delta\overset{\circ k}{R}_{\ \ m\alpha\rho}\) and \(h^{a}_{\ \mu}\). 
As for Equation (40), \(\delta\overset{\circ k}{R}_{\ \ m\alpha\nu}\to 0\), we still then obtain \(\delta\Theta_{(ab)}\to 0\) for Equation (47), as is also required by GR and TEGR [39; 40; 41; 42]. We might express Equation (48) in terms of \(\lambda^{a}_{\ b}\) and \(\epsilon^{a}_{\ \mu}\). Here again, we have shown that pure Minkowski spacetime is still stable from the zero curvature criteria, as required by teleparallel postulates. From Equation (A8), and by substituting Equation (47), the torsion scalar perturbation \(\delta T\) is expressed by Equation (C4) in Appendix C and can be summarized as: \[\delta T\to 0\ \ \ \ \ \ \ \ \ \ \ \text{for}\ T^{a}_{\ \mu\nu}=\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\to 0. \tag{49}\] The condition for \(\delta T\to 0\) is still described by Equation (42) for the zero torsion tensor criteria \(T^{a}_{\ \ \mu\nu}=0\). From Equation (138), and by substituting Equation (47), the superpotential perturbation \(\delta S_{ab}^{\ \ \mu}\) is expressed by Equation (C5) in Appendix C and can also be summarized as: \[\delta S_{ab}^{\ \ \mu}\to 0. \tag{50}\] Equation (50) is satisfied if we respect \(\partial_{a}\epsilon_{b}^{\ \ \mu}=\partial_{b}\epsilon_{a}^{\ \ \mu}=0\) (a constant translation condition for Equation (47)) and after applying the Equation (42) criteria. The condition for \(\delta S_{ab}^{\ \ \mu}\to 0\) is still described by Equation (44) for the zero perturbed torsion tensor criteria \(\delta T^{a}_{\ \ \mu\nu}=0\), only if the constant translation criteria are respected as: \[\partial_{\mu}\epsilon^{a}_{\ \ \nu}=\partial_{\nu}\epsilon^{a}_{\ \ \mu}=0. \tag{51}\] Hence, for the Equation (47) perturbation, we still respect Equations (42) and (44) as the two first symmetry conditions for Minkowski spacetime stability, but we must also respect Equation (51) before Equation (44). A simple translation does not affect these Equations (42) and (44) only if we respect Equation (51), and the translation term \(\epsilon^{a}_{\ \ \nu}\) must be constant inside Equation (47). This constant translation criteria as expressed by Equation (51) is a **third symmetry condition for Minkowski spacetime stability**. As for Equations (45) and (46), we apply the null covariant derivative criteria to Equation (47) and we obtain as a relation: \[0 =\partial_{\mu}\,\left(\lambda^{a}_{\ \ b}\,h^{b}_{\ \nu}+ \epsilon^{a}_{\ \ \nu}\right)-\left(h_{c}^{\ \rho}\,\partial_{\mu}\,h^{c}_{\ \nu}\right)\,\left(\lambda^{a}_{\ \ b}\,h^{b}_{\ \rho}+ \epsilon^{a}_{\ \ \rho}\right)-\delta\Gamma^{\rho}_{\ \ \nu\mu}h^{a}_{\ \ \rho}\] \[\quad\Rightarrow\delta\Gamma^{\rho}_{\ \ \nu\mu}=h_{a}^{\ \rho}\left[ \partial_{\mu}\left(\lambda^{a}_{\ \ b}\,h^{b}_{\ \nu}\right)-\left(h_{c}^{\ \sigma}\,\partial_{\mu}\,h^{c}_{\ \nu}\right)\left(\lambda^{a}_{\ b}\,h^{b}_{\ \sigma}+ \epsilon^{a}_{\ \sigma}\right)\right]. \tag{52}\] where \(\Gamma^{\sigma}_{\ \ \nu\mu}=h_{c}^{\ \sigma}\,\partial_{\mu}\,h^{c}_{\ \nu}\) is the Weitzenbock connection for a proper frame and \(\partial_{\mu}\,\epsilon^{a}_{\ \nu}=0\) because of constant translation. Equation (52) is slightly different from Equation (45) according to the term \(-\left(h_{c}^{\ \sigma}\,\partial_{\mu}\,h^{c}_{\ \nu}\right)\,\epsilon^{a}_{\ \sigma}\). For non-trivial coframes (i.e., \(\partial_{\mu}\,h^{c}_{\ \nu}\neq 0\)), Equation (52) is not invariant under Equation (51). 
For trivial coframes (i.e., \(\partial_{\mu}\,h^{c}_{\ \nu}=0\)), Equation (52) becomes exactly Equation (46), as for the perturbation described by Equation (39). From this result, we now respect the constant coframe criteria as (or null Weitzenbock connection \(\Gamma^{\rho}_{\ \ \nu\mu}=0\) criteria): \[\partial_{\mu}\,h^{c}_{\ \nu}=0. \tag{53}\] With Equation (53), we also satisfy the invariance under Equation (51), the constant translation criteria for the Weitzenbock connection perturbation. Hence, Equations (45) and (52) show that the Weitzenbock connection perturbation \(\delta\Gamma^{\rho}{}_{\nu\mu}\) is invariant only if we respect Equation (53), the constant coframe criteria. This criteria, as expressed by Equation (53), is a **fourth symmetry condition for Minkowski spacetime stability**. Now, Equations (48)-(53) generalize Equations (40)-(46) by applying a constant translation \(\epsilon^{a}{}_{\nu}\) to the linear transformation described by Equation (39), which maintains the proper frame and the invariance under the \(GL(4,\mathbb{R})\) transformation. By respecting Equation (51), the constant translation criteria, we still respect Equations (42) and (44) for Equation (47), and this generalization shows that Minkowski spacetime and all zero torsion spacetimes are stable everytime [39; 40; 41; 42]. However, Equations (45) and (52), both giving Equation (46), show that the Weitzenbock connection perturbation \(\delta\Gamma^{\rho}{}_{\nu\mu}\) is invariant only if we work with constant or trivial coframes respecting Equation (53). ### Perturbations on Trivial Coframes by Each Part of the Perturbation Before properly dealing with more complex cases of coframes, it is imperative to deal with perturbations on the trivial coframe. This coframe is defined as follows: \[h^{a}{}_{\mu}=\delta^{a}{}_{\mu}=Diag\left[1,\,1,\,1,\,1\right]. \tag{54}\] The coframe described by Equation (54) is defined in the orthonormal gauge. This equation (54) respects Equation (53), the fourth symmetry condition for Minkowski spacetime stability. From there, we will study the following general perturbations which will be applied to Equation (54) in terms of \(\lambda^{a}{}_{b}\) and respecting Equations (42) and (44), and if necessary, Equation (51). In addition, we will compare with another recent similar study on so-called "cosmological" perturbations in order to better situate the results for Minkowski spacetime for a scale factor of 1 [24]. Their \(\lambda^{a}{}_{b}\) equivalent matrix is expressed as: \[\left(\lambda^{a}{}_{b}\right)_{Golov}=\left[\begin{array}{cc}\phi&\partial _{a}\,\xi+v_{a}\\ \partial_{i}\,\beta+u_{i}&\left[-\psi\,\delta^{a}_{j}+\partial^{2}_{a\,j}\sigma +\epsilon_{ajk}&\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{ h_{aj}}{2}\right]\end{array}\right], \tag{55}\] where we must respect the constraints \(\partial^{a}\,v_{a}=0\), \(\partial^{k}\,w_{k}=0\), \(\partial^{i}\,u_{i}=0\), and \(\partial^{a}\,c_{a}=0\), and the tensorial part is also traceless. Trace We have first as \(\lambda^{a}_{\ b}\) for a full trace perturbation: \[\left(\lambda^{a}_{\ b}\right)_{Trace}=\lambda=Trace\left[Diag\left[a_{00},\,a_{ 11},\,a_{22},\,a_{33}\right]\right]=a_{00}+a_{11}+a_{22}+a_{33}. 
\tag{56}\]

Equation (39) will be exactly \(\left(\delta h^{a}_{\ \mu}\right)_{Trace}=\frac{\lambda}{4}\delta^{a}_{\ \mu}\), and by setting \(h^{a}_{\ \mu}=\delta^{a}_{\ \mu}\), Equations (41) and (43) are:

\[\delta T\approx O\left(|\delta h|^{2}\right)\to 0, \tag{57}\]

which respects Equation (42), and

\[\delta S_{ab}^{\ \ \mu}\rightarrow\left[\frac{1}{8}\left(\partial_{a}\left(\lambda\,\delta_{b}^{\ \mu}\right)-\partial_{b}\left(\lambda\,\delta_{a}^{\ \mu}\right)\right)+\frac{1}{4}\left(\partial_{b}\left(\lambda\,\delta_{a}^{\ \mu}\right)-\partial_{a}\left(\lambda\,\delta_{b}^{\ \mu}\right)\right)\right.\]
\[\left.-\frac{1}{4}\,\delta_{b}^{\ \mu}\,\delta^{c}_{\ \rho}\left[\partial_{c}\,\left(\lambda\,\delta_{a}^{\ \rho}\right)-\partial_{a}\,\left(\lambda\,\delta_{c}^{\ \rho}\right)\right]+\frac{1}{4}\,\delta_{a}^{\ \mu}\,\delta^{c}_{\ \rho}\left[\partial_{c}\,\left(\lambda\,\delta_{b}^{\ \rho}\right)-\partial_{b}\,\left(\lambda\,\delta_{c}^{\ \rho}\right)\right]\right]\]
\[+O\left(|\delta h|^{2}\right)\qquad\text{by applying Equation (42) (the zero torsion criteria)}\]
\[\to 0\qquad\qquad\text{for}\ \delta T^{a}_{\ \mu\nu}=\frac{1}{4}\left[\partial_{\mu}\left(\lambda\right)\delta^{a}_{\ \nu}-\partial_{\nu}\left(\lambda\right)\delta^{a}_{\ \mu}\right]\to 0.\]

In this case, Equation (44) reduces to \(\partial_{\mu}\left(\lambda\right)\delta^{a}_{\ \nu}\approx\partial_{\nu}\left(\lambda\right)\delta^{a}_{\ \mu}\).

#### iv.2.2 Full Symmetric Perturbation

For the full symmetric perturbation, we have as the \(\lambda^{a}_{\ b}\) perturbation with null diagonal components:

\[\left(\lambda^{a}_{\ b}\right)_{Sym}=\tilde{\lambda}^{a}_{\ b}=\left[\begin{array}{cccc}0&b_{10}&b_{20}&b_{30}\\ b_{10}&0&b_{12}&b_{13}\\ b_{20}&b_{12}&0&b_{23}\\ b_{30}&b_{13}&b_{23}&0\end{array}\right].
\tag{63}\] Equation (39) will be exactly \(\left(\delta h^{a}_{\ \mu}\right)_{Sym}=\tilde{\lambda}^{a}_{\ b}\,\delta^{b}_{\ \mu}\), and by setting \(h^{a}_{\ \mu}=\delta^{a}_{\ \mu}\), Equation (41) is still expressed by Equation (57), respecting the Equations (42) and (43): \[\delta S_{ab}^{\ \ \mu}= \Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\tilde{\lambda}^{c}_{ \ b}\delta_{c}^{\ \mu}\right)-\partial_{b}\left(\tilde{\lambda}^{c}_{\ a}\,\delta^{\ \mu}_{c}\right)\right)+\left(\partial_{b}\left(\tilde{\lambda}^{c}_{\ a}\,\delta^{\ \mu}_{c}\right)-\partial_{a}\left(\tilde{\lambda}^{c}_{\ b}\,\delta^{\ \mu}_{c}\right)\right)\] \[\qquad-\delta^{\ \mu}_{\ b}\,\delta^{c}_{\ \rho}\left[\partial_{c} \,\left(\tilde{\lambda}^{f}_{\ d}\delta^{\ \rho}_{f}\right)-\partial_{a}\,\left(\tilde{\lambda}^{c}_{\ f}\delta^{\ \rho}_{f}\right)\right]+\delta^{\mu}_{\ a}\,\delta^{c}_{\ \rho}\left[\partial_{c}\,\left(\tilde{\lambda}^{f}_{\ b}\delta^{\ \rho}_{f}\right)-\partial_{b}\,\left(\tilde{\lambda}^{c}_{\ c}\delta^{\ \rho}_{f}\right)\right]\Bigg{]}\] \[\qquad+O\left(|\delta h|^{2}\right)\qquad\text{by applying Equation (\ref{eq:20}) (the zero torsion criteria).}\] \[\to 0\qquad\qquad\qquad\qquad\text{for}\ \delta T^{a}_{\ \mu\nu}=\partial_{\mu}\,\left(\lambda^{a}_{\ c}\right)\delta^{c}_{\ \nu}-\partial_{\nu}\,\left(\lambda^{a}_{\ c}\right),\delta^{c}_{\ \mu}\to 0. \tag{64}\] Equation (44) will be expressed as: \[\partial_{\mu}\,\left(\tilde{\lambda}^{a}_{\ c}\right)\delta^{c}_{\ \nu}\approx\partial_{\nu}\,\left(\tilde{\lambda}^{a}_{\ c}\right),\delta^{c}_{ \ \mu}. \tag{65}\] By comparing with Equation (55) again, we obtain the following equations for the rectangular coordinates [24]: * Equation (63) becomes: \[\left(\lambda^{a}_{\ b}\right)_{Sym\,Golov} =\left(\tilde{\lambda}^{a}_{\ b}\right)_{Golov}\] \[=\left[\begin{array}{cc}0&\partial_{a}\,\xi+v_{a}\\ \partial_{a}\,\xi+v_{a}\,\,\left[\partial^{2}_{a\,j}\sigma+\partial_{j}\,c_{ a}+\frac{h_{aj}}{2}\right]\end{array}\right],\] (66) where \(a\neq j\neq k\), \(\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)=0\) and \(\partial_{a}\,\xi+v_{a}=\partial_{i}\,\beta+u_{i}\) because we have a symmetric perturbation. As a supplement, we deduce that \(\phi=0\) and \(\psi=0\) for Equation (63), because of the null diagonal components. * The Equation (65) components will be expressed in terms of Equation (66): \[\partial_{\mu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\, \,\nu} \approx\partial_{\nu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\,\, \mu}\] \[\partial_{\mu}\,\left(\partial^{2}_{a\,j}\sigma+\partial_{j}\,c_{ a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\,\,\nu} \approx\partial_{\nu}\,\left(\partial^{2}_{a\,j}\sigma+\partial_{j}\,c_{a}+ \frac{h_{aj}}{2}\right)\,\delta^{a}_{\,\,\mu}\] (67) #### iv.2.3 Full Antisymmetric Perturbation For the full antisymmetric perturbation, we have as the \(\lambda^{a}_{\,\,\,b}\) perturbation with null diagonal components: \[\left(\lambda^{a}_{\,\,\,b}\right)_{AntiSym}=\bar{\lambda}^{a}_{\,\,\,b}= \left[\begin{array}{cccc}0&b_{10}&b_{20}&b_{30}\\ -b_{10}&0&b_{12}&b_{13}\\ -b_{20}&-b_{12}&0&b_{23}\\ -b_{30}&-b_{13}&-b_{23}&0\end{array}\right]. 
\tag{68}\]

Equation (39) will be exactly \(\left(\delta h^{a}_{\ \mu}\right)_{AntiSym}=\bar{\lambda}^{a}_{\ b}\,\delta^{b}_{\ \mu}\), and by setting \(h^{a}_{\ \mu}=\delta^{a}_{\ \mu}\), Equation (41) is still expressed by Equation (57), respecting Equations (42) and (43):

\[\delta S_{ab}^{\ \ \mu}=\left[\frac{1}{2}\left(\partial_{a}\left(\bar{\lambda}^{c}_{\ b}\,\delta_{c}^{\ \mu}\right)-\partial_{b}\left(\bar{\lambda}^{c}_{\ a}\,\delta_{c}^{\ \mu}\right)\right)+\left(\partial_{b}\left(\bar{\lambda}^{c}_{\ a}\,\delta_{c}^{\ \mu}\right)-\partial_{a}\left(\bar{\lambda}^{c}_{\ b}\,\delta_{c}^{\ \mu}\right)\right)\right.\]
\[\left.\qquad-\delta^{\mu}_{\ b}\,\delta^{c}_{\ \rho}\left[\partial_{c}\,\left(\bar{\lambda}^{f}_{\ a}\,\delta_{f}^{\ \rho}\right)-\partial_{a}\,\left(\bar{\lambda}^{f}_{\ c}\,\delta_{f}^{\ \rho}\right)\right]+\delta^{\mu}_{\ a}\,\delta^{c}_{\ \rho}\left[\partial_{c}\,\left(\bar{\lambda}^{f}_{\ b}\,\delta_{f}^{\ \rho}\right)-\partial_{b}\,\left(\bar{\lambda}^{f}_{\ c}\,\delta_{f}^{\ \rho}\right)\right]\,\right]\]
\[\qquad+O\left(|\delta h|^{2}\right)\qquad\text{by applying Equation (42) (the zero torsion criteria)}\]
\[\to 0\qquad\qquad\text{for}\ \delta T^{a}_{\ \mu\nu}=\partial_{\mu}\,\left(\bar{\lambda}^{a}_{\ c}\right)\delta^{c}_{\ \nu}-\partial_{\nu}\,\left(\bar{\lambda}^{a}_{\ c}\right)\,\delta^{c}_{\ \mu}\to 0. \tag{69}\]

Equation (44) will be expressed as:

\[\partial_{\mu}\,\left(\bar{\lambda}^{a}_{\ c}\right)\delta^{c}_{\ \nu}\approx\partial_{\nu}\,\left(\bar{\lambda}^{a}_{\ c}\right)\,\delta^{c}_{\ \mu}. \tag{70}\]

By comparing with Equation (55) again, we obtain the following equations for the rectangular coordinates [24]:

* Equation (68) becomes:
\[\left(\lambda^{a}_{\ b}\right)_{AntiSym\,Golov}=\left(\bar{\lambda}^{a}_{\ b}\right)_{Golov}=\left[\begin{array}{cc}0&\partial_{a}\,\xi+v_{a}\\ -\left(\partial_{a}\,\xi+v_{a}\right)&\left[\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right]\end{array}\right], \tag{71}\]
where \(a\neq j\neq k\), \(\partial_{a\,j}^{2}\sigma=-\partial_{j\,a}^{2}\sigma=0\), \(\partial_{j}\,c_{a}=-\partial_{a}\,c_{j}\), \(h_{aj}=-h_{ja}\), and \(\partial_{a}\,\xi+v_{a}=-\left(\partial_{i}\,\beta+u_{i}\right)\) because we have an antisymmetric perturbation. We deduce again that \(\phi=0\) and \(\psi=0\) for Equation (68), because of the null diagonal components.

* The Equation (70) components will be expressed in terms of Equation (71):
\[\partial_{\mu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\ \nu}\approx\partial_{\nu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\ \mu}\]
\[\partial_{\mu}\,\left(\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\ \nu}\approx\partial_{\nu}\,\left(\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\ \mu}\]

#### iv.2.4 A Mixed Situation and Minkowski Spacetime

Here, we will treat the most general case. It is the combination of the three previous sorts, as:

\[\left(\lambda^{a}_{\ b}\right)_{Mixed}=\lambda^{a}_{\ b}=\frac{\delta^{a}_{\ b}}{4}\,\left(\lambda^{a}_{\ b}\right)_{Trace}+\left(\lambda^{a}_{\ b}\right)_{Sym}+\left(\lambda^{a}_{\ b}\right)_{AntiSym},\]
\[=\frac{\lambda}{4}\delta^{a}_{\ b}+\tilde{\lambda}^{a}_{\ b}+\bar{\lambda}^{a}_{\ b}. \tag{73}\]

In general, we always obtain for Equation (73) that \(\left(\lambda^{a}_{\ b}\right)_{Mixed}\) is exactly Equation (55) when we compare it to the linear parametrization of ref [24].
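As a quick consistency check of the decomposition in Equation (73), the following sketch (ours; the antisymmetric entries are relabelled \(c_{ij}\) so that they are not confused with the \(b_{ij}\) of Equation (63)) recombines the trace piece (56), the null-diagonal symmetric piece (63), and the antisymmetric piece (68), and confirms that the first two terms reproduce the matrix-symmetric part of \(\lambda^{a}_{\ b}\) while the last term reproduces its antisymmetric part:

```python
# A consistency check (ours) of the decomposition in Equation (73).  The trace piece of
# Equation (56), the null-diagonal symmetric piece of Equation (63), and the antisymmetric
# piece of Equation (68) are recombined and split back into symmetric and antisymmetric parts.
import sympy as sp

a00, a11, a22, a33 = sp.symbols('a00 a11 a22 a33')
b10, b20, b30, b12, b13, b23 = sp.symbols('b10 b20 b30 b12 b13 b23')
c10, c20, c30, c12, c13, c23 = sp.symbols('c10 c20 c30 c12 c13 c23')   # antisymmetric entries

lam_trace = ((a00 + a11 + a22 + a33) / 4) * sp.eye(4)        # (lambda/4) delta^a_b, Equation (56)

lam_sym = sp.Matrix([[0,   b10, b20, b30],                   # Equation (63)
                     [b10, 0,   b12, b13],
                     [b20, b12, 0,   b23],
                     [b30, b13, b23, 0]])

lam_anti = sp.Matrix([[0,    c10,  c20,  c30],               # Equation (68), entries relabelled
                      [-c10, 0,    c12,  c13],
                      [-c20, -c12, 0,    c23],
                      [-c30, -c13, -c23, 0]])

lam_mixed = lam_trace + lam_sym + lam_anti                   # Equation (73)

# the first two terms give the matrix-symmetric part, the last one the antisymmetric part
assert ((lam_mixed + lam_mixed.T) / 2 - lam_trace - lam_sym).expand() == sp.zeros(4, 4)
assert ((lam_mixed - lam_mixed.T) / 2 - lam_anti).expand() == sp.zeros(4, 4)
```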
Then, we obtain as the components of Equation (44) the most general relations for perturbation in the Minkowski background as: \[\partial_{\mu}\,\phi\delta^{c}_{\phantom{c}\nu} \approx\,\,\partial_{\nu}\,\phi,\delta^{c}_{\phantom{c}\mu} \tag{74a}\] \[\partial_{\mu}\,\left(\partial_{a}\,\xi+v_{a}\right)\delta^{c}_{ \phantom{c}\nu} \approx\,\,\partial_{\nu}\,\left(\partial_{a}\,\xi+v_{a}\right), \delta^{c}_{\phantom{c}\mu}\] (74b) \[\partial_{\mu}\,\left(\partial_{i}\,\beta+u_{i}\right)\delta^{c}_{ \phantom{c}\nu} \approx\,\,\partial_{\nu}\,\left(\partial_{i}\,\beta+u_{i}\right), \delta^{c}_{\phantom{c}\mu}\] (74c) \[\partial_{\mu}\,\left[-\psi\,\delta^{a}_{j}+\partial^{2}_{a\,j} \sigma+\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a} +\frac{h_{aj}}{2}\right]\delta^{c}_{\phantom{c}\nu}\] \[\approx\,\,\partial_{\nu}\,\left[-\psi\,\delta^{a}_{j}+\partial ^{2}_{a\,j}\sigma+\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{ j}\,c_{a}+\frac{h_{aj}}{2}\right]\delta^{c}_{\phantom{c}\mu}. \tag{74d}\] Equation (39) will be exactly \(\left(\delta h^{a}_{\phantom{a}\mu}\right)_{Mixed}=\lambda^{a}_{\phantom{a}b} \,\delta^{b}_{\phantom{a}\mu}\), and we exactly obtain Equations (41) and (43) by respecting Equations (42) and (44) via superposition. In Equation (73), the first two terms (Trace and Symmetric terms) represent the symmetric part of \(\left(\lambda^{a}_{\phantom{a}b}\right)_{Mixed}\), and the last term (Antisymmetric term) represents the Antisymmetric part of \(\left(\lambda^{a}_{\phantom{a}b}\right)_{Mixed}\) For every case, we satisfy the Equations (42), (44), (51), and (53) in the supplement of the energy-momentum stability from \(\delta{R^{\ \ Equation (39) perturbation (boost/rotation perturbation). These Equations (42) and (44) are the two first fundamental Minkowski spacetime stability conditions on proper frames. However, if we use the more general linear perturbation as defined by Equation (47), we need to respect the constant translation criteria as defined by Equation (51) in order for the superpotential perturbation \(\delta S_{ab}^{\ \ \mu}\) to proceed to zero. This is a third Minkowski spacetime stability condition for the proper frames to respect for the Equation (47) perturbation. In this way, by respecting Equation (51), we then respect the Equations (42) and (44), as for the Equation (39) perturbation. Another consequence from the Equation (47) perturbation is about the Weitzenbock connection perturbation \(\delta\Gamma^{\rho}_{\ \ \nu\mu}\). Equations (45) and (52) have shown that we need to respect the constant coframe criteria as defined by Equation (53). Equation (53) is a fourth Minkowski spacetime stability condition for proper frames to respect for the Equation (47) perturbation, allowing for the invariance for the Weitzenbock connection perturbation. To generalize, these steps applied for the Minkowski spacetime, given these stability criteria, can also be applied for null torsion scalar spacetimes, as well as the constant torsion scalar spacetimes. Indeed, with the analysis made in Sections V.1 and V.2, and the stability criteria obtained for the Minkowski spacetime, Equations (36a) and (36b) for the null torsion scalar spacetimes make it possible to generalize these treatments, and in the end to obtain the same stability criteria, which are Equations (42) and (44), and if necessary, Equations (51) and (53). 
This is also the case for the constant torsion scalar spacetimes described by the Equations (35a) and (35b) if we take the limits \(\delta T\to 0\) and \(\delta S_{ab}^{\ \ \mu}\to 0\), as for the Minkowski and null torsion scalar spacetimes. One can expand upon the work here on perturbations in covariant teleparallel gravity to more general teleparallel spacetimes and to broader classes of teleparallel gravity theories. For example, in the case of static spherically symmetric teleparallel spacetimes [43; 44] in which the torsion scalar is not constant, what is the stability of the static spherically symmetric solution? Further, this perturbation scheme can also be applied to cosmological geometries in \(f(T)\) teleparallel gravity [21], thereby enhancing the previous work of [24]. Additionally, one can also look at perturbations in other non-\(f(T)\) teleparallel gravity theories. The current analysis could also bring some light to a couple of unresolved challenges in teleparallel gravity. The first challenge concerns the number of degrees of freedom (DOF) in 4-dimensional \(f(T)\) teleparallel gravity [45; 46; 47; 48]. In [45], the authors employ a Hamiltonian analysis to determine that \(f(T)\) teleparallel gravity has three extra DOF when compared to GR. Unfortunately, it appears that the analysis is flawed, in that it is not general, for they assumed a diagonal metric to reach some of their conclusions. Later, Ferraro and Guzman [46] made an argument that the number of extra DOF is 1. However, the analysis appears to be somewhat incomplete and only applicable to teleparallel gravity geometries in which the torsion scalar is constant [48]. More recently, the authors of [47] go through a Hamiltonian analysis to conclude that the number of extra DOF is 3. A couple of challenges in their results have been identified in [48]. Obviously, this is still an unresolved problem which requires further investigation. Another unresolved complex physical problem is the strong coupling of teleparallel perturbations. This physical problem occurs as one approaches the Planck scale where the quantum field effects become non-negligible, particularly for second-order perturbations and higher. At these scales, the kinetic energy part will become dominant when compared to the gravity and background parts. This strong coupling issue with teleparallel perturbations needs further development and understanding within the covariant \(f(T)\) teleparallel-gravity framework. Here, with the material developed in this present paper, we have a more complete perturbation framework that is suitable for use in teleparallel gravity, and the toolkit needed for studying several and more complex problems in teleparallel gravity. ###### Acknowledgements. R.J.v.d.H. is supported by the Natural Sciences and Engineering Research Council of Canada, and by the W.F. James Chair of Studies in the Pure and Applied Sciences at St.F.X. A.L. is supported by an AARMS fellowship. ## Abbreviations The following abbreviations are used in this paper: * Field Equation GR General Relativity TEGR Teleparallel Equivalent of General Relativity DOF Degrees of Freedom ## Appendix A Perturbed Physical Quantities in Teleparallel Theories To complete the analysis of Teleparallel theories and geometries, we want to perturb various physical quantities that may be involved. As explained in Section IV.1, we are able to always consider perturbations of the co-frame only within a proper orthonormal gauge. 
\[\hat{h}^{\prime a}_{\ \mu} =h^{a}_{\ \mu}+\delta h^{a}_{\ \mu}, \tag{11a}\] \[\hat{\omega}^{\prime a}_{\ b\mu} =0,\] (11b) \[\hat{g}^{\prime}_{ab} =\eta_{ab}, \tag{11c}\] where \(\delta h^{a}=\delta h^{a}_{\ \mu}\,dx^{\mu}=\lambda^{a}_{\ b}h^{b}\). Here, we apply the coframe perturbations to the main physical and geometrical quantities involved in Teleparallel Gravity. 1. The inverse coframe perturbation \(\delta h^{\ \mu}_{a}\): \[h^{\ \mu}_{a}+\delta h^{\ \mu}_{a} =h^{\ \mu}_{a}+\left[\lambda^{b}_{\ a}\right]^{-1}\,h^{\ \mu}_{a},\] \[=h^{\ \mu}_{a}+\lambda^{\ b}_{a}\,h^{\ \mu}_{a},\] \[\Rightarrow\delta h^{\ \mu}_{a} =\lambda^{\ a}_{\ b}\,h^{\ \mu}_{a}\] (11) 2. Determinant of the co-frame \(h=\text{Det}(h^{a}_{\ \mu})\): \[h+\delta h =\text{Det}(h^{a}_{\ \mu}+\delta h^{a}_{\ \mu})\] \[\approx h+\text{Det}(\lambda^{a}_{\ b}\,h^{b}_{\ \mu})=h+\lambda\,h\] \[\Rightarrow\delta h \approx\lambda\,h\] (12) where \(\lambda=\text{Det}(\lambda^{a}_{\ b})\ll 1\) and \(\text{Det}(\delta h^{a}_{\ \mu})=\text{Det}(\lambda^{a}_{\ b}\,h^{\ b}_{\ \mu})=\lambda\,h\). 3. Metric tensor \(g_{\mu\nu}\): \[g_{\mu\nu}+\delta g_{\mu\nu} =\eta_{ab}\left[h^{a}_{\ \mu}+\delta h^{a}_{\ \mu}\right]\left[h^{b}_{\ \nu}+\delta h^{b}_{\ \nu}\right],\] \[\approx g_{\mu\nu}+\eta_{ab}\left[\delta h^{a}_{\ \mu}h^{b}_{\ \nu}+h^{a}_{\ \mu} \delta h^{b}_{\ \nu}\right]+O\left(|\delta h|^{2}\right),\] \[\Rightarrow\delta g_{\mu\nu} \approx\eta_{ab}\left[\delta h^{a}_{\ \mu}h^{b}_{\ \nu}+h^{a}_{\ \mu} \delta h^{b}_{\ \nu}\right]+O\left(|\delta h|^{2}\right).\] (13) 4. Torsion tensor \(T^{a}_{\ \mu\nu}\) and \(T^{\rho}_{\ \mu\nu}\): \[T^{a}_{\ \mu\nu}+\delta T^{a}_{\ \mu\nu} =\partial_{\mu}h^{a}_{\ \nu}+\partial_{\mu}\left(\delta h^{a}_{\ \nu}\right)- \partial_{\nu}h^{a}_{\ \mu}+\partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\] \[\approx T^{a}_{\ \mu\nu}+\left[\partial_{\mu}\left(\delta h^{a}_{\ \nu}\right)- \partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\right]+O\left(|\delta h|^{2}\right)\] \[\Rightarrow\delta T^{a}_{\ \mu\nu} \approx\left[\partial_{\mu}\left(\delta h^{a}_{\ \nu}\right)- \partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\right]+O\left(|\delta h|^{2}\right)\] (11) If we also have that \(T^{\rho}_{\ \mu\nu}=h_{a}^{\ \rho}\,T^{a}_{\ \mu\nu}\), then: \[T^{\rho}_{\ \mu\nu}+\delta T^{\rho}_{\ \mu\nu} =\left(h_{a}^{\ \rho}+\delta h_{a}^{\ \rho}\right)\left(T^{a}_{\ \mu\nu}+\delta T^{a}_{\ \mu\nu}\right)\] \[\approx T^{\rho}_{\ \mu\nu}+\delta h_{a}^{\ \rho}\,T^{a}_{\ \mu\nu}+h_{a}^{\ \rho}\, \delta T^{a}_{\ \mu\nu}+O\left(|\delta h|^{2}\right)\] \[\Rightarrow\delta T^{\rho}_{\ \mu\nu} \approx\delta h_{a}^{\ \rho}\,\left[\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right]+h_{a}^{\ \rho}\,\left[\partial_{\mu}\left(\delta h^{a}_{\ \nu}\right)- \partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\right]+O\left(|\delta h|^{2}\right)\] 5. 
Torsion scalar \(T\): \[T+\delta T =\frac{1}{4}\left(T^{a}_{\ \mu\nu}+\delta T^{a}_{\ \mu\nu}\right)\left(T_{a}^{\ \mu\nu}+\delta T^{a}_{\ \mu\nu}\right)+\frac{1}{2}\left(T^{a}_{\ \mu\nu}+\delta T^{a}_{\ \mu\nu}\right)\left(T^{\nu \mu}_{\ \ a}+\delta T^{\nu\mu}_{\ \ a}\right)\] \[\qquad\qquad-\left(T^{\nu}_{\ \mu\nu}+\delta T^{\nu}_{\ \mu\nu}+ \right)\left(T^{\rho\mu}_{\ \ \rho}+\delta T^{\rho\mu}_{\ \rho}\right)\] \[=T+\frac{1}{4}\left(\delta T^{a}_{\ \mu\nu}T_{a}^{\ \mu\nu}+T^{a}_{\ \mu\nu} \delta T_{a}^{\ \mu\nu}\right)+\frac{1}{2}\left(\delta T^{a}_{\ \mu\nu}T^{\nu\mu}_{\ \ a}+T^{a}_{\ \mu\nu}\delta T^{\nu\mu}_{\ \ a}\right)\] \[\qquad\qquad-\left(\delta T^{\nu}_{\ \mu\nu}T^{\rho\mu}_{\ \ \rho}+T^{\nu}_{\ \mu\nu} \delta T^{\rho\mu}_{\ \rho}\right)+O\left(|\delta h|^{2}\right)\] \[\Rightarrow\delta T =\frac{1}{4}\left(\delta T^{a}_{\ \mu\nu}T_{a}^{\ \mu\nu}+T^{a}_{\ \mu\nu} \delta T_{a}^{\ \mu\nu}\right)+\frac{1}{2}\left(\delta T^{a}_{\ \mu\nu}T^{\nu\mu}_{\ \ a}+T^{a}_{\ \mu\nu}\delta T^{\nu\mu}_{\ \ a}\right)\] \[\qquad\qquad-\left(\delta T^{\nu}_{\ \mu\nu}T^{\rho\mu}_{\ \ \rho}+T^{\nu}_{\ \mu\nu} \delta T^{\rho\mu}_{\ \rho}\right)+O\left(|\delta h|^{2}\right)\] (12) In terms of Equations (A) and (A), Equation (A) becomes as: \[\delta T= \frac{1}{4}\Bigg{[}\left(\partial_{\mu}\left(\delta h^{a}_{\ \nu}\right)- \partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\right)\left(\partial^{\mu}\,h^{\ \nu}_{\ a}-\partial^{\nu}\,h^{a}_{\ \mu}\right)+\left(\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right)\] \[\times\left(\partial^{\mu}\,\left(\delta h^{a}_{\ \nu}\right)- \partial^{\nu}\,\left(\delta h^{a}_{\ \mu}\right)\right)\Bigg{]}+\frac{1}{2}\Bigg{[}\left(\partial_{\mu}\left( \delta h^{a}_{\ \nu}\right)-\partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\right)\left(\partial^{\nu}\,h^{\mu}_{\ \ a}-\partial^{\mu}\,h^{\nu}_{\ \ a}\right)\] \[+\left(\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right)\left( \partial^{\nu}\,\left(\delta h^{\mu}_{\ \ a}\right)-\partial^{\mu}\,\left(\delta h^{\nu}_{\ \ a}\right)\right)\Bigg{]}\] \[-\Bigg{[}\left(\delta h^{\nu}_{a}\,\left[\partial_{\mu}\,h^{a}_{ \ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right]+h^{\ \nu}_{a}\,\left[\partial_{\mu}\left(\delta h^{a}_{\ \nu}\right)-\partial_{\nu}\left(\delta h^{a}_{\ \mu}\right)\right]\right)\left(h^{a}_{\ \rho}\left(\partial^{\rho}\,h^{\mu}_{\ \ a}-\partial^{\mu}\,h^{\rho}_{\ \ a}\right)\right)\] \[+O\left(|\delta h|^{2}\right).\] (13) 6. Lagrangian density \(\mathcal{L}_{Grav}\): \[\mathcal{L}_{Grav}+\delta\mathcal{L}_{Grav} =\frac{1}{2\kappa}\left(h+\delta h\right)\,f\left(T+\delta T \right),\] \[\approx\mathcal{L}_{Grav}+\frac{1}{2\kappa}\left[\delta h\,f\left( T\right)+h\,f_{T}\left(T\right)\,\delta T\right]+O\left(|\delta h|^{2}\right),\] \[\Rightarrow\delta\mathcal{L}_{Grav} \approx\frac{1}{2\kappa}\left[\delta h\,f\left(T\right)+h\,f_{T} \left(T\right)\,\delta T\right]+O\left(|\delta h|^{2}\right).\] (111) 7. Sum of the Torsion and Ricci Curvature scalar \(\overset{\circ}{R}+T\): Here, \(\overset{\circ}{R}\) is the Ricci scalar computed from the Levi-Civita connection. 
\[\delta(\overset{\circ}{R}+T) =\,\delta\left[\frac{2}{h}\,\delta_{\mu}\left(h\,T^{\nu\mu}_{\ \nu}\right)\right]=2\left[\delta\left(\frac{1}{h}\right)\,\delta_{\mu}\left( h\,T^{\nu\mu}_{\ \nu}\right)+\frac{1}{h}\delta_{\mu}\left[\delta\left(h\,T^{\nu\mu}_{\ \nu}\right)\right]\right]\] \[\approx\frac{2}{h}\,\left[-\frac{\delta h}{h}(\delta_{\mu}h)\,T^{ \nu\mu}_{\ \nu}+(\delta_{\mu}(\delta h))\,\,T^{\nu\mu}_{\ \nu}+(\delta_{\mu}h)\,\,\delta T^{\nu\mu}_{\ \nu}+h\,\delta_{\mu}\left(\delta T^{\nu\mu}_{\ \nu}\right)\right]\] \[\quad+O\left(|\delta h|^{2}\right)\] (112) By using Equation (101), Equation (112) becomes as: \[\delta(\overset{\circ}{R}+T)\approx \frac{2}{h}\left[\,-\,\frac{\delta h}{h}(\delta_{\mu}h)\,\left(h^{ a}_{\ \nu}\,[\partial^{\nu}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\nu}_{\ a}]\right)+(\delta_{ \mu}(\delta h))\,\left(h^{a}_{\ \nu}\,[\partial^{\nu}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\nu}_{\ a}]\right)\right.\] \[\left.+\left(\delta_{\mu}h\right)\,\left(\delta h^{a}_{\ \nu}\,[\partial^{\nu}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\nu}_{\ a}]+h^{a}_{ \ \nu}\,[\partial^{\nu}\,\left(\delta h^{\mu}_{\ a}\right)-\partial^{\mu}\, \left(\delta h^{\nu}_{\ a}\right)]\right)\right.\] \[\left.+\,h\,\delta_{\mu}\left(\delta h^{a}_{\ \nu}\,[\partial^{\nu}\,h^{\mu}_{\ a}- \partial^{\mu}\,h^{\nu}_{\ a}]+h^{a}_{\ \nu}\,[\partial^{\nu}\,\left(\delta h^{\mu}_{\ a}\right)-\partial^{\mu}\, \left(\delta h^{\nu}_{\ a}\right)]\right)\right]+O\left(|\delta h|^{2}\right)\] 8. Superpotential \(S_{ab}^{\ \mu}\): \[S_{ab}^{\ \mu}+\delta S_{ab}^{\ \mu}= \frac{1}{2}\left(T_{ab}^{\ \mu}+\delta T_{ab}^{\ \mu}+T^{\mu}_{\ ba}+\delta T^{\mu}_{\ ba}-T^{\mu}_{\ ab}-\delta T^{\mu}_{\ ab}\right)\] \[\quad-\left(h^{\mu}_{\ b}+\delta h^{\mu}_{\ b}\right)\left(T_{\ \rho a}^{\ \rho}+ \delta T_{\ \rho a}^{\ \rho}\right)+\left(h^{\mu}_{\ a}+\delta h^{\mu}_{\ a}\right) \left(T_{\ \rho b}^{\ \rho}+\delta T_{\ \rho b}^{\ \rho}\right)\] \[\approx S_{a}^{\ \mu\nu}+\left[\frac{1}{2}\left(\delta T_{ab}^{\ \ \mu}+\delta T^{\mu}_{\ ba}- \delta T^{\mu}_{\ ab}\right)-\delta h^{\mu}_{\ b}T_{\ \rho a}^{\ \rho}-h^{\mu}_{\ b}\delta T_{\ \rho a}^{\ \rho}+\delta h^{\mu}_{\ a}T_{\ \rho b}^{\ \rho}\right.\] \[\quad+\left.h^{\mu}_{\ a}\delta T_{\ \rho b}^{\ \rho}\right]+O\left(|\delta h|^{2}\right)\] \[\Rightarrow\delta S_{ab}^{\ \ \mu}\approx \left[\frac{1}{2}\left(\delta T_{ab}^{\ \ \mu}+2\,\delta T^{\mu}_{\ ba}\right)-\delta h^{\mu}_{\ b}T_{\ \rho a}^{\ \rho}-h^{\mu}_{\ b}\delta T_{\ \rho a}^{\ \rho}+\delta h^{\mu}_{\ a}T_{\ \rho b}^{\ \rho}+h^{\mu}_{\ a}\delta T_{\ \rho b}^{\ \rho}\right]+O\left(|\delta h|^{2}\right)\] (113) In terms of \(\delta h^{a}_{\ \mu}\), Equation (A12) becomes: \[\delta S_{ab}^{\ \ \mu}\approx\] \[\left[\frac{1}{2}\left(\partial_{a}\left(\delta h_{b}^{\ \mu}\right)-\partial_{b}\left(\delta h_{a}^{\ \mu}\right)\right)+\left(\partial_{b}\left(\delta h^{\mu}_{\ a}\right)- \partial_{a}\left(\delta h^{\mu}_{\ b}\right)\right)-\delta h^{\mu}_{\ b} \left(h^{c}_{\ \rho}\left[\partial_{c}\left.h_{a}^{\ \rho}-\partial_{a}\left.h_{c}^{\ \rho}\right]\right)\right.\right.\] \[-h^{\mu}_{\ b}\left(\delta h^{c}_{\ \rho}\left[\partial_{c}\left.h_{a}^{\ \rho}-\partial_{a}\left.h_{c}^{\ \rho}\right]+h^{c}_{\ \rho}\left[\partial_{c} \left.\left(\delta h_{a}^{\ \rho}\right)-\partial_{a}\left.\left(\delta h_{c}^{\ \rho}\right)\right]\right)+\delta h^{\mu}_{\ a}\left(h^{c}_{\ \rho}\left[\partial_{c}\left.h_{b}^{\ \rho}-\partial_{b}\left.h_{c}^{\ \rho}\right]\right)\right.\right.\] \[+h^{\mu}_{\ a}\left(\delta h^{c}_{\ 
\rho}\left[\partial_{c}\left.h_{b}^{\ \rho}-\partial_{b}\left.h_{c}^{\ \rho}\right]+h^{c}_{\ \rho}\left[\partial_{c} \left.\left(\delta h_{b}^{\ \rho}\right)-\partial_{b}\left.\left(\delta h_{c}^{\ \rho}\right)\right]\right)\right]+O\left(|\delta h|^{2}\right)\] (A13) 9. Einstein tensor \(\overset{\circ}{G}_{\mu\nu}\): \[\overset{\circ}{G}_{ab}+\delta\overset{\circ}{G}_{ab} =\left(\overset{\circ}{G}_{\mu\nu}+\delta\overset{\circ}{G}_{ \mu\nu}\right)\left(h_{a}^{\ \mu}+\delta h_{a}^{\ \mu}\right)\left(h_{b}^{\ \nu}+\delta h_{b}^{\ \nu}\right)\] \[\approx\overset{\circ}{G}_{ab}+\left[\overset{\circ}{G}_{\mu \nu}\left(\delta h_{a}^{\ \mu}h_{b}^{\ \nu}+h_{a}^{\ \mu}\delta h_{b}^{\ \nu}\right)+\delta\overset{\circ}{G}_{\mu \nu}\left(h_{a}^{\ \mu}h_{b}^{\ \nu}\right)\right]+O\left(|\delta h|^{2}\right)\] \[\Rightarrow\delta\overset{\circ}{G}_{ab} \approx\left[\overset{\circ}{G}_{\mu\nu}\left(\delta h_{a}^{\ \mu}h_{b}^{\ \nu}+h_{a}^{\ \mu}\delta h_{b}^{\ \nu}\right)+\delta \overset{\circ}{G}_{\mu\nu}\left(h_{a}^{\ \mu}h_{b}^{\ \nu}\right)\right]+O\left(|\delta h|^{2}\right).\] (A14) If \(\overset{\circ}{G}_{\mu\nu}=\overset{\circ}{R}_{\mu\nu}-\frac{1}{2}\,g^{ \sigma\rho}\,g_{\mu\nu}\overset{\circ}{R}_{\sigma\rho}=\overset{\circ}{R}_{ \mu\nu}-\frac{\eta^{cd}\,\eta_{ab}}{2}\left[h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{a}\,h_{\ \nu}^{b}\right]\overset{ \circ}{R}_{\sigma\rho}\), then we obtain from Equation (A4): \[\delta\overset{\circ}{G}_{\mu\nu}\approx \,\delta\overset{\circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ab}}{2 }\left[\,\left[h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{a}\,h_{\ \nu}^{b}\right]\,\delta\overset{\circ}{R}_{\sigma\rho}+\left[\delta h_{c}^{ \ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{a}\,h_{\ \nu}^{b}+h_{c}^{\ \sigma}\,\delta h_{d}^{\ \rho}\,h_{\ \mu}^{a}\,h_{\ \nu}^{b}\right.\right.\] \[+h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,\delta h_{\ \mu}^{a}\,h_{\ \nu}^{b}+h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{a}\,\delta h_{\ \nu}^{b}\right] \overset{\circ}{R}_{\sigma\rho}\right]+O\left(|\delta h|^{2}\right)\] (A15) By substituting Equation (A15) into Equation (A14), we obtain that: \[\delta\overset{\circ}{G}_{ab}\approx \left[\overset{\circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2} \left[h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{e}\,h_{\ \nu}^{f}\right]\,\overset{\circ}{R}_{\sigma\rho}\right]\left(\delta h_{a}^{ \ \mu}h_{b}^{\ \nu}+h_{a}^{\ \mu}\delta h_{b}^{\ \nu}\right)\] \[+\left(h_{a}^{\ \mu}h_{b}^{\ \nu}\right)\left[\delta\overset{ \circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\left[\,\left[h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{e}\,h_{\ \nu}^{f}\right]\,\delta\overset{\circ}{R}_{\sigma\rho}\right.\right.\] \[+\left[\delta h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{e}\,h_{\ \mu}^{f}\,h_{\ \nu}^{f}+h_{c}^{\ \sigma}\,\delta h_{d}^{\ \rho}\,h_{\ \mu}^{f}+h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \nu}^{e}+h_{c}^{\ \sigma}\,h_{d}^{\ \rho}\,h_{\ \mu}^{e}\,\delta h _{\ \mu}^{f}\right]\,\overset{\circ}{R}_{\sigma\rho}\right]\right]\] \[+O\left(|\delta h|^{2}\right)\] (A16) Now, if we have that \(\overset{\circ}{R}_{\mu\nu}=h_{k}^{\,\alpha}\,h_{\,\mu}^{m}\,\overset{\circ}{R}_{ \,\,\,m\alpha\nu}^{k}\), then Equation (165) becomes \[\delta\overset{\circ}{G}_{ab}\approx \Bigg{[}h_{k}^{\,\alpha}\,h_{\,\mu}^{m}\,\overset{\circ}{R}_{\,\, \,m\alpha\nu}^{k}-\frac{\eta^{cd}\,\eta_{ef}}{2}\left[h_{c}^{\,\,\sigma}\,h_{d }^{\,\,\rho}\,h_{\,\mu}^{e}\,h_{\,\nu}^{f}\right]\,h_{k}^{\,\alpha}\,h_{\, \sigma}^{m}\,\overset{\circ}{R}_{\,\,\,m\alpha\rho}^{k}\Bigg{]}\left(\delta h_{a} 
^{\,\,\mu}h_{b}^{\,\,\nu}+h_{a}^{\,\,\mu}\delta h_{b}^{\,\,\nu}\right)\] \[+\left(h_{a}^{\,\,\mu}h_{b}^{\,\,\nu}\right)\Bigg{[}\left[\left( \delta h_{k}^{\,\,\alpha}\,h_{\,\mu}^{m}+h_{k}^{\,\,\alpha}\,\delta h_{\,\mu}^ {m}\right)\,\overset{\circ}{R}_{\,\,\,m\alpha\nu}^{k}+h_{k}^{\,\,\alpha}\,h_{ \,\mu}^{m}\,\overset{\circ}{\delta}\overset{\circ}{R}_{\,\,\,m\alpha\nu}^{k} \right]\] \[-\frac{\eta^{cd}\,\eta_{ef}}{2}\Bigg{[}\left[h_{c}^{\,\,\sigma}\,h _{d}^{\,\,\rho}\,h_{\,\mu}^{e}\,h_{\,\nu}^{f}\right]\,\left[\left(\delta h_{k} ^{\,\,\alpha}\,h_{\,\sigma}^{m}+h_{k}^{\,\,\alpha}\,\delta h_{\,\mu}^{m}\right) \,\overset{\circ}{R}_{\,\,\,m\alpha\rho}^{k}+h_{k}^{\,\,\alpha}\,h_{\,\sigma }^{m}\,\overset{\circ}{\delta}\overset{\circ}{R}_{\,\,\,m\alpha\rho}^{k}\right]\] \[+\left[\delta h_{c}^{\,\,\sigma}\,h_{d}^{\,\,\rho}\,h_{\,\mu}^{e }\,h_{\,\nu}^{f}+h_{c}^{\,\,\sigma}\,\delta h_{d}^{\,\,\rho}\,h_{\,\mu}^{e}\, h_{\,\nu}^{f}+h_{c}^{\,\,\sigma}\,h_{d}^{\,\,\rho}\,\delta h_{\,\,\mu}^{e}\,h_{\, \,\nu}^{f}+h_{c}^{\,\,\sigma}\,h_{d}^{\,\,\rho}\,h_{\,\mu}^{e}\,\delta h_{\,\, \nu}^{f}\right]\] \[\times\,h_{k}^{\,\,\alpha}\,h_{\,\sigma}^{m}\,\overset{\circ}{R }_{\,\,\,m\alpha\rho}^{k}\Bigg{]}\Bigg{]}+O\left(|\delta h|^{2}\right) \tag{176}\] For pure Minkowski spacetime, we have that \(\overset{\circ}{R}_{\,\,\,m\alpha\rho}^{k}=0\) by default and Equation (176) reduces as: \[\delta\overset{\circ}{G}_{ab}\approx \left(h_{a}^{\,\,\mu}h_{b}^{\,\,\nu}\right)\left[h_{k}^{\,\,\alpha }\,h_{\,\mu}^{m}\,\overset{\circ}{\delta}\overset{\circ}{R}_{\,\,\,m\alpha \nu}^{k}-\frac{\eta^{cd}\,\eta_{ef}}{2}\,\left[h_{c}^{\,\,\sigma}\,h_{d}^{\,\, \rho}\,h_{\,\mu}^{e}\,h_{\,\nu}^{f}\right]\,h_{k}^{\,\alpha}\,h_{\,\sigma}^{m} \,\overset{\circ}{\delta}\overset{\circ}{R}_{\,\,\,m\alpha\rho}^{k}\right]+O \left(|\delta h|^{2}\right). \tag{177}\] Equation (177) is useful for Equations (40) and (48) and the energy-momentum stability test. ## Appendix B General Perturbed Torsion-Based Field Equation via Linearization Here, we can also obtain the perturbed field equation (Equations (33) and (34)) using Equation (170), with a matter contribution as follows: \[\delta\mathcal{L}\approx\frac{1}{2\kappa}\left[\delta h\,f\left(T\right)+h\,f_ {T}\left(T\right)\,\delta T\right]+\delta\mathcal{L}_{Matter}+O\left(|\delta h| ^{2}\right) \tag{178}\] As for the non-perturbed FEs, we have here that \(\delta\Theta_{(ab)}=\delta T_{ab}\equiv\frac{1}{2}\frac{\delta\left(\delta L_{ Matt}\right)}{\delta g_{ab}}\). For the term \(\frac{1}{2\kappa}\,\delta h\,f\left(T\right)\), we obtain by analogy with Equation (15) the following part (here, \(\delta g_{ab}=0\) for the orthonormal framework): \[\frac{\delta h\,f\left(T\right)}{2\kappa} \to f_{TT}\left[\delta S_{(ab)}^{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ At Equation (24), we only perturb the physical quantities linked by \(\delta h\), giving \(\delta T\), \(\overset{\circ}{\delta G}_{ab}\), and \(\delta S_{ab}^{\ \mu}\). 
We do not perturb \(f(T)\) and its derivatives. For the term \(\frac{1}{2\kappa}\,h\,f_{T}\,(T)\ \delta T\), we still obtain by analogy with Equation (15) the part (here again, \(\delta g_{ab}=0\)): \[\begin{split}\frac{h\,f_{T}\,(T)\ \delta T}{2\kappa}& \rightarrow\left[f_{TTT}\,S_{(ab)}^{\ \ \ \ \ \mu}\partial_{\mu}T+f_{TT}\,\overset{\circ}{G}_{ab}+\frac{g_{ab}}{2}\,(f_{T}-T \,f_{TT})\right]\,\delta T\qquad\text{Symmetric}\\ &\to f_{TTT}\,\left[S_{[ab]}^{\ \ ### Rotation/Boost Perturbation 1. Torsion scalar perturbation \(\delta T\): by using Equation (A8) and by substituting Equation (39) inside, we obtain the expression: \[\delta T= \frac{1}{4}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b} _{\ \nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}\right)\right)\left(\partial^{\mu}\,h^{\ \nu}_{\ a}-\partial^{\nu}\,h^{\ \mu}_{\ a}\right)+\left(\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right)\] \[\times\left(\partial^{\mu}\,\left(\lambda^{\ b}_{\ a}\,h^{\ \nu}_{\ b}\right)-\partial^{\nu}\,\left(\lambda^{\ b}_{\ a}\,h^{\ \mu}_{\ b}\right)\right)\Bigg{]}+\frac{1}{2}\Bigg{[}\left(\partial_{\mu} \left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}\right)\right)\left(\partial^{\nu}\,h^{\ \mu}_{\ a}-\partial^{\mu}\,h^{\ \nu}_{\ a}\right)\] \[+\left(\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right)\left(\partial^{\nu}\,\left(\lambda^{b}_{\ a}\,h^{\mu}_{\ b} \right)-\partial^{\mu}\,\left(\lambda^{b}_{\ a}\,h^{\nu}_{\ b}\right)\right) \Bigg{]}\] \[-\Bigg{[}\left(\lambda^{\ b}_{\ a}\,h^{\ \nu}_{\ b}\,\left[\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right]+h^{\ \ \nu}_{\ a}\,\left[\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}\right)\right]\right)\left(h^{a}_{\ \rho}\left(\partial^{\rho}\,h^{\ \mu}_{\ a}-\partial^{\mu}\,h^{\rho}_{\ a}\right)\right)\] \[+\left(h^{\ \nu}_{\ a}\,\left[\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right]\right)\left(\lambda^{a}_{\ b}h^{b}_{\ \rho}\left( \partial^{\rho}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\rho}_{\ a}\right)+h^{\ \ \rho}_{\ \rho}\left(\partial^{\rho}\,\left(\lambda^{b}_{\ a}\,h^{\mu}_{\ b} \right)-\partial^{\mu}\,\left(\lambda^{b}_{\ a}\,h^{\rho}_{\ b}\right)\right) \right)\Bigg{]}\] \[+O\left(|\delta h|^{2}\right)\] \[\to 0\] (C1) We need to impose \(T^{a}_{\ \mu\nu}=\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\to 0\) to obtain the final result for Equation (C1). 2. 
Superpotential perturbation \(\delta S_{ab}^{\ \ \mu}\): by using Equation (A13) and by substituting Equation (39) inside, we obtain the expression:

\[\delta S_{ab}^{\ \ \mu}= \Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{\ c}\,h_{c}^{\ \mu}\right)-\partial_{b}\left(\lambda_{a}^{\ c}\,h_{c}^{\ \mu}\right)\right)+\left(\partial_{b}\left(\lambda_{a}^{\ c}\,h_{c}^{\ \mu}\right)-\partial_{a}\left(\lambda_{b}^{\ c}\,h_{c}^{\ \mu}\right)\right)-\lambda_{b}^{\ e}\,h_{e}^{\ \mu}\left(h^{c}_{\ \rho}\left[\partial_{c}\,h_{a}^{\ \rho}-\partial_{a}\,h_{c}^{\ \rho}\right]\right)\]
\[-h^{\mu}_{\ b}\left(\lambda^{c}_{\ e}\,h^{e}_{\ \rho}\left[\partial_{c}\,h_{a}^{\ \rho}-\partial_{a}\,h_{c}^{\ \rho}\right]+h^{c}_{\ \rho}\left[\partial_{c}\,\left(\lambda_{a}^{\ f}\,h_{f}^{\ \rho}\right)-\partial_{a}\,\left(\lambda_{c}^{\ f}\,h_{f}^{\ \rho}\right)\right]\right)\]
\[+\lambda_{a}^{\ e}\,h_{e}^{\ \mu}\left(h^{c}_{\ \rho}\left[\partial_{c}\,h_{b}^{\ \rho}-\partial_{b}\,h_{c}^{\ \rho}\right]\right)+h^{\mu}_{\ a}\,\lambda^{c}_{\ e}\,h^{e}_{\ \rho}\left[\partial_{c}\,h_{b}^{\ \rho}-\partial_{b}\,h_{c}^{\ \rho}\right]\]
\[+h^{\mu}_{\ a}\,h^{c}_{\ \rho}\left[\partial_{c}\,\left(\lambda_{b}^{\ f}\,h_{f}^{\ \rho}\right)-\partial_{b}\,\left(\lambda_{c}^{\ f}\,h_{f}^{\ \rho}\right)\right]\Bigg{]}+O\left(|\delta h|^{2}\right)\]
\[\rightarrow \Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{\ c}\,h_{c}^{\ \mu}\right)-\partial_{b}\left(\lambda_{a}^{\ c}\,h_{c}^{\ \mu}\right)\right)+\left(\partial_{b}\left(\lambda_{a}^{\ c}\,h_{c}^{\ \mu}\right)-\partial_{a}\left(\lambda_{b}^{\ c}\,h_{c}^{\ \mu}\right)\right)\]
\[-h^{\mu}_{\ b}\,h^{c}_{\ \rho}\left[\partial_{c}\,\left(\lambda_{a}^{\ f}\,h_{f}^{\ \rho}\right)-\partial_{a}\,\left(\lambda_{c}^{\ f}\,h_{f}^{\ \rho}\right)\right]+h^{\mu}_{\ a}\,h^{c}_{\ \rho}\left[\partial_{c}\,\left(\lambda_{b}^{\ f}\,h_{f}^{\ \rho}\right)-\partial_{b}\,\left(\lambda_{c}^{\ f}\,h_{f}^{\ \rho}\right)\right]\Bigg{]}\]
\[+O\left(|\delta h|^{2}\right)\qquad\text{by applying Equation (42) (the zero torsion criteria)}\]
\[\to 0\qquad\qquad\text{for}\ \delta T^{a}_{\ \mu\nu}=\partial_{\mu}\,\left(\lambda^{a}_{\ c}\,h^{c}_{\ \nu}\right)-\partial_{\nu}\,\left(\lambda^{a}_{\ c}\,h^{c}_{\ \mu}\right)\to 0. \tag{C2}\]

3. Weitzenbock connection perturbation \(\delta\Gamma^{\rho}_{\ \nu\mu}\): by applying the null covariant derivative criteria to Equation (39), we obtain the relation:

\[0=\partial_{\mu}\,\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}\right)-\left(h_{c}^{\ \rho}\,\partial_{\mu}\,h^{c}_{\ \nu}\right)\,\left(\lambda^{a}_{\ b}\,h^{b}_{\ \rho}\right)-\delta\Gamma^{\rho}_{\ \nu\mu}\,h^{a}_{\ \rho}\]
\[\Rightarrow\delta\Gamma^{\rho}_{\ \nu\mu}=h_{a}^{\ \rho}\left[\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}\right)-\left(h_{c}^{\ \sigma}\,\partial_{\mu}\,h^{c}_{\ \nu}\right)\left(\lambda^{a}_{\ b}\,h^{b}_{\ \sigma}\right)\right], \tag{C3}\]

which is the result used in Equation (45).

### General Linear Perturbation

1.
The torsion scalar perturbation \(\delta T\): \[\delta T= \frac{1}{4}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}+\epsilon^{a}_{\ \nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}+\epsilon^{a}_{\ \mu}\right)\right)\left(\partial^{\mu}\,h_{a}^{\ \nu}-\partial^{\nu}\,h_{a}^{\ \mu}\right)+\left(\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right)\] \[\times\left(\partial^{\mu}\,\left(\lambda^{b}_{\ a}\,h_{b}^{\ \nu}+\epsilon_{a}^{\ \nu}\right)-\partial^{\nu}\,\left(\lambda^{b}_{\ a}\,h_{b}^{\ \mu}+\epsilon_{a}^{\ \mu}\right)\right)\Bigg{]}\] \[+\frac{1}{2}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}+\epsilon^{a}_{\ \nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}+\epsilon^{a}_{\ \mu}\right)\right)\left(\partial^{\nu}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\nu}_{\ a}\right)\] \[+\left(\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right)\left(\partial^{\nu}\,\left(\lambda^{b}_{\ a}\,h^{\mu}_{\ b}+\epsilon^{\mu}_{\ a}\right)-\partial^{\mu}\,\left(\lambda^{b}_{\ a}\,h^{\nu}_{\ b}+\epsilon^{\nu}_{\ a}\right)\right)\Bigg{]}\] \[-\Bigg{[}\left(\left(\lambda^{a}_{\ a}\,h^{b}_{\ b}+\epsilon^{a}_{\ a}\right)\,\left[\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right]+h_{a}^{\ \nu}\,\left[\partial_{\mu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \nu}+\epsilon^{a}_{\ \nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\ b}\,h^{b}_{\ \mu}+\epsilon^{a}_{\ \mu}\right)\right]\right)\] \[\times\left(h^{a}_{\ \rho}\left(\partial^{\rho}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\rho}_{\ a}\right)\right)+\left(h_{a}^{\ \nu}\,\left[\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\right]\right)\] \[\times\left(\left(\lambda^{a}_{\ b}h^{b}_{\ \rho}+\epsilon^{a}_{\ \rho}\right)\left(\partial^{\rho}\,h^{\mu}_{\ a}-\partial^{\mu}\,h^{\rho}_{\ a}\right)+h^{a}_{\ \rho}\left(\partial^{\rho}\,\left(\lambda^{b}_{\ a}\,h^{\mu}_{\ b}+\epsilon^{\mu}_{\ a}\right)-\partial^{\mu}\,\left(\lambda^{b}_{\ a}\,h^{\rho}_{\ b}+\epsilon^{\rho}_{\ a}\right)\right)\right)\Bigg{]}\] \[+O\left(|\delta h|^{2}\right)\] \[\to 0.\] (100) We again need to impose \(T^{a}_{\ \mu\nu}=\partial_{\mu}\,h^{a}_{\ \nu}-\partial_{\nu}\,h^{a}_{\ \mu}\to 0\), as in the previous case, to obtain Equation (100). 2. 
The superpotential perturbation \(\delta S_{ab}^{\ \ \mu}\) is expressed as: \[\delta S_{ab}^{\ \ \mu}= \Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{c}\,h_{c}^{ \ \mu}+\epsilon_{b}^{\ \mu}\right)-\partial_{b}\left(\lambda_{a}^{c}\,h_{c}^{\ \mu}+\epsilon_{a}^{\ \mu}\right)\right)+\left(\partial_{b}\left( \lambda^{c}_{\ a}\,h_{c}^{\ \mu}+\epsilon_{a}^{\ \mu}\right)-\partial_{a}\left(\lambda^{c}_{\ b}\,h_{c}^{\ \mu}+\epsilon_{b}^{\ \mu}\right)\right)\] \[-\left(\lambda^{e}_{\ b}\,h_{e}^{\ \mu}+\epsilon_{b}^{\ \mu}\right) \left(h_{\ \rho}^{c}\left[\partial_{c}\,h_{a}^{\ \rho}-\partial_{a}\,h_{c}^{\ \rho}\right]\right)\] \[-h_{\ b}^{\ \mu}\left(\left(\lambda^{c}_{\ e}\,h_{\rho}^{\ \rho}+\epsilon_{\ \rho}^{c}\right)\left[\partial_{c}\,h_{a}^{\ \rho}-\partial_{a}\,h_{c}^{\ \rho}\right]+h_{\ \rho}^{c}\left[\partial_{c}\, \left(\lambda_{a}^{\ f}h_{f}^{\ \rho}+\epsilon_{a}^{\ \rho}\right)-\partial_{a}\, \left(\lambda_{c}^{\ f}h_{f}^{\ \rho}+\epsilon_{c}^{\ \rho}\right)\right]\right)\] \[+\left(\lambda^{e}_{\ a}h_{e}^{\ \mu}+\epsilon_{a}^{\ \mu}\right) \left(h_{\ \rho}^{c}\left[\partial_{c}\,h_{b}^{\ \ \rho}-\partial_{b}\,h_{c}^{\ \rho}\right]\right)+h_{\ a}^{\ \mu} \left(\lambda^{c}_{\ e}h_{\ e}^{\ \rho}+\epsilon_{\ \rho}^{c}\right)\left[\partial_{c}\,h_{b}^{\ \ \rho}-\partial_{b}\,h_{c}^{\ \rho}\right]\] \[+h_{\ a}^{\ \mu}\,h_{c}^{\ \rho}\left[\partial_{c}\, \left(\lambda_{b}^{\ f}h_{f}^{\ \rho}+\epsilon_{b}^{\ \rho}\right)-\partial_{b}\, \left(\lambda_{c}^{\ f}h_{f}^{\ \rho}+\epsilon_{c}^{\ \rho}\right)\right]\Bigg{]}+O\left(|\delta h|^{2}\right)\] \[\to 0,\] (120) where we apply Equation (42) and we respect the \(\partial_{a}\epsilon_{b}^{\ \mu}=\partial_{b}\epsilon_{a}^{\ \mu}=0\) condition.
2310.19282
THz transition radiation of electron bunch laser-accelerated in long-scale near-critical density plasmas
Direct laser electron acceleration in near-critical density plasma produces collimated electron beams with high charge $Q$ (up to $\mu$C). This regime could be of interest for high energy THz radiation generation, as many of the mechanisms have a scaling $\propto Q^2$. In this work we focused specifically on challenges that arise during numerical investigation of transition radiation in such interaction. Detailed analytical calculations that include both diffraction and decoherence effects of characteristics of transition radiation in the THz range were conducted with the input parameters obtained from 3D PIC and hydrodynamic simulations. The calculated characteristics of THz radiation are in good agreement with the experimentally measured ones. Therefore, this approach can be used both to optimize properties of THz radiation and distinguish the transition radiation contribution if several mechanisms of THz radiation generation are considered.
D A Gorlova, I N Tsymbalov, I P Tsygvintsev, A B Savelev
2023-10-30T05:46:36Z
http://arxiv.org/abs/2310.19282v1
THz transition radiation of electron bunch laser-accelerated in long-scale near-critical density plasmas ###### Abstract Direct laser electron acceleration in near-critical density plasma produces collimated electron beams with high charge \(Q\) (up to \(\mu\)C). This regime could be of interest for high energy THz radiation generation, as many of the mechanisms have a scaling \(\propto Q^{2}\). In this work we focused specifically on challenges that arise during numerical investigation of transition radiation in such interaction. Detailed analytical calculations that include both diffraction and decoherence effects of characteristics of transition radiation in the THz range were conducted with the input parameters obtained from 3D PIC and hydrodynamic simulations. The calculated characteristics of THz radiation are in good agreement with the experimentally measured ones. Therefore, this approach can be used both to optimize properties of THz radiation and distinguish the transition radiation contribution if several mechanisms of THz radiation generation are considered. * October 2023 _Keywords_: relativistic laser-plasma interaction, THz radiation, transition radiation, near-critical plasma, direct laser acceleration, PIC simulations ## 1 Introduction Generation of THz radiation from relativistic laser - plasma interaction is currently being actively investigated [1]. Such THz sources have a unique advantage as they experience no saturation with the increase in laser pulse energy. Therefore, relativistic laser driven THz radiation sources could have pulse energy up to 1% of main laser pulse energy [2], i.e. potentially hundreds of mJ, which cannot be obtained by other generation schemes. This high energy combined with short pulse duration (\(\approx\) ps) provides high THz radiation peak field strength, allowing studies of nonlinear THz radiation-matter interactions and several applications in material science [3]. A number of THz radiation generation mechanisms in relativistic laser plasma such as transition radiation [4], synchrotron radiation [5], sheath radiation [6], linear conversion of plasma wake [7], currents excited via parametric instabilities [8] had been proposed and investigated. To date, the most widely studied and relevant one being transition radiation (TR), arising when a charged particle crosses electric permittivity discontinuity. In relativistic laser-plasma interaction transition radiation occurs naturally when accelerated electrons cross plasma-vacuum boundary i.e. leave the target. Transition radiation was proposed as the main mechanism for THz generation for various target types: solid targets [9], thin foils [10, 11], gas jets [12, 13] as well as different micron-sized targets [14, 15]. TR was also considered the main mechanism of THz radiation generation in long (\(\approx\) hundreds of \(\mu\)m) optically transparent undercritical (\(n_{e}/n_{c}\approx 0.1\)) plasma in our previous paper [16]. In such long-scale near-critical plasma electrons can be efficiently accelerated with direct laser acceleration (DLA) mechanism both on terawatt [17, 18] and petawatt [19] laser setups. Due to high target density, total reported charge of accelerated electrons is up to \(\mu C\)[20], which can generate substantial amount of TR, as this mechanism scales as \(\propto Q^{2}\), where \(Q\) - total charge of accelerated electrons. 
In this work we discuss specific challenges that arise with numerical investigation of transition radiation generated by DLA-accelerated electrons in near-critical plasma. Namely, full analytical model, including both diffraction and decoherence effects, is needed to correctly estimate TR characteristics. Multi-stage numerical simulations, consisting of hydrodynamic, PIC and analytical transition radiation calculation, were carried out. The resulting TR properties correspond to experimentally observed ones with high degree of accuracy, which indicates the possibility to estimate numerically the parameters of THz radiation generated by the TR mechanism. ## 2 Transition radiation in DLA conditions Figure 1: Comparison of the formation of transition radiation (TR) for various types of targets used for laser-plasma electron acceleration. In the relativistic laser plasma interaction one of the methods for both interpreting experimental data [8] and predicting the characteristics of a THz radiation source [21] is the analysis of THz radiation via particle-in-cell (PIC) simulations. However, due to computational complexity, PIC simulation domain is usually limited to a size of \(<\)mm\({}^{2}\). Therefore, concerning THz radiation, the simulation takes place in the near or in the so-called pre-wave zone [22], where, for example, radiation focusing effects [23] that are not transferred to the far zone can be observed. A strong quasi-static magnetic field of the accelerated electrons, which is difficult to separate from the radiation due to small domain size [24], is also present. These issues are rarely discussed and the analysis of PIC simulation results is usually limited to Fourier filtering of electric field in the low-frequency band [9, 25]. However, it can lead to a significant overestimation of THz radiation energy, spectral width and field strengths during the PIC simulation analysis. Therefore, it is necessary to develop other approaches to estimate THz radiation parameters. As was noted above, generation of TR always occurs during the process of laser-plasma acceleration. The most direct way to evaluate the characteristics of the TR is to calculate them analytically [26], with the undoubted advantage of pertaining to the far field. Such an analysis had successfully been carried out both for the laser wakefield acceleration (LWFA) [27] and acceleration of electrons in the thin foils [28]. At the same time, as TR is fully determined by the characteristics of accelerated electrons and the plasma-vacuum boundary, interaction with long-scale near-critical plasma should have unique features. The differences in TR generation for different target types are schematically presented in Figure 1. In the case of electron acceleration in near-critical plasma via DLA electron energy spectrum is exponential with temperature \(T=1-15\) MeV. Thus, by the time electrons cross the plasma-vacuum boundary, usually located at a distance of hundreds of microns from the acceleration region, its longitudinal size will be comparable to the THz wavelengths, leading to destructive interference, i.e. temporal decoherence. At the boundary electron beam will also have transverse size on the order of tens of microns (divergence of 0.1-1 rad), leading to spatial decoherence. Thus, both temporal and spatial decoherence will limit TR spectrum at shorter wavelengths. For LWFA decoherence effects are much less pronounced due to the high energy monochromaticity and collimation of the beam [27]. 
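A rough order-of-magnitude check of the temporal-decoherence argument can be made from the velocity spread alone. The short sketch below is our own illustration, not a calculation from the paper: the propagation distance and the two representative electron energies are assumptions chosen to lie within the ranges quoted above.

```python
# Illustrative estimate of bunch stretching over the distance L from the acceleration
# region to the plasma-vacuum boundary (all numbers are assumptions, see text above).
import numpy as np

m_e_c2 = 0.511          # electron rest energy, MeV
L = 200e-6              # m, assumed distance to the plasma-vacuum boundary

def beta(E_MeV):
    """Velocity (in units of c) of an electron with kinetic energy E_MeV."""
    gamma = 1.0 + E_MeV / m_e_c2
    return np.sqrt(1.0 - 1.0 / gamma**2)

E_slow, E_fast = 0.5, 10.0                            # MeV, representative spread
dz = L * (1.0 / beta(E_slow) - 1.0 / beta(E_fast))    # longitudinal stretching
print(f"bunch stretching ~ {dz * 1e6:.0f} um")        # ~ tens of microns
```

A stretching of a few tens of microns is already a sizeable fraction of the wavelength in the upper part of the THz range (30-300 \(\mu\)m for 1-10 THz), which is consistent with temporal decoherence cutting the TR spectrum at shorter wavelengths.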
In the case of electron acceleration in thin foils, both the angular and energy distributions of the accelerated electrons can be comparable to the DLA. However, the distance between the electron acceleration region and plasma-vacuum boundary is much smaller and determined by foil thickness (i.e. \(\approx\) tens of microns). Thus, temporal decoherence will be suppressed, while spatial decoherence may still be important to consider. It is also important to take into account diffraction effects due to the limited size of plasma-vacuum boundary. For LWFA diffraction had already been shown to significantly limit THz radiation spectrum at longer wavelengths [27]. For metal foils diffraction are negligible if the rear boundary is unperturbed. For the near-critical plasma, however, diffraction effects must be taken into account as such plasma is usually created through the target heating with an additional prepulse and has a final transverse size of the order of hundreds of microns. Thus for laser-plasma interaction with long-scale near-critical density plasma no approximations, applicable for other types of targets, can be made. Therefore, it is necessary to take into account effects of both spatial and temporal decoherence, as well as diffraction in analytical calculations. ## 3 PIC simulation of electron acceleration Earlier [16] we had experimentally investigated THz radiation generation in the interaction of 1 TW Ti:Sa laser system (50 mJ, 50 fs, \(I=5\cdot 10^{18}\) W/cm\({}^{2}\)) with near-critical density preplasma layer with a length of \(\approx\) 200 \(\mu\)m. Here we briefly summarize the results. This layer was created through ablation and subsequent hydrodynamic expansion of 16 \(\mu\)m mylar tape with an additional Nd:YAG prepulse (200 mJ, 10 ns, \(I=10^{12}\) W/cm\({}^{2}\)). THz radiation parameters were measured for different delays between the main pulse and the prepulse \(\Delta t_{fs-ns}\), i.e. different target electron densities, longitudinal and transverse sizes. It was proved that THz radiation is generated via TR mechanism. Properties of TR, calculated with experimentally measured electron beam parameters, corresponded reasonably well to the experimentally measured ones. Note, that in [16] only diffraction effects were taken into account. Decoherence effects, as we show later, were disguised by the sharp decline of Teflon vacuum-air window transmission for \(\nu>3\) THz. Electron beam parameters that can be measured experimentally (divergence, spectrum, spatial stability) make it possible to take into account only spatial decoherence, while temporal shape of the beam cannot be measured. It can, however, be obtained through particle-in-cell (PIC) simulations. In this work a series of 3D PIC simulations were carried out using the SMILEI [29] code. Gaussian linearly polarized laser pulse with \(a_{0}=1.5,\tau_{FWHM}=50\) fs was focused to a 4 \(\mu\)m FWHM spot at a point (x,y)=(10,0) on Figure 2a (corresponds to focusing 10 \(\mu\)m into on the unperturbed target surface). Target profile was obtained through 2D axisymmetric hydrodynamic simulations in the 3DLINE code [30], a more detailed description of this simulation can be found in [31]. Initially target consisted of neutral carbon atoms with 1 particle per cell (corresponds to 4-6 particle per cell for electrons). PIC simulations grid steps were \(\Delta x/\lambda\)=1/32, \(\Delta y/\lambda,\Delta z/\lambda\)=1/4, \(\Delta\tau/t\)=1/36. 
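The quoted pulse energy, duration, focal spot, intensity and the normalized amplitude \(a_0\) used in the PIC runs are mutually consistent; the following back-of-the-envelope check uses standard Gaussian-pulse estimates and assumes the Ti:Sa wavelength of 0.8 \(\mu\)m (it is not taken from the paper).

```python
# Consistency check: 50 mJ, 50 fs, 4 um FWHM spot -> peak intensity and a0.
import numpy as np

E_J, tau_fwhm, d_fwhm_cm, lambda_um = 50e-3, 50e-15, 4e-4, 0.8   # lambda_um assumed

P_peak = 0.94 * E_J / tau_fwhm                              # W, Gaussian temporal profile (~1 TW)
I_peak = P_peak * 4 * np.log(2) / (np.pi * d_fwhm_cm**2)    # W/cm^2, Gaussian focal spot
a0 = 0.85 * lambda_um * np.sqrt(I_peak / 1e18)              # linear polarization

print(f"I ~ {I_peak:.1e} W/cm^2, a0 ~ {a0:.2f}")            # ~5e18 W/cm^2 and a0 ~ 1.5
```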
Using target profile for \(\Delta t_{fs-ns}=-1\) ns (Figure 2a), which was established to be optimal for the THz production [16], an electron beam with divergence of \(\alpha_{FWHM}=0.12\) rad (Figure 2b for E\(>\)2 MeV) was obtained, which corresponds well to the experimentally observed value of \(\approx\) 0.1 rad [16]. Obtained energy-angle distribution of accelerated electrons (Figure 2b) was used for analytical calculations of THz radiation. Note that the energy-angle distribution was summed along one of the axes (perpendicular to the polarization direction of the laser pulse) to simplify the calculations. ## 4 Framework for analytical TR calculation As was already mentioned, the most reliable method for calculating TR characteristics is the use of analytical model. Detailed analytical calculations of the TR characteristics had previously been carried out for the LWFA regime [27]. There, the applicability of the sharp plasma-vacuum boundary approximation was demonstrated for gas jet-based laser-plasma accelerators. Here it will also be valid, as the transition region between overcritical (i.e. metal) to undercritical (i.e. vacuum) plasma for the THz radiation occurs on spatial scale \(\approx\) several \(\mu\)m\(\ll\lambda_{THz}\) (see Figure 2a). Figure 3: (a) - coordinate system used in analytical calculations: z is the axis of electron beam propagation, \(\theta\) is the observation angle, \(x_{j}\) is the distance from the \(z\) axis at which the \(j\)th electron crosses the plasma-vacuum boundary at an angle \(\alpha_{j}\), \(e_{\perp}\) - vector perpendicular to the observation direction, (b) - function D (Equation (3)) that represents the influence of diffraction effects on the THz spectrum, for different transverse sizes of the target (see figure, in \(\mu\)m) for the \(\theta=30^{\circ}\), electron energy \(E=1\) MeV, (c) - angular distribution of the radiation power of one electron with \(E=2\) MeV (Equation (2)) crossing the plasma-vacuum boundary at an angle of \(\alpha_{j}=20^{\circ}\) (orange) and \(\alpha_{j}=-50^{\circ}\) (blue). Figure 2: Target ion density, obtained in 2D axisymmetric hydrodynamic simulation of interaction of Nd:YAG controlled laser prepulse with 16 \(\mu\)m thick mylar tape at delay \(\Delta t_{fs-ns}=-1\) ns (1 ns after prepulse peak) (a) and energy-angle distribution of accelerated electrons, obtained in 3D PIC simulation for \(\Delta t_{fs-ns}=-1\) ns (b). On (a) are also marked \(L\) - distance between ”point source” of accelerated electrons and plasma-vacuum boundary and \(\rho\) - transverse target size, as well as direction of laser propagation. In this work, for the analytical calculation of TR characteristics, a two-dimensional problem was considered; the coordinate system is presented in Figure 3a. 
In the far field TR, generated at an angle \(\theta\) as a result of N electrons crossing the plasma-vacuum boundary can be written as [27]: \[\vec{E}(\omega,\theta)=i\frac{4\pi e}{\omega}\sum_{j=1}^{N}\frac{1}{\cos\theta} \varepsilon(\theta,u_{j},\alpha_{j})D(\theta,\omega,u_{j},\rho)e^{-ix_{jk} \sin\theta+\phi_{j}}\vec{e}_{\perp}^{\,} \tag{1}\] where [32]: \[\varepsilon(\theta,u_{j},\alpha_{j})=\frac{u_{j}\cos\alpha_{j}(\sqrt{1+u_{j}^ {2}}\sin\theta-u_{j}\sin\alpha_{j})}{\left(\sqrt{1+u_{j}^{2}}-u_{j}\sin\theta \sin\alpha_{j}\right)^{2}-\left(u_{j}\cos\theta\cos\alpha_{j}\right)^{2}} \tag{2}\] is the amplitude of the field of the \(j-\)th electron with normalized momentum \(u_{j}=\gamma_{j}\beta_{j}\), where \(\gamma_{j}=1/\sqrt{1-\beta_{j}^{2}}\), \(\beta_{j}=v_{j}/c\); \(\alpha_{j}\) is the angle and \(x_{j}\) is the transverse coordinate at which the \(j-\)th electron crosses plasma-vacuum boundary, while \(\phi_{j}\) corresponds to the time delay arising due to electron beam being non-monochromatic. \(\varepsilon^{2}\) corresponds to the known conical distribution of transition radiation with a cone angle \(\frac{1}{\gamma}\) if \(\gamma\gg 1\). \[D(\theta,\omega,u_{j},\rho)=1-J_{0}(b_{j}u_{j}\sin\theta)[b_{j}K_{1}(b_{j})+ \frac{b_{j}^{2}}{2}K_{0}(b_{j})]-\frac{b_{j}^{2}}{2}K_{0}(b_{j})J_{2}(b_{j}u_{ j}\sin\theta) \tag{3}\] - function allowing one to take into account the effects of diffraction, arising due to finite transverse size of the plasma-vacuum boundary \(\rho\) (see Figure 2a), where \(b_{j}=\frac{2\pi\rho}{u_{j}\lambda}\), \(J\) are Bessel functions of the 1st kind, \(K\) - Macdonald functions [27]. Figure 3b shows function \(D\) for a range of boundary sizes \(\rho\). Following notations are also introduced: \(\nu\), \(\omega\) - ordinary and angular frequencies of radiation, e - electron charge, z - axis of electron propagation, c - speed of light, \(k\) and \(\lambda\) are the wave vector and wavelength of radiation with frequency \(\omega\), \(\vec{e}_{\perp}\) is the vector, perpendicular to the direction of observation (see Figure 3a). Next, from Equation (1) we can obtain an expression for the frequency-angular distribution of the energy \(W\) of transition radiation: \[\frac{d^{2}W}{d\omega d\theta}=\frac{e^{2}}{\pi^{2}c}\sum_{j=1}^{N}\sum_{m=1 }^{N}\varepsilon_{j}\varepsilon_{m}D_{j}D_{m}e^{ik(x_{m}-x_{j})\sin\theta+i( \phi_{m}-\phi_{j})} \tag{4}\] Equation (4) was the main one used for carrying out analytical TR calculations. It includes the effects of diffraction (via the D function, Equation (3)), electrons crossing the interface at an angle \(\alpha\) (via \(\varepsilon\), Equation (2)) and both spatial (via \(x_{m}-x_{j}\approx L(\alpha_{m}-\alpha_{j})\) for the small \(\alpha\) angles) and temporal (via \(\phi_{m}-\phi_{j}=\frac{\omega L}{c}(\frac{1}{\beta_{m}}-\frac{1}{\beta_{j}})\)) decoherence, where \(L\) - distance from the acceleration region to plasma-vacuum boundary (see Figure 3a). Further, we did not take into account the coefficient in front of the Equation (4), working in relative units. ## 5 Analytical TR calculations The main input parameters of Equation (4) are the energy of accelerated electrons E, their divergence \(\alpha\), the transverse size of the target \(\rho\) and the distance to the interface \(L\). The first two parameters were obtained in 3D PIC simulations (Figure 2b), while the last two - from hydrodynamic simulations of nanosecond target expansion (Figure 2a, \(L=\)125 \(\mu\)m, \(\rho=\)230 \(\mu\)m). 
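As noted in the next paragraph, the double sum in Equation (4) can be evaluated with a short Python script. A minimal sketch of such a calculation is given here; it is our own illustration rather than the authors' code, and it assumes SI units, the small-angle relation \(x_{j}\approx L\alpha_{j}\), the rewriting of the double sum as \(|\sum_{j}|^{2}\), and illustrative function names.

```python
# Sketch of Eq. (4): relative frequency-angular TR energy density for a sampled beam.
import numpy as np
from scipy.special import j0, jv, k0, k1

c = 3.0e8  # m/s

def eps_amp(theta, u, alpha):
    """Single-electron TR field amplitude, Eq. (2); u = gamma*beta, alpha = crossing angle."""
    g = np.sqrt(1.0 + u**2)
    num = u * np.cos(alpha) * (g * np.sin(theta) - u * np.sin(alpha))
    den = (g - u * np.sin(theta) * np.sin(alpha))**2 - (u * np.cos(theta) * np.cos(alpha))**2
    return num / den

def diffraction(theta, lam, u, rho):
    """Diffraction factor D for a boundary of transverse size rho, Eq. (3)."""
    b = 2.0 * np.pi * rho / (u * lam)
    arg = b * u * np.sin(theta)
    return 1.0 - j0(arg) * (b * k1(b) + 0.5 * b**2 * k0(b)) - 0.5 * b**2 * k0(b) * jv(2, arg)

def spectrum(nu, theta, u, alpha, L=125e-6, rho=230e-6):
    """Relative d^2W/(domega dtheta), Eq. (4), for arrays u, alpha of sampled electrons."""
    omega = 2.0 * np.pi * nu
    lam = c / nu
    k = omega / c
    beta = u / np.sqrt(1.0 + u**2)
    x = L * alpha                      # spatial decoherence, x_j ~ L * alpha_j
    phi = omega * L / (c * beta)       # temporal decoherence phase
    a = eps_amp(theta, u, alpha) * diffraction(theta, lam, u, rho)
    # the coherent double sum over (j, m) equals |sum_j a_j exp(i(k x_j sin(theta) + phi_j))|^2
    return np.abs(np.sum(a * np.exp(1j * (k * x * np.sin(theta) + phi))))**2
```

Evaluating this over a grid of \(\nu\) and \(\theta\) for electrons sampled from the PIC energy-angle distribution gives the frequency-angular spectra discussed below, in relative units.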
Double summation Equation (4) was carried out via simple Python script. The results of such calculation at \(\Delta t_{fs-ns}=-1\) ns are presented in Figure 4. It can be seen that numerically calculated angular distribution of transition radiation and its spectrum are in good agreement with experimentally measured ones. Such multi-stage - combination of hydrodynamic, PIC and TR analytical - calculations of TR and their subsequent agreement with experimental results was demonstrated for the first time. Here, calculated THz radiation spectrum and autocorrelation function coincide well with experimentally measured ones (Figure 4b,c). Calculated angular distribution (Figure 4a) is somewhat wider than observed in the experiment which is due to the THz beam being "cut" at the vacuum-to-air Teflon window. As the frequency-angular spectrum of the TR does not change significantly for \(\theta>20^{\circ}\) (see Figure 6b), this had little effect on the experimentally measured spectrum. Note, that accounting for the transmittance of Teflon window has little effect on the observed spectrum (Figure 4b) due to the fact that they have a sharp decline in approximately the same frequency region (\(\nu>3\) THz). Next, numerical PIC calculations and subsequent calculation of TR characteristics were carried out for neighboring values of \(\Delta t_{fs-ns}\), corresponding energy-angle distributions of accelerated electrons and resulting THz angular distributions are presented on Figure 5. The angular characteristics of the TR are in agreement with the experimentally measured ones for both \(\Delta t_{fs-ns}\). For \(\Delta t_{fs-ns}=0\) ns, the experimentally measured angular distribution, similar to Figure 4, experiences a sharp decline, which is not observed in the numerical one is caused by limited acceptance angle of the Figure 4: Comparison of experimentally measured and numerically calculated THz radiation angular distribution (a), spectrum (b) and autocorrelation function (c) for \(\Delta t_{fs-ns}=-1\) ns. Numerically obtained results are shown both with (*T) and without attenuation by Teflon window. registration system. However, for \(\Delta t_{fs-ns}=-2\) ns, where electrons are accelerated more efficiently and, therefore, generate TR with smaller cone opening angle, a complete coincidence of the angular distributions is observed when Teflon window is taken into account. In addition, the asymmetry of THz radiation angular distribution (Figure 5a), which is caused by the asymmetry of the initial distribution of electrons (Figure 5b), is also replicated. Full (i.e., without taking into account the Teflon window) frequency-angular spectra of TR for all three considered values of \(\Delta t_{fs-ns}\) are shown in Figure 6. It can be seen that in the regime of laser plasma interaction under consideration the full spectrum of TR is quite limited: at low frequencies - through diffraction, at high frequencies - through temporal and spatial decoherence. Moreover, varying the parameter \(\Delta t_{fs-ns}\) one simultaneously varies electrons energy-angle distribution, the plasma size \(\rho\) and the length \(L\), leading to non-monotonic changes in TR radiation characteristics. For \(\Delta t_{fs-ns}=0\) ns target size is relatively small (\(\rho\)=150 \(\mu\)m, L=120 \(\mu\)m), leading to high diffraction and suppressed decoherence. However, mean electron energies are also small and no effective generation is observed in the frequency range \(\nu>5\) THz. 
Further, for Figure 5: Angular distribution of THz radiation measured experimentally and calculated numerically (a,c), as well as energy-angle distribution of accelerated electrons obtained in PIC simulations (b,d), for two values of \(\Delta t_{fs-ns}\): - 2 ns (a,b) and 0 ns (c,d). Target parameters for calculations: \(\rho\)=250 \(\mu\)m, L=100 \(\mu\)m (\(\Delta t_{fs-ns}=-2\) ns) and \(\rho\)=150 \(\mu\)m, L=120 \(\mu\)m (\(\Delta t_{fs-ns}=-2\) ns ). \(\Delta t_{fs-ns}=-1\) ns (\(\rho\)=230 \(\mu\)m, L=125 \(\mu\)m) diffraction effects are reduced allowing more efficient TR generation in the frequency range \(\nu<2\) THz. For \(\Delta t_{fs-ns}=-2\) ns (\(\rho\)=250 \(\mu\)m, L=100 \(\mu\)m) limiting effects of diffraction is similar to \(\Delta t_{fs-ns}=-1\) ns. However, due to higher electron energies and lower beam divergence (Figure 5b), substantial TR with \(\nu>3\) THz is generated. This interconnection of main parameters defining TR with the change in the main pulse - prepulse delay is a characteristic feature of laser - near-critical density plasma interaction. While it requires careful consideration during analysis of experimental or numerical results, with proper target tailoring it may be possible to use it to control the spectrum of THz radiation. Specifically, a more prominent variation of \(L\) is required to shift central TR wavelength efficiently. ## 6 Conclusions Generation of transition radiation in the THz frequency range in the interaction of a 1 TW laser pulse with long-scale undercritical plasma layer, created by ablation of a 16 \(\mu\)m thick mylar tape with a nanosecond pulse, was numerically studied. It was shown that for the parameters of electron beam and target in question is necessary to take into account effects of diffraction and temporal and spatial decoherence during analytical calculation of the TR characteristics. When these effects are taken into account, calculations based on multi-stage numerical simulations (hydrodynamic, PIC and TR analytical) yield THz radiation characteristics (angular distribution, spectrum) that are in good agreement with those measured experimentally. Such complete agreement between numerical calculations and experiments was demonstrated in this work for the first time. It additionally confirms that the mechanism of THz radiation generation in our interaction is transition radiation. It also establishes multi-stage numerical calculations as a viable and sufficiently accurate way to predict the characteristics of Figure 6: Calculated frequency-angular spectra of THz TR for \(\Delta t_{fs-ns}\): 0 ns (a), -1 ns (b), -2 ns (c). THz radiation. It may be of great importance both for establishing mechanisms of THz radiation generation, as well as separating the contribution of TR if various mechanisms are present. ## 7 Acknowledgements This work was supported by scientific program of the National Center of Physics and Mathematics (project "Physics of high energy density. Stage 2023-2025"). D.G. acknowledges the Foundation for Theoretical Research 'Basis' for financial support.
2304.12248
MAC, a novel stochastic optimization method
A novel stochastic optimization method called MAC was suggested. The method is based on the calculation of the objective function at several random points and then an empirical expected value and an empirical covariance matrix are calculated. The empirical expected value is proven to converge to the optimum value of the problem. The MAC algorithm was encoded in Matlab and the code was tested on 20 test problems. Its performance was compared with those of the interior point method (Matlab name: fmincon), simplex, pattern search (PS), simulated annealing (SA), particle swarm optimization (PSO), and genetic algorithm (GA) methods. The MAC method failed two test functions and provided inaccurate results on four other test functions. However, it provided accurate results and required much less CPU time than the widely used optimization methods on the other 14 test functions.
Attila László Nagy, Goitom Simret Kidane, Tamás Turányi, János Tóth
2023-04-14T14:49:02Z
http://arxiv.org/abs/2304.12248v1
# MAC, a novel stochastic optimization method+ ###### Abstract A novel stochastic optimization method called MAC was suggested. The method is based on the calculation of the objective function at several random points and then an empirical expected value and an empirical covariance matrix are calculated. The empirical expected value is proven to converge to the optimum value of the problem. The MAC algorithm was encoded in Matlab and the code was tested on 20 test problems. Its performance was compared with those of the interior point method (Matlab name: fmincon), simplex, pattern search (PS), simulated annealing (SA), particle swarm optimization (PSO), and genetic algorithm (GA) methods. The MAC method failed two test functions and provided inaccurate results on four other test functions. However, it provided accurate results and required much less CPU time than the widely used optimization methods on the other 14 test functions. ## 1 Introduction Optimization is one of the central topics of both applied and theoretical mathematics. It studies the problem of maximizing or minimizing outputs of functions related to the selected parameters. Using efficient optimization, one can in general increase efficiency and decrease losses. Many classes of optimization problems have been studied in mathematics, computer science, and other fields of science (e.g. see refs Hopfield and Tank (1985); Krentel (1986); Gambella et al. (2021)). Over the years, a plethora of specialized optimization areas emerged (e.g. convex optimization, stochastic programming, combinatorial optimization, etc.) and various methods have been elaborated and applied to specific problems. The algorithms can be classified into two groups based on whether they use randomization (stochastic methods) or not (deterministic methods). Stochastic optimization methods (e.g. Draper (2002); Spall (2012); Zheng et al. (2015); Reddy et al. (2017)) search for an optimal solution using the concept of probabilistic translation rules (randomness). These algorithms are gaining popularity due to certain properties which deterministic algorithms do not have. For example, stochastic algorithms may suit to use for objective functions that have multiple local optima whereby deterministic algorithms may get stuck. An extremum (a maximum or minimum point) can be either global (the highest or lowest function value within a region) or local (the highest or lowest value in a neighborhood but not necessarily in the entire region). Finding a global extremum, in general, proves to be an onerous task while trying to explore a local extremum is usually less burdensome (see Press et al. (2007), Zhigljavsky and Zilinskas (2008), Weise (2009) and Hendrix and Gazdag-Toth (2010),). Several optimization methods have already been built into popular program packages like _Mathematica_ (Wolfram language) or Matlab. The least-square function is a non-linear function of several variables (called parameters), which is to be optimized on a convex and compact parameter space. The least-square optimization can be regarded as one of the most studied areas that are still investigated and applied in many different areas of science and engineering (see Bard (1974); Larranaga and Lozano (2002); Seber and Wild (2005)). The tasks of optimization are either maximization or minimization, but they are trivially related to each other, hence from now we are focusing on and formalizing only minimization problems. 
We are interested in minimizing a real-valued continuous function defined over a convex, compact domain. The solutions to these types of problems are often based on finding the minimum of some kind of distance between the model results and the corresponding measurements Bard (1974); Gustavo et al. (2020). In the applications of chemical kinetics to combustion systems, optimization of rate parameters of reaction mechanisms is required for the efficient interpretation of experimental data Goitom et al. (2022). The rate parameters describe the temperature and pressure dependence of the rate coefficients of the reaction steps, and their initial values are assigned based on direct measurements of rate coefficients, theoretical calculations, or analogies to similar reaction steps. The mechanism is then tested against indirect experimental data, like measured laminar burning velocities and ignition delay times, through an optimization process. This optimization is carried out, by comparing the experimental and simulated data points using a nonlinear objective function called an error function. This error function have typically many local minima and the evaluation of the error function is computationally expensive because it is based on the solution of large systems of ODEs and PDEs. A search for efficient optimization methods applicable to chemical kinetics optimizations is one of the motivations of this paper. As our focus is on the optimization of a function over a domain, the two main important issues to investigate are: 1. how many times we have to evaluate the error function and 2. how many iterations are needed to arrive at or close to the extremum point? In the present work, we describe a novel method that belongs to the family of stochastic approximation methods. The method generates a converging series of mean values and (empirical) covariance matrices, therefore it was named the MAC method. It seems to have good properties from the above-enumerated viewpoints. Convergence of the method will be proved in Theorem 1. We have chosen a series of benchmark functions found in Jamil and Yang (2013); Mishra (2006); Rahnamayan et al. (2007); Shang and Qiu (2006) and we have found that on several test problems, the novel method requires much fewer function evaluations, thus it is appropriate to solve problems when the function evaluation is expensive. Also, it approaches close to the minimum in fewer steps than several other methods. The structure of the paper is as follows. Section 2.1 introduces the main concepts and presents the algorithm. After that (Section 2.2) the mathematical results are discussed. This is followed by the presentation of the numerical results on benchmark functions in Section 3. We summarize the results and outline the plans for the near future in Section 4. ## 2 Main concepts This section is devoted to presenting our stochastic optimization algorithm (MAC) in rigorous terms. After a preliminary discussion on the problem, we introduce the main algorithm and later we discuss some results. ### The algorithm We are given a **continuous**, **deterministic**, **real-valued** function \(Q\) of \(d\) independent variables \((\mathbf{x}=(x_{1},x_{2},\ldots,x_{d})^{\top},d\in\mathbb{Z}^{+})\) defined on a domain \(\mathcal{D}\in\mathds{R}^{d}\). We assume that \(\mathcal{D}\) is a **convex**, **compact** (i.e. closed and bounded) set of \(\mathds{R}^{d}\). 
Under these terms, it follows that \(Q\) is a bounded function and its (global and local) minimum(s) and maximum(s) are attained over \(\mathcal{D}\). Our main goal is to find the extrema of \(Q\) with a stochastic iterative algorithm. Minimizing \(Q\) over \(\mathcal{D}\) is equivalent to maximizing \(-Q\) above the same region. Hence, without loss of generality, we are targeting to find the (global) minimum points of \(Q\) on \(\mathcal{D}\). Note that \(Q\) is equivalently called **objective function** throughout the article. Under the above terms, let us also define the (non-empty) set \(\mathcal{G}\) of **global minimum** points of \(Q\) over \(\mathcal{D}\) as \[\mathcal{G}:=\left\{\mathbf{x}\in\mathcal{D}:\,Q(\mathbf{x})\leq Q(\mathbf{y })\,\mbox{ holds for all }\mathbf{y}\in\mathcal{D}\right\}. \tag{1}\] Let us introduce the **mass** (or **penalty**) **function**\(g\) that is associated with the **objective function**\(Q\) in such a way that for \((\rho,\mathbf{x})\in\mathds{R}^{+}_{0}\times\mathds{R}^{d}\): \[g(\rho,\mathbf{x})=\exp\left(-\rho\,Q(\mathbf{x})\right). \tag{2}\] Note that \(0<g(\rho,\mathbf{x})\leq 1\) holds for every \(\rho\in\mathds{R}^{+}_{0}\) and \(\mathbf{x}\in\mathcal{D}\). Next, we introduce the concept of **ambient (apriori) distribution**. This is simply defined to be any fixed \(\nu\)**absolutely continuous**, \(d\)-dimensional probability distribution such that: * the support of \(\nu\) is a convex, compact subset of \(\mathds{R}^{d}\); * \(\nu\) is'standard', that is its expectation is \(\mathbf{0}\) and its covariance matrix is the identity matrix \(\mathbf{I}\). E.g. \(\nu\) can be chosen to be the standard \(d\)-dimensional normal distribution (support of which is \(\mathds{R}^{d}\)) or it can be the uniform distribution over the \(d\)-dimensional unit 'ball' around the origin (that is its support is \(\left\{\mathbf{x}:\left\|\mathbf{x}\right\|_{2}\leq 1\right\}\)). Finally, let \(\alpha,\gamma:\mathbb{Z}_{0}^{+}\times\mathbb{Z}_{0}^{+}\to\mathbb{Z}_{0}^{+}\) be two **non-decreasing, unbounded, double sequences** of natural numbers, that is for all \(m\leq n\), \(M\leq N\in\mathbb{Z}_{0}^{+}\): we have \(\alpha(m,M)\leq\alpha(n,N)\), \(\gamma(m,M)\leq\gamma(n,N)\), and \(\sup_{l\in\mathbb{Z}^{+}}\alpha(l,N)=\sup_{L\in\mathbb{Z}^{+}}\alpha(n,L)= \sup_{l\in\mathbb{Z}^{+}}\gamma(l,N)=\sup_{L\in\mathbb{Z}^{+}}\gamma(n,L)=+\infty\) for all fixed \(n,N\in\mathbb{Z}_{0}^{+}\). We also fix \(\alpha(0,0)\)=0. Now, we are ready to present the pseudo-code of our stochastic algorithm. **BEGIN** 1. let \(\beta:\mathbb{Z}_{0}^{+}\to\mathbb{Z}_{0}^{+}\) be a fixed, non-decreasing function of natural numbers such that \(\beta(0)=0\) and let \(\delta>0\) be a real number; with a slight abuse of notation, we use the shorthand notations below \(\alpha_{n}:=\alpha(n,\beta(n))\) and \(\gamma_{n}:=\gamma(n,\beta(n))\) for \(n\in\mathbb{Z}_{0}^{+}\). 2. let \(\mathbf{u}_{0}\in\mathcal{D}\), pick a positive definite matrix \(\mathbf{U}_{0}\in\mathds{R}^{d\times d}\), and fix an ambient (apriori) distribution \(\nu\); 3. \(n:=1\); **Repeat** i. Take a completely independent sample \((\xi_{i})_{i=\alpha_{n-1}+1}^{\alpha_{n}}\) from the distribution \(\nu\) and let: \[\mathbf{p}_{i}:=\mathbf{u}_{n-1}+\mathbf{U}_{n-1}\xi_{i},\] where \(i=\alpha_{n-1}+1,\alpha_{n-1}+2,\ldots,\alpha_{n}\). (Note that \(\mathbf{p}_{i}\)'s are already calculated in the previous step(s) for \(i=1,2,\ldots,\alpha_{n-1}\).) 
Finally, compute the weights: \((\mathbf{p}_{i})_{i=\alpha_{n-1}}^{\alpha_{n}}\). 2. Compute the **empirical expected value** and **empirical covariance matrix** with those weights determined above, i.e.: \[\mathbf{u}_{n} :=\sum_{i=1}^{\alpha_{n}}\frac{g(\gamma_{n},\mathbf{p}_{i})}{\sum_{ j=1}^{\alpha_{n}}g(\gamma_{n},\mathbf{p}_{j})}\,\mathbf{p}_{i},\] (3) \[\mathbf{U}_{n} :=\left(\sum_{i=1}^{\alpha_{n}}\frac{g(\gamma_{n},\mathbf{p}_{i})} {\sum_{j=1}^{\alpha_{n}}g(\gamma_{n},\mathbf{p}_{j})}\,(\mathbf{p}_{i}- \mathbf{u}_{n})(\mathbf{p}_{i}-\mathbf{u}_{n})^{\top}\right)^{1/2},\] (4) where recall the penalty function defined in Eq. 2. 3. \(n:=n+1\); \(\mathbf{UNTIL}\left\{\left\|\mathbf{u}_{n}-\mathbf{u}_{n-1}\right\|_{2}< \delta\right\}\). **END** **Remark**.: The \(\alpha_{n}\) and \(\gamma_{n}\) parameters are also called the 'learning' parameters. The more we tweak them, i.e. increase, the closer we might get to the optimum (minimum) point. Note that '\(\mathbf{U}_{n}^{2}\)' is a symmetric, positive semi-definite matrix, hence its square root is well-defined and is real as it can be decomposed as \(\mathbf{O}\mathbf{\Lambda}\mathbf{O}^{\top}\) for some orthogonal matrix \(\mathbf{O}\in\mathds{R}^{d\times d}\), where \(\mathbf{\Lambda}\) is the diagonal matrix of the eigenvalues of \(\mathbf{U}_{n}^{2}\), hence \(\mathbf{U}_{n}=\mathbf{O}\mathbf{\Lambda}^{1/2}\mathbf{O}^{\top}\). Mind that \(\mathbf{u}_{n}\in\mathcal{D}\) holds for all \(n\in\mathbb{Z}^{+}\) due to the convexity of the set \(\mathcal{D}\), hence, \(\{\mathbf{u}_{n}\}_{n\in\mathbb{Z}^{+}}\) is a bounded sequence in \(\mathds{R}^{d}\). However, \(\{\mathbf{U}_{n}\}_{n\in\mathbb{Z}^{+}}\) is only bounded when the support of \(\nu\) is finite. Now, the immediate question of whether the above method converges at all, more precisely \(\mathbf{u}_{n}\) converges to an extremum point, is going to be answered affirmatively under some further conditions. ### Results The main theorem of this section concerns the proof of convergence for the previously introduced algorithm under some further conditions on its parameters. **Theorem 1**.: _Let \(\mathbf{u}_{0}\in\mathcal{D}\) and \(\mathbf{U}_{0}\in\mathds{R}^{d\times d}\) be a column-vector and an arbitrarily chosen positive definite matrix, respectively. Assume that the ambient (apriori) distribution \(\nu\) is a.s. **bounded** in such a way that for some \(0<T<\frac{1}{4}\):_ \[\nu\left(\mathcal{S}\right)=1,\ \text{ where }\ \mathcal{S}=\{\mathbf{x}:\left\|\mathbf{x}\right\|_{2}^{2}\leq T\}. \tag{5}\] _Let \(\mathcal{A}:=\{\mathbf{x}\in\mathcal{S}\,:\,\mathbf{u}_{0}+\mathbf{U}_{0} \mathbf{x}\in\mathcal{D}\}\) and assume that \(Q\) has a single minimum value over \(\mathbf{u}_{0}+\mathbf{U}_{0}\mathcal{A}\), i.e.:_ \[\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{A}}Q(\mathbf{u}_{0}+\mathbf{U }_{0}\mathbf{x})=\{\overline{\mathbf{x}}\}.\] _In other words, \(\mathcal{G}=\{\mathbf{u}_{0}+\mathbf{U}_{0}\overline{\mathbf{x}}\}\), where recall 1. Furthermore, with a slight abuse of notation, let \(\alpha_{n}=N\cdot\tilde{\alpha}_{n}\) and \(\gamma_{n}=\gamma\cdot\tilde{\gamma}_{n}\), where \(N\in\mathbb{Z}^{+}\), \(\gamma\in\mathds{R}^{+}\) are free parameters, and \(\tilde{\alpha},\tilde{\gamma}\) alone satisfy the requirements of the algorithm listed above. 
Then, we have the following convergence results_ \[\lim_{(n,N,\gamma)\rightarrow+\infty}\mathbf{u}_{n}=\mathbf{u}_{0 }+\mathbf{U}_{0}\overline{\mathbf{x}}, \tag{6}\] \[\lim_{(n,N,\gamma)\rightarrow+\infty}\mathbf{U}_{n}=\mathbf{0} \mathbf{0}^{\top} \tag{7}\] _to hold such that for Eq. 6 we impose the condition \(\lim_{(N,\gamma)\rightarrow+\infty}N\cdot\exp\left(-\gamma\right)=0\), where the above convergence is meant almost surely (in short: a.s.)._ * (Theorem 1). First, we prove the second convergence 7 (for empirical variance) and then focus on the first one. First, let \(c=\sqrt{4T}\) and note that by our assumption 5: \(0<c<1\) holds. It is straightforward to see from Eq. 4 that \[\mathbf{U}_{n}^{2}=\mathbf{U}_{n-1}\left(\sum_{i=1}^{\alpha_{n}}w_{i,\alpha_{ n},\gamma_{n}}\mathbf{w}_{i}\mathbf{w}_{i}^{\top}\right)\mathbf{U}_{n-1},\] where we used the below short-hand notations: \[\mathbf{w}_{i} =\xi_{i}-w_{i,\alpha_{n},\gamma_{n}}\xi_{i},\] \[w_{i,\alpha_{n},\gamma_{n}} =\frac{g(\gamma_{n},\mathbf{p}_{i})}{\sum_{j=1}^{\alpha_{n}}g( \gamma_{n},\mathbf{p}_{j})}\] Recall that the random variables \(\xi_{i}\)'s are directly sampled from \(\nu\), hence \(\xi_{i}\in\mathcal{S}\) holds for every \(1\leq i\leq\alpha_{n}\). Then, by 5, the following chain of inequality can easily be derived: \[\left\|\mathbf{w}_{i}\mathbf{w}_{i}^{\top}\right\|_{F} \leq\sup_{\begin{subarray}{c}\kappa_{i}\in[0,1],\mathbf{x}_{i}\in \mathcal{S}\\ \sum_{i=1}^{\alpha_{n}}\kappa_{i}=1,\left\|\mathbf{x}_{i}\right\|_{2}^{2}\leq T \\ (1\leq i\leq\alpha_{n})\end{subarray}}\left\|\left(\mathbf{x}_{i}-\sum_{j=1}^{ \alpha_{n}}\kappa_{i}\mathbf{x}_{i}\right)\left(\mathbf{x}_{i}^{\top}-\sum_{i =1}^{\alpha_{n}}\kappa_{i}\mathbf{x}_{i}^{\top}\right)\right\|_{F}\] \[\leq\sup_{\mathbf{x}\in\mathcal{S}}\left\|2\mathbf{x}\right\|_{2 }^{2}=4T=c^{2},\] holding for all \(1\leq i\leq\alpha_{n}\), where \(\left\|\cdot\right\|_{F}\) denoted the Frobenius norm. Now, from the above estimate it is clear that \[\left\|\mathbf{U}_{n}\right\|_{F}^{2} =\operatorname{Tr}\mathbf{U}_{n}^{2}\] \[=\operatorname{Tr}\mathbf{U}_{n-1}\left(\sum_{i=1}^{\alpha_{n}}w _{i,\alpha_{n},\gamma_{n}}\mathbf{w}_{i}\mathbf{w}_{i}^{\top}\right)\mathbf{ U}_{n-1}\] \[=\sum_{i=1}^{\alpha_{n}}w_{i,\alpha_{n},\gamma_{n}}\operatorname {Tr}\left(\mathbf{w}_{i}\mathbf{w}_{i}^{\top}\right)\mathbf{U}_{n-1}^{2}\] \[\leq\sum_{i=1}^{\alpha_{n}}w_{i,\alpha_{n},\gamma_{n}}\left| \left\langle\mathbf{w}_{i}\mathbf{w}_{i}^{\top},\mathbf{U}_{n-1}^{2}\right \rangle_{F}\right|\] \[=\left\|\mathbf{U}_{n-1}\right\|_{F}^{2}\sum_{i=1}^{\alpha_{n}}w _{i,\alpha_{n},\gamma_{n}}\left\|\mathbf{w}_{i}\right\|_{2}^{2}\] \[\leq c^{2}\left\|\mathbf{U}_{n-1}\right\|_{F}^{2}\sum_{i=1}^{ \alpha_{n}}w_{i,\alpha_{n},\gamma_{n}}(1-w_{i,\alpha_{n},\gamma_{n}})\] \[=c^{2}\left\|\mathbf{U}_{n-1}\right\|_{F}^{2}\left(1-\sum_{i=1}^{ \alpha_{n}}w_{i,\alpha_{n},\gamma_{n}}^{2}\right) \tag{8}\] holds for every \(n\in\mathbb{Z}^{+}\), where we used the cyclicity and linearity of the trace, the Cauchy-Bunyakovsky-Schwarz inequality, and the sub-multiplicative property of the Frobenius norm along the way. 
Now, using the inequality just obtained in 8, an induction on \(n\) leads to the following estimates for \(\mathbf{U}_{n}\): \[\left\|\mathbf{U}_{n}\right\|_{2} \leq\left\|\mathbf{U}_{n}\right\|_{F}\] \[\leq c^{n-1}\left\|\mathbf{U}_{1}\right\|_{F}\] \[\leq c^{n}\left\|\mathbf{U}_{0}\right\|_{F}\sqrt{1-\sum_{i=1}^{ \alpha_{n}}w_{i,\alpha_{n},\gamma_{n}}^{2}}\] \[\leq c^{n}\left\|\mathbf{U}_{0}\right\|_{F} \tag{9}\] that hold for every \(n\in\mathbb{Z}^{+}\), taking advantage of the subordinate property of the Frobenius norm to the 2-norm and the fact that \(\nu\) is being finitely supported here. This then implies that \(\mathbf{U}_{n}\to\mathbf{00}^{\top}\) as \((n,N,\gamma)\to+\infty\) a.s. Since each norm is equivalent in finite dimension, \(\mathbf{U}_{n}\) converges to \(\mathbf{00}^{\top}\) a.s. in any (matrix) norm completing the proof of Eq. 7. Now, turning to the proof of Eq. 6, observe that using the above notation we have \[\mathbf{u}_{n}=\mathbf{u}_{n-1}+\mathbf{U}_{n-1}\sum_{i=1}^{ \alpha_{n}}w_{i,\alpha_{n},\gamma_{n}}\xi_{i} \tag{10}\] to hold for every \(n\in\mathbb{Z}^{+}\). As a consequence \[\left\|\mathbf{u}_{n}-\mathbf{u}_{n-1}\right\|_{2} \leq\left\|\mathbf{U}_{n-1}\right\|_{2}\left\|\sum_{i=1}^{\alpha_ {n}}w_{i,\alpha_{n},\gamma_{n}}\xi_{i}\right\|_{2}\] \[\leq\left\|\mathbf{U}_{n-1}\right\|_{2}\sup_{\mathbf{x}\in \mathcal{S}}\left\|\mathbf{x}\right\|_{2}\leq c\left\|\mathbf{U}_{n-1}\right\| _{F}\leq c^{n}\left\|\mathbf{U}_{0}\right\|_{F},\] using the previously deduced inequality for \(\mathbf{U}_{n}\) (Ineq. 9), and the relation between the Frobenius- and the 2-norm of a matrix. With the above inequality under the belt, it is not hard to see that \(\mathbf{u}_{n}\) is indeed a Cauchy sequence. Let \(n,m\in\mathbb{Z}^{+}\) such that \(n>m\), then it holds that \[\left\|\mathbf{u}_{n}-\mathbf{u}_{m}\right\|_{2} \leq\sum_{i=1}^{n-m}\left\|\mathbf{u}_{m+i}-\mathbf{u}_{m+i-1} \right\|_{2}\] \[\leq\left\|\mathbf{U}_{0}\right\|_{F}\sum_{i=1}^{n-m}c^{m+i}\leq \left\|\mathbf{U}_{0}\right\|_{F}\frac{c^{m}}{1-c}, \tag{11}\] which goes to zero as \((n,m)\rightarrow+\infty\). It then follows that there exists a \(\mathbf{u}_{\infty}\) random variable such that \(\mathbf{u}_{n}\rightarrow\mathbf{u}_{\infty}\) as \(n\rightarrow+\infty\) a.s., and \[\left\|\mathbf{u}_{n}-\mathbf{u}_{\infty}\right\|_{2}\leq C\cdot c^{n}, \tag{12}\] where \(C=\frac{1}{1-c}\left\|\mathbf{U}_{0}\right\|_{F}\) is an absolute constant depending only on the choice of \(\mathbf{U}_{0}\), that is no matter how we have chosen \((N,\gamma)\). Note that Eq. 10 can be further written as: \[\mathbf{u}_{n}=\mathbf{u}_{0}+\mathbf{U}_{0}\sum_{i=1}^{\alpha_{1}}w_{i, \alpha_{1},\gamma_{1}}\xi_{i}+\sum_{j=2}^{n}\mathbf{U}_{j-1}\sum_{i=1}^{\alpha _{j}}w_{i,\alpha_{j},\gamma_{j}}\xi_{i}\quad(n>0), \tag{13}\] and from this it is easy to see that for each fixed \(n\in\mathbb{Z}^{+}\), the sequence \((\mathbf{u}_{n})_{(N,\gamma)}\) as \((N,\gamma)\rightarrow+\infty\) converges a.s. using the strong law of large numbers. Hence, the limit \(\lim_{(n,N,\gamma)\rightarrow+\infty}\mathbf{u}_{n}\) exists a.s., taking advantage of the uniform estimate pf Ineq. 12. Finally, using the fact that \(\sum_{i=1}^{\alpha_{1}}w_{i,\alpha_{1},\gamma_{1}}^{2}\) goes to \(1\) as \((N,\gamma)\rightarrow+\infty\) such that we have \(N\cdot\exp{(-\gamma)}\to 0\) in place by our assumption, and by recalling Ineq. 9, we can conclude that the double sum (third term) of Eq. 13 converges to zero, whereas the rest approaches the global minimum. 
\(\blacksquare\) ## 3 Benchmarking The new method was tested on 18 artificial optimization test functions and the results were compared with results obtained from a series of widely used numerical optimization methods. ### Setup for benchmarking The newly proposed stochastic global optimization method, to be referred to as the MAC method was implemented in the Matlab programming language. MAC refers to empirical Mean And empirical Covariance matrix as it was based on a series of calculations of empirical mean and empirical covariance matrix. Several nonlinear uni-modal and multi-modal benchmark functions were selected to compare the performance of the method with other well-known numerical optimization methods. The functions given by an explicit formula that is used for this benchmarking are outlined in Table 1, below. Recently, Layeb (2022) has found a few functions that are even harder to minimize that are included in Table 1 (f9 to f18). Details of the test functions (Table 1) are found in the following references Jamil and Yang (2013); Mishra (2006); Rahnamayan et al. (2007); Shang and Qiu (2006) and Layeb (2022). The MAC method is an iterative technique that starts from some initial values of its hyper-parameters. These initial values of the hyper-parameters depend on the user and problem type. There are two important parameters that control the whole method (hence it has an effect on its convergence, as well), they are: 1. the sample size \(\alpha_{n}\) that is generated at every step \(n\), and 2. another 'learning' parameter denoted by \(\gamma_{n}\). \begin{table} \begin{tabular}{l l c l} ID & Function Name & Dim & Domain \\ \hline \hline f1 & Ackley & 10 & \(\left[-32.768\ 32.768\right]^{5}\) \\ f2 & Cross-in-tray & 2 & \(\left[-10\ 10\right]^{2}\) \\ f3 & Rastrigin & 10 & \(\left[-5.12\ 5.12\right]^{5}\) \\ f4 & Rosenbrock & 10 & \(\left[-5\ 10\right]^{5}\) \\ f5 & RosenbrockSmall & 10 & \(\left[-2.048\ 2.048\right]^{5}\) \\ f6 & Rosenbrock scaled & 4 & \(\left[0\ 1\right]^{4}\) \\ f7 & Sphere & 20 & \(\left[-5.12\ 5.12\right]^{5}\) \\ f8 & Zakharov & 30 & \(\left[-5\ 10\right]^{5}\) \\ f9 & Layeb01 & 5 & \(\left[-100\ 100\right]^{5}\) \\ f10 & Layeb02 & 5 & \(\left[-10\ 10\right]^{5}\) \\ f11 & Layeb03 & 5 & \(\left[-10\ 10\right]^{5}\) \\ f12 & Layeb04 & 5 & \(\left[-10\ 10\right]^{5}\) \\ f13 & Layeb10 & 5 & \(\left[-100\ 100\right]^{5}\) \\ f14 & Layeb11 & 5 & \(\left[-10\ 10\right]^{5}\) \\ f15 & Layeb12 & 5 & \(\left[-100\ 100\right]^{5}\) \\ f16 & Layeb17 & 5 & \(\left[-100\ 100\right]^{5}\) \\ f17 & Layeb19 & 5 & \(\left[-5\ 5\right]^{5}\) \\ f18 & Layeb20 & 5 & \(\left[-5\ 5\right]^{5}\) \\ \end{tabular} \end{table} Table 1: Function ID, dimension, and domain of test functions As discussed in the pseudo-code, the MAC method has several tuning parameters. In this implementation and testing of the MAC method, the following initial values of these tuning parameters and their updating rules were used. \[N=4,\quad n=0\] \[\gamma_{0}:=0.001,\quad\gamma_{n}:=2.8\times\gamma_{n-1}\] \[\alpha_{0}:=1,\quad\alpha_{n}:=\alpha_{n-1}+n\times N.\] ### Benchmark results The final calculated results of the different numerical methods and the MAC method for the benchmark functions are given as tables in the following pages. 
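Although the implementation used for benchmarking is written in Matlab, the update rules of Section 2.1 together with the parameter schedule above translate directly into a few lines of NumPy. The sketch below is our own illustration, not the reference code: the standard-normal ambient distribution, the eigendecomposition-based matrix square root and the rescaling of the weights by the sample minimum are implementation choices, and the function names are ours.

```python
# Minimal NumPy sketch of the MAC iteration (a sketch under stated assumptions).
import numpy as np

def mac_minimize(Q, u0, U0, delta=1e-6, N=4, gamma0=1e-3, max_iter=60, rng=None):
    rng = np.random.default_rng(rng)
    u, U = np.asarray(u0, float), np.asarray(U0, float)
    d = u.size
    pts = [u + U @ rng.standard_normal(d)]        # alpha_0 = 1 initial point
    q = [Q(pts[0])]                               # each point is evaluated only once
    gamma = gamma0
    for n in range(1, max_iter + 1):
        gamma *= 2.8                              # gamma_n = 2.8 * gamma_{n-1}
        for _ in range(n * N):                    # alpha_n = alpha_{n-1} + n * N
            p = u + U @ rng.standard_normal(d)    # p_i = u_{n-1} + U_{n-1} xi_i
            pts.append(p)
            q.append(Q(p))
        P, qa = np.asarray(pts), np.asarray(q)
        w = np.exp(-gamma * (qa - qa.min()))      # mass g(gamma_n, p_i); shift keeps exp finite
        w /= w.sum()
        u_new = w @ P                             # empirical expected value, Eq. (3)
        diff = P - u_new
        C = diff.T @ (diff * w[:, None])          # weighted empirical covariance
        vals, vecs = np.linalg.eigh(C)
        U = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T   # U_n = C^{1/2}, Eq. (4)
        if np.linalg.norm(u_new - u) < delta:
            return u_new
        u = u_new
    return u

# example: the 5-dimensional Sphere function
x_best = mac_minimize(lambda x: float(np.sum(x**2)), u0=2.0 * np.ones(5), U0=np.eye(5))
```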
The interior point (Matlab name fmincon) and the simplex methods were used from the MATLAB Optimization Toolbox, while the pattern search (PS), simulated annealing (SA), particle swarm optimization (PSO), and genetic algorithm (GA) methods were used from the MATLAB Global Optimization Toolbox. All the tested methods were used with their default method parameters. Table 2 lists the minimum values obtained, Table 3 the number of function evaluations required to reach them, and Table 4 the elapsed time (in seconds). The calculations were performed on a Windows 10 PC, Intel(R) Core(TM) i5-8250U CPU @ 1.80 GHz, RAM 8.00 GB. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Function & fmincon & simplex & PS & SA & PSO & GA & MAC \\ \hline Ackley & 163 & 1877 & 3096 & 5621 & 11100 & 5787 & 1723 \\ \hline Cross-in-tray & 24 & 191 & 234 & 1033 & 850 & 429 & 4810 \\ \hline Rastrigin & fail & 840 & 3406 & 4681 & 8700 & 11803 & 4615 \\ \hline Rosenbrock & 702 & 5652 & 10000 & 10001 & 230005 & 250563 & 4825 \\ \hline RosenbrockSmall & 755 & 4851 & 20000 & 8311 & 230050 & 470053 & 773 \\ \hline Rosenbrock Scaled & 400 & 567 & 1906 & 4511 & 2450 & 2920 & 2796 \\ \hline Sphere & 339 & 16323 & 11701 & 13500 & 10550 & 16362 & 5431 \\ \hline Zakharov & 2000 & fail & 20000 & 13441 & fail & 98988 & 2043 \\ \hline Layeb01 & 541 & fail & 321 & fail & fail & 32436 & 4831 \\ \hline Layeb02 & 78 & 60 & 76 & 8906 & 1300 & 9641 & 4810 \\ \hline Layeb03 & 252 & 504 & 722 & 3676 & fail & 9547 & 3233 \\ \hline Layeb04 & 644 & 555 & 704 & 3321 & 5300 & 6445 & 5431 \\ \hline Layeb10 & fail & 22856 & 191 & 2501 & fail & 7949 & 4810 \\ \hline Layeb11 & fail & 1880 & 153 & 5871 & 9050 & 5364 & 3233 \\ \hline Layeb12 & 200 & 979 & 266 & 3616 & 10600 & 9359 & 3713 \\ \hline Layeb17 & 480 & 2212 & 389 & 10306 & 3250 & 5834 & 4828 \\ \hline Layeb19 & 5005 & 1079 & 121 & fail & 2800 & 3578 & 2400 \\ \hline Layeb20 & 132 & 898 & 121 & 2626 & 3650 & 3390 & 4810 \\ \hline \end{tabular} \end{table} Table 3: The number of function evaluations until convergence. From the above tabular results, we can say that the MAC method in general performs well on most of the benchmark functions, except on some of the most complex test functions. On the most complex test functions (f9, f11 and f16), the MAC method failed to find an optimum value of the objective function, as did all the other numerical methods. According to the summaries of the above tables (final objective function value), the MAC method outperformed almost all the other numerical optimization methods on the test functions Rastrigin (f3), Zakharov (f8), Layeb02 (f10), Layeb10 (f13) and Layeb11 (f14). In addition, the MAC method outperformed SA on all the test functions except Layeb01 (f9). In terms of the number of function evaluations, the MAC method in general required more function evaluations than the local optimization methods and fewer than the other global optimization methods. The running time does not differ significantly between the methods and test functions; however, on most of the test functions the MAC method required less running time (sec) than the other global optimization methods. 
Figure 1: The objective function value corresponding to the number of function evaluation for different benchmark functions using MAC and other numerical optimization methods Summary and outlook A "good" algorithm (or heuristics) must be very "efficient" in such a way as to be applicable to real-life problems, such as e.g. model fitting, maximum likelihood estimate, cost minimization, and performance maximization. What do we mean by "efficient"? On the one hand, we have to take into consideration that \(Q\) is probably expensive to evaluate, and we may or may not be able to efficiently and/or accurately compute its gradient or the Hessian matrix. So our task is rather to find a minimum point of \(Q\) with as few evaluations as possible. On the other hand, we must recognize what we are looking for, we must have well-defined stopping criteria, convergence order, and a clear image of the code. New and new optimization methods are being published even nowadays. One reason for it is that numerical optimization methods are widely used for solving a series of optimization problems having arisen in science and technology. The development of the MAC stochastic optimization method presented in this paper was motivated by the task of the estimation of reaction rate parameters of large chemical kinetics models. Such models consist of large systems of ordinary or partial differential equations. The specialty of these optimization problems is that the prior domain of uncertainty of the fitted parameters is large and within this domain, the objective function contains a large number of local minima. Also, the evaluation of the objective function is very computationally expensive, since it includes the solution of many large systems of differential equations. Such optimization tasks arise in many fields including chemistry science and engineering, combustion science, and systems biology (e.g. development of metabolic models). Our immediate plan is to test the MAC method on a real-life chemical kinetics problem related to combustion models. ## 5 Supplementary material We are attaching as Supplementary material 1. The Matlab code of the MAC method 2. The benchmark functions.
2302.09280
Detecting microbiology in the upper atmosphere: relative-velocity filtered sampling
The purpose of this paper is to re-open from a practical perspective the question of the extent in altitude of the Earth's biosphere. We make a number of different suggestions for how searches for biological material could be conducted in the mesosphere and lower thermosphere, colloquially referred to as the ignoreosphere due to its lack of investigation in the meteorological community compared to other regions. Relatively recent technological advances such as CubeSats in Very Low Earth Orbit or more standard approaches such as the rocket borne MAGIC meteoric smoke particle sampler, are shown as potentially viable for sampling biological material in the ignoreosphere. The issue of contamination is discussed and a potential solution to the problem is proposed by the means of a new detector design which filters for particles based on their size and relative-velocity to the detector.
Arjun Berera, Daniel J. Brener, Charles S. Cockell
2023-02-18T10:25:08Z
http://arxiv.org/abs/2302.09280v1
# Detecting microbiology in the upper atmosphere: relative-velocity filtered sampling ###### Abstract The purpose of this paper is to re-open from a practical perspective the question of the extent in altitude of the Earth's biosphere. We make a number of different suggestions for how searches for biological material could be conducted in the mesosphere and lower thermosphere, colloquially referred to as the ignore-osphere due to its lack of investigation in the meteorological community compared to other regions. Relatively recent technological advances such as CubeSats in Very Low Earth Orbit or more standard approaches such as the rocket borne MAGIC meteoric smoke particle sampler, are shown as potentially viable for sampling biological material in the ignore-osphere. The issue of contamination is discussed and a potential solution to the problem is proposed by the means of a new detector design which filters for particles based on their size and relative-velocity to the detector. In Press _Astrobiology_ 2023. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to the Author Accepted Manuscript. ## I Introduction The physical pervasiveness of life forms and biological structures around the Earth has been an open and much discussed question for centuries. The idea that the Earth might be a source of life material in the Solar System, and beyond, has both philosophical and practical implications. In terms of pure ecology, the maximum altitude at which life might be found is an important question in its own right as it defines the Earth system's physical extent as a habitat. An example of a practical application is in exploring the upper atmosphere as a possible transport mechanism for pathogenic microorganisms around the Earth [1][2]. Furthermore, it has been suggested that if biological particles can be found in the thermosphere then hypervelocity space dust has sufficient momentum to facilitate the planetary escape of such particles, which leads to astrobiology implications [3]. There has been no concerted research effort placed on the mesosphere and thermosphere by the astrobiology community, despite interest [4][5][6]. The concept of the biosphere was first introduced by the geologist Eduard Suess in 1875 as the surface on the Earth where life can be found [7]. This was later extended by the Russian polymath Vladimir I. Vernadsky to be any complete system which encapsulates life [8]. As the word implies, a biosphere has boundaries beyond which life cannot be found. Therefore it is at the boundaries of the biosphere that one finds life at the extremes. Research has established the existence of life at extreme temperatures within volcanoes and in the arctic, at extreme pressures in the deep ocean, and in high radiation environments [9][10][11]. Certain life forms have even been shown to survive in the space environment [12][13]. The extent of the biosphere in the upper atmosphere, i.e. the existence of life at extreme altitudes, remains comparatively little explored. In certain meteorological circles, the mesosphere and lower thermosphere (MLT) is colloquially referred to as the "ignore-osphere" because its unique position makes it inaccessible via balloons and conventional aircraft due to the low air density. It is also inaccessible for satellites as there is still sufficient air for drag to be non-negligible [14]. Thus, until recently, the only viable sampling option has been to use sounding rockets.
In the last decade new possibilities for sampling have arrived, such as small inexpensive CubeSats and low Earth-orbit (LEO) commercial space craft [15][16]. The highest altitudes where life seems to have been plausibly established are the top of the troposphere and the lower stratosphere [5][17][6]. However, studies deeper into the stratosphere remain controversial due to questions of contamination. In a recent review SantlTemkiv et al. 2022 emphasise that caution is required when interpreting findings in studies where decontamination measures are not fully reported [6]. Earlier studies do not fully report decontamination measures or evaluate the issue compared to modern standards. However, there is a consistent literature dating to the present which reports micrometre sized particles in the stratosphere e.g. [18][19][20][21][22][6]. Additional studies found that these particles include bacteria [23][20][24][6]. The most recent observational campaign of the stratosphere found two bacteria _Bacillus simplex, Staphylococcus passteuri_ and a fungus, _Engyodonitium album_ at 41 km [24]. The purpose of this paper is to entirely re-open the question of what the upper limit of the Earth's biosphere is with practical calculations and suggestions for field campaigns. Our paper builds on the pre-existing observations of the lower atmosphere and on recent work which has identified strong vertical winds, amongst other mechanisms such as volcanic eruptions, as capable of ejecting nanometre sized particles into the MLT [25]. The highest altitude mission that sought to determine the extent of the biosphere was conducted in 1974 by Imshenetsky et al. of the Institute of Microbiology, USSR Academy of Sciences [26]. They used sounding rockets fitted to capture microbiological samples up to 77 km in the mesosphere. They reported finding fungal spores. These findings are the only published study of these altitudes, and therefore there remain concerns about contamination. We propose a new approach to address these questions. Only through multiple and independently run studies will it be possible to settle concerns around contamination, as was done for determining the existence of life in the deep ocean [27][28]. Samples from the surface of the International Space Station (ISS) have been found to have DNA from several kinds of bacteria which were genetically similar to those found in the Barents and Kara seas' coastal zones [29]. It was proposed that the wild land and marine bacteria DNA could transfer from the lower atmosphere into the ionosphere-thermosphere using the ascending branch of the global electric circuit. However, a more likely explanation would be contamination due to spacecraft passing through the lower atmosphere. Atmospheric transmission of microbes has also been postulated as one of the mechanisms behind Antarctic microbial diversity changes [30]. ## II MLT Sampling Methodologies ### Standard approaches Since the late 1940s, there have been rocket sampling missions to high altitudes [31]. These initially involved the use of evacuated steel bottles with an altitude triggered opening and closing mechanism. Vacuum tubes are a perfectly valid method for directly sampling the upper atmosphere but they are limited in terms of temporal and volumetric capacity. With the advent of the Cold War, it became imperative to sample the upper atmosphere for radiative fall-out during nuclear weapons tests. 
To capture enough of these particles, larger volumes needed to be sampled, leading to the development of the first cryogenic samplers such as the ENCAR-1 [32][31]. Prior contamination of such a sampling device is readily detectable via a Geiger-counter, and there are no large natural sources of radioactive material in the atmosphere, hence these earlier studies had little to contend with contamination-wise [31]. The only direct searches for biological material in the MLT have been conducted by Imshenetsky et al. using a specially designed sampler located in the nose cone of a sounding rocket [26]. They assumed that the rocket's outer casing would experience temperatures \(>\)1,000 Kelvin, acting as an ascent sterilisation. While this is correct in principle, aerodynamically modelling the heating profile for the rocket is the only way to be certain. The assumption by [26] that the heating, and therefore the sterilisation, was uniform is a flaw in their contamination strategy. Since the missions by [26], there have been no further samplings of the MLT conducted looking directly for biological particles. This led the aerobiology community to conclude that 77 km is the maximum empirically defined extent of the biosphere in altitude [4][33]. The review of Smith 2013 criticises this conclusion, as Imshenetsky 1978 neglected to give details on contamination precautions other than the aforementioned drag heating of the rocket exterior or the sealing of the samples in-flight [4]. It is crucial that future field campaigns report clearly and transparently all procedures used, even those considered routine. The MLT has been studied by those trying to understand meteoric smoke and the formation of noctilucent clouds. The most relevant to our work is the MAGIC meteoric smoke particle sampler [34]. Meteoric smoke particles are neutrally charged and nanometre in size, making them challenging to detect. Berera et al. 2022 hypothesised that large MLT vertical winds could potentially be strong enough to push particles of this size to around 120 km in rare events [25][3]. The main difficulty with sampling particles of this size is that they tend to follow the flow around the payload, not into the detector [35], due to the low air density complicating the aerodynamics. This problem of the air density is a question of whether the flow is in the continuum limit. Physically, this corresponds to whether the particles are dominated by individual molecular collisions, such as in Brownian motion, or by the larger scale fluid dynamics. This is determined by the number density of the air molecules with respect to the characteristic length scale of the object in the flow. MLT flow conditions range from the transition region between continuum and non-continuum to the free molecular flow regime (non-continuum). The rarefaction of a gas interacting with an object of characteristic size \(L\) can be described by the Knudsen number \(\mathrm{Kn}=\frac{\lambda}{L}\), where \(\lambda\) is the mean free path of the gas. For continuum flow conditions one must have \(\mathrm{Kn}\ll 0.1\), and for free molecular flow \(\mathrm{Kn}>10\). In the MLT, Kn ranges from 0.1 to 10,000; hence any direct flow-measuring instrument must be capable of dealing with both transition and free molecular flow, unless separate missions are conducted per regime. Flow conditions and momentum transfer also change depending on the size of the particles considered.
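To make the flow-regime classification concrete, the short sketch below evaluates \(\mathrm{Kn}=\lambda/L\) for a 10 cm sampling probe at a few MLT altitudes, using the thresholds quoted above. The mean-free-path values are rough order-of-magnitude assumptions chosen for illustration, not figures taken from the paper.

```python
def knudsen(mean_free_path_m, length_m):
    """Knudsen number Kn = lambda / L for an object of characteristic size L."""
    return mean_free_path_m / length_m

def regime(kn):
    # Thresholds as quoted in the text: continuum for Kn << 0.1, free molecular for Kn > 10.
    if kn < 0.1:
        return "continuum"
    if kn > 10:
        return "free molecular"
    return "transition"

# Rough, assumed mean free paths (m) at a few MLT altitudes -- illustrative values only.
mean_free_path = {90e3: 0.025, 100e3: 0.16, 110e3: 1.0, 120e3: 10.0}
L_detector = 0.10   # 10 cm sampling probe

for altitude_m, mfp in mean_free_path.items():
    kn = knudsen(mfp, L_detector)
    print(f"{altitude_m / 1e3:5.0f} km: Kn = {kn:8.2f} -> {regime(kn)}")
```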
In the MLT particles with \(L\sim 10\) nm interact enough with the surrounding rarefied gases that their interaction can be considered continuous [36]. Most viruses have a radii of around 20 nm, therefore if one only wanted to search for whole viruses and larger bacteria or fungi, then simple continuum modelling may be appropriate. However, accurate modelling of the effect of the gas on heating the detector would need to be carried out. ### New approach: relative-velocity filtered sampling Here we propose a new technique for reducing the risks of contamination effecting sampling at high altitudes. A detector needs to be designed that can sample high volumes over long periods of time in order to have a reasonable chance of encountering a biological particle. Detectors which impede the airflow are not appropriate, as this will increase drag, reducing the flight time and thence the total sampled volume per mission. For a detector continuously sampling a flow, the particles are moving with respect to the detector rest frame. We propose that most contaminants from craft interior and exterior will likely be approximately at rest with respect to the detector. Therefore by eliminating particles which satisfy that criteria, one reduces an aspect of the contamination problem. In terms of detector design, it is easier to apply this thinking by considering the velocities of the particles. Contaminants from the payload delivery system will have a velocity smaller than particles entering the detector as part of the flow generated by the motion of the rocket or orbiting craft. Contaminants are most likely to come from the exterior of the crafts surface and due to the low density at these altitudes, will take longer to reach velocities close to the mean flow. Therefore if the detector moves sufficiently fast, \(>\) an order of magnitude say, relative to the rarefied gas flow, then any particle that detaches itself from the exterior of the craft entering the detector will have a slower velocity. This could allow for such a detector to be placed at a position other than at the head of the rocket. However careful simulation and modelling would be required to ensure the correct relative velocity is selected for an accurate trigger action. In Figure 1 we have drawn a high-level schematic of the triggering process for taking a sample. Once the craft is at the selected sampling altitude, the sampling probe should open to allow the air to pass through it. After some short time, the triggering mechanism is activated. To detect a particle of interest to capture, a small lidar system could be used to scan the incoming flow in the tube detecting both particle size and velocity. The triggering computer will have two options. It can either allow the flow to continue through the detector or it can close a valve, forcing the flow to pass into a sampling chamber where particles can be captured. The decision tree is, 1. If \(v<u\): valve open, particle is rejected from analysis 2. If \(v\sim u\) or \(v>u\) AND size \(>10.0\) nm: valve close, particle accepted for analysis where \(u\) is the relative velocity between the analyser and the air flow, which would approximately be Mach 3, and \(v\) is the speed of the particle detected by the lidar relative to the analyser. Let us make an order of magnitude estimate for the scanning frequency and trigger speed requirements of such a detector. For simplicity, lets assume the flow into the detector travels at 1,000 m/s (approximately Mach 3, a typical sounding rocket velocity). 
This means that for a particle with characteristic size of 1 nm, a detector of length 10 cm would have \(\sim\)100\(\mu\)s to detect it, which is within the operating capabilities of circuitry. Dyes which bind to specific proteins or nucleic acids (e.g. Sybr GREEN) [37], could be used to illuminate biological material. Currently these take time to bind and illuminate but in principle one could develop an instantaneously binding dye. The air could be sampled, sprayed and then accepted/rejected for analysis based on the detection of fluorescence. An alternative procedure could be to spray the samples once they have been filtered by relative-velocity, as a means of highlighting regions of interest on the film discs. A similar idea has been proposed, not for detection of life, but to eliminate contamination concerns by painting the exterior rocket surface with fluorescent beads to trace and identify contamination pathways [38][4]. There are a number of engineering challenges which would need to be dealt with. The crucial problem is that the detector must function for the transition and free molecular regimes. To do this Direct Simulation Monte Carlo (DSMC) models of the detector will be required. Figure 1: Trigger mechanism key stages. This technique was used for the MAGIC detector experiments as well as the development of other MLT sounding rocket instrumentation [36]. Secondly, the scanning device itself will be a challenge to engineer such that it can operate at high frequency and be relatively compact. There are a number of similar devices already developed for industry as detecting small particles in a flow is a standard problem (see e.g. [39]). However, this technique to mitigate contamination during sampling is only as good as the weakest point in a mission: from craft assembly through to post-sampling processing. These are specific challenges in microbiology due to the wide range of biomasses which need to be analysed. This problem was tackled in [40], where they developed a four-stage ultra-low biomass pipeline: amassment, storage, extraction and nucleic acid analysis. They use several decontamination procedures, including a negative control, where they processed a sample collected with zero airflow into their detector for 1 minute. Such a procedure should be implemented for our proposed technique. The problem of contamination in low microbial biomass microbiome studies has been thoroughly reviewed in Eisenhofer et al. 2019 and we strongly support the recommendations to minimise the influence of contaminant DNA in atmospheric sampling missions [41]. Another approach to contamination control would be to deliberately introduce artificial contaminants, such as polystyrene spheres, onto the surfaces of locations from which contamination might occur. The detection of these spheres by the capture device would allow for contamination to be quantified and identified. Similar approaches are regularly used in deep subsurface microbiological drilling e.g. [42], whereby fluorescent polystyrene spheres are added to the drilling mud to control for the internal contamination of core material. In summary, contamination can never be completely ruled out, as with other low biomass studies, but similarly to deep drilling and core recovery in low biomass environments, there are a number of ways to control for, and quantify, the potential for contamination in the instrument, which would be a critical part of instrument design and development. 
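A minimal sketch of the accept/reject rule in the relative-velocity decision tree described above (reject particles much slower than the flow as likely contaminants; capture flow-speed or faster particles larger than 10 nm). The velocity tolerance used to decide when \(v\sim u\), and the particle representation, are assumptions made for this illustration; the paper specifies only the qualitative rule.

```python
from dataclasses import dataclass

@dataclass
class DetectedParticle:
    speed_m_s: float   # speed relative to the analyser, from the lidar scan
    size_m: float      # estimated characteristic size

U_FLOW = 1000.0        # relative speed of analyser vs. air flow (~Mach 3), m/s
SIZE_CUT = 10e-9       # 10.0 nm size threshold from the decision tree
VEL_TOL = 0.2          # assumed tolerance for "v ~ u" (not specified in the paper)

# Dwell time in a 10 cm tube at 1000 m/s: 0.1 / 1000 = 1e-4 s, i.e. ~100 us to decide.

def valve_should_close(p: DetectedParticle) -> bool:
    """True -> divert the flow into the sampling chamber and capture the particle."""
    if p.speed_m_s < (1.0 - VEL_TOL) * U_FLOW:
        return False                    # likely a contaminant shed by the craft itself
    return p.size_m > SIZE_CUT          # at flow speed (or faster) and large enough

# Example: a slow 50 nm flake from the payload vs. a fast 30 nm ambient particle.
print(valve_should_close(DetectedParticle(200.0, 50e-9)))    # False
print(valve_should_close(DetectedParticle(1050.0, 30e-9)))   # True
```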
## III Mission strategies Firstly, we consider the use of sounding rockets, even simply to repeat the experiments by [26]. Using a sounding rocket is highly beneficial as it provides a high sampling volume for relatively low cost. To zeroth order, even in the free molecular regime, the sampled volume will be given by the product of the flow rate through or over the surface of the detector with the sampling time. The flow rate to zeroth order is the surface area of the detector orthogonal to the flow product with the mean speed of the flow relative to the detector. Assuming a relative velocity of 1,000 m/s and a circular detector area of 0.007 m\({}^{2}\), corresponding to a diameter of 10 cm, then one finds a flow rate of \(\sim\)7 m\({}^{3}\)/s. A typical sounding rocket flight through the MLT lasts around 80 seconds, which would result in \(\sim\)560 m\({}^{3}\) of sampled volume. To find one particle of biological material in such a volume would be a significant result in the MLT. The key factor in the total volume sampled is the time the rocket can spend in the MLT. Even just 20 minutes of horizontal flight at a fixed altitude would yield a sample volume of \(\sim\)8,400 m\({}^{3}\). There are groups working on sampling rockets capable of such flights [43]. Whilst these calculations are extremely crude, they nonetheless demonstrate real potential for defining the biosphere even for extremely low particle concentrations. One could devise a series of sounding rocket missions where the rocket cruises at a fixed altitude while sampling, at intervals of 10 km from 50 - 150 km. These should be launched months apart to avoid inter-launch contamination. Geospatial separation of the launch sites would also be ideal as [25] identified the poles as more likely to have biological material, since these areas experienced stronger vertical winds due to geomagnetic activity. Of vital importance when considering the use of rockets is the aerodynamic design of the rocket and detector itself. This is to ensure that a steady, representative flow passes through the device and that when particles are selected for capture, blow-back is not an issue. Future work should examine ways to ensure that the particles captured at high speed have their momentum sufficiently arrested so that they stick to the sampler. These are crucial problems which have been around for some time in rocket-based sampling [44][45][46][35]. Alternatively, CubeSats represent a significant advance in technology in the last decade. We propose an ambitious potential mission program that would use multiple CubeSats launched months apart to sample both the upper thermosphere down to the lower thermosphere, prior to re-entry. The commercial space industry has already developed CubeSats which can orbit for short periods in Figure 2: CubeSat mission strategy core phases. what is called the Extremely Low Earth Orbit (ELEO) region, and there have already been field campaigns using multiple of such satellites to explore this region [47]. One example of such a CubeSat is the TSAT Global-star ELaNa-5 ELEO satellite (ELEO-SAT) which has dimensions 10x10x20 cm and an orbit velocity of \(\sim\)7 km/s [48]. Using the same simple calculation method as before, this has the potential to sample \(\sim\)4,200 m\({}^{3}\) in 1 hour. Without a propulsion system, it is not possible to control the altitude of the CubeSat, and any such system would add too much additional weight which would reduce the sampling time. 
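The order-of-magnitude sampling volumes above follow from volume ≈ detector cross-section × relative flow speed × sampling time. The sketch below reproduces the sounding-rocket numbers quoted in the text; it is a zeroth-order check, not a flow simulation, and in the free molecular regime the effective intake will be lower than this estimate.

```python
def sampled_volume_m3(area_m2, relative_speed_m_s, sampling_time_s):
    """Zeroth-order sampled volume: detector cross-section x flow speed x time."""
    return area_m2 * relative_speed_m_s * sampling_time_s

AREA = 0.007   # m^2, circular detector of ~10 cm diameter (value used in the text)
U = 1000.0     # m/s, ~Mach 3 relative flow speed

print(sampled_volume_m3(AREA, U, 80.0))        # ballistic MLT pass: ~560 m^3
print(sampled_volume_m3(AREA, U, 20 * 60.0))   # 20 min of level cruise: ~8,400 m^3
```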
Hence, the CubeSat rapidly decreases in altitude and so cannot be used to sample a single altitude accurately. However, there are two key benefits to CubeSats over conventional sounding rockets: firstly, CubeSats can sample for hours and even weeks, whereas rockets are limited to periods minutes. Secondly, they sample the atmosphere globally, as they are in orbit. The ELEO-SAT is designed for atmospheric sampling between 120 and 325 km with over 600 orbits adding up to around 5 weeks flight time. To isolate altitude ranges, we suggest that a series of CubeSat's are deployed to sample different 100 km bands (e.g. 200 km to 100 km). In Figure 2 we have illustrated the mission strategy. In phase 1, the CubeSat is launched and deployed by rocket. The CubeSat could be projected out of the rocket forwards by an explosive charge so that it remains upstream of the rocket on its first orbit (\(\sim 90\) minutes). For phase 2 the CubeSat will sample a region of atmosphere consistent with it orbital resonance as CubeSats can rise and fall by as much as 50 km in a single orbit (see e.g. Figure 16 of [48]). Alternatively, a single ELEO-SAT released from 400 km, travelling on average at 7 km/s for 325 km down to 120 km over 5 weeks (840 hours) would have come into contact with approximately \(10^{5}\) m\({}^{3}\) of air. Due to the high mixing rates at these altitudes it is extremely improbable that the satellite could sample identical volumes of air. CubeSats usually burn up in the mesosphere at around 110 km, however it is necessary that the craft can be fully recovered. This could be achieved by using a commercially available system called a CubeSat Deorbit and Recovery System (DRS) [49]. The CubeSat DRS is a module which can be attached to the CubeSat containing a 1.2 m tension cone heat shield and parachute, all weighing \(<\)1.5 kg. This module is triggered to deploy the shield immediately prior to re-entry [49]. ## IV Discussion It will never be possible to entirely rule out contamination from a human launched craft. We would argue that is not an issue for defining the biosphere as the increased distribution of microbiological material in the upper atmosphere and possibly into space, is now most likely a fact due to human progress. This raises many questions: What is the extent of this? Does the microbiology survive, and does it continue to breed? The astrobiology and aerobiology communities need to consider these questions carefully moving forwards. If naturally occurring biology is to be found in the MLT it is most likely to be on the scale of viruses or smaller bacterial organelles or possibly fungal spores [25][26]. If one recovers viable biological material then one could use such samples, together with the flight data, to better quantify the proportion of particles that are viable as a function of altitude. This could provide valuable data on microbial viability, survival, and destruction in the upper atmosphere. Relative-velocity filtered sampling might also be applicable to searches for evidence of life on other planets. Conventional approaches require the landing of probes on the planet's surface, excavating the soils and performing analysis on samples [50][51]. Our proposal solves two problems: significantly reducing a major part of the contamination concerns, and removing the requirement of landing a probe in order to sample for life. 
Of course, a land based relative-velocity filter device would not eliminate the need for landing, although it could potentially reduce the need to move the probe around the planet. Using the natural winds to sample the biosphere is, in our opinion, likely to be a more efficient and lower risk strategy than is currently achieved with rovers. An added advantage is that being at ground level, if there is any biology on the planet, it is more likely to be found closer to the surface, assuming the planet has a similar atmospheric structure to Earth [4][25]. ## V Concluding remarks Defining the biosphere's extent in altitude is of interest to the microbiology and aerobiology communities, and has implications for ideas in astrobiology. Very few field campaigns have been conducted to empirically determine the biosphere, largely due the difficulty of sampling the MLT. As with previous microbiological campaigns looking for life at the extremes, the contamination problem must be foremost in any effort to sampling these remote atmospheric regions. We propose that a sampling device could detect the relative-velocity and size of incoming particulates, and thereby filtering sampled air to mitigate against self-contamination. The main underlying feature of the relative velocity filtered sampling method we propose, is that the particle of interest that is to be measured, should be at a different relative velocity to the detector, thus eliminating contamination that may persist on the detector. The proposal in our paper is to utilize this method to look for biological particles in the upper atmosphere, above 100km, which would be a novel discovery. The detector should be on a moving craft, such as as a sounding rocket, satellite, or even a drone, which intakes the surrounding air at high velocity and then detects the particles of interest in flight. We have investigated how such a device could be designed around a CubeSat craft in very low Earth orbit or in the nose cone of a rocket, to enable sampling of the MLT. The ideas involve several engineering challenges, specifically around the rapid detection of particles using a lidar that is light-weight, small and of sufficient temporal and spatial resolution. Other technologies that we are unaware of may be more appropriate or in the future be developed, but the principle of using the relative-velocity of individual particles in a flow will in general always be applicable. ## Acknowledgment The authors were supported by the United Kingdom Science and Technology Facilities Council, with Charles S. Cockell under grant ST/V000586/1.
2301.10281
Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge
Neural Architecture Search (NAS) is quickly becoming the go-to approach to optimize the structure of Deep Learning (DL) models for complex tasks such as Image Classification or Object Detection. However, many other relevant applications of DL, especially at the edge, are based on time-series processing and require models with unique features, for which NAS is less explored. This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing that has recently emerged as a promising alternative to more complex recurrent architectures. We propose the first NAS tool that explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive-field and number of features in each layer. The proposed approach searches for networks that offer good trade-offs between accuracy and number of parameters/operations, enabling an efficient deployment on embedded platforms. We test the proposed NAS on four real-world, edge-relevant tasks, involving audio and bio-signals. Results show that, starting from a single seed network, our method is capable of obtaining a rich collection of Pareto optimal architectures, among which we obtain models with the same accuracy as the seed, and 15.9-152x fewer parameters. Compared to three state-of-the-art NAS tools, ProxylessNAS, MorphNet and FBNetV2, our method explores a larger search space for TCNs (up to 10^12x) and obtains superior solutions, while requiring low GPU memory and search time. We deploy our NAS outputs on two distinct edge devices, the multicore GreenWaves Technology GAP8 IoT processor and the single-core STMicroelectronics STM32H7 microcontroller. With respect to the state-of-the-art hand-tuned models, we reduce latency and energy of up to 5.5x and 3.8x on the two targets respectively, without any accuracy loss.
Matteo Risso, Alessio Burrello, Francesco Conti, Lorenzo Lamberti, Yukai Chen, Luca Benini, Enrico Macii, Massimo Poncino, Daniele Jahier Pagliari
2023-01-24T19:47:40Z
http://arxiv.org/abs/2301.10281v1
# Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge ###### Abstract Neural Architecture Search (NAS) is quickly becoming the go-to approach to optimize the structure of Deep Learning (DL) models for complex tasks such as Image Classification or Object Detection. However, many other relevant applications of DL, especially at the edge, are based on time-series processing and require models with unique features, for which NAS is less explored. This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing that has recently emerged as a promising alternative to more complex recurrent architectures. We propose the first NAS tool that explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive-field and number of features in each layer. The proposed approach searches for networks that offer good trade-offs between accuracy and number of parameters/operations, enabling an efficient deployment on embedded platforms. Moreover, its fundamental feature is that of being _lightweight_ in terms of search complexity, making it usable even with limited hardware resources. We test the proposed NAS on four real-world, edge-relevant tasks, involving audio and bio-signals: (i) PPG-based Heart-Rate Monitoring, (ii) ECG-based Arrhythmia Detection, (iii) sEMG-based Hand-Gesture Recognition, and (iv) Keyword Spotting. Results show that, starting from a single seed network, our method is capable of obtaining a rich collection of Pareto optimal architectures, among which we obtain models with the same accuracy as the seed, and 15.9-152\(\times\) fewer parameters. Moreover, the NAS finds solutions that Pareto-dominate state-of-the-art hand-tuned models for 3 out of the 4 benchmarks, and are Pareto-optimal on the fourth (sEMG). Compared to three state-of-the-art NAS tools, ProxylessNAS, MorphNet and FBNetV2, our method explores a larger search space for TCNs (up to \(10^{12}\times\)) and obtains superior solutions, while requiring low GPU memory and search time. We deploy our NAS outputs on two distinct edge devices, the multicore GreenWaves Technology GAP8 IoT processor and the single-core STMicroelectronics STM32H7 microcontroller. With respect to the state-of-the-art hand-tuned models, we reduce latency and energy by up to 5.5\(\times\) and 3.8\(\times\) on the two targets respectively, without any accuracy loss. Neural Architecture Search, Temporal Convolutional Networks, Deep Learning, Edge Computing, Energy Efficiency ## 1 Introduction Deep Learning (DL) models are at the core of many time-series processing applications. Notable examples are audio classification [1], bio-signals analysis [2, 3] and predictive maintenance [4, 5]. For many years, the state-of-the-art DL models for time-series-based tasks have been Recurrent Neural Networks (RNNs) [6]. Recently, however, new architectures have been proposed as viable alternatives to RNNs, such as Attention-based Transformers and Temporal Convolutional Networks (TCNs) [7]. The latter, in particular, are uni-dimensional Convolutional Neural Networks (CNNs) specialized for time series, which have been shown to provide an accuracy comparable to RNNs, while offering computational advantages, namely higher arithmetic intensity, smaller memory footprint and more data reuse opportunities [7].
Thanks to these features, TCNs are particularly interesting for edge computing applications, where inference is directly executed on Internet of Things (IoT) edge devices, rather than on centralized servers in the cloud. On-device inference requires small and efficient DL models, compatible with the limited memory spaces and tight energy budgets of edge nodes, while avoiding the transmission of raw data to the cloud provides many advantages, such as better privacy, higher energy efficiency and more predictable response latency [8]. To meet such tight constraints, however, selecting an efficient model such as a TCN is just the first step. Next, it is paramount to optimize its architectural hyperparameters based on the task at hand, so that the resulting network occupies as low memory and performs as few operations as possible to reach the desired accuracy level. Nowadays, rather than manually, such architectural optimization is increasingly performed with automatic Neural Architecture Search (NAS) tools. A plethora of different NAS approaches have been proposed in the last few years, and several of these works have targeted edge devices [9, 10, 11, 12, 13]. However, to the best of our knowledge, none of them has focused specifically on models for time-series processing, nor specifically on TCNs, despite the unique features of these networks. In fact, while TCNs share most of their key architectural features with standard CNNs, the peculiar 1D convolution operations at the heart of these networks increase the importance of some hyperparameters, such as the filters dilation and receptive field, resulting in a much larger variety in their values than what is common in 2D models for image-processing. Although there are NAS tools, originally designed for 2D CNNs, that can be easily extended to explore these parameters, they do so in a coarse-grain way, basically creating a different copy of all network layers for each architectural setting [13]. This approach results in highly memory- and time-consuming searches, requiring 100s of GPU hours even for relatively simple tasks, which in turn translate into large energy wastes and CO2 emissions. In contrast, lightweight NAS approaches explore a finer-grain space with lower complexity, but they achieve this result at the cost of focusing only on the key characteristics of a specific model type, e.g., the number of channels in each layer of 2D CNNs for computer vision [10, 11]. In Risso et al. [14], we proposed the first lightweight NAS explicitly designed for optimizing TCNs by tuning the dilation hyperparameter. In this work, we extend and complete [14] by including the optimization of the receptive field and of the number of channels of all convolutional layers in a TCN, as well as the number of neurons in Fully Connected layers, with a search time comparable to that of a single, standard training. Starting from a single seed model, our proposed tool, called _Pruning In Time (PIT)_ can produce a rich set of Pareto optimal architectures in terms of number of operations/parameters versus accuracy. The following are the main contributions of our work: * We frame the optimization of receptive field and dilation as a _structured weight pruning_, in which additional trainable masking parameters are added to different layer's weights so that their binarized values encode valid settings of the architectural hyperparameters. These masks are then trained with a regularizer to reduce the model complexity as much as possible while preserving accuracy. 
While similar masking approaches already exist for optimizing the number of channels in a 2D convolutional layer [11], our work is the first to extend this approach to filter size and dilation. * We consider two different regularizers, targeting respectively the reduction of the number of parameters and of the number of inference operations. This allows us to enlarge and enrich the collection of Pareto architectures found by our NAS. * We test and validate PIT on four benchmarks relative to real-world time-series processing tasks where TCNs are commonly employed and for which a deployment on edge devices is relevant: (i) PPG-Based Heart-Rate Monitoring; (ii) ECG-based Arrhythmia Detection; (iii) sEMG-based Hand-Gesture Recognition; (iv) Keyword Spotting. Results show that PIT can find multiple Pareto-optimal architectures starting from a single seed network, achieving 15.9-152\(\times\) parameter reduction while maintaining the same accuracy of the seed. PIT is also capable of either matching or surpassing the accuracy and computational cost of state-of-the-art hand-tuned networks. Furthermore, our approach Pareto-dominates three popular NAS tools developed for computer vision, thanks to the exploration of a larger search space. * We deploy some of the relevant Pareto-optimal solutions found for each task on two different edge devices, in order to measure their memory footprint, latency and energy consumption. The two considered platforms are the multicore GAP8 IoT processor [15] and the single-core STM32H7 MCU [16]. The deployment results show that, at iso-accuracy, solutions found by PIT reduce energy consumption and latency up to 5.45\(\times\) on GAP8 and up to 3.83\(\times\) on the STM32H7, compared to hand-tuned networks. The code of PIT is released as open-source at: [https://github.com/EmbeddedML-EDAGroup/PIT](https://github.com/EmbeddedML-EDAGroup/PIT). The rest of the paper is structured as follows. Section 2 provides the required background and surveys some of the most relevant NAS methods proposed in the literature. Section 3 presents the proposed methodology. Section 4 details the target benchmarks while Section 5 discussed the obtained results, and Section 6 concludes the paper. ## 2 Background and Related Works ### _Temporal Convolutional Networks_ Temporal Convolutional Networks are 1-dimensional (1D) CNN variants that have recently gained significant traction for efficient time-series processing, obtaining state-of-the-art results in several tasks [17, 18, 19]. With respect to RNNs and their successive evolutions, such as the Long-Short Term Memory (LSTM) and Gated Recurrent Unit (GRU), TCNs are less affected by training-time issues, such as vanishing/exploding gradients and the large amount of training memory required by RNNs for long input sequences. Moreover, they also have computational advantages at inference time, since they share the better data locality and arithmetic intensity of standard CNNs, which makes them latency- and energy-efficient [7]. The main building blocks of TCNs are the same ones found in standard CNNs, i.e. Convolutional, Pooling and Fully Connected (FC) layers. However, the convolutional layers of a TCN are characterized by _causality_ and _dilation_, two properties that make them suited for temporal inputs. _Causality_ enforces that the outputs of convolutions do not violate the natural cause-effect ordering of events. 
In practice, the outputs \(y_{t}\) of a TCN convolution only depend on a finite set of past inputs \(x_{[t-F;t]}\), where \(t\) is a discrete index. _Dilation_ is the mechanism used in TCNs for enlarging the receptive field of convolutions on the time axis, without requiring more trainable parameters and without increasing the number of operations required for inference. It is a fixed step \(d\) inserted between the input samples processed by each convolutional filter. Eq. 1 summarizes the 1D dilated convolution operation implemented by TCN layers: \[y_{t}^{m}=\sum_{i=0}^{K-1}\sum_{l=0}^{C_{in}-1}x_{ts-di}^{l}\,W_{i}^{l,m},\quad\forall m\in[0,C_{out}-1],\forall t\in[0,T-1] \tag{1}\] where \(x\) and \(y\) are the input/output activations, \(T\) is the output sequence length, \(W\) the array of filter weights, \(C_{in}\)/\(C_{out}\) the number of input/output channels, \(K\) the filter size and \(s\) the stride. We also define \(F=d\cdot(K-1)+1\) the _receptive field_ of the layer. ### _Neural Architecture Search_ In recent years, several manually designed efficient and compact convolutional neural network architectures for edge devices have been proposed, including early MobileNets [20], ShuffleNets [21], EfficientNet [22], SqueezeNet [23], etc. While these models are very efficient, obtaining them required a long and time-consuming manual tuning of hyper-parameters, which has to be repeated from scratch when considering a different target task, or a different deployment target. To solve this issue, many automated or semi-automated methods to optimize neural network architectures, easing the burden of designers, have been proposed. These approaches, generally denoted as Neural Architecture Search (NAS) algorithms, explore a large design space made of different combinations of layers and/or hyper-parameter values, selecting solutions that optimize a cost metric. The latter is often a function of both the accuracy of the network and its computational cost (e.g., number of parameters or inference operations). Table I qualitatively compares some of the most relevant works in this field, in terms of search time, memory requirements during training (Mem.), search space size, and possibility to vary the topology (number and type of layers) of the resulting NNs. For Time and Mem., smaller is better, whereas for Search Space, larger is better. Early NAS tools were based on Reinforcement Learning (RL) [9, 12, 24, 25] or Evolutionary Algorithms (EA) [26]. At each search iteration, these methods sample one or more architectures from the search space. Sampled networks are then trained to convergence to evaluate their accuracy (and possibly cost), which is then used to drive the next sampling. The repeated training in each iteration is the main drawback of these tools, for which a single search requires 1000s of GPU hours, even on relatively simple tasks. Accordingly, these methods are associated with large search time in Table I. Memory occupation is low and comparable to a standard training, since each sampled architecture can be trained separately. The search space size is virtually unlimited, and these tools can easily support variable topologies. Notable exceptions are [9], which searches over a fixed convolutional topology of a variable number of layers without varying their type and the connections between them, and [24], which constrains its search space to a set of only 13 different layers per node.
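Eq. (1) above is a dilated causal 1D convolution; in practice it is commonly realized as a standard 1D convolution with left-only (causal) padding of \(d\cdot(K-1)\) samples. The PyTorch sketch below illustrates that pattern; it is an assumption-level rendering of the operation, not code from the PIT repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    """1D convolution with left-only padding, so y_t depends only on x_{<=t} (Eq. 1)."""
    def __init__(self, c_in, c_out, kernel_size, dilation=1, stride=1):
        super().__init__()
        self.left_pad = dilation * (kernel_size - 1)   # receptive field F = d*(K-1)+1
        self.conv = nn.Conv1d(c_in, c_out, kernel_size,
                              stride=stride, dilation=dilation)

    def forward(self, x):                 # x: (batch, C_in, T)
        x = F.pad(x, (self.left_pad, 0))  # pad only the past side of the time axis
        return self.conv(x)               # y: (batch, C_out, T) for stride 1

layer = CausalDilatedConv1d(c_in=4, c_out=8, kernel_size=3, dilation=4)
y = layer(torch.randn(1, 4, 128))
print(y.shape)   # torch.Size([1, 8, 128])
```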
To solve the search time issue of RL and EA methods, more recent _Differentiable_ NAS (DNAS) approaches have proposed the so-called _supernets_[27]. Supernets are DNNs that include all possible alternative layers to be considered during the optimization. For instance, a single supernet layer might include multiple Convolutional layers with different kernel sizes, operating in parallel. The problem of choosing a specific architecture is then translated into the problem of choosing a _path_ in the supernet [27]. The choice between the different paths is encoded with binary variables, jointly trained with the standard weights of the network using gradient-based learning. To search for accurate and efficient architectures, DNAS tools enhance the normal training loss function with an additional differentiable regularization term that encodes the cost of the network. Typical cost metrics are the number of parameters and the number of Floating Point Operations (FLOPs) per inference [11]. Mathematically, DNAS tools search for: \[\min_{W,\theta}\mathcal{L}(W;\theta)+\lambda\mathcal{R}(\theta) \tag{2}\] where \(\mathcal{L}\) is the standard loss function, \(W\) is the set of standard trainable weights (e.g., convolutional filters), \(\theta\) is the set of additional NAS-specific trainable parameters that encode the different paths in the supernet, \(\mathcal{R}\) is the regularization loss that measures the cost of the network and \(\lambda\) is a hand-tuned _regularization strength_, used to balance the two loss terms. While DNAS algorithms are more efficient than early RL/EA-based solutions, training the entire supernet still requires huge computational resources both in terms of training time and memory occupation. This, in turn, translates in a reduction of the explored search space for practical DNASes such as [27], which have to limit the search to few alternatives per layer, in order to keep the memory occupation under reasonable bounds. The authors of [13] have proposed ProxylessNAS, an advanced DNAS that reduces the memory requirements, keeping in memory at most two supernet paths for each batch of inputs. In ProxylessNAS, the normal weights and the additional parameters encoding supernet paths are trained and updated in an alternate manner. First, path parameters are frozen, and based on their current value, one sub-architecture of the supernet is stochastically sampled. Then, the weights of the sampled architecture are updated based on the training set. Second, the normal weights are frozen and the architectural parameters are trained on the validation set. This second phase updates two different paths at a time, sampling them from a multinomial distribution. In turn, this clever strategy allows ProxylessNAS to explore a significantly larger search space compared to other DNAS tools. A further evolution in the direction of lightweight NAS is constituted by DMaskingNAS [10], fine-grain NAS [11] \begin{table} \begin{tabular}{|l|c c c c|} \hline & Time & Mem. & Search Space & Topology \\ \hline \hline \multicolumn{5}{|l|}{**Reinforcement Learning**} \\ \hline \hline Zoph et al. [9] & \(\uparrow\) & \(\downarrow\) & \(\nearrow\) & Variable\({}^{*}\) \\ \hline MNASNET [12] & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) & Variable \\ \hline NASNET [24] & \(\uparrow\) & \(\downarrow\) & \(\nearrow\) & Variable \\ \hline MetaQNN [25] & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) & Variable \\ \hline \hline \multicolumn{5}{|l|}{**Evolutionary**} \\ \hline \hline Real et al. 
[26] & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) & Variable \\ \hline \hline \multicolumn{5}{|l|}{**DifferentiableNAS**} \\ \hline \hline DARTS [27] & \(\nearrow\) & \(\uparrow\) & \(\downarrow\) & Variable \\ \hline ProxylessNAS [13] & \(\nearrow\) & \(\nearrow\) & \(\nearrow\) & Variable \\ \hline \hline \multicolumn{5}{|l|}{**DmaskingNAS**} \\ \hline \hline FBNetV2 [10] & \(\downarrow\) & \(\downarrow\) & \(\uparrow\) & Fixed \\ \hline MorphNet [11] & \(\downarrow\) & \(\downarrow\) & \(\nearrow\) & Fixed \\ \hline S-Path NAS [28] & \(\downarrow\) & \(\downarrow\) & \(\nearrow\) & Fixed \\ \hline **PIT (this work)** & \(\downarrow\) & \(\downarrow\) & \(\uparrow\) & Fixed \\ \hline \multicolumn{5}{|l|}{\({}^{*}\) Depth only} \\ \end{tabular} \end{table} TABLE I: State-of-the-art NAS (Values: \(\uparrow\)= large, \(\nearrow\)= medium, \(\downarrow\)= small). and Single-Path NAS [28] approaches. In these solutions, the supernet is replaced by a single, usually large, architecture with a unique path. Optimized architectures are found as modifications of this initial _seed model_, obtained tuning hyper-parameters, such as the number of channels in each layer [11]. The key mechanism that enables this tuning within a normal training loop is the use of _trainable masks_, used to prune parts of the network. DMaskingNAS tools pursue the same DNAS objective of (2), where \(\theta\) now represents the set of trainable masks. FBNet-V2 [10], for instance, uses a set of dedicated masks, each of which encodes a different number of output channels or a different spatial resolution, and is weighted with a trainable parameter. At the end of the search, the mask coupled with the largest parameter is used to determine the final architectural setting. Similarly, MorphNet [11] exploit as masking parameters the pre-existing multiplicative terms of batch normalization layers [29]. When these parameters assume a value lower than a threshold, the corresponding channels/feature maps from the preceding Convolutional layer are eliminated. These approaches are more constrained than supernet-based ones in terms of NN topology. In fact, they do not allow to select between alternative layers (e.g., standard convolution versus depth-wise + point-wise convolution). On the other hand, they have two key advantages. First, they have much lower memory cost and search time, while still being able to find high-quality architectures. Crucially, the search time of a DMaskingNAS is comparable to a standard network training. Second, some DMaskingNASes (including our work) can explore the search space at a much finer grain. For example, MorphNet [11] can easily select between 1 and 32 output channels in a Convolutional layer with a granularity of 1, by starting from a 32 channels seed layer, and eliminating those corresponding to the smallest batch normalization multiplicative parameters. Obtaining the same result with a standard DNAS would require a very large supernet, with 32 parallel convolutional layers. The masking and super-net approaches can also be combined, to bypass the limitations of DMaskingNAS [30]. The NAS literature referenced above focuses almost exclusively on 2D-CNNs for computer vision. None of the existing approaches has been applied to time-series processing tasks, despite the fact that a large amount of edge-relevant real-world tasks deal with uni-dimensional time-dependent signals (e.g., bio-signals, audio, energy-traces, sensor readings from industrial machines, etc). 
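To make the relaxed objective of Eq. (2) concrete before introducing PIT, the toy sketch below trains a layer whose outputs are scaled by trainable mask parameters \(\theta\), minimizing \(\mathcal{L}(W;\theta)+\lambda\mathcal{R}(\theta)\). The layer, the \(\ell_1\)-style cost proxy and the hyper-parameters are illustrative assumptions, not the regularizers or code of any of the cited tools.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Toy layer with normal weights W and NAS mask parameters theta (one per output unit)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)
        self.theta = nn.Parameter(torch.ones(n_out))   # architectural parameters

    def forward(self, x):
        # Relaxed masking (no binarization here): scales each output unit by theta.
        return self.linear(x) * self.theta

model = MaskedLinear(16, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-2                                             # regularization strength lambda
x, y = torch.randn(32, 16), torch.randn(32, 8)

for _ in range(10):
    opt.zero_grad()
    task_loss = nn.functional.mse_loss(model(x), y)    # L(W; theta)
    reg = model.theta.abs().sum()                      # R(theta): toy cost proxy
    (task_loss + lam * reg).backward()                 # Eq. (2): L + lambda * R
    opt.step()
```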
Our work tries to fill this gap, by proposing a novel DMaskingNAS that targets the optimization of 1D networks. Moreover, the working principles of our tool are general, and could form the basis for a more general NAS, able to explore temporal hyper-parameters of arbitrary N-dimensional Convolutional layers (e.g., including also 3D-CNNs for spatio-temporal data processing), although this paper focuses exclusively on TCNs. ## 3 Proposed Method We name our proposed tool _Pruning in Time (PIT)_, since it targets networks that process time-series, and the core mechanism of a DMaskingNAS is very similar to structured pruning [31]. PIT explores the architectures of convolutional and fully-connected (FC) layers, the two most compute- and memory-expensive operations present in TCNs. For each convolutional layer, PIT jointly explores the _number of channels (\(C_{out}\))_, the _receptive field (\(F\))_, and the _dilation (\(d\))_. Moreover, by tuning both \(F\) and \(d\), it also indirectly affects the _filter size_\(K\). To the best of our knowledge, no DMaskingNAS from literature has optimized the receptive field or the dilation, even for 2D-CNNs. Similarly, PIT can also optimize the number of output neurons of FC layers1. Footnote 1: This can be seen as a corner case of the \(C_{out}\) optimization, since FC layers are just a special case of 1D convolutions with \(F=K=d=1\) and \(C_{out}\) equal to the number of output neurons. Accordingly, the rest of this section describes PIT’s functionality for convolutions. We provide an overview of the search space explored by our tool and of its general working principle in Section 3.1. Then, we detail the mechanisms used to generate differentiable masks for each considered hyper-parameter in Sections 3.1.1-3.1.4. Finally, the two cost regularizers used to drive the search and the overall training procedure are described in Section 3.2 and 3.3 respectively. Table II summarizes the main mathematical symbols used throughout the paper. ### _Search Space_ As shown in Figure 1, PIT's search space encompasses all sub-architectures derived from a seed TCN by tweaking the three aforementioned hyperparameters. In particular, PIT can decide to _reduce_\(C_{out}\) or \(F\), and to _increase_\(d\) with respect to the seed, all of which have the effect of reducing the complexity and memory occupation of the layer. To achieve this objective, each convolutional/FC layer of the seed is modified to become a function \(L_{n}(W^{(n)};\theta^{(n)})\) of its original weights tensor \(W^{(n)}\) and of a new set of architectural parameters \(\theta^{(n)}\). For a TCN with \(N\) layers, the search space of PIT is therefore defined by the set: \[\mathcal{S}=\{L_{n}(W^{(n)};\theta^{(n)})\}_{n=0}^{N-1} \tag{3}\] \begin{table} \begin{tabular}{|l|l|} \hline **Symbol** & **Description** \\ \hline \(x\) & Input activations of a convolutional layer \\ \(y\) & Output activations of a convolutional layer \\ \(T\) & Output sequence length of a convolutional layer \\ \(C_{in_{\mathcal{L}}},C_{out}\) & Number of input/output channels of a conv. 
layer \\ \(W\) & Convolutional filter weights \\ \(K\) & Convolution filter size \\ \(s\) & Convolution stride \\ \(d\) & Convolution dilation \\ \(F\) & Convolution receptive field \\ \(\mathcal{L}\) & Task-specific loss function \\ \(\mathcal{R}\) & Regularization loss function \\ \(\lambda\) & Regularization strength \\ \(\mathcal{S},\hat{\mathcal{S}}\) & Search space and sampled architecture \\ \(L_{n}\) & Generic convolutional/FC layer \\ \(N\) & Number of convolutional/FC layers \\ \(\theta,\Theta\) & Generic NAS architectural parameters and corresponding binary mask \\ \(\alpha,\Theta_{A}\) & NAS architectural parameters to optimize \(C_{out}\) and corresponding binary mask \\ & responding binary mask \\ \(\beta\), \(\Theta_{B}\) & NAS architectural parameters for \(F\), and corresponding binary mask \\ \(\gamma\), \(\Gamma\), \(\Theta_{\Gamma}\) & NAS architectural parameters for \(d\), intermediate binary mask \\ & mask elements and final binary mask \\ \(C_{\beta},C_{\gamma}\) & Transformation matrices to generate \(\Theta_{B}\) and \(\Theta_{\Gamma}\) from \(\beta\) and \(\gamma\). \\ \(k(i)\) & Index mapping function used to generate \(\Theta_{\Gamma}\) from \(\Gamma\) \\ \hline \end{tabular} \end{table} TABLE II: List of symbols used in the paper. During the search, the elements of \(\theta^{(n)}\) are properly combined to form a _binary mask_\(\Theta^{(n)}\), which is used to _prune_ a portion of the layer weights. In practice, an architecture \(\hat{\mathcal{S}}\) is sampled from \(\mathcal{S}\) in each search iteration, by performing the Hadamard product between \(W^{(n)}\) and \(\Theta^{(n)}\), i.e., \(\hat{\mathcal{S}}=\{L_{n}(W^{(n)}\odot\Theta^{(n)})\}_{n=0}^{N-1}\). This eliminates the portions of \(W^{(n)}\) that correspond to 0-valued mask elements, effectively letting the seed layer produce the same output that would be obtained with a smaller number of channels or receptive field, or with a larger dilation. The way in which \(\Theta^{(n)}\) is generated from \(\theta^{(n)}\) to produce this effect is the topic of Sections 3.1.1-3.1.3. Having _binary_ masks is required to either completely eliminate slices of \(W^{(n)}\) (with value 0) or keep them untouched (with value 1) when sampling an architecture with the Hadamard product. In practice, this corresponds to sampling only _feasible_ architectures (with integer \(C_{out}\), \(F\) and \(d\)). To this end, we binarize \(\Theta^{(n)}\) in the forward-pass of our search/training, applying an Heaviside step function with a fixed threshold \(\mathit{th}=0.5\). At the same time, we also need to make the \(\theta^{(n)}\rightarrow\Theta^{(n)}\) transformation differentiable, in order to embed the search into the standard gradient-based training of the network, learning contextually both the weights \(W^{(n)}\) and the architectural parameters \(\theta^{(n)}\). To cope with the Heaviside function derivation issues, i.e., derivative equal to 0 almost everywhere and not existent in \(\delta\), we follow the approach proposed in BinaryConnect [32], based on a Straight-Through Estimator (STE). Accordingly, during the backward-pass, the step function is simply replaced with an identity. For notation simplicity, in the rest of the section we divide \(\theta^{(n)}\) parameters in three groups: \(\alpha^{(n)}\), used to tune the number of channels, \(\beta^{(n)}\), which tune the receptive field, and \(\gamma^{(n)}\), which affect the dilation factor. We also drop the superscript \((n)\) when not needed. 
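The binarization just described (Heaviside step with threshold \(th=0.5\) in the forward pass, identity gradient in the backward pass, as in BinaryConnect) can be written as a small custom autograd function. The sketch below is an illustrative PyTorch rendering of that straight-through estimator, not code taken from the PIT release.

```python
import torch

TH = 0.5   # binarization threshold used in the forward pass

class BinarizeSTE(torch.autograd.Function):
    """Forward: Heaviside step with threshold TH.
    Backward: straight-through estimator -- the step is treated as the identity."""
    @staticmethod
    def forward(ctx, theta):
        return (theta > TH).to(theta.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output            # pass the gradient through unchanged

theta = torch.rand(8, requires_grad=True)   # NAS architectural parameters
mask = BinarizeSTE.apply(theta)             # binary mask used when sampling an architecture
mask.sum().backward()                       # gradients still reach theta thanks to the STE
print(mask)
print(theta.grad)                           # all ones: identity backward
```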
In PIT, each of these three groups of parameters is used to generate an _independent_ binary mask, which can be then combined with the other two. Having independent masks for \(C_{out}\), \(F\) and \(d\), gives PIT the flexibility to optimize the three hyper-parameters either separately or jointly. At most, during a joint search, PIT explores: \[|\mathcal{S}|\approx\prod_{n=0}^{N-1}(C_{out,seed}^{(n)}\cdot F_{seed}^{(n)} \cdot\lceil\log_{2}(F_{seed}^{(n)})\rceil) \tag{4}\] different solutions, where \(C_{out,seed}\) and \(F_{seed}\) in (4) are those of the seed layers. The logarithmic term in (4) comes from the fact that we only consider power-of-2 dilation factors, as detailed in Section 3.1.3. For a relatively small seed with \(N=8\), \(F_{seed}^{(n)}=17\), and \(C_{out,seed}^{(n)}=128\)\(\forall n\), this corresponds to evaluating \(\approx 10^{32}\) architectures in a single training. #### 3.1.1 Channels Search To explore the number of channels in each convolutional layer, we take inspiration from [11]. In that work, the parameters of batch normalization (BN) layers [29] were transformed into binary masks to prune entire output channels and explore the space of all sub-layers with \(C_{out}<C_{out,seed}\). However, requiring the presence of a BN layer after each convolution, although common in modern 2D-CNNs, still limits the applicability of the approach of [11]. Therefore, in PIT, we decouple the channel search from BN, and instead we exploit a dedicated trainable set of parameters \(\alpha\) to zero-out entire filters from the \(W\) tensor of convolutional layers. PIT treats each output channel independently. So, it uses an \(\alpha\) array of length \(C_{out,seed}\), and it generates binary masks simply as: \[\Theta_{A}=\mathcal{H}(|\alpha|) \tag{5}\] where \(\mathcal{H}\) is the Heaviside binarization. Then, the layer function defined in (1) is modified to: \[\tilde{y}_{t}^{m}=\sum_{i=0}^{K-1}\sum_{l=0}^{C_{in}-1}x_{ts-di}^{l}\cdot( \Theta_{A,m}\cdot W_{i}^{l,m}) \tag{6}\] In practice, each binarized mask element is multiplied with all the weights of the _same convolutional filter_, i.e., with an entire slice of the weights tensor over the output channels axis. Each filter multiplied with a 0-mask effectively removes the corresponding output channel from the layer. Figure 2 depicts the application of \(\Theta_{A}\) parameters to a simple layer with \(C_{out,seed}=4\). Noteworthy, besides reducing the number of channels, PIT can also _eliminate_ entire layers from the network, if the latter includes skip-connections. In particular, if all the \(\Theta_{A,m}\) of a convolutional layer are zeroed-out, then the inputs only flow through the skip connection, effectively reducing the number of the layers in the network by one. If skip connections are not present, instead, at least one output channel is always kept active to avoid breaking the network connectivity. Fig. 1: Search space of PIT. Fig. 2: Channels search example. Each \(\Theta_{A,m}=0\) zeroes-out the \(m\)-th convolutional filter, i.e., a slice of size \(K\times C_{in}\) of the weights tensor \(W\). Our channels search scheme differs significantly from existing DMaskingNAS such as FBNetV2 [10], since we mask _weights tensors_ rather than output activations. Fundamentally, as explained below, this makes our method easily extensible to the exploration of other hyper-parameters such as \(F\) and \(d\), which would be much more difficult to optimize with an activations mask. 
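A possible PyTorch rendering of this weight-masking scheme (eqs. (5)-(6)) is sketched below; the layer sizes and names are placeholders, and the simple forward-only `binarize` helper stands in for the straight-through binarization discussed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def binarize(x, th=0.5):
    # Forward-only Heaviside; the trainable search would use the STE sketched earlier.
    return (x > th).float()


class ChannelMaskedConv1d(nn.Module):
    """1D convolution whose output filters can be pruned by Theta_A = H(|alpha|)."""

    def __init__(self, c_in, c_out_seed, k, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out_seed, k, dilation=dilation, bias=False)
        # one independent architectural parameter per output channel
        self.alpha = nn.Parameter(torch.ones(c_out_seed))

    def forward(self, x):
        theta_a = binarize(self.alpha.abs())          # shape: (C_out_seed,)
        # each mask element multiplies an entire filter, i.e., a K x C_in slice of W
        w = self.conv.weight * theta_a.view(-1, 1, 1)
        return F.conv1d(x, w, dilation=self.conv.dilation[0])
```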
Moreover, we use _independent_ binarized parameters to mask each channel, rather than a set of predefined masks with an increasing number of trailing 0s, combined via Gumbel Softmax, as done in [10]. This, in turn, means that we can eliminate _any_ combination of channels, not just the trailing ones. #### 3.1.2 Receptive Field Search The second critical hyperparameter that we explore is the receptive field \(F\), i.e., the range of input time-steps involved in a convolution. In standard convolutions, the receptive field is equal to the filter size (\(F=K\)). However, as detailed in Section 2.1, this no longer holds for TCNs when the dilation factor \(d\) is \(>1\), and the general relation becomes: \(F=(K-1)\cdot d+1\). As noted at the beginning of this section, by exploring both \(F\) and \(d\), PIT also indirectly optimizes the filter dimension \(K\). The receptive field is explored using an array of additional trainable parameters \(\beta\) of length \(F_{seed}\). Differently from the output channels, however, the \(\beta\) need to be further combined to define the corresponding binary differentiable masks. The reason is that, to "simulate" the effect of a smaller receptive field through masking, it is not sufficient to mask _any_ set of time-slices in the weights tensor: in a causal TCN convolution, the receptive field extends exclusively in the past. Thus, the slices that should be eliminated are always the "oldest" ones, i.e., those that are multiplied with input time-steps that are farthest in the past. To do so, we derive elements of the binary masks \(\Theta_{B}\) from \(\beta\) as: \[\Theta_{B,i}=\mathcal{H}\left(\sum_{j=1}^{F_{seed}-i}|\beta_{F_{seed}-j}|\right) \tag{7}\] Each \(\Theta_{B,i}\) is then multiplied with a time-slice of the \(W\) tensor during the forward-pass, as shown in Figure 3. Therefore, when searching for the receptive field, (1) becomes: \[y_{t}^{m}=\sum_{i=0}^{K-1}\sum_{l=0}^{C_{in}-1}x_{ts-di}^{l}\cdot(\Theta_{B,di} \odot W_{i}^{l,m}) \tag{8}\] Thanks to the construction of (7), we have that if \(i>j\), then \(\Theta_{B,i}\leq\Theta_{B,j}\). In turn, this ensures that the first weight slices to be pruned are always the leftmost ones, as shown in the example on the right of Figure 3. Importantly, \(\beta_{0}\) is always kept constant and equal to 1. This ensures that, once binarized, \(\Theta_{B,0}\) is also always \(=1\), and consequently, that all convolutions take _at least one time-step_ as input. In practice, for efficiency reasons, we generate binary masks using the matrix transformation: \[\Theta_{B}=\mathcal{H}\left(C_{\beta}\cdot|\beta|\right) \tag{9}\] where \(C_{\beta}\) is a constant upper triangular matrix of 1s generated once at the beginning of a search, as shown on the left of Figure 4. #### 3.1.3 Dilation Search Lastly, PIT also explores the dilation factor \(d\). Similarly to the receptive field, also searching for dilation imposes some constraints on the portions of the weights tensor that should be pruned by our NAS. In particular, we need to ensure that only _regular_ dilation factors are generated, i.e., that the time-steps gaps between consecutive convolution inputs are all equal for a given layer. For example, we do not want to obtain a layer that takes as input time-steps \(t\), \(t-1\), \(t-3\), and \(t-10\), corresponding to gaps of 0, 1, and 6 time-steps respectively. 
In fact, such a layer would not be supported by most inference libraries, in particular those for edge devices [33, 34], which only implement regular dilation, as the latter enables more regular memory access patterns and better low-level optimizations. Based on these observations, we follow an approach similar to the one described in Section 3.1.2. We start from an array of trainable parameters \(\gamma\), which are then combined to compose differentiable binary masks2. Our method only supports power-of-2 dilation factors which, besides being the most commonly used values, also simplify the generation of the masks. Thus, we have: \(len(\gamma)=\lceil\log_{2}(F_{seed})\rceil\). Footnote 2: In our preliminary work of [14] we used a different mechanism to generate dilation masks as a _product_ of \(\gamma\) elements instead of a sum. However, we found that this new approach is superior as it does not introduce nonlinear terms in the \(\gamma\) gradients. In order to obtain the elements of \(\Theta_{\Gamma}\), we pass through Fig. 4: Example of conversion between trainable architectural parameters \(\beta\) and \(\gamma\) and corresponding binary masks \(\Theta_{B}\) and \(\Theta_{\Gamma}\), for a layer with \(F_{seed}=9\). Fig. 3: Receptive field search example. Each \(\Theta_{B,i}=0\) eliminates the contribution of 1 input time-step from the convolution output, by zeroing out a time-slice of size \(C_{out}\times C_{in}\) of the weights tensor \(W\). an intermediate array \(\Gamma\), generated similarly to (7): \[\Gamma_{i}=\mathcal{H}\left(\sum_{j=1}^{len(\gamma)-i}|\gamma_{len(\gamma)-j}|\right) \tag{10}\] Then, the mask is obtained by further reorganizing the \(\Gamma_{i}\) values into the vector \(\Theta_{\Gamma}\), of length \(F_{seed}\), as follows: \[\Theta_{\Gamma,i}=\Gamma_{k(i)},\text{ with }k(i)=\sum_{p=1}^{len(\gamma)}1- \delta(i\text{ mod }2^{p},0) \tag{11}\] and where \(\delta()\) is Kronecker's Delta function. This reorganization ensures that the \(\Gamma\) element with the largest index (\(\Gamma_{len(\gamma)-1}\)) ends up in all positions corresponding to time-steps that would be skipped by a layer with \(d=2\). Similarly, the element with the second largest index ends-up in positions that are skipped when using \(d=4\), and so on. This, combined with the fact that, by construction of (10), it holds that \(\Gamma_{i}\leq\Gamma_{j}\) for \(i>j\), ensures that the dilation is _progressively increased_. In other words, each new \(\Gamma_{i}\) binarized to 0 increases the dilation by a factor 2. The obtained \(\Theta_{\Gamma}\) vector is multiplied with the \(W\) tensor, exactly as in (9), after setting the seed layer dilation to 1. Again, we fix \(\gamma_{0}=1\) to ensure that we never prune the entire convolution. An example of how the tensor is generated and of its effect on the dilation is shown in Figure 5. In practice, similarly to the receptive field mask, also \(\Theta_{\Gamma}\) is obtained from \(\gamma\) with a simple matrix multiplication: \[\Theta_{\Gamma}=\mathcal{H}(C_{\gamma}\cdot|\gamma|) \tag{12}\] where \(C_{\gamma}\) is a constant matrix composed of 0s and 1s that can be generated procedurally based on the value of \(F_{seed}\). An example of \(C_{\gamma}\) is shown on the right of Figure 4. #### 3.1.4 Joint Search In order to jointly optimize all three aforementioned hyper-parameters, we simply apply all three \(\Theta\) masks to the weight tensor of a layer. 
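The two time-axis masks just described can be condensed into a short, illustrative sketch. The forward-only binarization again stands in for the STE version, and the \(k(i)\) index bookkeeping is written so that the Figure 4 example (\(F_{seed}=9\), \(len(\gamma)=4\)) is reproduced; it is a sketch, not a verbatim transcription of the implementation.

```python
import torch


def binarize(x, th=0.5):
    # Forward-only Heaviside; the search would use the straight-through version.
    return (x > th).float()


def receptive_field_mask(beta):
    """Theta_B = H(C_beta @ |beta|), eq. (9): C_beta is an upper-triangular matrix
    of ones, so the 'oldest' weight time-slices are always the first to be pruned."""
    f_seed = beta.numel()
    c_beta = torch.triu(torch.ones(f_seed, f_seed))   # constant, built once per search
    return binarize(c_beta @ beta.abs())              # length F_seed


def dilation_mask(gamma, f_seed):
    """Theta_Gamma, eqs. (10)-(12): Gamma uses the same triangular sum over |gamma|;
    k(i) then scatters Gamma over the F_seed positions so that zeroing the last
    Gamma element skips the steps of d=2, the next one those of d=4, and so on."""
    n = gamma.numel()                                 # len(gamma) = ceil(log2(F_seed))
    gamma_bin = binarize(torch.triu(torch.ones(n, n)) @ gamma.abs())
    # illustrative k(i): how many of the gaps 2, 4, ..., 2^(n-1) do not divide i
    k = [sum(1 for p in range(1, n) if i % 2 ** p != 0) for i in range(f_seed)]
    return gamma_bin[torch.tensor(k)]                 # length F_seed


beta, gamma = torch.ones(9), torch.ones(4)            # seed initialization, F_seed = 9
print(receptive_field_mask(beta) * dilation_mask(gamma, 9))   # all ones: the full seed layer
```

At the seed initialization both vectors are all ones, so the full layer is recovered; zeroing the \(\gamma\) elements from the largest index downwards progressively doubles the dilation.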
Therefore, the equivalent of equation (1) for a seed convolutional layer during a joint search is: \[y_{t}^{m}=\sum_{i=0}^{K-1}\sum_{l=0}^{C_{in}-1}x_{ts-i}^{l}\cdot(\Theta_{B,i} \odot\Theta_{\Gamma,i}\odot(\Theta_{A,m}\cdot W_{i}^{l,m})) \tag{13}\] Note that, as anticipated in Section 3.1.3, we set the seed layer dilation to 1, since we want to let PIT explore all possible \(d\) values. In our experiments, we found that performing such a joint search yields superior results with respect to optimizing the three hyper-parameters sequentially, since PIT can take into account the complex interactions among them (especially among \(F\) and \(d\)), see Section 5.2. ### _Regularization_ Following the same approach of state-of-the-art DNASes [10, 11, 13] PIT searches for accurate yet low-complexity architectures by combining the task-specific loss function \(\mathcal{L}\) with a regularization term \(\mathcal{R}\) as in (2). The additional differentiable term encodes a prior in the loss landscape that directs the optimization towards low-cost solutions. The two cost metrics considered in this work are the number of parameters (or size) of the model, and the number of operations (OPs) for an inference. The corresponding two regularizers \(\mathcal{R}_{size}\) and \(\mathcal{R}_{ops}\) are differentiable functions of the _pre-binarization_ masks \(\tilde{\Theta}_{A}\), \(\tilde{\Theta}_{B}\) and \(\tilde{\Theta}_{\Gamma}\), i.e., the outputs of (5), (9) and (12) but without the Heaviside binarization. The latter, in turn, depend on the trainable architectural parameters \(\alpha\), \(\beta\) and \(\gamma\). We use pre-binarization masks as in [11], because this yields a smoother loss landscape, improving convergence. The details of the two regularizers are provided below. #### 3.2.1 Size Regularizer The Size Regularizer \(\mathcal{R}_{size}\) estimates, during each forward-pass, the _effective_ number of parameters of the network, based on the values of the differentiable binary masks. The number of parameters of a convolutional layer, i.e., the size of weight tensor \(W\), is equal to \(C_{in}\times C_{out}\times K\). Accordingly, we define the size regularizer for a TCN with \(N\) convolutional (or FC) layers as: \[\mathcal{R}_{size}=\sum_{n=0}^{\text{N-1}}(\mathcal{R}_{size}^{(n)})=\sum_{n= 0}^{\text{N-1}}C_{out,eff}^{(n-1)}\cdot C_{out,eff}^{(n)}\cdot K_{eff}^{(n)} \tag{14}\] where: \[C_{out,eff}^{(n)}=\sum_{i=0}^{C_{out,seed}^{(n)}-1}\tilde{\Theta}_{A,i}^{(n)} \tag{15}\] is the effective number of channels in the \(n\)-th layer, and: \[K_{eff}^{(n)}=\sum_{i=0}^{F_{seed}^{(n)}-1}\frac{\tilde{\Theta}_{B,i}^{(n)}}{F _{seed}-i}\cdot\frac{\tilde{\Theta}_{\Gamma,i}^{(n)}}{len(\gamma)-k(i)} \tag{16}\] is the effective kernel size, which depends both on the total receptive field and on the dilation. For the 1st layer of the network, \(C_{out,eff}^{(n-1)}\) is constant and equal to the number of channels of the input signal. The definitions of (15) and (16) are continuous relaxations of the number of _active_ (non-pruned) channels and time-slices of \(W^{(n)}\) respectively. By minimizing \(R_{size}\), PIT is encouraged to reduce the \(\tilde{\Theta}\) values, bringing them below the binarization threshold. Depending on the regularization strength \(\lambda\) of (2) PIT balances the corresponding reduction in cost with the accuracy drop caused by eliminating \(W\) slices from the layer, reducing only the \(\tilde{\Theta}\) elements associated to unimportant slices. 
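A compact sketch of how the size regularizer of eqs. (14)-(16) could be evaluated from the pre-binarization masks is reported below; the per-layer dictionary layout, the `k_map` vector (holding \(k(i)\)) and all names are illustrative assumptions rather than PIT's actual data structures.

```python
import torch


def c_out_eff(theta_a_soft):
    """Eq. (15): continuous relaxation of the number of active output channels."""
    return theta_a_soft.sum()


def k_eff(theta_b_soft, theta_g_soft, k_map, len_gamma):
    """Eq. (16): effective kernel size from the pre-binarization time-axis masks.
    The denominators renormalize the triangular sums, so that at the seed
    initialization (all beta and gamma equal to 1) K_eff equals F_seed."""
    f_seed = theta_b_soft.numel()
    i = torch.arange(f_seed)
    return (theta_b_soft / (f_seed - i) * theta_g_soft / (len_gamma - k_map)).sum()


def size_regularizer(layers, c_in_first):
    """Eq. (14): R_size = sum_n C_out_eff(n-1) * C_out_eff(n) * K_eff(n).
    `layers` is an (assumed) list of dicts holding the soft masks of each layer."""
    r, c_prev = torch.tensor(0.0), torch.tensor(float(c_in_first))
    for layer in layers:
        c_curr = c_out_eff(layer["theta_a"])
        r = r + c_prev * c_curr * k_eff(layer["theta_b"], layer["theta_g"],
                                        layer["k_map"], layer["len_gamma"])
        c_prev = c_curr
    return r   # added to the task loss as lambda * R_size, eq. (2)
```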
The denominators in (16) are needed to make sure that, when \(\beta\) and \(\gamma\) are equal to 1 (i.e., the initialization value, see Section 3.3), \(K_{eff}^{(n)}\) corresponds to the real filter size of the seed. In fact, each \(\tilde{\Theta}_{B/\Gamma}\) is obtained as sum of a different number of \(\gamma\) (or \(\beta\)) elements. As a result, without normalization, the estimated cost would be higher than the real filter size. For instance, in a layer with \(F_{seed}=5\) and with all \(\beta/\gamma\) initialized at 1, without the denominators, we would have \(K_{eff}=33\), which is clearly incorrect. Conversely, with the denominators, we have \(K_{eff}=5=F_{seed}\), which is correct, since the initialization of \(\gamma=1\) implicitly imposes \(d=1\). #### 3.2.2 OPs Regularizer The second proposed regularizer \(R_{ops}\) estimates the number of operations required to perform an inference. Since the number of OPs of a 1D convolutional layer is \(T\times C_{in}\times C_{out}\times K\), where \(T\) is the output sequence length defined in (1), the regularizer expression is simply: \[\mathcal{R}_{ops}=\sum_{n=1}^{\mathrm{N}}(\mathcal{R}_{size}^{(n)}\cdot T^{(n )}) \tag{17}\] In practice, when targeting the reduction of the total OPs for inference, the only difference in the regularizer is that the cost of each layer is weighted by the output sequence length. This is particularly important in presence of layers such as pooling, strided convolution, etc., which significantly reduce \(T\), and consequently the number of OPs for the downstream part of the network. ### _Training Procedure_ Algorithm 1 summarizes the three main phases of a PIT architecture search. The first phase consists of \(\text{Steps}_{\text{wu}}\) iterations of warmup. At this stage of the algorithm, all \(\theta\) parameters (i.e., \(\alpha\), \(\beta\) and \(\gamma\)) are initialized to 1 and frozen. Accordingly, all elements of the binary masks \(\Theta\) are also binarized to 1. Therefore, warmup coincides with a normal training of the seed network, where the only objective is minimizing the task loss function \(\mathcal{L}\). The number of warmup iterations is a user-defined parameter. In practice, in all our experiments, we warm up to convergence. The second phase is where the actual NAS takes place. In the search loop, the model weights \(W\) and the architectural parameters \(\theta\) are optimized simultaneously. Accordingly, the goal of this phase is to minimize the sum of the task-specific loss \(\mathcal{L}\) and of one of the two the regularization losses \(\mathcal{R}\) discussed in Section 3.2, weighted by the regularization strength \(\lambda\). The duration of the search phase is controlled by an early-stop mechanism which monitors the value of \(\mathcal{L}\) on an unseen validation split of the target dataset, and stops the search when the latter does not improve for 20 epochs. Finally, in the third and last phase the \(\theta\) parameters, and corresponding \(\Theta\) binary masks, are frozen to their latest values. This corresponds to sampling from the search space the architecture that PIT determined as optimal during the previous phase. Then, the weights \(W\) of the selected network are fine-tuned or re-trained from scratch, considering only the \(\mathcal{L}\) loss. In order to obtain different Pareto points in the accuracy versus cost (size or OPs) space with PIT, it suffices to repeat Algorithm 1 changing the regularization strength \(\lambda\). 
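In code, the three phases of Algorithm 1 could be organized roughly as follows; `model.arch_parameters()` and `model.freeze_arch()` are assumed helper methods (not part of the real PIT API), and validation-based early stopping is omitted for brevity.

```python
import torch


def pit_search(model, loss_fn, regularizer, train_loader, lam,
               warmup_epochs, search_epochs, tune_epochs):
    """Rough outline of Algorithm 1: warmup, search, fine-tuning (a sketch)."""
    opt = torch.optim.Adam(model.parameters())

    def run(epochs, with_regularizer):
        for _ in range(epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)                   # task loss L
                if with_regularizer:
                    loss = loss + lam * regularizer(model)    # L + lambda * R, eq. (2)
                loss.backward()
                opt.step()

    # Phase 1: warmup. All theta are initialized to 1 and kept frozen,
    # so this is a plain training of the seed network.
    run(warmup_epochs, with_regularizer=False)

    # Phase 2: search. Weights and architectural parameters are optimized jointly;
    # PIT stops this phase early when the validation loss stops improving.
    for p in model.arch_parameters():
        p.requires_grad_(True)
    run(search_epochs, with_regularizer=True)

    # Phase 3: fine-tuning. The binary masks are frozen to their latest values
    # (sampling the selected architecture) and the weights are re-trained on L only.
    model.freeze_arch()
    run(tune_epochs, with_regularizer=False)
    return model
```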
More precisely, the warmup phase can be performed just once, saving the final weights of the seed network. Overall, Algorithm 1 has a complexity that is comparable to a single TCN training. Moreover, the requirements in terms of GPU time and memory are greatly reduced with respect to a supernet-based DNAS. Therefore, obtaining 10s of Pareto points by changing \(\lambda\) still has a manageable cost, as shown for example by the results of Figure 10. ## 4 Benchmarks We test PIT on four edge-relevant real-world benchmarks. We select diverse benchmarks to comprehensively evaluate the effectiveness of the proposed NAS. Specifically, we consider regression as well as classification tasks; the inputs analyzed are both raw or extracted features, and the TCNs employed as seed are based on different architectural styles. ### _PPG-based Heart-Rate Monitoring_ The first benchmark deals with Heart-Rate (HR) monitoring on wrist-worn devices, using Photoplethysmography (PPG) sensors coupled with tri-axial accelerometers to mitigate the effect of motion artifacts [17, 35]. We target the PPG-Dalia [35] dataset, and the task is formulated as a _regression_ of the HR value, whose ground truth is derived with ECG measurements. All results refer to the same input windowing and cross-validation scheme proposed in [35]. The seed network for this task is TEMPONet, a TCN originally proposed in [2] and later used for HR monitoring with state-of-the-art results in [17]. The network is composed of three _feature extraction_ blocks and a final _regressor_ module with three FC layers. Each feature extraction block is made of three convolutional layers with BatchNorm and ReLU activation, followed by an average pooling. The FC layers are also followed by BatchNorm and ReLU, and by a dropout layer with 50 % rate. With respect to the original TEMPONet, our seed is obtained doubling the receptive field of all convolutions and setting the dilation to 1. ### _ECG-based Arrhythmia Detection_ Our second benchmark deals with Electrocardiogram (ECG)-based arrhythmia detection, for wearable medical devices. We target the ECG5000 dataset [36], and the task consists in classifying the ECG signals in 5 classes: Normal, R-on-T Premature Ventricular Contraction, Premature Ventricular Contraction, Supraventricular Premature or Ectopic beat, and Unclassified Beat. The reference TCN is ECGTCN, originally proposed in [37]. Differently from TEMPONet, ECGTCN is based on residual blocks. It has a first convolutional layer that enlarges the number of input channels, followed by three modular blocks, each including two dilated convolutions with ReLU activation, BatchNorm and 50% dropout. The input and output feature maps of each block are then summed together. When the number of input and output channels differs, the residual path also includes a point-wise convolution (i.e., \(K=1\)) in order to adapt the tensor sizes. The PIT seed is obtained from ECGTCN, setting the dilation of all layers to 1, while keeping the original receptive field. ### _sEMG-based Hand-Gesture Recognition_ The third benchmark deals with hand-gesture recognition based on surface electromiography (sEMG) signals. For this task, we target the NinaPro DB1 dataset [38], which includes 52 heterogeneous gesture classes, using the same data pre-processing and augmentation described in [19]. The seed network is TCCNet, originally proposed in [19]. 
The architecture includes three feature extraction blocks, each composed of two dilated convolutions with ReLU and dropout (5% rate) and a residual branch with a point-wise convolution. The classifier includes an attention layer of the type described in [39] and a final FC layer with 53 output neurons (52 hand-gestures + 1 unknown class). The PIT seed is obtained simply setting the dilation to 1 in all layers. ### _Keyword Spotting_ Our last benchmark is keyword spotting (KWS). We target the Speech Commands v2 dataset [40], following the pre-processing scheme proposed by the MLPerf Tiny benchmark suite [41] which produces 12 possible labels, including 10 words and two special classes for "unknown" and "silence". As seed, we use the TCN presented in [18], called TC-ResNet14. The main difference with the other reference TCNs is that the original TC-ResNet14 did not use dilation, and the modular convolutional blocks alternate plain convolutions with strided convolution with \(s=2\). PIT's seed is obtained doubling the receptive field in each layer. ## 5 Experimental Results This section discusses the results obtained by PIT on the four aforementioned benchmarks. In particular, in Section 5.1, we present the global results of our NAS search in the accuracy versus number of parameters and accuracy versus number of OPs planes. In Section 5.2 we conduct ablation studies on one of the benchmarks, and in Section 5.3 we compare our approach with a state-of-the-art DNAS, ProxylessNAS [13], and with two state-of-the-art DMaskingNAS approaches, namely, MorphNet [11] and FBNetV2 [10]. Since the code for [10] is not publicly available, we re-implemented it based on the information provided in the paper. Finally, Section 5.4 presents the memory, latency and energy consumption results obtained deploying some of the networks found by PIT on two commercial edge devices. PIT is written in Python (v3.6) and it is based on PyTorch (v1.7.1). All our training experiments and NAS searches are performed on a single NVIDIA TITAN Xp GPU with 12GB memory. The two deployment targets considered are: i) the multicore GAP-8 IoT processor by GreenWaves Technologies [15] and ii) the single-core STM32H7 MCU by STMicroelectronics [16]. As inference software backend, we use the open-source layers library of [42] coupled with the tiling tool of [43] for GAP-8, and the CMSIS-NN library [44] for the STM32H7. All deployed networks are quantized to 8-bit, using PyTorch's built-in quantization algorithm. ### _Search Space Exploration_ Figure 6 shows the results of applying PIT to the four benchmarks. The graphs report the TCNs accuracy (for classification tasks) or Mean Absolute Error (MAE, for regression tasks) on the x axis, and the number of parameters or OPs per inference on the y axis. The curves correspond to the outputs of PIT, where different points are obtained varying the regularization strength \(\lambda\) and considering both size and OPs regularizers. Moreover, each plot also reports the metrics of two additional TCNs. Black triangles correspond to the results obtained by the _hand-tuned_ state-of-the-art TCNs directly taken from [17, 18, 19, 37], with the original number of channels, receptive fields, and dilation factors. Black squares, instead, indicate the metrics of the PIT _seeds_, i.e., the same networks modified as described in Section 4 (setting \(d=1\) everywhere, etc.) to enlarge the PIT search space. 
The upper-left part of Figure 6 reports the results on the PPG-DaLia dataset for the PPG-based HR monitoring task. This is the only regression task considered, so the network performance is measured with the MAE, for which lower values are better. As shown by the graphs, starting from a single seed network, PIT is able to obtain a rich collection of Pareto-optimal architectures, spanning more than one order of magnitude both in terms of parameters (4.7k-78k) and OPs (0.27M-9.6M). Notably, PIT networks dominate in the Pareto sense both the seed architecture and the hand-tuned state-of-the-art TEMPONet. In particular, we obtain a similar MAE to the seed TCN (5.38 vs 5.40 BPM), with 120.0\(\times\) less parameters and 96.0\(\times\) less operations. Moreover, PIT also finds a new state-of-the-art deep learning model for this task, achieving a MAE of just 5.03 BPM while requiring only 53k parameters and 5.1M OPs, improving the best performing architecture proposed in [17]5 requiring \(8.03\times\) and \(5.42\times\) less parameters and OPs. Footnote 5: Note that this result is achieved without applying any additional post-processing as described in [17]. The Upper-right pair of charts shows the results obtained on the ECG5000 dataset for Arrhythmia Detection. PIT results span almost one order of magnitude in parameters (0.91k-5.36k) and OPs (50.3k-293.5k). Moreover, both the seed network and the hand-tuned one are Pareto-dominated. The best performing architecture found by our NAS improves the accuracy of the hand-tuned network (+1.03%) reducing both the number of parameters (-64.7%) and the FLOPs (-85.8%). Lower-left part of Figure 6 shows the results obtained for the sEMG-based Hand-Gesture Recognition task on the NinaPro-DB1 dataset. The richness and diversity of the found architectures in terms of size and number of OPs are similar to the previous two benchmarks. However, while PIT results still dominate the seed, in this case the hand-tuned TCNNet sits on the Pareto front. Indeed, the PIT network that is nearest to the hand-tuned architecture on the curve, achieves a slightly lower accuracy (-0.47%) traded-off with a reduction of size (-3.33%). This result demonstrates the goodness of the original TCNNet proposed in [19] but, at the same time, it shows the good quality of the architectures found by our NAS, which despite starting from an oversized seed, is still able to produce optimized networks that closely resemble those tuned by experts. Lastly, the lower-right part of Figure 6 shows the two Pareto fronts obtained on the Google Speech Commands dataset for Keyword Spotting. Once again, we largely outperform both the seed and the hand-tuned TCNs. Specifically, the most accurate PIT architecture slightly improves the accuracy of the hand-tuned network (+0.36%) while greatly reducing both the number of parameters (-82.53%) and FLOPs (-44.53%). Moreover, we obtain Pareto points that span 10k-98k parameters and 0.87M-3.98M OPs. It is important to note that the bad performance obtained by the seeds (black squares) for all four benchmarks is due to over-fitting, which in turn is caused by the large number of channels and receptive fields, and the absence of dilation. Table III reports the range of regularizer strengths \(\lambda\) used on the four benchmarks to obtain these results. In general, \(\lambda\) should be set so that the two additive terms in the loss (\(\mathcal{L}\) and \(\lambda\mathcal{R}\)) assume comparable values at the beginning of a training. 
This ensures that PIT takes into account both accuracy and inference cost in its search, without degenerating to one of the two corner cases, i.e., accuracy-driven-only and cost-driven-only optimization. The corresponding values of \(\lambda\) vary for different tasks, as shown in the table. However, we found that a good rule of thumb, which works for all benchmarks, to identify the order of magnitude of the regularization strength is to start from \(\lambda=1/(\text{Seed Model Size})\). Then, based on the results of a PIT search with this initial value, one can decide to increase/decrease \(\lambda\) to obtain smaller/more accurate TCNs respectively. By monitoring the loss in the initial epochs, it is also very easy to detect when the NAS is falling in one of the corner cases (one term much larger than the other) and stop the search immediately, without wasting training time. ### _Ablation Studies_ This section analyzes the impact of some of the most important PIT parameters. Due to space limitations, we report the results of this study only for the PPG-based HR monitoring benchmark. #### 5.2.1 Hyper-parameters Figure 7 analyzes the contribution of different hyper-parameters to the quality of results found by PIT. For this experiment, we use the \(\mathcal{R}_{size}\) regularizer and consider solutions in the MAE versus number of parameters space. We then repeat the NAS search 3 times. In each run, we freeze two of the three sets of architectural parameters (\(\alpha\), \(\beta\) and \(\gamma\)) to 1, letting PIT tune the third set. This gives us: i) the results of a search that only optimizes the number of channels in each layer (_Ch-Only_), performed on a TCN with maximal receptive field and \(d=1\), ii) the results of a receptive field-only search (_RF-Only_), on a TCN with maximal \(C_{out}\) and \(d=1\), and iii) the results of a dilation-only search (_Dil-Only_) on a network with maximal \(F\) and \(C_{out}\). The Pareto fronts obtained in each of these 3 conditions by varying the regularization strength \(\lambda\) are shown in the figure, together with the output of a complete search that optimizes all three hyper-parameters simultaneously (_All-in-One_). The results clearly show that the main source of parameters reduction and performance improvement is the search along the channels dimension. This is probably due to the fact that the channels represent a large source of redundancy in hand-tuned TCNs, since their number is typically set using common heuristics, irrespective of the target task (e.g., \(C_{out}\) multiple of 32, progressively increasing along the depth of the network). However, Figure 7 also shows that optimizing _only_ the number of channels is not sufficient, and that a combined optimization that also consider receptive field and dilation can yield Pareto-optimal networks across the entire MAE/parameters range. \begin{table} \begin{tabular}{c|c c c c} Regularizer & PPG & ECG & sEMG & KWS \\ \hline \(\mathcal{R}_{size}\) & 1e-7 : 5e-4 & 5e-7 : 7.5e-3 & 1e-7 : 5e-6 & 5e-10 : 1e-5 \\ \hline \(\mathcal{R}_{ops}\) & 1e-8 : 5e-5 & 5e-8 : 5e-4 & 5e-10 : 5e-8 & 1e-10 : 1e-6 \\ \end{tabular} \end{table} TABLE III: Range of regularizer strength (\(\lambda\)) values for the four benchmarks. Fig. 6: Overall PIT Pareto fronts for the four target benchmarks, and comparison with seed and hand-tuned TCNs. 
#### 5.2.2 Regularizers Figure 8 compares the Pareto fronts obtained using the \(\mathcal{R}_{size}\) regularizer (with orange stars) and the \(\mathcal{R}_{ops}\) regularizer (with green diamonds). Note that the PPG-based HR monitoring benchmark is the one for which the distinction between model size and number of OPs is most relevant, due to the presence of several layers (average pooling and strided convolution) that modify the activation array length \(T\). The figure shows that, as expected, the majority of the Pareto points in the MAE versus number of parameters plane are produced when using the \(\mathcal{R}_{size}\) regularizer, with the few exceptions being due to local minima. Vice versa, the \(\mathcal{R}_{ops}\) regularizer tends to generate superior solutions in terms of MAE versus number of OPs. ### _Comparison with state-of-the-art NAS tools_ Figure 9 compares the Pareto fronts obtained with PIT and three state-of-the-art NAS tools, namely ProxylessNAS [13], MorphNet [11] and FBNetV2 [10], on the HR monitoring benchmark. Results show that PIT outperforms all three across the entire design space, except for one MorphNet and one FBNetV2 point, which achieve a very low number of operations, although at the cost of a quite large MAE. The main reason for the superior results of PIT is the fact that our NAS explores a larger and finer-grained search space with respect to the baselines. For what concerns MorphNet and FBNetV2, this is partly due to the intrinsic nature of those tools, which can explore neither the receptive field nor the dilation [10, 11]. Accordingly, \(F\) and \(d\) in their respective seeds have been set to the hand-tuned values of the state-of-the-art network. This different search starting point compared to PIT is the reason why, in the low-size/high-MAE regime, these tools find a single Pareto-optimal point. For ProxylessNAS, we first run separate single-hyper-parameter searches and then select, for each hyper-parameter, the two values that have been chosen more frequently. The \(2^{3}\) possible combinations of the latter are used to generate the _combined_ search space for ProxylessNAS, which, accordingly, includes 8 layer variants in each super-net node. The Pareto fronts of Figure 9 are obtained running ProxylessNAS multiple times on this combined search space, with different regularization strengths. We report both the results obtained with the training scheme proposed in the original paper (_Original_ curve), which runs for a fixed number of epochs, and with the same early-stop mechanism employed for PIT (_EarlyStop_ curve). As shown, the quality of results is similar in both cases. Figure 10 compares the search space dimension and average execution time of the different tools on the HR monitoring benchmark. For reference, the execution time of a standard training of the seed network is also reported. All time results refer to the search phase only (without warmup), and are obtained on a single NVIDIA Titan XP GPU with a batch-size of 128. For ProxylessNAS, we report the results of the initial single-hyper-parameter searches (_Proxyless-Single_) and of the final combined search (_Proxyless-Multiple_), both with and without early-stopping. Our algorithm explores a \(10^{26}\times\)/\(10^{12}\times\) larger search space than Proxyless-Single/-Multiple. Further, it is only \(1.13\times\) slower than Proxyless-Single with early-stopping, and \(3.55\times\) faster than the variant without early-stopping, while it is \(3.0\times\)/\(14.22\times\) faster compared to Proxyless-Multiple with/without early-stopping.
With respect to MorphNet, we explore a \(10^{11}\times\) larger search space at the cost of a small \(1.07\times\) increase in runtime. FBNetV2-Coarse is the fastest tool, converging in a few search epochs. While offering a \(2.5\times\) speedup with respect to PIT, the explored search space is \(10^{26}\times\) smaller. Instead, FBNetV2-Fine explores a \(10^{11}\times\) smaller space while requiring the same search time as the proposed approach. Lastly, PIT's time-overhead with respect to a normal training is only \(34\%\). ### _Embedded Deployment_ This section analyzes the results obtained deploying two TCNs for each target benchmark on the GAP8 IoT processor (running at 100 MHz) and on the STM32H7 MCU (at 480 MHz). For each task, we deploy the best performing network in terms of MAE or accuracy (L). Moreover, we also select a small network that achieves a MAE drop \(<1\) BPM, or an accuracy drop \(<5\%\) with respect to the best performing one (S). For comparison, we also deploy the baseline hand-tuned architectures (HT). Table IV reports the results in terms of performance (MAE or accuracy, depending on the dataset), memory footprint, inference latency and energy consumption, while Figure 11 shows the hyper-parameters selected by our NAS. PIT finds competitive solutions for both hardware targets and for all 4 tasks, despite the large difference in complexity among them, as testified by the more than two orders of magnitude span in memory, latency and energy consumption in the results of Table IV. For PPG-based HR monitoring, the L/S models achieve a 0/0.70 BPM MAE increase with respect to the hand-tuned TEMPONet respectively, while resulting in an 8.03/9.8\(\times\) lower memory footprint, and a 5.45/19.6\(\times\) lower latency and energy consumption on GAP8. On the STM32 MCU, the latency and energy reduction of the two PIT outputs is 3.83/18.2\(\times\). PIT's L/S models for ECG processing, instead, achieve +0.07%/-1.36% accuracy with respect to the hand-tuned ECGTCN, with a 2.83/16.8\(\times\) lower memory footprint, 2.13/3.44\(\times\) latency and energy reduction on GAP8, and 2.34/3.7\(\times\) on the STM32. For the sEMG gesture recognition task, the L/S models found by PIT obtain +2.31%/-1.92% accuracy compared to TCCNet. In this case, the higher accuracy of the large model is paid with a 3.57\(\times\) larger memory footprint, and a 3.85\(\times\) latency and energy increase on GAP8 (3.33\(\times\) on the STM32H7), proving once again the goodness of the hand-tuned model for this task. The small TCN, instead, results in a 2.51\(\times\) memory reduction, and 1.54\(\times\) and 1.72\(\times\) lower latency and energy on the two targets. Lastly, the L/S PIT outputs for KWS obtain +0.16%/-5% accuracy with respect to TC-ResNet14, with a 5.72/33.1\(\times\) lower memory footprint, 3.58/9.54\(\times\) lower energy and latency on GAP8, and 2.9/11.54\(\times\) lower energy and latency on STM32H7. Figure 11 shows the high variability of the hyper-parameter settings found by PIT, and the different optimization behaviours for different benchmarks. In general, comparing PIT outputs with the respective seeds, we can observe that the optimized networks found by our tool contradict several "rules of thumb" of manual DNN design, such as progressively increasing the number of channels and dilation for deeper layers. Accordingly, our NAS could also provide interesting insights for better TCN design.
For instance, for the PPG benchmark, PIT finds solutions that are characterized by a large number of channels in the first and last layers, while keeping an overall high receptive field in the core of the network. ## 6 Conclusion We have proposed PIT, a lightweight NAS tool for TCNs, able to explore a large, fine-grained search space of architectures with low GPU memory requirements. PIT is, to the best of our knowledge, the first DMaskingNAS tool explicitly designed for 1D convolutional networks, and the first to target the optimization of the receptive field and dilation of convolutional layers. With experiments on four real-world benchmarks, we have shown that PIT is able to find improved versions of state-of-the-art TCNs, with a memory compression of up to 8.03\(\times\) (90.8\(\times\)) and a latency and energy reduction of up to 5.45\(\times\) (19.6\(\times\)) without (with a reasonable) accuracy drop, when deployed on commercial edge devices. Our future work will focus on extending PIT principles to generic N-dimensional CNNs.
2307.14369
On Negative Mass, Partition Function and Entropy
This work examines some aspects related to the existence of negative mass. The requirement for the partition function to converge leads to two distinct approaches. Initially, convergence is achieved by assuming a negative absolute temperature, which results in an imaginary partition function and complex entropy. Subsequently, convergence is maintained by keeping the absolute temperature positive while introducing an imaginary velocity. This modification leads to a positive partition function and real entropy. It seems the utilization of imaginary velocity may yield more plausible physical results compared to the use of negative temperature, at least for the partition function and entropy.
S. D. Campos
2023-07-25T12:21:23Z
http://arxiv.org/abs/2307.14369v3
# On Negative Mass, Partition Function and Entropy ###### Abstract This work examines some aspects related to the existence of negative mass. The requirement for the partition function to converge leads to two distinct approaches. Initially, convergence is achieved by assuming a negative absolute temperature, which results in an imaginary partition function and complex entropy. Subsequently, convergence is maintained by keeping the absolute temperature positive while introducing an imaginary velocity. This modification leads to a positive partition function and real entropy. It seems the utilization of imaginary velocity may yield more plausible physical results compared to the use of negative temperature, at least for the partition function and entropy. ## I Introduction The current laws of physics were formulated under the assumption that the observed mass in the universe is always positive. Nevertheless, certain key equations in physics yield results that suggest the possibility of negative mass. This raises a fundamental question: Why haven't we detected this type of matter? Although experimental evidence is lacking, there are indications that the presence of negative matter could be indirectly inferred through its physical effects. One compelling example is the concept of negative mass being associated with dark energy, which explains the observed acceleration of the universe (see [1] and references therein). From a naive perspective, mass is often defined as the amount of matter contained within a given volume. However, in the realm of physics, various conceptual frameworks allow for more nuanced definitions1, as Max Jammer points out in his book [2]. Classical physics, for instance, is built upon the assumption that these definitions should hold even for negative mass. Footnote 1: A collection of definitions can be found in the Appendix _On Relativistic Mass_ (written by V. Petkov) in the book of A. Einstein, _Relativity_, (Minkowski Institute Press, Montreal, 2018); or, in E. V. Huntington, _Bibliographical Note on the Use of the Word Mass in Current Textbooks_, The Amer. Math. Month. **25**(1), 1 (1918). The concept of negative mass in physics is not new, and its existence has been a subject of investigation since at least the end of the nineteenth century [3; 4]. Over the years, numerous studies have explored the possibility of the existence of such an entity [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In general relativity, the seminal approach is due to Bondi, which divides the concept of mass into three entities depending on their measurement origin [8]. The inertial mass, \(m_{i}\), is the one that suffers the effects of inertia. The passive gravitational mass, \(m_{p}\), is the one that feels the force generated by the gravitational potential. The last one, the active gravitational mass, \(m_{a}\), generates the gravitational potential felt by \(m_{p}\). The third law conduces to \(m_{p}=m_{a}=m\), and the equivalence principle leads to \(m_{i}=m\). As well-known, it is an empirical observation that positive matter attracts positive matter. It is assumed, then, that positive matter attracts _all types of mass_, including negative one. Certainly, if such a scenario were to occur, it is plausible an analogy between charge and mass, and it seems reasonable that negative matter should repel negative matter as well as _all types of matter_. 
However, due to the equivalence principle, the positive and negative mass should be accelerated in the same direction when immersed in a gravitational field [20]. Indeed, the interaction between positive and negative matter can lead to peculiar outcomes in physics [18]. In the present work, I assume that the laws of classical physics hold for negative mass. Considering this, one studies the partition function and entropy associated with this kind of matter. Some interesting results emerge from the approach considered here. This paper is organized as follows. Section II presents some considerations about negative mass. Section III introduces some concepts about negative absolute temperature and imaginary velocity. In Section IV, one presents the partition function and entropy for negative mass. The discussion is left for Section V. ## II Positive and negative mass interaction In accordance with Bonnor [19], one assumes here that the equivalence principle, general relativity, and electromagnetism hold for positive and negative mass separately. Considering this, notice that in Newtonian physics the concept of a center-of-mass coordinate system for \(n\) particles with positive mass is written as (only for \(x\) here) \[X_{CM}=\frac{m_{1}x_{1}+m_{2}x_{2}+...+m_{n}x_{n}}{m_{1}+m_{2}+...+m_{n}}, \tag{1}\] where \(X_{CM}\) stands for the \(x\)-coordinate in the center-of-mass system, and each \(m_{j}\), \(j=1,2,...,n\), represents a mass. On the other hand, considering a system composed of \(n\) particles possessing only negative mass, one has the same result as can be easily verified. The problem arises when one considers a system comprising \(n\) particles with both positive and negative mass. In this case, we do not have a trivial result as above. Indeed, suppose one has \(i=1,...,k\), \(1\leq k<n\), positive mass particles. Then, one writes the center-of-mass system as \[X_{CM}=\frac{m_{1}x_{1}+m_{2}x_{2}+...+m_{k}x_{k}+m_{k+1}x_{k+1}+m_{k+2}x_{k+2}+...+m_{n}x_{n}}{m_{1}+m_{2}+...+m_{k}+m_{k+1}+m_{k+2}+...+m_{n}}. \tag{2}\] This result implies the center-of-mass system can only exist in Newtonian physics for positive and negative mass if the sum \(m_{1}+m_{2}+...+m_{k}+m_{k+1}+m_{k+2}+...+m_{n}\neq 0\). Assume that \(m_{1}+m_{2}+...+m_{k}\) is the positive mass contribution while \(m_{k+1}+m_{k+2}+...+m_{n}\) is due to the negative mass. Therefore, the center-of-mass \(X_{CM}\) can only exist if \(m_{k+1}+m_{k+2}+...+m_{n}=-\gamma(m_{1}+m_{2}+...+m_{k})\), with \(\gamma>0\) and \(\gamma\neq 1\), resulting in \[X_{CM}=\frac{m_{1}x_{1}+m_{2}x_{2}+...+m_{k}x_{k}+m_{k+1}x_{k+1}+m_{k+2}x_{k+2}+...+m_{n}x_{n}}{(1-\gamma)(m_{1}+m_{2}+...+m_{k})}. \tag{3}\] Hence, a system consisting of \(n\) particles with positive and negative mass exhibits a well-defined center-of-mass coordinate only if the sum of positive and negative mass is non-null. For a simple pair composed of two masses \(m_{1}>0\) and \(m_{2}<0\), the above result implies the center-of-mass coordinates exist only if \(m_{2}=-\gamma m_{1}\), with \(\gamma>0\) and \(\gamma\neq 1\). The explanation above highlights how negative mass can introduce conceptual challenges that extend to well-defined quantities in classical physics. From here on, one considers a system containing only either positive or negative mass particles.
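A short numerical illustration of Eqs. (1)-(3) may help: the center-of-mass coordinate of a mixed system is defined only as long as the signed masses do not cancel (\(\gamma\neq 1\)). The snippet below is purely illustrative.

```python
def center_of_mass(masses, xs):
    """X_CM = sum(m_j * x_j) / sum(m_j); undefined when the signed masses cancel."""
    total = sum(masses)
    if total == 0:
        raise ZeroDivisionError("total signed mass is zero: X_CM is undefined")
    return sum(m * x for m, x in zip(masses, xs)) / total


xs = [0.0, 1.0, 2.0, 3.0]
print(center_of_mass([1.0, 2.0, -0.5, -1.0], xs))   # gamma = 0.5: well defined
# center_of_mass([1.0, 2.0, -1.0, -2.0], xs)        # gamma = 1: denominator (1 - gamma) = 0
```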
## III Negative temperature and imaginary velocity ### Negative Absolute Temperature As well-known, for a thermal bath of an ideal gas, the probability of occupation of the state \(i\) decreases exponentially with energy since [23] \[P_{i}\propto e^{-\frac{E_{i}}{k_{B}T}}, \tag{4}\] where \(k_{B}\) is the Boltzmann constant, \(T\) is the absolute temperature2, and \(E_{i}\) is the energy of state \(i\). Consequently, in this scenario, the occupation probability of state \(i\) decreases as the energy of that state increases. In other words, the higher the energy of the state, the less likely it is to be occupied. Indeed, this reasoning is meaningful only for positive temperatures (\(T>0\)). At absolute zero temperature (\(T=0\)), all systems tend to their ground state, and the occupation probability of the lowest energy state becomes unity. However, as the temperature increases (\(T>0\)), the probabilities of higher energy states increase, leading to a more distributed occupation of states in accordance with the Boltzmann distribution. For negative temperature, there must be an upper bound to \(E_{i}\) [21; 22]. Surprisingly, there are physical systems exhibiting a negative temperature [23; 24; 25; 26]. In general, a negative \(T\) is defined as one hotter than any positive temperature, even an infinite one [21; 22; 27; 28]. This concept arises from certain systems that exhibit unique energy-level structures, where higher energy states have lower probabilities of occupation compared to lower energy states, as previously discussed. Moreover, it is important to note that in certain physical systems, negative temperatures can be theoretically achievable. These systems have a limited set of accessible energy levels, which means that, beyond a certain energy, adding energy to the system decreases its entropy. Additionally, negative temperatures can be interpreted as temperatures tending towards infinity on the Kelvin scale, as they correspond to systems with energy distributions favoring high-energy states over lower-energy states. ### Imaginary Velocity Imaginary velocity is a mathematical consequence of the imaginary time definition, which is a handy tool in special relativity and quantum mechanics. As well-known, imaginary time can be defined through the ordinary time \(t\) as \[\tau=it, \tag{5}\] with \(\tau\) orthogonal to \(t\). Consequently, \(\tau\) forms a non-ordered set of events3, although the events in \(t\) can be ordered. Footnote 3: There is no concept of past, present, and future in \(\tau\). The imaginary time can be used to write the line element in the Minkowski spacetime as \[ds^{2}=dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}-c^{2}dt^{2},\] where \(dx_{4}^{2}=-c^{2}dt^{2}=c^{2}(idt)^{2}=c^{2}d\tau^{2}\). In addition to the utilization of imaginary time, the imaginary velocity can also be introduced in general relativity as a manifestation of the equivalence principle: gravitational force is a consequence of inertial motion. Therefore, one can define the imaginary velocity \(v_{i}=iv\), where \(v\) is the usual real velocity. It means that a negative mass particle has an imaginary linear momentum, but it has a measurable kinetic energy. In quantum mechanics, imaginary time is used to explain, for example, particle tunneling through vibrating barriers [29]. The set of imaginary velocities cannot be ordered, i.e., there is, for example, no definition of an upper limiting velocity. Conversely, the limiting velocity \(v\) for massive or massless particles in general relativity is the velocity of light \(c\).
Despite its non-straightforward physical interpretation, imaginary velocity is used in certain physical systems. For some elastoplastic materials, the occurrence of imaginary velocities is due to a bad choice of boundary conditions [30]. In this case, the appearance of imaginary velocity stems from an incorrect selection of geometry for the given problem. A black hole emerging in 10 dimensions \(p\)-branes is linearly unstable to long-wave perturbations along its world-volume [31]. Within the framework of relativistic hydrodynamics, this instability can be viewed as a dynamic instability of the fluid. The emergence of an imaginary velocity for sound waves means the non-propagation of density perturbations. In this context, the presence of such instability is explained as a consequence of the interplay between the relativistic effects and the behavior of the fluid [31]. In this problem, the imaginary velocity is the manifestation of a non-possibility. As a last example, the group velocity at the resonance absorption of a dispersive medium may be negative [32]. However, group velocity can be greater than the phase velocity, turning its physical interpretation a hard matter [33]. At this point, imaginary velocity can confuse. Here, one adopts a simple interpretation for the imaginary velocity: if it could be measured, then there exists a physical mechanism preventing this measure to be greater than \(c\). ## IV Partition function and entropy The existence of negative mass leads to the necessity of embracing unusual assumptions, such as negative temperature and imaginary velocity, which may consequently yield intriguing results, as will be explored further below. ### Partition Function A partition function describes the statistical properties of a physical system in thermodynamic equilibrium. Mathematically speaking, a partition function is a sum of the number of accessible states of a system, where each state is weighted by a convenient positive number representing the accessibility of such state. Considering this, a partition function is a positive quantity, which is a statistical mechanical requirement. However, in some situations, it is possible to obtain a negative partition function. The Potts cluster model is commonly used to study phase transitions and critical phenomena [34]. For a line of \(n\) even vertices, one writes the partition function for this model as [35] \[Z(T_{n},q,v)=q(q+v)^{n-1}, \tag{6}\] where \(q-v<0\) and \(T_{n}\) is the temperature at \(n\). The parameter \(q\) is limited, \(0<q<1\), and at a finite physical temperature, one has \(Z(T_{n},q,v)<0\). To circumvent this problem, it is assumed \(n\) odd. Also, the tensor renormalization group method [36] can lead to a negative partition function value [37], that can be avoided by assuming a significant cut-off parameter or by increasing the precision of the data [38]. The two above examples show the possibility of negative partition functions, despite the lack of physical interpretation. As shall be seen, the existence of negative mass can lead to this kind of partition function. For our purposes, one begins considering a 1-particle system composed of a positive mass particle, writing its semi-classical partition function as \[Z_{1}^{+}=4\pi V\left(\frac{m}{h}\right)^{3}\int_{0}^{\infty}v^{2}e^{-\frac{ mv^{2}}{2k_{B}T}}dv, \tag{7}\] where \(V\) is the reservoir volume, and \(h\) is the Planck constant. 
For a system composed of \(N\) identical positive mass particles, the above partition function yields the well-known result \[Z_{N}^{+}=\frac{1}{N!}\left(\frac{2\pi mk_{B}T}{h^{2}}\right)^{3N/2}V. \tag{8}\] It is important to stress that, for a positive mass, the convergence of the integral on the r.h.s of Eq. (7) is obtained assuming that \(T\) is always positive, and the reference frame is chosen in such a way that \(0<v\). From now on, one assumes a gas composed of \(N\) identical negative mass particles inside \(V\). Notice the particles in such a gas repel each other and repel the internal boundaries of the reservoir. It is plausible that given a finite time \(t\), the internal spatial configuration of each particle should tend to an almost stable position4. The parity of the number of particles influences this configuration, meaning that it is not the same for an even or odd number of particles5. In the absence of a reservoir, a plausible scenario is that the negative mass particles move outward from a hypothetical center at a finite time. It is important to note that this effect does not occur for positive mass particles since each particle is mutually attracted to the others. Taking into account the convergence of integral in Eq. (7), one studies the physical effects of the two possible transformations \[(i)\ \ T\rightarrow-T,\ {\rm keeping}\ v\] \[(ii)\ \ v\to iv,\ {\rm keeping}\ T\] Adopting the transformation \((i)\), the convergence of the integral in Eq. (7) is guaranteed allowing us to write for the 1-particle system \[Z_{1}^{-}=-\left(\frac{2\pi mkT}{h^{2}}\right)^{3/2}V, \tag{9}\] which corresponds to a negative partition function. In general, this result describes a non-physical system. However, from this result, one can calculate the semi-classical partition function for the \(N\)-particle system \[Z_{N}^{-}[T]=(-1)^{N}\frac{1}{N!}\left(\frac{2\pi mkT}{h^{2}}\right)^{3N/2}V=( -1)^{N}Z_{N}^{+}, \tag{10}\] where one writes \(Z_{N}^{-}[T]\) only to emphasizes that \(T\) is negative. Notice the sign of the result shown in Eq. (10) depends on the parity of \(N\). If \(N\) is odd, then \(Z_{N}^{-}[T]\) is negative; \(N\) even implies that \(Z_{N}^{-}[T]\) is positive. Then, \[Z_{N}^{-}[T]\ =\ Z_{N}^{+},\ \ N\ \ {\rm even}, \tag{11}\] \[Z_{N}^{-}[T]\ =\ -Z_{N}^{+},\ \ N\ \ {\rm odd}. \tag{12}\] This anomalous partition function is quite similar to the aforementioned Potts model, where the negative partition problem is avoided by the correct choice of \(N\) (even here). Notice, however, that the choice of \(N\) (even or odd) may be related to the final internal spatial configuration of the particles, which may produce modifications in the partition function. For \(N\) is even, the spatial configuration lead to a partition function describing a system where the low energy states are occupied, as usual. In terms of the partition function given in Eq. (8), there is no difference between a system with an even number of negative mass particles and one with positive mass (whatever its number). Therefore, notwithstanding \(T\) is negative, there is no influence of it on the partition function for an even number of negative mass particles. On the other hand, for a \(N\) odd, the spatial configuration changes favoring the highest energy state occupation in accordance with the negative temperature understanding [21; 22; 27; 28]. 
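The statements above are easy to check symbolically; the following SymPy sketch (illustrative only) evaluates the velocity integral of Eq. (7) and the sign factors behind Eqs. (9)-(13).

```python
import sympy as sp

m, T, kB, h, V, v = sp.symbols('m T k_B h V v', positive=True)

# Eq. (7): semi-classical 1-particle partition function (positive mass and temperature)
Z1 = 4 * sp.pi * V * (m / h)**3 * sp.integrate(
    v**2 * sp.exp(-m * v**2 / (2 * kB * T)), (v, 0, sp.oo))
print(sp.simplify(Z1))   # equivalent to V*(2*pi*m*k_B*T/h**2)**(3/2), i.e. Eq. (8) with N = 1

# Transformation (i): with m < 0, convergence requires T < 0; the prefactor (m/h)**3 < 0
# then makes Z_1 negative (Eq. (9)), so the N-particle function carries a factor (-1)**N.
print([(-1)**N for N in (1, 2, 3, 4)])   # alternating sign, Eqs. (11)-(12)

# Equivalent view of Eq. (13): shifting the exponent by ln(-1) moves it to the complex plane
print(sp.log(-1))        # I*pi
```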
Thus, assuming \((i)\), whether \(N\) is even or odd leads to different final spatial configurations for the particles in the reservoir, thereby altering the occupation of energy levels within the system. Notice that if one assumes \(m\), \(T\), and \(v\) positive, then the partition functions in Eqs. (11) and (12) can be obtained using the simple mathematical transformation given below \[-\frac{mv^{2}}{2k_{B}T}\rightarrow-\frac{mv^{2}}{2k_{B}T}+\ln(-1)=-\frac{mv^{2}}{2k_{B}T}+i\pi. \tag{13}\] The transformation (13) takes the real energy levels into the complex domain, resulting in severe modifications to the partition function, despite \(m\), \(T\), and \(v\) being positive. For the 1-particle system, the partition function is \[Z_{1}=4\pi V\left(\frac{m}{h}\right)^{3}\int_{0}^{\infty}v^{2}e^{-\frac{mv^{2}}{2k_{B}T}+\ln(-1)}dv=-4\pi V\left(\frac{m}{h}\right)^{3}\int_{0}^{\infty}v^{2}e^{-\frac{mv^{2}}{2k_{B}T}}dv, \tag{14}\] where one can see the emergence of the negative partition function. Thus, the introduction of negative \(T\) has the same physical effect as a change in the domain of the energy levels: from the real domain to the complex one. Thereby, one can adopt \(m=|m|\) and \(T=|T|\) in the partition function and study the effects of a negative mass using the transformation (13). Therefore, the difference between negative and positive mass in the partition function is due to the introduction of the complex domain for the energy occupation levels. On the other hand, adopting the transformation \((ii)\), there is no change in the partition function given by Eq. (8). To see that, it is sufficient to replace \(v^{2}\) by \((iv)^{2}=-v^{2}\) in Eq. (7). For the \(N\)-particle system the resulting partition function is just \[Z_{N}^{-}[v]=Z_{N}^{+}. \tag{15}\] In fact, as mentioned earlier, the utilization of an imaginary velocity may imply the existence of a physical mechanism acting when it is measured. As seen above, the sum over the energy occupation levels in the domain of imaginary velocity is a real positive quantity. Thus, an imaginary velocity, despite its nonphysical appearance, has a physical influence on the partition function. Therefore, the transformation \((ii)\) does not produce an anomalous partition function. ### Entropy In this subsection, only the negative mass case is studied. Then, considering the transformation \((i)\) and \(N\) even, one can write the entropy from the partition function given by Eq. (11) as \[S^{-}[T]=k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}. \tag{16}\] Despite transformation \((i)\), which leads to an anomalous partition function, the result for \(N\) even shown in Eq. (16) is the usual entropy for positive mass particles, which is in accordance with the discussion of the preceding subsection. Conversely, for \(N\) odd, one has the partition function (12), resulting in the following entropy \[S^{-}[T]=k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}+k_{B}\ln(-1)^{N}. \tag{17}\] For \(N\) odd one can explicitly write \(N=2n-1\), with \(0<n\in\mathbb{Z}\), and from Eq.
(17), one has \[S^{-}[T] = k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}+k_{B}\ln(-1)^{2n-1}= \tag{18}\] \[= k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}+ik_{B}(2n-1)\pi,\] which represents a complex entropy, where the real and imaginary parts of \(S^{-}[T]\) are, respectively, \[{\rm Re}\,S^{-}[T]=k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}\quad{\rm and}\quad{\rm Im}\,S^{-}[T]=k_{B}(2n-1)\pi. \tag{19}\] The complex entropy is, in general, associated with information measures in the context of Shannon entropy [39]. The real part of the complex entropy shown in Eq. (19) corresponds to the entropy associated with a positive temperature. Therefore, the real part corresponds to the classical entropy for positive mass particles. On the other hand, the imaginary part in Eq. (19) may represent the entropy of the non-accessible energy states, which does not contribute to the total entropy of the accessible states (the real part). Now, considering transformation \((ii)\), one has for \(N\) even and odd the same result as shown in Eq. (16). Then, \[S^{-}[v]=k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}, \tag{20}\] which is in accordance with the assumption that, if a negative mass has an imaginary velocity, then it has a measurable kinetic energy which contributes to the partition function and entropy. ## V Discussion Assuming \((i)\), the results for the partition function and entropy may depend on the spatial configuration of the negative mass particles inside \(V\). In turn, this configuration depends on whether \(N\) is even or odd. For an even number of particles, the entropy for the positive and negative mass particles has the same numerical value. On the other hand, for an odd number of negative mass particles, the resulting entropy is a complex function, where the real part is equal to the entropy of the positive mass case while the imaginary part may represent the entropy associated with the non-accessible energy states. However, as noted above, for positive mass and temperature, the transformation (13) yields the same results as assuming a negative mass and temperature. Thus, it seems that the use of a negative temperature generates the same result as moving the energy levels to the complex domain. On the other hand, using \((ii)\), the results for the partition function and entropy are the expected ones. Nonetheless, even though it results in an imaginary linear momentum, it gives rise to a real kinetic energy, which has actual physical implications for the partition function and entropy. Transformation \((i)\) implies that the partition function can be negative and the entropy complex, while the use of \((ii)\) yields a real kinetic energy together with an imaginary momentum, which may create experimental difficulties in detecting this kind of mass using the usual scattering processes. On the other hand, the introduction of an interaction between positive and negative mass particles together with the use of \((ii)\) may solve this problem. In general, an imaginary velocity is associated with particles traveling faster than light in a vacuum, which would lead to a violation of causality. However, it should be stressed that causality is a concept valid for non-imaginary time and velocity, having no physical meaning for imaginary quantities.
Then, the eventual measurement of such particles implies the existence of some physical mechanism preventing the violation of causality for real quantities. This physical mechanism may be similar to the wave-function collapse, where a superposition of states is transformed into a measurable classical state. Suppose one applies \((ii)\) to the negative masses in a system composed of \(N\) positive mass particles and \(N\) negative mass particles. If the entropy of this system is an extensive property, then the resulting total entropy is given by \[S_{total}=S_{N}^{+}+S_{N}^{-}=2\left[k_{B}\ln Z_{N}^{+}+k_{B}T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}\right]. \tag{21}\] On the other hand, if the entropy of this system is a non-extensive property, the result can be given in terms of the Tsallis entropy [27; 28] \[S_{total} = S_{N}^{+}+S_{N}^{-}+(1-q)S_{N}^{+}S_{N}^{-}= \tag{22}\] \[= k_{B}\ln Z_{N}^{+}\big{[}2+(1-q)k_{B}\ln Z_{N}^{+}\big{]}+\] \[+ k_{B}T\left[2+(1-q)k_{B}\left(\ln Z_{N}^{+}+T\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V}\right)\right]\left(\frac{\partial\ln Z_{N}^{+}}{\partial T}\right)_{V},\] where the possible interaction between \(S_{N}^{+}\) and \(S_{N}^{-}\) is measured by the entropic index \(q\). If measurements of \(S_{total}\) tend to linearity in \(\ln Z_{N}^{+}\), then \(q\to 1\), while for a quadratic behavior, \(q<1\). Thus, \(q\) may be used to measure the possible interaction between positive and negative mass particles in the system. Another interesting question is the total entropy production by negative mass particles in the post-inflationary era [40]. The production of negative mass particles during the decay of some fundamental field deserves attention, since the total entropy production should take into account the negative mass contribution. This question is presently under study. ###### Acknowledgements. The author thanks UFSCar.
2307.03615
New physics analysis of $Λ_b\to (Λ^*(\to pK^-), Λ(\to pπ))(μ^{+}μ^{-},\,ν\barν)$ baryonic decays under SMEFT framework
The di-leptons and di-neutrinos observed in the final states of flavor-changing neutral b decays provide an ideal platform for probing physics beyond the standard model. Although the latest measurements of $R_{K^{(*)}}$ agree well with the standard model prediction, there exist several other observables such as $P_5^{\prime}$, $\mathcal{B}(B_s\to \phi \mu^{+}\mu^{-})$ and $\mathcal{B}(B_s\to \mu^{+}\mu^{-})$ in $b\to s \ell\ell$ transition decays that show deviations from the standard model prediction. Similarly, very recently the Belle II collaboration reported a more precise upper bound of $\mathcal{B}(B\to K^+\nu\bar{\nu}) < 4.1\times 10^{-5}$ by employing a new inclusive tagging approach, and it also deviates from the standard model expectation. The $b\to s l^{+}l^{-}$ and $b\to s\nu\bar{\nu}$ transition decays are related not only in the standard model but also in beyond the standard model physics due to $SU(2)_L$ gauge symmetry, and can be most effectively investigated using the standard model effective field theory formalism. Additionally, the $b\to s\nu\bar{\nu}$ decay channels are theoretically cleaner than the corresponding $b\to s l^{+}l^{-}$ decays, as these processes do not receive contributions from non-factorizable corrections and photonic penguin contributions. In this context, we study $\Lambda_b\to (\Lambda^*(\to pK^-), \Lambda(\to p\pi))({\mu}^{+}\mu^{-},\,\nu\bar{\nu})$ baryonic decays undergoing $b\to s \ell^{+}\ell^{-}$ and $b\to s\nu\bar{\nu}$ quark level transitions in a standard model effective field theory formalism. We give predictions of several observables pertaining to these decay channels in the standard model and in the case of several new physics scenarios.
Nilakshi Das, Rupak Dutta
2023-07-07T14:12:02Z
http://arxiv.org/abs/2307.03615v1
New physics analysis of \(\Lambda_{b}\to(\Lambda^{*}(\to pK^{-}),\Lambda(\to p\pi))(\mu^{+}\mu^{-},\,\nu\bar{\nu})\) baryonic decays under SMEFT framework ###### Abstract The di-leptons and di-neutrinos observed in the final states of flavor-changing neutral b decays provide an ideal platform for probing physics beyond the standard model. Although the latest measurements of \(R_{K^{*}}\) agree well with the standard model prediction, there exists several other observables such as \(P_{5}^{\prime}\), \({\cal B}(B_{s}\to\phi\mu^{+}\mu^{-})\) and \({\cal B}(B_{s}\to\mu^{+}\mu^{-})\) in \(b\to s\ell\ell\) transition decays that shows deviation from the standard model prediction. Similarly, very recently Belle II collaboration reported a more precise upper bound of \({\cal B}(B\to K^{+}\nu\bar{\nu})<4.1\times 10^{-5}\) by employing a new inclusive tagging approach and it also deviates from the standard model expectation. The \(b\to sl^{+}l^{-}\) and \(b\to s\nu\bar{\nu}\) transition decays are related not only in the standard model but also in beyond the standard model physics due to \(SU(2)_{L}\) gauge symmetry, and can be most effectively investigated using the standard model effective field theory formalism. Additionally, the \(b\to s\nu\bar{\nu}\) decay channels are theoretically cleaner than the corresponding \(b\to sl^{+}l^{-}\) decays, as these processes do not get contributions from non-factorizable corrections and photonic penguin contributions. In this context, we study \(\Lambda_{b}\to(\Lambda^{*}(\to pK^{-}),\Lambda(\to p\pi))(\mu^{+}\mu^{-},\,\nu \bar{\nu})\) baryonic decays undergoing \(b\to s\ell^{+}\ell^{-}\) and \(b\to s\nu\bar{\nu}\) quark level transitions in a standard model effective field theory formalism. We give predictions of several observables pertaining to these decay channels in the standard model and in case of several new physics scenarios. ## I Introduction In high-energy physics experiments, such as those at particle accelerators, it is possible to produce and detect intermediate states of quantum particles that have much greater mass than the initial and final particles. These intermediate states are often short-lived and can only be observed through the detection of their decay products. In this context, study of flavor changing charged current (FCCC) and neutral current (FCNC) transitions of \(b\) hadrons is crucial as they can provide important information regarding such intermediate quantum states. Moreover, FCNC transition decays are, in principle, more sensitive to various new physics (NP) effects as they proceed either via loop level or box level diagrams where the intervention of heavier particles comes into the picture. Hence, study of these decays would offer a powerful tool to search for NP that lies beyond the standard model (SM). Over the past several years, the FCNC B decays have been the center of attention of the particle physics community especially due to discrepancies observed at BaBar, Belle and more recently at LHCb. The measured values of the lepton flavor sensitive observable such as the ratio of branching fractions \(R_{K}\) and \(R_{K^{*}}\) in \(B\to K^{(*)}\,\ell\bar{\ell}\,(\ell\in e,\mu)\) decays deviate from the SM prediction. These discrepancies hint for a possible violation of lepton flavor universality (LFU) in \(b\to s\,l^{+}\,l^{-}\) transition decays. Earlier LHCb measurement of \(R_{K}\) in \(q^{2}\in[1.1,6.0]\) GeV\({}^{2}\) showed \(3.1\sigma\) deviation from the SM expectation [1]. 
Similarly, earlier measurements of \(R_{K^{*}}\) from both LHCb [2; 3] and Belle [4] in \(q^{2}\in[0.045,1.1]\) and \(q^{2}\in[1.1,6.0]\) GeV\({}^{2}\) bins showed a \(2.2-2.5\sigma\) deviation [5; 6] from the SM. However, very recent LHCb results [7; 8], announced in December 2022, have completely changed the entire scenario. The latest measured values of \(R_{K}=0.994^{+0.090}_{-0.082}(\mathrm{stat})^{+0.027}_{-0.029}(\mathrm{syst})\) and \(R_{K^{*}}=0.927^{+0.093}_{-0.087}(\mathrm{stat})^{+0.034}_{-0.033}(\mathrm{syst})\) in \(q^{2}\in[0.045,1.1]\) GeV\({}^{2}\) and \(R_{K}=0.949^{+0.042}_{-0.041}(\mathrm{stat})^{+0.023}_{-0.023}(\mathrm{syst})\) and \(R_{K^{*}}=1.027^{+0.072}_{-0.068}(\mathrm{stat})^{+0.027}_{-0.027}(\mathrm{syst})\) in \(q^{2}\in[1.1,6.0]\) GeV\({}^{2}\) show an overall agreement with the SM prediction within \(0.2\) standard deviations [7; 8]. Although \(R_{K}\) and \(R_{K^{*}}\) seem to be SM like, the possibility of NP cannot be completely ruled out. Apart from \(R_{K}\) and \(R_{K^{*}}\), there are several other observables where the discrepancy between the measured value and the SM prediction still exists. Measurements of \(P_{5}^{\prime}\) from LHCb [9; 10] and ATLAS [11] show a \(3.3\sigma\) deviation from the SM prediction. Similarly, CMS [12] and Belle [13] measurements show \(1\sigma\) and \(2.6\sigma\) deviations, respectively [14; 15; 16]. Again, the measured value of the branching fraction of \(B_{s}\to\phi\,\mu\bar{\mu}\) in \(q^{2}\in[1.1,6.0]\) GeV\({}^{2}\) deviates from the SM prediction at \(3.3\sigma\) [17; 18; 19]. Moreover, measurements of the ratios of branching ratios [20] \(R_{K^{0}_{s}}\) and \(R_{K^{*+}}\), isospin partners of \(R_{K}\) and \(R_{K^{*}}\), also deviate from the SM prediction at \(1.4\sigma\) and \(1.5\sigma\), respectively. There exists another class of FCNC transition decays with neutral leptons in the final state that are mediated via \(b\to s\nu\bar{\nu}\) quark level transitions. Theoretically, these di-neutrino channels are clean, as they do not suffer from hadronic uncertainties beyond the form factors, such as the non-factorizable corrections and photon penguin contributions. However, they are very challenging from the experimental point of view due to the presence of neutrinos in the final state. In spite of that, BaBar and Belle/Belle II have managed to provide the upper bounds on \(B\to K^{(*)}\,\nu\bar{\nu}\) decays to be \({\cal B}(B^{+}\to K^{+}\,\nu\bar{\nu})\leq 4.1\times 10^{-5}\) [21] and \({\cal B}(B^{0}\to K^{0*}\,\nu\bar{\nu})\leq 1.8\times 10^{-5}\), respectively. Combined with the previous measurements from Belle and BaBar, one estimates the world average value of the branching fraction to be \({\cal B}(B^{+}\to K^{+}\,\nu\bar{\nu})\leq(1.1\pm 0.4)\times 10^{-5}\) [21]. A combined analysis of \(b\to s\ell\bar{\ell}\) and \(b\to s\nu\bar{\nu}\) decays is theoretically well motivated, as these two channels are closely related not only in the SM but also beyond the SM under \(SU(2)_{L}\) gauge symmetry. Moreover, more precise measurements of the \(B\to K^{(*)}\,\nu\bar{\nu}\) branching fractions in the future may provide useful insight into NP that may be present in \(b\to s\,l^{+}\,l^{-}\) transition decays. Various analyses, both model-dependent and model-independent, have been performed to account for these anomalies. A non-exhaustive compilation of relevant literature can be found in the references [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37].
To confirm the presence of NP, we need to perform measurements of similar observables in different decay processes that proceed via same quark level transitions. Similarly, it is very important to perform a detailed angular analysis in order to look for several form factor independent angular observables which are sensitive to NP. In this context, baryonic \(\Lambda_{b}^{0}\to\Lambda^{(*)}\,(\to pK)\,\mu^{+}\mu^{-}\) decay mode has got lot of attention. The recent measurement from LHCb suggests that although the ratio \(R_{pK}\) is compatible with SM, there is suppression in \({\cal B}(\Lambda_{b}\to pK\mu\bar{\mu})\) compared to \({\cal B}(\Lambda_{b}\to pKe\bar{e})\)[38]. To interpret this result, it is essential to have a precise theoretical knowledge of various excited states of \(\Lambda\) baryon contributing to \(pK\) region. The \(\Lambda_{b}\) decay to \(\Lambda^{*}\equiv\Lambda^{*}(1520)\) has the largest contribution among the various semileptonic modes of \(\Lambda_{b}\) decays to hadrons. Due to its spin parity of \(J^{P}=3/2^{-}\) and strong decay into the \(N\bar{K}\) pair, the \(\Lambda^{*}\) is readily distinguishable from nearby hadrons, including the \(\Lambda(1600)\), \(\Lambda(1405)\), and weakly decaying \(\Lambda(1116)\), which have a spin parity of \(J^{P}=1/2^{\pm}\). In Ref [39; 40], the authors calculate the LQCD form factors in the weak transition of \(\Lambda_{b}\to\Lambda(1520)\) decay, while in Ref. [41] and Ref. [42], the authors performed angular analyses of \(\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}\) decays for massless and massive leptons, respectively. Additionally, in Ref [43], the authors investigated the angular distributions of \(\Lambda_{b}\to\Lambda(1520)\ell^{+}\ell^{-}\) and discussed the potential for identifying NP effects. Similarly, the authors in Ref[44] study the \(\Lambda_{b}\to\Lambda(1520)(\to N\bar{K})\ell^{+}\ell^{-}\) process with \(N\bar{K}=\{pK^{-},n\bar{K}^{0}\}\) and examine several angular observables. The study is performed with a set of operators where the SM operator basis is supplemented with its chirality flipped counterparts and new scalar and pseudoscalar operators. The three-body light-front quark model based on the gaussian expansion method is used to systematically investigate the \(\Lambda_{b}\to\Lambda(1520)(\to N\bar{K})\ell^{+}\ell^{-}\) (\(\ell=e,\mu,\tau\)) decay process. Several theoretical methods, such as lattice QCD (LQCD) [45; 46], QCD sum rules (QCDSR) [47], light-cone sum rule (LCSR) [48; 49; 50; 51; 52], covariant quark model (CQM) [53], nonrelativistic quark model [54], and Bethe-Salpeter approach [55], have been used to study the rare decay \(\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}\). The initial measurement of the decay was conducted by the CDF Collaboration [56], followed by a subsequent measurement by the LHCb Collaboration [57; 58]. In ref [59] QCD sum rules were used to calculate the \(\Lambda_{b}\to\Lambda\) transition form factors and to study the unpolarized decay. The form factors for \(\Lambda_{b}\to\Lambda\) at large recoil were analyzed using a sum-rule approach to study spectator-scattering corrections [60]. Light-cone distribution amplitude of \(\Lambda_{b}\) wave function was studied in [61; 62; 63] to further understand the theoretical aspects. A model-independent analysis for unpolarized \(\Lambda_{b}\to\Lambda(\to N\pi)\ell^{+}\ell^{-}\) decay was performed in [64; 65; 66; 67; 42] using a complete set of dimension-six operators. 
The angular distribution of the decay with unpolarised \(\Lambda_{b}\) baryon has been explored in Refs.[67; 68], while in Ref.[69], the study involved polarized \(\Lambda_{b}\) baryon. Furthermore, in Ref.[70], the \(b\to s\mu^{+}\mu^{-}\) Wilson coefficients were examined by utilizing the complete angular distribution of the rare decay \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) measured by the LHCb Collaboration [58]. Similarly, in Ref[71], the authors calculate the branching fraction of \(\Lambda_{b}\to\Lambda\nu\bar{\nu}\) decay by taking the polarised \(\Lambda_{b}\) and \(\Lambda\). Moreover, in Ref [72], the authors analyse \(\Lambda_{b}\to\Lambda\nu\bar{\nu}\) decay by considering the \(Z^{{}^{\prime}}\) model. Here the authors calculate the branching ratio as well as the longitudinal, transversal and normal polarizations of the di-neutrino decay channel of the baryonic decay \(\Lambda_{b}\to\Lambda\) within the SM as well as in the presence of leptophobic \(Z^{{}^{\prime}}\) model. In this paper, we study the implication of \(b\to s\,l^{+}\,l^{-}\) anomalies on \(\Lambda_{b}\to\Lambda^{(*)}\,l^{+}\,l^{-}\) and \(\Lambda_{b}\to\Lambda^{(*)}\,\nu\bar{\nu}\) decays in a model independent way. Our work differs significantly from others. For NP analysis, we construct several \(1D\) and \(2D\) NP scenarios emerging out of dimension six operators in the standard model effective field theory (SMEFT) formalism. We obtain the allowed NP parameter space by performing a global fit to the \(b\to s\,l^{+}\,l^{-}\) data. Moreover, we also use the measured upper bound on \({\cal B}(B\to K^{(*)}\nu\bar{\nu})\) to check the compatibility of our fit results. The paper is organized as follows. In Section II, we start with a brief description of the SMEFT framework and write down the effective Hamiltonian for the \(b\to s\nu\bar{\nu}\) and \(b\to s\,l^{+}\,l^{-}\) quark level transition decays. Subsequently, we report all the relevant formulae for the observables in Section II. In Section III, we first report all the input parameters that are used for our analysis. A detailed discussion of the results pertaining to \(\Lambda_{b}\to(\Lambda^{*}(\to pK^{-}),\Lambda(\to p\pi))\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to(\Lambda^{*}(\to pK^{-}),\Lambda(\to p\pi))\nu\bar{\nu}\) baryonic decay observables in the SM and in case of NP scenarios are also presented. Finally, we conclude with a brief summary of our results in Section IV. ## II Theory Till date, no direct evidence of new particles near the electroweak scale has been observed from searches conducted in the Large Hadron Collider (LHC). Nevertheless, these searches provide indirect evidence supporting the existence of NP at a scale beyond the electroweak scale. To explore indirect signatures of NP in a model-independent way, the SMEFT framework offers a more efficient approach. The SMEFT Lagrangian explains particle interactions in the SM and in all possibleextensions of SM. It is constructed by incorporating higher-dimensional operators into the SM Lagrangian while maintaining the \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\) gauge symmetry. These higher-dimensional operators are suppressed by a factor that depends on a new energy scale. The SMEFT Lagrangian comprises all sets of these higher-dimensional operators that are consistent with the underlying gauge symmetry. For investigating NP beyond the SM at low energies, this framework provides an excellent platform. 
From the fundamental aspect of the electroweak theory, the left-handed charged leptons are related to neutral leptons through the \(SU(2)_{L}\) symmetry. In this study, we concentrate on the connection between \(b\to s\,l^{+}\,l^{-}\) and \(b\to s\nu\bar{\nu}\) transition decays within the SMEFT framework by considering dimension six operators. If no new particles are observed at the LHC, it will imply a NP scale that is greater than the energy scale of the LHC. The SMEFT analysis would be crucial in this situation as it offers a way to examine the implications of NP indirectly by evaluating their effects on SM low energy processes. The effective Lagrangian corresponding to dimension six operators is expressed as [73] \[\mathcal{L}^{(6)}=\sum_{i}\frac{c_{i}}{\Lambda^{2}}\,\mathcal{Q}_{i}\,. \tag{1}\] Among all the operators, the relevant operators contributing to both \(b\to s\nu\bar{\nu}\) and \(b\to s\,l^{+}\,l^{-}\) decays are \[\mathcal{Q}^{(1)}_{Hq}=i(\bar{q}_{L}\gamma_{\mu}q_{L})H^{\dagger} D^{\mu}H,\hskip 14.226378pt\mathcal{Q}^{(3)}_{Hq}=i(\bar{q}_{L}\gamma_{ \mu}\tau^{a}q_{L})H^{\dagger}D^{\mu}\tau_{a}H,\hskip 14.226378pt\mathcal{Q}_{Hd} =i(\bar{d}_{R}\gamma_{\mu}d_{R})H^{\dagger}D^{\mu}H,\] \[\mathcal{Q}^{(1)}_{ql}=(\bar{q}_{L}\gamma_{\mu}q_{L})(\bar{l}_{ L}\gamma^{\mu}l_{L}),\hskip 14.226378pt\mathcal{Q}^{(3)}_{ql}=(\bar{q}_{L} \gamma_{\mu}\tau^{a}q_{L})(\bar{l}_{L}\gamma^{\mu}r_{a}l_{L}),\hskip 14.226378pt \mathcal{Q}_{dl}=(\bar{d}_{R}\gamma_{\mu}d_{R})(\bar{l}_{L}\gamma^{\mu}l_{L})\,. \tag{2}\] Similarly, the operators contributing only to \(b\to s\,l^{+}\,l^{-}\) decays are \[\mathcal{Q}_{de}=(\bar{d}_{R}\gamma_{\mu}d_{R})(\bar{e}_{R}\gamma^ {\mu}e_{R}),\hskip 14.226378pt\mathcal{Q}_{ge}=(\bar{q}_{L}\gamma_{\mu}q_{L})( \bar{e}_{R}\gamma^{\mu}e_{R})\,. \tag{3}\] At low energy, the most general \(\Delta F=1\) effective Hamiltonian governing both \(b\to s\nu\bar{\nu}\) and \(b\to s\,l^{+}\,l^{-}\) decays can be written as [74; 35], \[\mathcal{H}_{eff}=-\frac{4G_{F}}{\sqrt{2}}\,V_{tb}V_{ts}^{*}\,\frac{e^{2}}{16 \pi^{2}}\,\sum_{i}C_{i}\,\mathcal{O}_{i}\,+h.c., \tag{4}\] where \(G_{F}\) is the Fermi coupling constant, \(|V_{tb}V_{ts}^{*}|\) are the associated Cabibbo Kobayashi Maskawa (CKM) matrix elements. The sum \(i=L,R\) comprises the operators \(\mathcal{O}_{L,R}\) with the corresponding WCs \(C_{L,R}\) contributing to \(b\to s\nu\bar{\nu}\) decays. They are \[\mathcal{O}_{L}=(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\nu}\gamma^{\mu}(1-\gamma_{5 })\nu),\hskip 14.226378pt\mathcal{O}_{R}=(\bar{s}\gamma_{\mu}P_{R}b)(\bar{\nu} \gamma^{\mu}(1-\gamma_{5})\nu). \tag{5}\] Here, \(P_{L,R}=(1\mp\gamma_{5})/2\) represents the projection operator. In the SM, \(C_{R}^{\rm SM}=0\) and the value of \(C_{L}^{\rm SM}\) is calculated to be \[C_{L}^{\rm SM}=-X_{t}/s_{w}^{2}=-6.38\pm 0.06,\hskip 28.452756ptX_{t}=1.469\pm 0.017,\hskip 28.452756pts_{w}^{2}=0.23126(5). \tag{6}\] Similarly, for \(i=9^{(\prime)},10^{(\prime)}\), the sum comprises the operators \(\mathcal{O}_{9^{(\prime)},10^{(\prime)}}\) with the corresponding WCs \(C_{9^{(\prime)},10^{(\prime)}}\) that contribute to \(b\to s\,l^{+}\,l^{-}\) decays. The operators are \[\mathcal{O}^{(\prime)}_{9}=(\bar{s}\gamma_{\mu}P_{L(R)}b)(\bar{l} \gamma^{\mu}l),\hskip 14.226378pt\mathcal{O}^{(\prime)}_{10}=(\bar{s}\gamma_{ \mu}P_{L(R)}b)(\bar{l}\gamma^{\mu}\gamma_{5}l). \tag{7}\] In the presence of dimension six SMEFT operators, the WCs \(C_{9,10,L}\) and \(C_{9^{\prime},10^{\prime},R}\) get modified. They can be expressed as follows. 
[74] \[C_{9} = C_{9}^{\rm SM}+\widetilde{c}_{qe}+\widetilde{c}_{ql}^{(1)}\,+ \widetilde{c}_{ql}^{(3)}\,-\zeta\widetilde{c}_{Z}\] \[C_{10} = C_{10}^{\rm SM}+\widetilde{c}_{qe}-\widetilde{c}_{ql}^{(1)}\,- \widetilde{c}_{ql}^{(3)}\,+\widetilde{c}_{Z}\] \[C_{L}^{\nu} = C_{L}^{\rm SM}+\widetilde{c}_{ql}^{(1)}\,-\widetilde{c}_{ql}^{ (3)}\,+\widetilde{c}_{Z}\] \[C_{9}^{\prime} = \widetilde{c}_{de}\,+\widetilde{c}_{dl}\,-\zeta\widetilde{c}_{Z}^ {\prime}\] \[C_{10}^{\prime} = \widetilde{c}_{de}\,-\widetilde{c}_{dl}\,+\widetilde{c}_{Z}^{ \prime}\] \[C_{R}^{\nu} = \widetilde{c}_{dl}\,+\widetilde{c}_{Z}^{\prime}\,, \tag{8}\] where, \(\widetilde{c}_{Z}=\frac{1}{2}(\widetilde{c}_{Hq}^{(1)}+\widetilde{c}_{Hq}^{(3 )})\), \(\widetilde{c}_{Z}^{\prime}=\frac{1}{2}(\widetilde{c}_{Hd})\) and \(\zeta\approx 0.08\) represents the small vector coupling to charged leptons. Differential decay distribution and \(q^{2}\) dependent observables for \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\ell^{+}\ell^{-}\) decays The four-fold angular distribution for \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\ell^{+}\ell^{-}\) decay can be expressed as [42] \[\frac{d^{4}{\cal B}}{dq^{2}d{\rm cos}\,\theta_{\ell}d{\rm cos}\, \theta_{\Lambda^{*}}d\phi} = \frac{3}{8\pi}\Bigg{[}\bigg{(}K_{1c}{\rm cos}\,\theta_{\ell}+K_{1 cc}{\rm cos}^{2}\,\theta_{\ell}+K_{1ss}{\rm sin}^{2}\,\theta_{\ell}\bigg{)}{ \rm cos}^{2}\,\theta_{\Lambda^{*}} \tag{9}\] \[\qquad+\,\bigg{(}K_{2c}{\rm cos}\,\theta_{\ell}+K_{2cc}{\rm cos}^ {2}\,\theta_{\ell}+K_{2ss}{\rm sin}^{2}\,\theta_{\ell}\bigg{)}{\rm sin}^{2}\, \theta_{\Lambda^{*}}\] \[\qquad+\,\bigg{(}K_{3ss}{\rm sin}^{2}\,\theta_{\ell}\bigg{)}{\rm sin }^{2}\,\theta_{\Lambda^{*}}\,{\rm cos}\,\phi+\bigg{(}K_{4ss}{\rm sin}^{2}\, \theta_{\ell}\bigg{)}{\rm sin}^{2}\,\theta_{\Lambda^{*}}\,{\rm sin}\,\phi\,{ \rm cos}\,\phi\] \[\qquad+\,\bigg{(}K_{5s}{\rm sin}\,\theta_{\ell}+K_{5sc}{\rm sin}\, \theta_{\ell}{\rm cos}\,\theta_{\ell}\bigg{)}{\rm sin}\,\theta_{\Lambda^{*}}{ \rm cos}\,\theta_{\Lambda^{*}}\,{\rm cos}\,\phi\] \[\qquad+\,\bigg{(}K_{6s}{\rm sin}\,\theta_{\ell}+K_{6sc}{\rm sin}\, \theta_{\ell}{\rm cos}\,\theta_{\ell}\bigg{)}{\rm sin}\,\theta_{\Lambda^{*}}{ \rm cos}\,\theta_{\Lambda^{*}}\,{\rm sin}\,\phi\,\bigg{]}\,,\] where \(\theta_{\Lambda^{*}}\) represents the angle formed by the proton with the daughter baryon \(\Lambda^{*}\) in the rest frame of \(\Lambda_{b}\). Similarly, in the rest frame of the lepton pair, \(\theta_{\ell}\) denotes the angle formed by the \(\ell^{-}\) with respect to the direction of the daughter baryon \(\Lambda^{*}\). Moreover, in the rest frame of \(\Lambda_{b}\), \(\phi\) defines the angle between the planes containing \(p\,K^{-}\) and the lepton pair. The angular coefficients \(K_{\{\cdots\}}\), \(\{\cdots\}=1c,\cdots 6sc\), can be expressed as \[K_{\{\cdots\}}=K_{\{\cdots\}}+\frac{m_{\ell}}{\sqrt{q^{2}}}K_{\{ \cdots\}}^{\prime}+\frac{m_{\ell}^{2}}{q^{2}}K_{\{\cdots\}}^{\prime\prime}\,. \tag{10}\] Here the first term \(K\) corresponds to massless leptons, whereas, \(K^{\prime}\) and \(K^{\prime\prime}\) correspond to linear (\({\cal O}(m_{\ell}/\sqrt{q^{2}})\)) and quadratic (\({\cal O}(m_{\ell}^{2}/q^{2})\)) mass corrections, respectively. The explicit expressions for \(K_{\{\cdots\}}\), \(K_{\{\cdots\}}^{\prime}\) and \(K_{\{\cdots\}}^{\prime\prime}\) in terms of transversely amplitude are taken Ref [42]. From the Differential decay distributions, one can construct several physical observables. 
* The differential branching ratio \(d{\cal B}/dq^{2}\), the lepton forward-backward asymmetry \(A_{FB}^{l}(q^{2})\), the fraction of longitudinal polarization \(F_{L}(q^{2})\) and the ratio of branching fractions \(R_{\Lambda^{*}}(q^{2})\) are constructed from the angular coefficients \(K_{\{\cdots\}}\); in particular, \[\frac{d{\cal B}}{dq^{2}} = \frac{1}{3}\bigg{[}K_{1cc}+2K_{1ss}+2K_{2cc}+4K_{2ss}+2K_{3ss}\bigg{]}\,,\qquad R_{\Lambda^{*}}(q^{2})=\frac{d{\cal B}/dq^{2}|_{\mu-mode}}{d{\cal B}/dq^{2}|_{e-mode}}\,.\] It is important to note that the angular coefficients (\(K_{1c}\), \(K_{2c}\)), (\(K_{1cc}\), \(K_{2cc}\)), and (\(K_{1ss}\), \(K_{2ss}\)) exhibit a strict relation in the SM. That is \[\frac{K_{1c}}{K_{2c}}=4\,,\quad\frac{K_{1cc}}{K_{2cc}}=4\,,\quad\frac{K_{1ss}}{K_{2ss}}=4\,. \tag{13}\] Differential decay distribution and \(q^{2}\) dependent observables for \(\Lambda_{b}\to\Lambda(\to p\pi)\ell^{+}\ell^{-}\) The four-fold angular distribution for \(\Lambda_{b}\to\Lambda(\to p\pi)\ell^{+}\ell^{-}\) is defined as [65] \[\frac{d^{4}{\cal B}}{dq^{2}d\cos\theta_{\ell}d\cos\theta_{\Lambda}d\phi} = \frac{3}{8\pi}(K_{1ss}\sin^{2}\theta_{\ell}+K_{1cc}\cos^{2}\theta_{\ell}+K_{1c}\cos\theta_{\ell}) \tag{14}\] \[+ (K_{2ss}\sin^{2}\theta_{\ell}+K_{2cc}\cos^{2}\theta_{\ell}+K_{2c}\cos\theta_{\ell})\cos\theta_{\Lambda}\] \[+ (K_{3sc}\sin\theta_{\ell}\cos\theta_{\ell}+K_{3s}\sin\theta_{\ell})\sin\theta_{\Lambda}\sin\phi\] \[+ (K_{4sc}\sin\theta_{\ell}\cos\theta_{\ell}+K_{4s}\sin\theta_{\ell})\sin\theta_{\Lambda}\cos\phi\,,\] where the angular coefficients \(K_{ijk}\) can be expressed as \[K_{ijk}=K_{ijk}+\frac{m_{\ell}}{\sqrt{q^{2}}}K^{\prime}_{ijk}+\frac{m_{\ell}^{2}}{q^{2}}K^{\prime\prime}_{ijk}\,, \tag{15}\] with \(ijk=1ss\cdots 4s\). The explicit expressions for \(K\), \(K^{\prime}\) and \(K^{\prime\prime}\) are taken from Ref. [65]. We define several physical observables pertaining to this decay mode.
* Differential branching ratio \(d{\cal B}/dq^{2}\), the lepton forward-backward asymmetry \(A_{FB}^{l}(q^{2})\), the fraction of longitudinal polarization \(F_{L}(q^{2})\) and the ratio of branching fraction \(R_{\Lambda}(q^{2})\) are defined as \[\frac{d{\cal B}}{dq^{2}}=2K_{1ss}+K_{1cc}\,\,\,\,\,\,\,\,F_{L}= \frac{2K_{1ss}-K_{1cc}}{2K_{1ss}+K_{1cc}}\,,\quad\,A_{\rm FB}^{\ell}=\frac{3}{2 }\frac{K_{1c}}{2K_{1ss}+K_{1cc}}\,,\quad\,\,\,\,R_{\Lambda}(q^{2})=\frac{d{ \cal B}/dq^{2}|_{\mu-mode}}{d{\cal B}/dq^{2}|_{e-mode}}\] (16) * Angular observables such as \(\hat{K}_{1c}\), \(\hat{K}_{1cc}\), \(\hat{K}_{1ss}\), \(\hat{K}_{2c}\), \(\hat{K}_{2cc}\), \(\hat{K}_{2ss}\), \(\hat{K}_{3ss}\), \(\hat{K}_{3sc}\), \(\hat{K}_{4sc}\), \(\hat{K}_{4s}\) are defined as \[\hat{K}_{1c} = \frac{K_{1c}}{d{\cal B}/dq^{2}}\,\,\,\,\,\,\,\,\hat{K}_{1cc}= \frac{K_{1cc}}{d{\cal B}/dq^{2}}\,\,\,\,\,\,\,\,\hat{K}_{1ss}=\frac{K_{1ss}}{d{ \cal B}/dq^{2}}\,\,\,\,\,\,\,\,\hat{K}_{2c}=\frac{K_{2c}}{d{\cal B}/dq^{2}}\, \,\,\,\,\,\,\hat{K}_{2cc}=\frac{K_{2cc}}{d{\cal B}/dq^{2}}\] \[\hat{K}_{2ss} = \frac{K_{2ss}}{d{\cal B}/dq^{2}}\,\,\,\,\,\,\,\hat{K}_{3sc}=\frac {K_{3sc}}{d{\cal B}/dq^{2}}\,\,\,\,\,\,\,\hat{K}_{3s}=\frac{K_{3s}}{d{\cal B}/ dq^{2}}\,\,\,\,\,\,\,\,\hat{K}_{4sc}=\frac{K_{4sc}}{d{\cal B}/dq^{2}}\,\,\,\,\,\,\,\hat{K}_{4s}= \frac{K_{4s}}{d{\cal B}/dq^{2}}\] (17) ## III Results ### Input parameters The numerical values of all the input parameters used in the paper are summarized in the Table 1. Input parameters, such as the masses of mesons and quarks are expressed in GeV units, the Fermi coupling constant \(G_{F}\) is in GeV\({}^{-2}\) units and the life time of \(\Lambda_{b}\) baryon is in seconds. For hadronic inputs such as \(\Lambda_{b}\to\Lambda^{*}\) form factors, we use the values reported in Ref. [54], and for \(\Lambda_{b}\to\Lambda\) form factors, we use the LQCD results of Ref [46]. The relevant formula for the \(\Lambda_{b}\to\Lambda^{*}\) form factors pertinent for our discussion is as follows [54] \[F(\hat{s})=(a_{0}+a_{2}p_{\Lambda}^{2}+a_{4}p_{\Lambda}^{4})\exp\Big{(}-\frac {3m_{q}^{2}}{2\hat{m}_{\Lambda}^{2}}\frac{p_{\Lambda}^{2}}{\alpha_{\Lambda^{ \prime}}^{2}}\Big{)}\,, \tag{18}\] where \[p_{\Lambda}=\frac{m_{\Lambda b}}{2}\sqrt{\phi(\hat{s})}\,,\qquad\qquad\phi(\hat{s} )=(1-r)^{2}-2(1+r)\hat{s}+\hat{s}^{2}\,,\qquad\qquad\alpha_{\Lambda\Lambda^{ \prime}}=\sqrt{\frac{\alpha_{\Lambda}^{2}+\alpha_{\Lambda^{\prime}}^{2}}{2}}\,. \tag{19}\] Here \(r=m_{\Lambda^{*}}^{2}/m_{\Lambda_{b}}^{2}\) and \(\hat{s}\equiv q^{2}/m_{\Lambda b}^{2}\). We consider \(5\%\) uncertainty in the input parameters \(F_{i}\in(i=1...4)\), \(G_{i}\in(i=1...4)\) and \(H_{i}(i=1...6)\). The values of these parameter, taken from Ref [54], are reported in Table II. Similarly, for \(\Lambda_{b}\to\Lambda\) transition form factors, we use the relevant form factor formula from Ref [46]. That is \[f(q^{2})=\frac{1}{1-q^{2}/(m_{\rm pole}^{f})^{2}}\big{[}a_{0}^{f}+a_{1}^{f}z(q ^{2},t_{+})\big{]}\,. \tag{20}\] To calculate the statistical uncertainties of the observable, we utilize the parameters from the "nominal" fit. However, to estimate the systematic uncertainties, we use a "higher-order" fit where the fit function is given by \[f(q^{2})=\frac{1}{1-q^{2}/(m_{\rm pole}^{f})^{2}}\big{[}a_{0}^{f}+a_{1}^{f}z(q ^{2},t_{+})+a_{2}^{f}(z(q^{2},t_{+}))^{2}\big{]}\,. 
\tag{21}\] The function \(z(q^{2},t_{+})\) is defined as \[z(q^{2},t_{+})=\frac{\sqrt{t_{+}-q^{2}}-\sqrt{t_{+}-t_{0}}}{\sqrt{t_{+}-q^{2}} +\sqrt{t_{+}-t_{0}}}\,, \tag{22}\] where \(t_{0}=(m_{\Lambda_{b}}-m_{\Lambda})^{2}\) and \(t_{+}=(m_{B}+m_{K})^{2}\). The fit parameters and masses used in our analysis are taken from Ref. [46]. For completeness, we report them in Table III. ### SM predictions In this section, we present the central values and the \(1\sigma\) uncertainties of several observables for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\ell^{+}\ell^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\ell^{+}\ell^{-}\) decay channels. More specifically, we give prediction of the branching ratio (\(BR\)), the ratio of branching ratios (\(R_{\Lambda^{(*)}}\)), the forward-backward asymmetry (\(A_{FB}^{l}\)), the longitudinal polarization fraction \((F_{L})\) for the \(\mu^{+}\mu^{-}\) modes, respectively. We also report various angular observables such as \(\hat{K}_{1ss}\), \(\hat{K}_{2c}\), \(\hat{K}_{2cc}\), \(\hat{K}_{2ss}\), \(\hat{K}_{3ss}\), \(\hat{K}_{4ss}\), \(\hat{K}_{4s}\), \(\hat{K}_{5s}\) for \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\ell^{+}\ell^{-}\) decay mode. Similarly, we report angular observables such as \(\hat{K}_{1c}\), \(\hat{K}_{1cc}\), \(\hat{K}_{1ss}\), \(\hat{K}_{2c}\), \(\hat{K}_{2cc}\), \(\hat{K}_{2ss}\), \(\hat{K}_{3ss}\), \(\hat{K}_{3sc}\), \(\hat{K}_{4sc}\), \(\hat{K}_{4s}\) for \(\Lambda_{b}\to\Lambda(\to p\pi)\ell^{+}\ell^{-}\) decay mode as well. Moreover, we give predictions of several observables pertaining to \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay modes. The central values of the observables are obtained using the central values of the input parameters, whereas the uncertainties in each observable are determined by varying the uncertainties associated with inputs such as form factors and the CKM matrix elements within \(1\sigma\) of their central values. For the \(\mu^{+}\mu^{-}\) final states, we explore two \(q^{2}\) bins, namely \((1.1-6.0)\) and \((14.2-q^{2}_{max})\) for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\ell^{+}\ell^{-}\) decay mode and \((1.1-6.0)\) and \((15.0-q^{2}_{max})\) for the \(\Lambda_{b}\to\Lambda(\to p\pi)\ell^{+}\ell^{-}\) decay mode, respectively. All the results are listed in Table 4 and Table. 5, respectively. Our observations are as follows. * The branching ratio of \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) mode is found to be of \({\cal O}(10^{-9})\), while the branching ratio of \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay mode is observed to be of \({\cal O}(10^{-7})\). * The values of \(F_{L}\), \(A^{l}_{FB}\), \(\hat{K}_{1c}\), \(\hat{K}_{2c}\), \(\hat{K}_{2ss}\), and \(\hat{K}_{4ss}\) are observed to be lower at high \(q^{2}\) bin compared to the values obtained in the low \(q^{2}\) bin. * In the case of the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay mode, values of \(\hat{K}_{3s}\) and \(\hat{K}_{4ss}\) are zero in the low \(q^{2}\) bin, whereas they are non-zero in the high \(q^{2}\) bin. * We found the ratios \(K_{1c}/K_{2c}\), \(K_{1cc}/K_{2cc}\) and \(K_{1ss}/K_{2ss}\) to be equal to 4. * As expected, value of \(R_{\Lambda^{(*)}}\) is very close to unity. * The branching fraction of both \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay channels are found to be of \({\cal O}(10^{-6})\). 
* It is observed that the uncertainties in the di-neutrino channels are smaller than the uncertainties in the di-lepton decay channels. For completeness, we also report the branching ratio for the \(\Lambda_{b}\to\Lambda(\to p\pi)\tau^{+}\tau^{-}\) mode to be \((1.9\pm 0.43)\times 10^{-7}\) in \(q^{2}\in[15.0-q^{2}_{max}]\), which is quite similar to the value reported in Ref. [65]. A slight difference is observed due to the different choice of input parameters. ### Global fit Our primary objective in this work is to use a model-independent SMEFT formalism to investigate the effects of \(b\to s\,l^{+}\,l^{-}\) anomalies on several baryonic \(b\to s\,l^{+}\,l^{-}\) and \(b\to s\,\nu\,\bar{\nu}\) transition decays. The SMEFT coefficients for left chiral currents, namely \(\widetilde{c}^{(1)}_{ql}\), \(\widetilde{c}^{(3)}_{ql}\), and \(\widetilde{c}_{Z}\), contribute to the WCs \(C_{9,10}\) in \(b\to s\,l^{+}\,l^{-}\) and to \(C_{L}\) in \(b\to s\nu\bar{\nu}\) transition decays. Similarly, the SMEFT coefficients for right chiral currents such as \(\widetilde{c}_{dl}\) and \(\widetilde{c}^{\prime}_{Z}\) are connected to \(C^{\prime}_{9,10}\) in \(b\to s\,l^{+}\,l^{-}\) and \(C_{R}\) in \(b\to s\nu\bar{\nu}\) transition decays. We construct several \(1D\) and \(2D\) NP scenarios. For a \(1D\) NP scenario, we consider the NP contribution from a single NP operator, whereas for a \(2D\) NP scenario, we consider NP contributions from two different NP operators simultaneously. We use a naive \(\chi^{2}\) analysis and determine the scenario that best explains the anomalies observed in \(b\to s\,l^{+}\,l^{-}\) transition decays. To obtain the best-fit values of these NP Wilson coefficients, we use all the available \(b\to s\,l^{+}\,l^{-}\) experimental data. We define our \(\chi^{2}\) as follows \[\chi^{2}=\sum_{i}\frac{\left({\cal O}^{\rm th}_{i}-{\cal O}^{\rm exp}_{i}\right)^{2}}{(\Delta{\cal O}^{\rm exp}_{i})^{2}+(\Delta{\cal O}^{\rm th}_{i})^{2}}\,, \tag{23}\] where \({\cal O}_{i}^{\rm th}\) and \({\cal O}_{i}^{\rm exp}\) denote the theoretical and measured central values of each observable, respectively. The uncertainties associated with theory and experimental values are represented by \(\Delta{\cal O}_{i}^{\rm th}\) and \(\Delta{\cal O}_{i}^{\rm exp}\). In our \(\chi^{2}\) analysis, we include a total of eight measurements, namely (\(R_{K}\), \(R_{K^{*}[q^{2}=1.1-6]}\) (BELLE), \(R_{K^{*}[q^{2}=1.1-6]}\) (BABAR), \(P^{\prime}_{5[q^{2}=4-6]}\), \(P^{\prime}_{5[q^{2}=4.3-6]}\), \(P^{\prime}_{5[q^{2}=4-8]}\), \({\cal B}(B_{s}\rightarrow\phi\mu^{+}\mu^{-})\), and \({\cal B}(B_{s}\rightarrow\mu^{+}\mu^{-})\)). The best fit values and the corresponding allowed ranges of all the SMEFT coefficients for various \(1D\) and \(2D\) scenarios are reported in Table. 6. We also report the \(\chi^{2}_{\rm min}\)/d.o.f and the \({\rm Pull}_{\rm SM}=\sqrt{\chi^{2}_{\rm SM}-\chi^{2}_{\rm NP}}\) for each scenario in Table. 6. We consider eight measured parameters in our \(\chi^{2}\) analysis. Hence, the number of degrees of freedom (d.o.f) will be \(8-1=7\) for each \(1D\) NP scenario and \(8-2=6\) for each \(2D\) NP scenario. We first determine the \(\chi^{2}_{\rm min}\)/d.o.f in the SM to be 5.578, which quantifies the degree of disagreement between the SM prediction and the current experimental data. In each case, the \(\chi^{2}_{\rm min}\) value represents the best-fit value. We impose the \(\chi^{2}\leq 12.592\) constraint to obtain the allowed range of each \(1D\) NP coefficient at 95% confidence level (CL).
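The global fit described above amounts to a straightforward numerical minimization of Eq. (23). The sketch below illustrates the procedure for a generic \(2D\) scenario; the experimental values, uncertainties and the linearized theory response are placeholder numbers chosen only to make the snippet self-contained, not the actual inputs, which require the shifted Wilson coefficients of Eq. (8) and the hadronic form factors discussed above.

```python
# Schematic sketch of the naive chi^2 fit of Eq. (23); all numbers are toy placeholders.
import numpy as np
from scipy.optimize import minimize

obs_exp = np.array([0.95, 0.93, 0.99, -0.44, -0.58, -0.27, 0.80, 0.85])  # toy central values
err_exp = np.array([0.05, 0.09, 0.08, 0.12, 0.15, 0.12, 0.10, 0.10])     # toy experimental errors
err_th  = np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05, 0.08, 0.08])     # toy theory errors

def theory_prediction(c):
    """Toy linearized predictions O_i(c) = O_i^SM + k_ij c_j for a 2D coupling point c,
    e.g. c = (c_ql^(3), c_Z'); the SM values and response matrix here are placeholders."""
    O_sm = np.array([1.00, 1.00, 1.00, -0.35, -0.50, -0.20, 1.00, 1.00])
    k = np.column_stack([np.linspace(0.02, 0.09, 8), np.linspace(-0.06, 0.03, 8)])
    return O_sm + k @ np.asarray(c)

def chi2(c):
    d = theory_prediction(c) - obs_exp
    return np.sum(d**2 / (err_exp**2 + err_th**2))

fit = minimize(chi2, x0=np.zeros(2), method="Nelder-Mead")
print(fit.x, fit.fun)   # best-fit couplings and chi^2_min
# coupling points with chi2 below the appropriate 95% CL cut quoted in the text
# define the allowed range of the NP coefficients
```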
Similarly, the allowed range for each \(2D\) NP coefficient at the 95% CL is obtained by imposing \(\chi^{2}\leq 11.070\) constraint. It is evident from Table 6 that not all the SMEFT coefficients can explain the observed deviations in \(b\to s\,l^{+}\,l^{-}\) data. In fact, NP scenarios represented by \(\widetilde{c}_{dl}\), \(\widetilde{c}^{\prime}_{Z}\) and \((\widetilde{c}_{dl},\widetilde{c}^{\prime}_{Z})\) WC's are ruled out because the \(\chi^{2}_{min}\) values obtained for these scenarios are higher than the \(\chi^{2}\) value obtained in SM. Hence, we will not discuss them any further. Nevertheless, there are a few \(2D\) scenarios, namely \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\), and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), for which the \({\rm Pull}_{\rm SM}\) is considerably larger than the rest of the NP scenarios. Furthermore, these scenarios exhibit better compatibility with \(R_{K}\), \(R_{K^{*}}\), \(P^{\prime}_{5}\), \({\cal B}(B_{s}\rightarrow\phi\mu^{+}\mu^{-})\) and \({\cal B}(B_{s}\rightarrow\mu^{+}\mu^{-})\) data. In Table. 7, we present the central values and the corresponding \(1\sigma\) uncertainties associated with each observable pertaining to \(B\rightarrow,K^{(*)}\,\mu^{+}\mu^{-}\) decays in the SM and in case of several NP scenarios. The experimental values till 2022 December for \(R_{K}\), \(R_{K^{*}[q^{2}=1.1-6]}\), \(P^{\prime}_{5[q^{2}=4-6]}\), \(P^{\prime}_{5[q^{2}=4.3-6]}\), \(P^{\prime}_{5[q^{2}=4-8]}\) \begin{table} \begin{tabular}{|c||c|c|c|} \hline SMEFT couplings & Best fit & \(\chi^{2}_{\rm min}\)/d.o.f & Pull\({}_{\rm SM}\) \\ \hline SM & — & 5.578 & — \\ \hline \(\widetilde{c}^{(1),(3)}_{ql}\) & -0.495 & 2.953 & 1.620 \\ \(1\sigma\rightarrow\) & [-8.683, 0.335] & & \\ \hline \(\widetilde{c}_{Z}\) & 0.862 & 3.374 & 1.484 \\ \(1\sigma\rightarrow\) & [-0.529, 9.599] & & \\ \hline \(\widetilde{c}_{Z}\) & -0.114 & 6.728 & — \\ \(1\sigma\rightarrow\) & [-1.106, 1.027] & & \\ \hline \(\widetilde{c}_{dl}\) & -0.114 & 6.954 & — \\ \(1\sigma\rightarrow\) & [-0.696, 0.693] & & \\ \hline \((\widetilde{c}^{(1)}_{ql},\widetilde{c}^{(3)}_{ql})\) & (-9.444, 8.797) & 3.391 & 1.478 \\ \(1\sigma\rightarrow\) & ([-9.999, 9.969], [-9.998, 9.937]) & & \\ \hline \((\widetilde{c}^{(1),(3)}_{ql},\widetilde{c}_{Z})\) & (-1.732, -1.608) & 3.211 & 1.539 \\ \(1\sigma\rightarrow\) & ([-8.951, 2.619], [-5.293, 8.713]) & & \\ \hline \((\widetilde{c}^{(1),(3)}_{ql},\widetilde{c}_{dl})\) & (-0.660, 0.211) & 4.324 & 1.120 \\ \(1\sigma\rightarrow\) & ([-8.457, 0.175],[-2.333, 1.128]) & & \\ \hline \((\widetilde{c}^{(1),(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) & (-3.774, -4.827) & 1.402 & 2.044 \\ \(1\sigma\rightarrow\) & ([-8.431, 0.175], [-6.038, 5.537]) & & \\ \hline \((\widetilde{c}_{Z},\widetilde{c}_{dl})\) & (0.969, 0.211) & 4.679 & 0.948 \\ \(1\sigma\rightarrow\) & ([-0.167, 4.647], [-0.749, 1.837]) & & \\ \hline \((\widetilde{c}_{Z},\widetilde{c}_{Z})\) & (4.492, -4.057) & 1.863 & 1.927 \\ \(1\sigma\rightarrow\) & ([-0.195, 6.346], [-5.175, 4.725]) & & \\ \hline \((\widetilde{c}_{dl},\widetilde{c}_{Z})\) & (-0.105, -0.164) & 8.395 & — \\ \(1\sigma\rightarrow\) & ([-2.616, 1.341], [-1.634, 0.881]) & & \\ \hline \hline \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) & -0.495 & 2.953 & 1.620 \\ \(1\sigma\rightarrow\) & [0.343, 1.157] & (-0.840, -2.655], [-5.422, 8.742]) & 3.383 & 1.482 \\ \(1\sigma\rightarrow\) & ([-8.804, 2.655], 
[-5.422, 8.742]) & & \\ \hline \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}_{dl})\) & ( -0.645, -0.010) & 3.671 & 1.381 \\ \(1\sigma\rightarrow\) & ([-8.453, 0.157], [-2.323, 1.084]) & & \\ \hline \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) & (-3.776, -4.938) & 1.417 & 2.040 \\ \(1\sigma\rightarrow\) & ([-8.447, 0.157], [-6.078, 5.545] ) & & \\ \hline \end{tabular} \end{table} Table 6: Best fit values and the allowed ranges of SMEFT coefficients at 95% CL in several \(1D\) and \(2D\) scenarios. \({\cal B}(B_{s}\to\phi\mu^{+}\mu^{-})\), and \({\cal B}(B_{s}\to\mu^{+}\mu^{-})\) are also listed in the first row of Table. 7. We now move to analyse the goodness of our fit results with the measured values of \({\cal B}(B\to K^{(*)}\,\nu\,\bar{\nu})\). We report, in Table. 8, the best fit values and the corresponding allowed ranges of \({\cal B}(B\to\,K^{(*)}\,\nu\,\bar{\nu})\), \(F_{L}^{K^{*}}\) and also the ratios \({\cal R}_{\cal K}\), \({\cal R}_{\cal K^{*}}\) and \({\cal R}_{\cal F_{C}}^{K^{*}}\) obtained with the best fit values and the allowed ranges of each NP couplings at 95% CL of Table. 6. We also report the SM central values and the corresponding \(1\sigma\) uncertainties associated with each observable in Table. 8. In the SM, the branching fractions of \(B\to\,K^{(*)}\,\nu\,\bar{\nu}\) decays are of \({\cal O}(10^{-6})\). The ratios \({\cal R}_{\cal K}\), \({\cal R}_{\cal K^{*}}\) and \({\cal R}_{\cal F_{C}}^{K^{*}}\) are equal to unity in the SM. Hence any deviation from unity in these parameters could be a clear signal of beyond the SM physics. Moreover, there exists a few experiments that provide the upper bound on the branching ratio of \(B\to K^{(*)}\,\nu\,\bar{\nu}\) to be \({\cal B}(B\to K\,\nu\,\bar{\nu})<11\times 10^{-6}\) and \({\cal B}(B\to K^{*}\,\nu\,\bar{\nu})<27\times 10^{-6}\), respectively. Ignoring any theoretical uncertainty, we estimate the upper bound on \({\cal R}_{\cal K}^{(*)}\) to be \({\cal R}_{\cal K}<2.75\) and \({\cal R}_{\cal K^{*}}<2.89\), respectively. We observe that the range of \(B\to K^{(*)}\,\nu\,\bar{\nu}\) and \({\cal R}_{\cal K}^{(*)}\) obtained with the allowed range of each NP couplings are compatible with the experimental upper bound of \(B\to K^{(*)}\,\nu\,\bar{\nu}\) and \({\cal R}_{\cal K}^{(*)}\). However, the best fit values of \(B\to K^{(*)}\,\nu\,\bar{\nu}\) and \({\cal R}_{\cal K}^{(*)}\) obtained with the best fit values of \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{ql}^{(3)})\), \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) SMEFT coefficients are larger than the experimental upper bound. Hence a simultaneous explanation of \(b\to s\,l^{+}\,l^{-}\) and \(b\to s\nu\bar{\nu}\) is not possible with these NP scenarios. Moreover, the values of \(B\to K^{(*)}\,\nu\,\bar{\nu}\) and \({\cal R}_{\cal K}^{(*)}\) obtained with \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z})\) SMEFT coefficients are quite large. More precise measurement on \(B\to K^{(*)}\nu\bar{\nu}\) branching fraction in future can exclude this NP scenario. Again it can be seen from Table. 8 that \({\cal R}_{\cal F_{C}}^{K^{*}}\) remains SM like for all the scenarios with left handed currents. However, with the inclusion of right handed currents, its value seem to differ from unity. Hence a deviation from unity in \({\cal R}_{\cal F_{C}}^{K}\) would be clear signal of NP through right handed currents. 
It should be emphasized that the value of \({\cal R}_{\cal F_{C}}^{K^{*}}\) obtained with \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z}^{\prime})\), \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\), and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) SMEFT couplings deviates significantly from the SM prediction. In Fig. 1, we show the allowed ranges of \({\cal B}(B\to\,K^{(*)}\,\nu\,\bar{\nu})\) with few selected NP scenarios such as \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z}^{\prime})\), \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\), \((\widetilde{c}_{Z}^{\prime},\widetilde{c}_{Z}^{\prime})\), and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) that best explains the \(b\to s\,l^{+}\,l^{-}\) data. Best fit values of \({\cal B}(B\to\,K^{(*)}\,\nu\,\bar{\nu})\) are shown with a black dot in Fig. 1. The allowed range of each observable is obtained by using the allowed ranges of the NP couplings. The red and green line represents the experimental upper bound of \({\cal B}(B\to\,K\,\nu\,\bar{\nu})\) and \({\cal B}(B\to\,K^{*}\,\nu\,\bar{\nu})\), respectively. It is evident that the allowed ranges of \({\cal B}(B\to\,K\,\nu\,\bar{\nu})\) and \({\cal B}(B\to\,K^{*}\,\nu\,\bar{\nu})\) obtained with \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z}^{\prime},\widetilde{c}_{Z}^{\prime})\) SMEFT scenarios are compatible with the experimental upper bound. In case of \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) NP scenarios, although the best fit value does not simultaneously satisfy the experimental upper bound, there still exist some NP parameter space that can, in principle, satisfy both the constraint. It is also evident that, the best fit value of \({\cal B}(B\to\,K\,\nu\,\bar{\nu})=12.9\times 10^{-6}\) obtained with \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) NP coupling is very close to the experimental upper bound of \(11\times 10^{-6}\). Effects of SMEFT coefficients in \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay observables Our main objective is to investigate NP effects on \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay observables in a model independent SMEFT framework. Based on our \(\chi^{2}\) analysis and the constraint imposed by the experimental upper bound of \(\mathcal{B}(B\to\,K\,\nu\,\bar{\nu})\) and \(\mathcal{B}(B\to\,K^{*}\,\nu\,\bar{\nu})\), we chose three NP scenarios namely, \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z}^{\prime},\widetilde{c}_{Z})\), and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) that corresponds to larger Pull\({}_{\rm SM}\) value than the rest of the NP scenarios. The results are listed in Table. 9, Table. 10, Table. 11 and Table. 12, respectively. 
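To make the mapping between the fitted SMEFT couplings and the low-energy WCs concrete, the short sketch below implements the shifts of Eq. (8). Here \(\zeta\approx 0.08\) and \(C_{L}^{\rm SM}=-6.38\) are the values quoted in the text, while the benchmark point is the best-fit value of the \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) scenario read from Table 6; all other coefficients are set to zero, and the function name is ours, introduced only for illustration.

```python
# Wilson-coefficient shifts of Eq. (8) for a given set of SMEFT couplings.
ZETA = 0.08       # small vector coupling to charged leptons (see text)
C_L_SM = -6.38    # SM value of C_L from Eq. (6)

def wilson_shifts(c_qe=0.0, c_ql1=0.0, c_ql3=0.0, c_Z=0.0,
                  c_de=0.0, c_dl=0.0, c_Zp=0.0):
    """Return the NP shifts of C9 and C10, the total C_L, and the primed /
    right-handed coefficients according to Eq. (8)."""
    return {
        "delta_C9":  c_qe + c_ql1 + c_ql3 - ZETA * c_Z,
        "delta_C10": c_qe - c_ql1 - c_ql3 + c_Z,
        "C_L":       C_L_SM + c_ql1 - c_ql3 + c_Z,
        "C9_prime":  c_de + c_dl - ZETA * c_Zp,
        "C10_prime": c_de - c_dl + c_Zp,
        "C_R":       c_dl + c_Zp,
    }

# benchmark: best-fit point of the (c_ql^(3), c_Z') scenario from Table 6
print(wilson_shifts(c_ql3=-3.774, c_Zp=-4.827))
```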
Our observations are as follows * \(\mathbf{BR}:\) In case of \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay, branching ratio deviates from the SM prediction by \(\approx 1\sigma\) in the presence of \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}_{Z})\) SMEFT couplings at the low \(q^{2}\) region, whereas, almost \(2\sigma\) deviation is observed in the presence of \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}_{Z}^{\prime})\) coupling at the high \(q^{2}\) region. In case of \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channel, no significant deviation is observed at low \(q^{2}\) region. However, at high \(q^{2}\) region, there is more than \(1\sigma\) deviation in presence of \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. Moreover, with \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP coupling, the deviation from the SM prediction is quite significant and it is distinguishable from the SM prediction at more than \(5\sigma\). * \(\mathbf{F}_{L}\): For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay channel, \(F_{L}\) deviates from the SM prediction by \(1\sigma\) in the presence of \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP couplings at the low \(q^{2}\) region. Moreover, a significant deviation of more than \(3\sigma\) is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. At the high \(q^{2}\) region, \(F_{L}\) deviates more than \(2.8\sigma\) and \(1.75\sigma\) in the presence of \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings, respectively. Similarly, a deviation of more than \(3.5\sigma\) is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP coupling. In case of \(\Lambda_{b}\to\Lambda^{*}(\to p\pi)\mu^{+}\mu^{-}\) channel, a deviation of more than \(1\sigma\) is observed with \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP coupling at both low and high \(q^{2}\) region. 
* \(\mathbf{A^{\mu}_{FB}}\): For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay, a significant deviation of more than \(5\sigma\) from the SM prediction \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{\(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay (\(\mu\) mode)} \\ \hline \hline SMEFT Couplings & \(q^{2}\) bin & \(K_{1e}\) & \(K_{1e}\) & \(K_{1as}\) & \(K_{2e}\) & \(K_{2uc}\) & \(K_{2ss}\) & \(K_{4as}\) & \(K_{5s}\) \\ \hline \hline & \(1.1-6.0\) & 0.074 & 0.294 & 0.853 & 0.019 & 0.074 & 0.213 & -0.000 & -0.007 \\ & & [\(\cdot\)0.369, 0.422] & [\(\cdot\)0.196, 0.949] & [\(\cdot\)0.525, 0.902] & [\(\cdot\)0.092, 0.105] & [\(\cdot\)0.049, 0.237] & [\(\cdot\)0.131, 0.225] & [\(\cdot\)0.001, 0.001] & [\(\cdot\)0.010, 0.023] \\ \hline \((\overline{c}_{\rm qf}^{(33)},\overline{c}_{Z}^{\prime})\) & \(14.2-q_{max}^{2}\) & 0.518 & 0.736 & 0.009 & 0.134 & 0.182 & 0.008 & -0.046 & 0.539 \\ & & [\(\cdot\)0.465, 0.123] & [\(\cdot\)0.430, 0.672] & [\(\cdot\)0.661, 0.773] & [\(\cdot\)0.119, 0.030] & [\(\cdot\)0.116, 0.170] & [\(\cdot\)0.160, 0.185] & [\(\cdot\)0.049, 0.036] & [\(\cdot\)0.054, 0.067] \\ \hline & \(1.1-6.0\) & 0.023 & 0.198 & 0.901 & 0.006 & 0.050 & 0.225 & 0.000 & -0.005 \\ & & [\(\cdot\)0.174, 0.116] & [\(\cdot\)0.090, 0.262] & [\(\cdot\)0.869, 0.955] & [\(\cdot\)0.044, 0.029] & [\(\cdot\)0.023, 0.066] & [\(\cdot\)0.215, 0.239] & [\(\cdot\)0.001, 0.001] & [\(\cdot\)0.013, 0.017] \\ \hline \((\overline{c}_{\rm zf}^{(2)},\overline{c}_{Z}^{\prime})\) & \(14.2-q_{max}^{2}\) & 0.001 & 0.601 & 0.694 & 0.000 & 0.155 & 0.175 & -0.002 & -0.016 \\ & & [\(\cdot\)0.467, 0.266] & [\(\cdot\)0.459, 0.653] & [\(\cdot\)0.670, 0.758] & [\(\cdot\)0.119, 0.068] & [\(\cdot\)0.124, 0.166] & [\(\cdot\)0.169, 0.188] & [\(\cdot\)0.047, 0.007] & [\(\cdot\)0.054, 0.065] \\ \hline & \(1.1-6.0\) & 0.073 & 0.293 & 0.854 & 0.018 & 0.073 & 0.213 & -0.000 & -0.007 \\ & & [\(\cdot\)0.369, 0.419] & [\(\cdot\)0.196, 0.951] & [\(\cdot\)0.524, 0.902] & [\(\cdot\)0.092, 0.105] & [\(\cdot\)0.049, 0.238] & [\(\cdot\)0.131, 0.225] & [\(\cdot\)0.001, 0.001] & [\(\cdot\)0.010, 0.023] \\ \hline \((\overline{c}_{\rm zf}^{(1)}+\overline{c}_{\rm zf}^{(2)})\) & \(14.2-q_{max}^{2}\) & 0.038 & 0.519 & 0.735 & 0.010 & 0.134 & 0.182 & 0.030 & -0.046 \\ & & [\(\cdot\)0.465, 0.126] & [\(\cdot\)0.429, 0.671] & [\(\cdot\)0.661, 0.773] & [\(\cdot\)0.119, 0.030] & [\(\cdot\)0.116, 0.170] & [\(\cdot\)0.160, 0.185] & [\(\cdot\)0.049, 0.036] & [\(\cdot\)0.049, 0.067] \\ \hline \end{tabular} \end{table} Table 10: Angular observables \(K_{i}\) for the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay mode in case of few selected \(2D\) NP scenarios. is observed in all the three NP scenarios at both low and high \(q^{2}\) region. For the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channel, the deviation from the SM prediction is observed to be \(1\sigma\) in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) coupling in the low \(q^{2}\) region, whereas, at the high \(q^{2}\) region, a deviation of more than \(10\sigma\) is observed in case of all the NP scenarios. 
* \(\mathbf{R_{\Lambda^{(*)}}}\): We observe a deviation of more than \(5\sigma\) and \(10\sigma\) from the SM prediction in the ratio of branching fractions for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channels in case all the NP scenarios at both low and high \(q^{2}\) region. * \(\mathbf{K_{1c}}\) : For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay channel, the angular observable \(K_{1c}\) deviates from the SM prediction at more than \(5\sigma\) significance in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\), and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings at the low and high \(q^{2}\) regions. For the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channel, \(1\sigma\) deviation from the SM prediction is observed at low \(q^{2}\) region with the \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings, whereas, at high \(q^{2}\) region, a deviation of more than \(10\sigma\) is observed with \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. * \(\mathbf{K_{1ce}}\) : In case of \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay channel, in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings the angular observable \(K_{1cc}\) deviates from the SM prediction at more than \(3\sigma\) at the low \(q^{2}\) region, whereas, it deviates more than \(10\sigma\) at the high \(q^{2}\) region. For the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay, a deviation of more than \(1\sigma\) from the SM prediction is observed at the low and the high \(q^{2}\) regions with \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. * \(\mathbf{K_{14s}}\) : In \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) channel, we observe a deviation of \(3\sigma\) from the SM prediction in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings at low \(q^{2}\) region, whereas, it deviates more than \(5\sigma\) at high \(q^{2}\) region. Similarly, with \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) coupling, we observe a deviation of more than \(1\sigma\) at the high \(q^{2}\) region for the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay mode. * \(\mathbf{K_{2c}}\) : For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay, we observe a deviation of more than \(5\sigma\) and \(10\sigma\) in case of all the NP scenarios at low and high \(q^{2}\) regions, respectively. For the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channel, no significant deviation is observed at the low \(q^{2}\) region. At the high \(q^{2}\) region, however, it deviates more than \(10\sigma\) in case of each NP scenarios. 
* \(\mathbf{K_{2ce}}\) : A deviation of around \(3\sigma\) and \(10\sigma\) is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) couplings at the low and high \(q^{2}\) regions, respectively for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay mode. Similarly, at the low and high \(q^{2}\) regions, a deviation of more than \(2\sigma\) and \(10\sigma\) is observed in case of each NP scenarios for the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channel. * \(\mathbf{K_{2ss}}\) : In \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay channel, a deviation of around \(2\sigma\) from the SM prediction is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) coupling in the low \(q^{2}\) region. However, in the high \(q^{2}\) region, a deviation of more than \(5\sigma\) is observed in the presence of the \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) couplings. Similarly, for the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay channel, a deviation of more than \(2\sigma\) and \(10\sigma\) is observed in each NP scenarios at the low and high \(q^{2}\) regions, respectively. * \(\mathbf{K_{4ss}}\) : For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay channel, no significant deviation is observed in the low \(q^{2}\) region. However, a deviation of more than \(4\sigma\) and \(10\sigma\) is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings at the high \(q^{2}\) region. * \(\mathbf{K_{5s}}\) : There is a deviation of more than \(4\sigma\) in the low \(q^{2}\) region for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay channel in case of all the NP scenarios. Moreover, at the high \(q^{2}\) region, more than \(5\sigma\) deviation is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) couplings. * \(\mathbf{K_{4sc}}\) : In the low \(q^{2}\) region, no significant deviation is observed in \(K_{4sc}\) for the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay mode. However, in the high \(q^{2}\) region, a deviation of more than \(5\sigma\) is observed in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\), and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. * \(\mathbf{K_{4s}}\) : For the \(\Lambda_{b}\to\Lambda(\to pK)\mu^{+}\mu^{-}\) decay channel, a deviationof around \(1\sigma\) is observed in the low \(q^{2}\) region with \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. However, in the high \(q^{2}\) region, \(K_{4s}\) deviates from the SM prediction by more than \(5\sigma\) in each NP scenarios. In Fig. 2 and Fig. 
3, we display several \(q^{2}\) dependent observables pertaining to \(\Lambda_{b}\to\Lambda(\to pK)\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay modes in the SM and in few selected NP scenarios, namely \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), respectively. The SM central line and the corresponding uncertainty band obtained at 95% CL are shown with blue color, whereas, the effects of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings are shown with green, orange and red color respectively. Our main observations are as follows. * \(\mathbf{dB/dq^{2}(q^{2})}\): The differential branching ratio for \(\Lambda_{b}\to\Lambda^{*}\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) decays is reduced at all \(q^{2}\) in case of most of the NP scenarios. In \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) decays, the differential branching ratio is enhanced with \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP coupling. All the NP scenarios are distinguishable from the SM prediction at more than \(1\sigma\) in the high \(q^{2}\) region. In the low \(q^{2}\) region, however, it lies within the SM error band. The deviation from the SM prediction is more pronounced in case of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP scenario. * \(\mathbf{F_{L}(q^{2})}\) : For the \(\Lambda_{b}\to\Lambda^{*}\mu^{+}\mu^{-}\) decay channel, deviation in \(F_{L}(q^{2})\) from the SM prediction is more pronounced in case of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP scenarios in the low \(q^{2}\) region. In the high \(q^{2}\) region, the deviation from SM prediction is more prominent in case of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP scenarios and they are clearly distinguishable from the SM at more than \(2\sigma\) significance. In the case of \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) decay, although a slight deviation is observed in case of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP scenario, it, however, is indistinguishable from the SM prediction. * \(\mathbf{A_{FB}(q^{2})}\): For the \(\Lambda_{b}\to\Lambda^{*}\mu^{+}\mu^{-}\) decay channel, a significantdeviation from the SM prediction is observed in \(A_{FB}(q^{2})\) in case of all the NP scenarios and they are clearly distinguishable from the SM at more than \(6\sigma\) at low and high \(q^{2}\) regions. In the SM, we observe the zero crossing point of \(A_{FB}\) at \(q^{2}=2.4\pm 0.6\,\mathrm{GeV}^{2}\) and at \(q^{2}=16.6\pm 0.1\,\mathrm{GeV}^{2}\), respectively. With NP, there is no zero crossing of \(A_{FB}\) at low \(q^{2}\) region. However, at the high \(q^{2}\) region, we observe the zero crossing point at \(q^{2}=15.3\,\mathrm{GeV}^{2}\) and \(q^{2}=16.3\,\mathrm{GeV}^{2}\) with \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{{}^{\prime}})\), \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{{}^{\prime}})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{{}^{ \prime}})\) NP couplings, respectively. 
For \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) decay, in the low \(q^{2}\) region, a slight deviation in \(A_{FB}\) is observed with all the NP scenarios but they are indistinguishable from the SM prediction. However, at high \(q^{2}\) region, the deviation observed is quite significant and all the NP scenarios are distinguishable from the SM prediction at more than \(10\sigma\). In the SM, a zero crossing point of \(A_{FB}\) is observed at \(q^{2}=3.3\pm 1.5\,\mathrm{GeV}^{2}\). However, no zero crossing point is observed with NP couplings for this decay channel. * \(\mathbf{R_{\Lambda^{(*)}}(q^{2})}\): The ratio of branching fraction \(R_{\Lambda^{(*)}}(q^{2})\) shows significant deviation in case of all the NP scenarios and it is clearly distinguishable from the SM prediction at more than \(10\sigma\) significance at both low and high \(q^{2}\) regions. In Fig. 4 and Fig. 5, we display the NP sensitivities of several \(K\) observables for the \(\Lambda_{b}\to\Lambda(\to pK)\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay modes in the low and high \(q^{2}\) regions. The SM central line and the error band is shown with blue. The green, orange and red lines correspond to NP contributions coming from the best fit values of \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\), \((\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{{}^{ \prime}})\) NP couplings of Table. 6. Although deviation from the SM prediction in the \(K\) observables is observed in case of all the NP scenarios, it is, however, more pronounced in case of \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) NP scenarios. For the \(\Lambda_{b}\to\Lambda^{*}\mu^{+}\mu^{-}\) channel, it is observed that, irrespective of the NP contribution, the ratios \(K_{1c}/K_{2c}\), \(K_{1cc}/K_{2cc}\) and \(K_{1ss}/K_{2ss}\) remain independent of both short distance and long distance physics. For \(K_{1c}\) and \(K_{2c}\) the dependence on the new physics follow the same pattern as in \(A_{FB}\). Similarly, for \(K_{1cc}\) and \(K_{2cc}\), the dependence on the new physics follow the same pattern as in \(F_{L}\). For the \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) channel, NP dependence of \(K_{1c}\) follows the same pattern as in \(A_{FB}\). Similarly, for \(K_{1cc}\) and \(K_{1ss}\), the NP dependence is quite similar to that of \(F_{L}\). Moreover, variation of \(K_{2cc}\) and \(K_{2ss}\) as a function of \(q^{2}\) looks quite similar in case of \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) decay channel. We observe that deviation from the SM prediction is more pronounced in case of \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) NP scenarios. We now proceed to discuss the effects of NP in \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay observables. Figure 3: \(q^{2}\) dependence of differential branching ratio \(dB/dq^{2}(q^{2})\), longitudinal polarization fraction \(F_{L}(q^{2})\), lepton forward backward asymmetry \(A_{FB}^{l}(q^{2})\) and the ratio of branching ratio \(R_{\Lambda^{*}}(q^{2})\) for the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decay mode. The SM central line and the corresponding error band is shown with blue. 
The green, orange and red lines correspond to the best fit values of \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{{}^{\prime}})\), \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\), respectively. Effects of SMEFT coefficients in \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay observables Study of rare decays mediated via \(b\to s\nu\bar{\nu}\) quark level transition can, in principle, provide complementary information regarding NP in \(b\to s\,l^{+}\,l^{-}\) transition decays. In this connection, we wish to explore the effects of NP in \(b\to s\,l^{+}\,l^{-}\) transition decays on several observables pertaining to \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay modes. We consider three NP scenarios such as \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) from Table. VI that best explain the anomalies present in the \(b\to s\,l^{+}\,l^{-}\) data. Effect of these NP couplings on \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay observables are listed in Table. XIII. \begin{tabular}{|c|c|c||c|c|} \hline & \multicolumn{2}{|c|}{\(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) decay} & \multicolumn{2}{|c|}{\(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay} \\ \hline SMEFT Couplings & BR\(\times 10^{-6}\) & \(F_{L}\) & BR\(\times 10^{-6}\) & \(F_{L}\) \\ \hline \multirow{2}{*}{\((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\)} & 1.205 & 0.717 & 1.007 & 0.595 \\ & [-0.001, 3.931] & [0.507, 0.731] & [0.001, 4.243] & [0.318, 0.710] \\ \hline \multirow{2}{*}{\((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\)} & 0.834 & 0.716 & 0.689 & 0.581 \\ & [0.006, 3.288] & [0.506, 0.730] & [0.000, 3.624] & [0.333, 0.721] \\ \hline \multirow{2}{*}{\((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{ \prime}_{Z})\)} & 2.053 & 0.669 & 2.060 & 0.624 \\ & [0.006, 3.288] & [0.506, 0.730] & [1.327, 4.650] & [0.330, 0.709] \\ \hline \end{tabular} **TABLE 10.** The branching ratio (BR) and longitudinal polarization fraction \(F_{L}\) for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay modes in case of few selected \(2D\) NP scenarios. **TABLE 10.** Our main observations are as follows. * \({\bf BR:}\) In the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) decay channel, branching ratio deviates more than \(1\sigma\) from the SM prediction in the presence of \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP couplings. Similarly, in the \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay channel, the branching ratio deviates more than \(2\sigma\) in the presence of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP couplings. 
* \(F_{L}\) : For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\nu\bar{\nu}\) decay mode, \(F_{L}\) shows \(2\sigma\), \(3.3\sigma\) and \(4\sigma\) deviations from the SM prediction in the presence of \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP couplings, respectively. Similarly, for the \(\Lambda_{b}\to\Lambda(\to p\pi)\nu\bar{\nu}\) decay mode, \(F_{L}\) shows deviations of around \(1\sigma\) and \(2\sigma\) from the SM prediction in presence of these NP couplings. In Fig. 6, we display differential branching ratio \(dB/dq^{2}\) and longitudinal polarization fraction \(F_{L}(q^{2})\) pertaining to \(\Lambda_{b}\to\Lambda^{(*)}\nu\bar{\nu}\) decay modes in the SM and in case of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP scenarios. The SM central line and the corresponding uncertainty band obtained at \(95\%\) CL are shown with blue color, whereas, the effects of \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\), \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) are represented by violet, orange and red color respectively. Our observations are as follows. * \(\mathbf{dB/dq^{2}(q^{2})}\) : The differential branching ratio for \(\Lambda_{b}\to\Lambda^{*}\nu\bar{\nu}\) decays is enhanced at all \(q^{2}\) below \(q^{2}<12\,\mathrm{GeV}^{2}\), whereas, it is reduced at the high \(q^{2}\) region in case of \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP scenario. With \((\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) NP coupling, the differential branching ratio lies within the SM error band except at \(q^{2}>12\,\mathrm{GeV}^{2}\). Similarly, with \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP coupling, it is reduced at all values of \(q^{2}\). The deviation from the SM prediction is more pronounced in case of \((\widetilde{c}^{(1)}_{ql}+\widetilde{c}^{(3)}_{ql},\widetilde{c}^{\prime}_{Z})\) and \((\widetilde{c}_{Z},\widetilde{c}^{\prime}_{Z})\) NP scenarios and they are clearly distinguishable from the SM prediction at more than \(2\sigma\). It should be noted that, in all the NP scenarios, the peak of the \(q^{2}\) distribution appears at slightly lower value of \(q^{2}\) than in the SM. In case of \(\Lambda_{b}\to\Lambda\nu\bar{\nu}\) decays, the differential branching ratio is slightly enhanced at all \(q^{2}\) below \(q^{2}<13\,\mathrm{GeV}^{2}\) whereas, it is reduced at the high \(q^{2}\) region in case of \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{ \prime})\) NP scenario. However, it is reduced at all \(q^{2}\) with \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) NP couplings. The deviation from the SM prediction is more pronounced in case of \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) NP scenarios and they are clearly distinguishable from the SM prediction at more than \(3\sigma\). Moreover, similar to \(\Lambda_{b}\to\Lambda^{*}\nu\bar{\nu}\) decays, the peak of the distribution appears at slightly lower value of \(q^{2}\) than in the SM. 
* \(\mathbf{F_{L}\left(q^{2}\right)}\): For both the decay modes, the longitudinal polarization fraction \(F_{L}(q^{2})\) is enhanced at all \(q^{2}\) in case of all the NP scenarios. The deviation from the SM prediction observed in the high \(q^{2}\) region is quite significant and they are clearly distinguishable from the SM prediction at more than \(3\sigma\). The deviation from the SM prediction is more pronounced in case of \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) NP scenario. ## IV Conclusion In light of anomalies observed in various \(b\to s\,l^{+}\,l^{-}\) quark-level transition decays, we perform an in-depth angular analysis of baryonic \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})(\mu^{+}\mu^{-},\,\nu\bar{\nu})\) and \(\Lambda_{b}\to\Lambda(\to p\pi)(\mu^{+}\mu^{-},\,\nu\bar{\nu})\) decays mediated via \(b\to s\,l^{+}\,l^{-}\) and \(b\to s\nu\bar{\nu}\) quark level transition. Our main aim of this study is to explore the connections between \(b\to s\,l^{+}\,l^{-}\) and \(b\to s\nu\bar{\nu}\) quark level transition decays in a model independent way. In this context, we use the standard model effective field theory formalism with dimension six operators that can, in principle, provide correlated NP effects in these decay modes. For the \(\Lambda_{b}\to\Lambda^{*}\) form factors we use the values obtained from MCN, whereas, for the \(\Lambda_{b}\to\Lambda\) form factors, we use the recent results obtained from LQCD approach. We construct several NP scenarios based on NP contributions from single operators as well as from two different operators and try to find the scenario that best explains the anomalies present in \(b\to s\,l^{+}\,l^{-}\) transition decays. To find the best fit values of the SMEFT coefficients, we perform a naive \(\chi^{2}\) analysis with the \(b\to s\,l^{+}\,l^{-}\) data. We include total eight measurements in our \(\chi^{2}\) fit. It should, however, be mentioned that, in our \(\chi^{2}\) fit, we have not included the latest \(R_{K}^{(*)}\) measurement from LHCb. It is observed that the \(2D\) scenarios provide better fit to the \(b\to s\,l^{+}\,l^{-}\) data than the \(1D\) scenarios. More specifically, we get much better fit with \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z}^{\prime})\), \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\), \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\), and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{ \prime})\) NP scenarios. The pull\({}_{SM}\) for these \(2D\) scenarios are comparatively larger than any other scenarios. Next we check the compatibility of our fit results with the measured values of \({\cal B}(B\to\,K^{(*)}\,\nu\,\bar{\nu})\). It is observed that the allowed ranges of \({\cal B}(B\to\,K\,\nu\,\bar{\nu})\) and \({\cal B}(B\to\,K^{*}\,\nu\,\bar{\nu})\) obtained with \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{\prime})\) SMEFT scenarios are compatible with the experimental upper bound. In case of \((\widetilde{c}_{ql}^{(1)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{ \prime})\) NP scenarios, although the best fit value does not simultaneously satisfy the experimental upper bound, there still exist some NP parameter space that can, in principle, satisfy both the constraint. A brief summary of our results are as follows. 
* The differential branching ratio for the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) and \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decays deviates from the SM prediction in case of all the NP scenarios and they are distinguishable from the SM prediction at more than \(1\sigma\) in the high \(q^{2}\) region. Similarly, \(A_{FB}(q^{2})\) deviates significantly from the SM prediction in case of all the NP scenarios. For the \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\mu^{+}\mu^{-}\) decay mode, the zero crossing point of \(A_{FB}(q^{2})\) at \(q^{2}=15.3\,\mathrm{GeV}^{2}\) and \(q^{2}=16.3\,\mathrm{GeV}^{2}\) with \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{{}^{\prime}})\), \((\widetilde{c}_{ql}^{(1),(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{ \prime})\) NP couplings are clearly distinguishable from the SM zero crossing point at \(q^{2}=16.6\pm 0.1\,\mathrm{GeV}^{2}\). For the \(\Lambda_{b}\to\Lambda(\to p\pi)\mu^{+}\mu^{-}\) decays, although there is a zero crossing point at \(q^{2}=3.3\pm 1.5\,\mathrm{GeV}^{2}\), no zero crossing point is observed with NP couplings for this decay channel. Moreover, the ratio of branching ratio \(R_{\Lambda^{(*)}}\) deviates significantly from the SM prediction in case of all the NP scenarios. * In case of \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})\,\nu\bar{\nu}\) decay, the deviation from the SM prediction in the differential branching ratio is more pronounced in case of \((\widetilde{c}_{ql}^{(1)}+\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{ \prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{{}^{\prime}})\) NP scenarios and they are clearly distinguishable from the SM prediction at more than \(2\sigma\). In case of \(\Lambda_{b}\to\Lambda\nu\bar{\nu}\) decays, The deviation from the SM prediction is more pronounced in case of \((\widetilde{c}_{ql}^{(3)},\widetilde{c}_{Z}^{\prime})\) and \((\widetilde{c}_{Z},\widetilde{c}_{Z}^{{}^{\prime}})\) NP scenarios and they are clearly distinguishable from the SM prediction at more than \(3\sigma\). Similarly, \(F_{L}\) deviates significantly from the SM prediction in the high \(q^{2}\) region and it is clearly distinguishable from the SM prediction at more than \(3\sigma\). Study of \(\Lambda_{b}\to\Lambda^{*}(\to pK^{-})(\mu^{+}\mu^{-},\,\nu\bar{\nu})\) and \(\Lambda_{b}\to\Lambda(\to p\pi)(\mu^{+}\mu^{-},\,\nu\bar{\nu})\) mediated via \(b\to s\,l\,\bar{l}\) and \(b\to s\nu\bar{\nu}\) transition decays can be valuable in understanding the anomalies observed in \(B\) meson decays. Our analysis can be further improved once more precise data on the \(\Lambda_{b}\to\Lambda^{*}\) form factor is available from LQCD. Moreover, more precise data on \({\cal B}(B\to\,K\,\nu\,\bar{\nu})\) and \({\cal B}(B\to\,K^{*}\,\nu\,\bar{\nu})\) in future, can, in principle, put severe constraint on several NP scenarios. ## Acknowledgement We would like to express our gratitude to N. Rajeev for insightful and engaging discussions related to the topic addressed in this article. We would also like to thank Diganta Das, Jaydeb Das and Stefan Meinel for their helpful exchanges regarding \(\Lambda_{b}\to\Lambda^{*}\) transition form factors.
2303.00833
Hearing Shapes via p-Adic Laplacians
For a finite graph, a spectral curve is constructed as the zero set of a two-variate polynomial with integer coefficients coming from p-adic diffusion on the graph. It is shown that certain spectral curves can distinguish non-isomorphic pairs of isospectral graphs, and can even reconstruct the graph. This allows the graph reconstruction from the spectrum of the associated p-adic Laplacian operator. As an application to p-adic geometry, it is shown that the reduction graph of a Mumford curve and the product reduction graph of a p-adic analytic torus can be recovered from the spectrum of such operators.
Patrick Erik Bradley, Ángel Morán Ledezma
2023-03-01T21:37:49Z
http://arxiv.org/abs/2303.00833v1
# Hearing Shapes via \(p\)-Adic Labelacians ###### Abstract For a finite graph, a spectral curve is constructed as the zero set of a two-variate polynomial with integer coefficients coming from \(p\)-adic diffusion on the graph. It is shown that certain spectral curves can distinguish non-isomorphic pairs of isospectral graphs, and can even reconstruct the graph. This allows the graph reconstruction from the spectrum of the associated \(p\)-adic Laplacian operator. As an application to \(p\)-adic geometry, it is shown that the reduction graph of a Mumford curve and the product reduction graph of a \(p\)-adic analytic torus can be recovered from the spectrum of such operators. ## 1 Introduction The aim of spectral geometry is to describe the relationship between the geometry of certain objects like surfaces, or more general Riemannian manifolds, and the spectra of differential operators, like Laplacians, defined on them. In other words, as stated by M. Kac in [11], "Can one hear the shape of a drum?" Ideally, one would like to be able to recover the geometric object, up to isometry, from the spectra of one or several naturally defined operators. Many counter-examples for Riemanian manifolds have appeared showing that isospectral but non-isometric manifolds exist, which gives a negative answer to the question. For example for the drum problem: \[\begin{cases}\nabla u=-\lambda u\\ u|_{\partial D}=0\end{cases}\] Carolyn Gordon, David Webb, and Scott Wolpert in 1992, showed the existence of a pair of drums with different shapes but which are isospectral [7]. On the other hand, information about the geometry of the object can be extracted from the spectrum. This kind of problems is known as "inverse problems". Many famous results towards this direction have been stablished, for example the famous Weyl asymptotic law [26]. These problems extend to objects other than Riemannian manifolds, like graphs, where for the adjacency matrix and the Laplacian, non-isomorphic isospectral graphs have been found. This problem has been intensively studied, cf. e.g. [21, 18, 24]. Recovering the structure of a graph from the spectrum of an operator may lead to an invariant to describe the topology of the graph. This could lead to new applications, like recovering the structure of a graph from a diffusion process, which has many potential applications, e.g. for topological access methods for spatial data [10], to name only one. Prime numbers play a fundamental role in many mathematical theories and applications to sciences. From the realm of arithmetic as the fundamental blocks or "atoms" of integers, to applications in physics, from quantum physics to the theory of complex disordered systems and geophysics, information processing, biology, and cognitive science, see [20] and the references therein. One powerful framework for the application of number theory in sciences is the so called \(p\)-adic analysis or more general ultrametric analysis [25, 14]. An important example is given in the theory of disordered systems (spin glasses) where the \(p\)-adic structure is encoded in a Parisi matrix which arises by the intrinsic hierarchical structure inside the spin glasses [19]. This lead in the middle of the 80s to the idea of using ultrametric spaces to describe the state of complex systems. A central idea in physics of complex systems (like proteins) states that the dynamics on such systems is generated by a random walk (diffusion equation) in the corresponding energy landscape. 
By using interbasin kinetics methods, an energy landscape is approximated by an ultrametric space and a function on this space describing the distribution of the activation barriers, see e.g. [15] and the references therein. Most of the applications towards this direction require well-defined and natural pseuddifferential operators constructed on ultrametric structures such as Non-Archimedean fields, where the Taibleson-Vladimirov operator plays a fundamental role for diffusion on the field \(p\)-adic numbers [25]. Differential operators and spectral geometry on Riemannian manifolds have been extensively studied, nevertheless there is no comparable theory of Non-Archimedean spectral geometry of pseudodifferential operators over \(p\)-adic structures. Many other operators have been developed, some of them with the aim of applications, and others as generalisations to more general structures. For the former we have many classes of \(p\)-adic operators from the work of W. Zuniga, Kozyrev, Khrennikov, where the relation of graph theory and \(p\)-adic integral and pseudodifferential operators is explicitly stated, see [27, 14, 17], and the reference there in. For the latter one of the authors initiated the study of heat equations and integral operators on the Non-Archimedean kin of Riemannian surfaces i.e. Mumford Curves [3]. All those developments in the theory of pseudodifferential equations over Non-Archimedean spaces clearly deal (indirectly) with one of the main problems in spectral geometry, that is, direct problems in which a description of the eigenvalues is needed. In this article we initiate the study of inverse problems of spectral geometry in the Non-Archimedean framework. Moreover, a new invariant for an arbitrary combinatorial simple graph is introduced, showing that the spectra of certain \(p\)-adic operators defined on the graph lead to a complete characterisation of its isomorphisim class. The question "Can you hear the shape of a graph?" has already been answered in different contexts. In [8], the question was posed in the context of quantum graphs, and was answered in the affirmative, that is, they showed that the spectrum of the Schrodinger operator on a finite, metric graph determines uniquely the connectivity matrix and the bond lengths under certain conditions. In [28], a new spectral invariant in quantum graphs has been introduced. In [9], it was proved that the spectral determinant of the Laplace operator on a finite connected metric graph determines the number of spanning trees under certain conditions. Understanding how the spectra of certain operators in general graphs determine the geometry of a graph is an important task for applications like graph comparison in graph analytics. For example in [23], the Network Laplacian Spectral Descriptor, a graph representation method that allows for straightforward comparisons of large graphs, is proposed. Moreover, our results are applied to \(p\)-adic structures like Mumford curves and \(p\)-adic analytic tori. Hence these results initiate the study of inverse problems in spectral geometry in the Non-Archimedean framework. Given a graph \(G\) and a matrix \(\Delta\in\mathbb{N}^{|G|\times|G|}\), we study a generalisation of a graph Laplacian \(\Lambda_{G}^{\Delta}\) defined in \(L^{2}(G\times K)\), where \(K\) is a non-archimedean local field. 
The space \(L^{2}(G\times K)\) can be decomposed as a direct sum of finite dimensional spaces of dimension \(|G|\), this leads of the following representation of \(\Lambda^{\Delta}_{G}\), \[\Lambda^{\Delta}_{G}=\bigoplus_{G\mathcal{K}}L(G^{\Delta}_{r}),\] where the matrices \(L(G^{\Delta}_{r})\) are the Laplacian matrix of a weighted version of the graph, and for \(r=1\), we have that \(G_{1}=G\). Therefore, this operator can be understood as a direct sum of scaled replica of the original graph. The spectrum of each copy belongs to a common plane algebraic curve \(V(P^{\Delta}_{G})\) called the spectral curve of the graph. For a suitable choice of \(\Delta\) we prove that \(P^{\Delta}_{G}\) is an invariant of the graph \(G\). This leads to a reconstruction theorem which enable us to reconstruct the graph through the spectra of the operator \(\Lambda^{\Delta}_{G}\) (see Theorem 4.7 and Corollary 4.9). Finally using these results we are able to reconstruct the reduction graph of a Mumford curve and the product graph coming from the reduction of a \(p\)-adic analytic torus using the spectrum of a \(p\)-adic Laplacian. ## 2 Notation and Some Results from \(p\)-Adic Analysis In this section we review some results from \(p\)-adic analysis, for a complete exposition of the subject and proofs the reader may consult [1]. Let \(K\) be a Non-Archimedean local field. Let \(|\cdot|_{K}\) denote the absolute value of the field \(K\). Denote the local ring of \(K\) by \(\mathcal{O}_{K}=\{x\in K:|x|_{K}\leq 1\}\) and its maximal ideal by \(\mathfrak{m}_{K}=\{x\in K:|x|_{K}<1\}\). Let \(\chi\) be a fixed non-constant complex-valued additive character on \(K\). We denote by \(dx\) the Haar measure on the additive group of \(K\), normalised such that the measure of \(\mathcal{O}_{K}\) is equal to \(1\). The Fourier transform of an absolute integrable complex-valued function \(f\in L^{1}(K)\) will be written as \[\mathscr{F}(f)(\xi)=\int_{K}\chi(\xi x)f(x)dx,\ \xi\in K.\] If \(\mathscr{F}(f)=\hat{f}\in L^{1}(K)\), we get the inversion formula \[f(x)=\int_{K}\chi(-x\xi)\hat{f}(\xi)d\xi.\] Since the mapping \(\mathscr{F}:L^{1}(K)\cap L^{2}(K)\to L^{2}(K)\) is an isometry, this mapping has an extension to an \(L^{2}-\)isometry from \(L^{2}(K)\) into \(L^{2}(K)\), where the inverse Fourier transform will be denoted as \(\mathscr{F}^{-1}\). Now we introduce the Vladimirov-Taibleson operator. Let \(\mathcal{D}\subset L^{2}(K)\) be its domain given by the set of those \(f\in L^{2}(K)\), for which \(|\xi|^{\alpha}\hat{u}(\xi)\in L^{2}(K)\). The Vladimirov operator \((\Delta^{\alpha},\mathcal{D})\), \(\alpha>0\), for \(f\in\mathcal{D}\) is defined by \[\Delta^{\alpha}f(x)=\mathscr{F}^{-1}_{\xi\mapsto x}(|\xi|^{\alpha}_{K} \mathscr{F}_{x\mapsto\xi}(f)(\xi))(x),\ x\in K.\] The operator \(\Delta^{\alpha}\) is an unbonded operator in \(L^{2}(K)\), and since it is unitarily equivalent to the operator of multiplication by \(|\xi|^{\alpha}_{K}\), it is self-adjoint, its spectrum consists of the eigenvalues \(\lambda_{r}=q^{\alpha r}\), where \(r\in\mathbb{Z}\) and \(q\) is the cardinality of the residue field \(\mathcal{O}_{K}/\mathfrak{m}_{K}\). Moreover we have the following result **Theorem 2.1** (Kozyrev).: _There exist a complete orthonormal system of eigenfunctions of the operator \(\Delta^{\alpha}\) of the form \(\psi_{r,n}(x)\in L^{2}(K)\), where \(r\in\mathbb{Z}\) and \(n\in\mathbb{N}\) such that_ \[\Delta^{\alpha}\psi_{r,n}(x)=q^{\alpha(1-r)}\psi_{r,n}.\] Proof.: For the case of \(\mathds{Q}_{p}\), cf. [16]. 
The case of a general Non-Archimedean local field \(K\), cf. [1]. Henceforth this basis of \(L^{2}(K)\) from Theorem 2.1 will be denoted by \(\mathcal{K}\). ## 3 Spectral Curves for Diffusion Pairs In this section, we introduce the objects necessary for constructing the spectral curve of a so-called _diffusion pair_ which is actually nothing but a weighted graph, where the weights are integer powers of a fixed variable \(Y\). These objects are \(p\)-adic matrix-valued Laplacian operators reflecting the adjacency structure of a graph. ### \(p\)-Adic Laplacians for Graphs Let \(G\subset K/O_{K}\) be a finite set. Then we have isomorphisms \[L^{2}(G\times K)\cong L^{2}(G)\otimes L^{2}(K)\cong\bigoplus_{a\in G}L^{2}(K_ {a})\] where \(K_{a}\) is a copy of \(K\) for each \(a\in G\). We define maps: (1) where we write \[L^{2}(K_{a})=\bigoplus_{\psi\in\mathcal{K}}\mathds{C}\psi_{a}\] using the set \(\mathcal{K}\) of Kozyrev wavelets on \(K\), and \[\psi_{a}(x)=\psi(x).\] The map \(H_{G}\) is given as follows: \[H_{G}\colon(u_{g})_{g\in G}\mapsto(f_{g})_{g\in G},\;f_{g}=\sum_{a\in G}C_{ga} \Delta_{ga}u_{a}\] where \(\Delta_{ga}\) is the Vladimirov operator \[\Delta_{ga}\colon L^{2}(K_{a})\to L^{2}(K_{g}),\psi_{a}\mapsto\Delta^{\alpha_ {ga}}\psi_{g}\] where \[\Delta^{\alpha_{ga}}=\mathscr{F}^{-1}\,|\cdot|_{K}^{\alpha_{ga}}\mathscr{F}\] behaves like the usual Vladimirov operator, except for being applied to different copies of Kozyrev wavelets indexed by vertices of \(G\). In particular, it simply multiplies the indexed Kozyrev wavelet \(\psi_{g}\) by an integer power of \(q^{\alpha_{ga}}\). Notice that in the basis of \(L^{2}(G\times K)\) given by \[G\mathcal{K}=\{\psi_{g}\colon g\in G,\;\psi\in\mathcal{K}\}\] we can represent \(H_{G}\) by the \(|G|\times|G|\)-matrix \[(C_{ag}\Delta_{ag})\] And the matrix \(C=(C_{ag})\) can be viewed as an adjacency matrix of a simple graph with vertex set \(G\). In order to obtain a graph Laplacian matrix, we consider instead of \(H_{G}\) the operator \[\Lambda_{G}^{\Delta}\colon L^{2}(K)^{|G|}\to L^{2}(K)^{|G|}\] represented by the matrix \[(L^{\Delta}_{ab})_{a,b\in G}\] with \[L^{\Delta}_{ab}=\begin{cases}-C_{ab}\Delta_{ab},&a\neq b\\ \sum\limits_{g\in G}C_{ag}\Delta_{ag},&a=b\end{cases}\] Here, \(\Delta=(\Delta_{ga})\) can be viewed as a a matrix in \(\mathds{N}^{|G|\times|G|}\) having entry \(\alpha_{ga}\) whenever \(ga\) represents an edge of the graph. Later, we will show that there exist choices of diffusion parameters \(\alpha_{ag}\in\mathds{N}\) such that the spectrum of the operator \(\Lambda^{\Delta}_{G}\) determines the isomorphism class of the combinatorial simple graph \(G\). **Definition 3.1**.: _The operator \(\Lambda^{\Delta}_{G}\) is called the \(p\)-adic Laplacian associated with the diffusion pair \((G,\Delta)\)._ ### The Spectral Curve of a Diffusion Pair Let \((G,\Delta)\) be a diffusion pair. Recall that \(L=L_{1}\) is the Laplacian of the graph \(G\). We begin with the following observation: **Lemma 3.2**.: _It holds true that_ \[\operatorname{Spec}(L)\subset\operatorname{Spec}(\Lambda^{\Delta}_{G})\] _as an inclusion of multi-sets._ Proof.: Observe first that in the basis \(G\mathcal{K}\), the operator \(\Lambda^{\Delta}_{G}\) is represented by an \(\mathds{N}\times\mathds{N}\)-matrix having a block-diagonal structure with blocks of size \(|G|\times|G|\) after a suitable linear ordering of the basis. Now, the non-zero entries of each block away from the diagonal consist of Vladimirov eigenvalues. 
By Theorem 2.1, they are of the form \[q^{\alpha_{ag}(1-r)}\] with \(r\in\mathds{Z}\). For \(r=1\) we identify the Laplacian matrix \(L\) as one of the blocks. Hence, the eigenvalues of \(L\) are contained in the spectrum of \(\Lambda^{\Delta}_{G}\). Observe further that the block-diagonal structure found in the proof of the above Lemma is in fact a replication of Laplacian matrices for the same combinatorial graph structure on \(G\), except that now the edge lengths are powers of \(p\) of the form \(p^{\alpha_{ab}(1-r)}\) for fixed \(r\in\mathds{Z}\). This means that there is a family of graphs \(G_{r}^{\Delta}\) parametrised by \(r\in\mathds{Z}\) having the same combinatorial Laplacian matrix \(L\). And in order to find all eigenvalues of \(\Lambda_{G}^{\Delta}\), it is necessary and sufficent to find the Laplacian eigenvalues for each graph in the family \(G_{r}^{\Delta}\) with \(r\in\mathds{Z}\). We will now examine the characteristic polynomial of each graph Laplacian \(L_{r}^{\Delta}\) associated with graph \(G_{r}^{\Delta}\) from the family. Notice that \[L_{1}^{\Delta}=L_{1}=L,\quad G_{1}^{\Delta}=G_{1}=G,\] are independent of the diffusion parameters symbolised by \(\Delta\). We also assume that the parameters \(\alpha_{ab}\) are all pairwise different positive natural numbers. The characteristic polynomial of \(G_{r}^{\Delta}\) is \[P_{r}^{\Delta}(X)\in\mathds{Q}[X]\] and its degree is \(|G|\). Again, we have \[P_{1}(X)=P_{1}^{\Delta}(X)\] is independent of \(\Delta\), and coincides with the characteristic polynomial of the graph Laplacian \(L\). The coefficients of \(P_{r}^{\Delta}\) are given by the Leibniz formula for the determinant as polynomials with integer coefficients in another variable \(Y\) evaluated in \(p^{1-r}\). Hence, we obtain a polynomial \[P_{G}^{\Delta}(X,Y)\in\mathds{Z}[X,Y]\] whose zero set in \(\overline{\mathds{Q}}^{2}\) contains the spectrum of \(\Lambda_{G}^{\Delta}\) as the first coordinate of some of its points. Here, we mean by \(\overline{\mathds{Q}}\) the algebraic closure of \(\mathds{Q}\). **Definition 3.3**.: _The completion of the plane algebraic curve \(V(P_{G}^{\Delta})\) to a projective algebraic curve is called the spectral curve of the pair \((G,\Delta)\)._ ### Recovering Spectral Curves Assume that we are given the \(\operatorname{Spec}\Lambda_{G}^{\Delta}\) as a multi-set, where \(\Lambda_{G}^{\Delta}\) is the \(p\)-adic Laplacian associated with diffusion pair \((G,\Delta)\). Assume also that the task is to recover the graph \(G\) from that spectrum. One way would be to try to recover the spectral polynomial \(P(X,Y)=P_{G}^{\Delta}(X,Y)\) and use the Reconstruction Theorem (Theorem 4.7) proved below. In this situation, an algorithm which terminates in finite time cannot be expected, because each individual eigenvalue has to be associated with one of the graphs \(G_{r}\) (\(r\in\mathds{Z}\)) in the family induced by the spectral pair. But from a purely existential standpoint, we can say that there exists a classification of eigenvalues (including their multiplicities) such that each class is \(\operatorname{Spec}(L_{r})\) with \(r\in\mathds{Z}\). Once this classification is made, then each coefficient \[a_{i}(p^{1-r}),\quad i=1,\ldots,n\] of the polynomial \[P(X,Y)=\sum_{i=1}^{n}a_{i}(Y)X^{i}\] with \(r\in\mathds{Z}\) can be calculated in each class of eigenvalues. All that is then needed, is for each \(r\in\mathds{Z}\) the value of \[P(X,p^{1-r})\] in finitely many places \(x_{s}\). 
Then interpolation yields the coefficients of \(P(X,Y)\). **Definition 3.4**.: _A set of pairs \((x_{s},p^{1-r})\) with \(s,r\in R\subset\mathds{N}\) is called a recovery datum, if \(R\) is a finite set and \(P(X,Y)\) can be interpolated after evaluating the polynomial in that set of pairs._ **Theorem 3.5**.: _Let \((G,\Delta)\) be a diffusion pair. Given \(\operatorname{Spec}(L_{r})\) as distinguished multi-sets for sufficiently but finitely many \(r\in\mathds{Z}\), it is possible to obtain recovery data for the spectral polynomial \(P_{(G,\Delta)}\) with a terminating algorithm._ Proof.: Since all graphs \(G_{r}\) are simple and have the same underlying combinatorial graph with \(n\) vertices and \[|E|\leq\frac{1}{2}n(n-1)=:b_{n}\] edges, it follows that the number of places to interpolate \[P(X,Y)=P_{G}^{\Delta}(X,Y)\] is bounded. As \(n\) is given as the size of each multi-set \(\operatorname{Spec}(L_{r})\), it follows that \(b_{n}\) such spectra are sufficient in order to reproduce the characteristic polynomials \[P(X,p^{1-r})=P_{r}(X)\] of the graphs \(G_{r}\). Evaluating the \(b_{n}\) polynomials \(P_{r}(X)\) at \(b_{n}\) places \(x_{s}\in\mathds{R}\) yields pairs \((x_{s},p^{1-r})\) which form a recovery datum, as now interpolation of \(P(X,Y)\) is possible. Together, this is an algorithm terminating after finitely many steps. The question is now, whether it is possible to extract distinguished multi-sets \(\operatorname{Spec}(L_{r})\) somehow by clustering spectral values. If it is allowed to vary the prime number \(p\), then this can be done in the following game: **Game 1**.: _Assume that you are allowed to choose diffusion parameters \(\Delta\) once, and a prime number \(p\) as many times as you wish. Then you will receive for each \(p\) the multi-set \(\operatorname{Spec}(\Lambda_{G}^{\Delta})\) of an unknown simple finite connected graph. If you manage to recover the spectral curve for the diffusion pair \((G,\Delta)\) from these spectra, then you win, otherwise you lose._ The following theorem states that there exist winning strategies for the Game 1: **Theorem 3.6**.: _For the \(p\)-adic Laplacian associated with any diffusion pair, there exists a winning strategy in order to obtain distinguished multi-sets \(\operatorname{Spec}(L_{r})\) with \(r\in\mathds{Z}\)._ Proof.: In the case that the choice of diffusion parameters is to have them all equal to \(1\) (when beloning to an edge, otherwise, it is zero), a winning strategy for connected graphs is to pick a large prime \(p\). In this case, we have \[L_{r}=p^{1-r}L\] where \(L\) is the Laplacian of the graph \(G\). Since the non-zero part of the spectrum of \(L\) lies inside a compact interval not containing \(0\), as has been proven by [5], it then suffices to ask for higher and higher prime numbers until there are increasing gaps between clusters with relatively small inter-cluster distances between neigbouring points. With increasing \(p\), this phenomenon becomes more and more clearly visible. If not all parameters are chosen equal to \(1\), we restrict to \(r\leq 1\) and again vary the prime \(p\). From Matrix Perturbation Theory [2], we get that if \(p\) is sufficiently large, then in the range \(r\leq 1\), again the inter-cluster distances of neighbouring eigenvalues will be smaller than the intra-cluster distances between neighbouring clusters. Hence, choosing \(p\) sufficiently large, again removes overlaps between the clusters. 
Since in both cases of diffusion parameter choices, the spectrum of \(L\) remains fixed for any choice of varying the prime \(p\), the reference cluster for \(r=1\) can be extracted, and then the spectra \(\operatorname{Spec}(L_{r})\) for more values of \(r\in\mathds{Z}\), or of \(r\leq 1\) in the second case. After having extracted sufficently many of these finite spectra, one can proceed to the interpolation method and compute a recovery datum, as now the requirements for Theorem 3.5 are met. ## 4 Spectral Curves Which are Separating In this section, we first separate pairs of non-isomrophic, but isospectral, graphs via spectral curves for suitable diffusion pairs using analytic matrix perturbation theory. After that, we prove for every finite graph the existence of a diffusion pair such that the graph can be reconstructed from the spectral polynomial. Although this generalises the first result, we believe that the matrix perturbation method is of general interest, nevertheless. **Example 4.1**.: _According to [18], the two graphs in Figure 1 are isospectral. Their first Betti number equals \(3\). This is the smallest example of an isospectral pair of non-isomorphic simple graphs without bridges. The label "2" on an edge indicates a diffusion parameter value of \(2\), i.e. an edge weight \(Y^{2}\). Unlabelled edges have diffusion parameter value \(1\), i.e. edge weight \(Y\). Their respective spectral polynomials \(P_{1}\) for the left, and \(P_{2}\) for the right graph of _Figure 1 are:_ \[P_{1}(X,Y)=\det\left(\begin{smallmatrix}X-3Y&Y&0&0&0&0&Y&Y\\ Y&X-2Y&Y&0&0&0&0&0\\ 0&Y&X-2Y&Y&0&0&0\\ 0&0&Y&X-2Y&Y&0&0\\ 0&0&0&Y&X-2Y&Y&0&0\\ 0&0&0&0&Y&X-3Y&Y&Y\\ Y&0&0&0&0&Y&X-(2Y+Y^{2})&Y^{2}\\ Y&0&0&0&0&Y&Y^{2}&X-(2Y+Y^{2})\end{smallmatrix}\right)\] \[P_{2}(X,Y)=\det\left(\begin{smallmatrix}X-3Y&Y&Y&0&0&0&0&Y\\ Y&X-2Y&Y&0&0&0&0&Y\\ Y&Y&X-3Y&Y&0&0&0\\ 0&0&Y&X-2Y&Y&0&0\\ 0&0&0&Y&X-2Y&Y&0\\ 0&0&0&Y&X-(2Y+Y^{2})&Y&Y\\ 0&0&0&0&Y^{2}&Y&X-(2Y+Y^{2})\end{smallmatrix}\right)\] _Their tangent cones \(T_{1}\) of \(P_{1}\) and \(T_{2}\) of \(P_{2}\) are:_ \[T_{1}(X,Y)=\det\left(\begin{smallmatrix}X-3Y&Y&0&0&0&0&Y&Y\\ Y&X-2Y&Y&0&0&0&0&0\\ 0&0&Y&X-2Y&Y&0&0&0\\ 0&0&Y&X-2Y&Y&0&0\\ 0&0&0&Y&X-2Y&Y&0&0\\ 0&0&0&Y&X-3Y&Y&Y\\ Y&0&0&0&0&Y&X-2Y&0\\ Y&0&0&0&0&Y&0&X-2Y\end{smallmatrix}\right)\] \[T_{2}(X,Y)=\det\left(\begin{smallmatrix}X-3Y&Y&Y&0&0&0&0&Y\\ Y&X-2Y&Y&0&0&0&0&0\\ Y&Y&X-3Y&Y&0&0&0&0\\ 0&0&Y&X-2Y&Y&0&0\\ 0&0&0&Y&X-2Y&Y&0\\ 0&0&0&0&Y&X-2Y&Y\\ Y&0&0&0&0&0&Y&X-2Y\end{smallmatrix}\right)\] _These are spectral polynomials of the two bridgeless graphs of genus two, where each edge has the same variable \(Y\), shown in Figure 2. According to Figure 1: A pair of non-isomorphic, but isospectral graphs whose first Betti number is 3. [18, Thm. 3.1], these graphs are not isospectral, because they are not isomorphic. It follows that the two tangent cones \(T_{1}\) and \(T_{2}\) are not equal._ _We saw that replacing one certain edge weight in each graph by \(Y^{2}\) leads to two polynomials \(P_{1}(X,Y)\) and \(P_{2}(X,Y)\) which are not the same. So, in this case, the isospectral pair is separated by these two polynomials. It only happens that_ \[P_{1}(X,1)=P_{2}(X,1)\] _resulting in two identical characteristic polynomials for the two non-isomorphic graphs. This example motivates the remainder of this article._ ### Separating Isospectral Pairs via Matrix Perturbation An introduction to matrix perturbation theory can be found in [2]. 
We will use this method in order to construct distinct spectral polynomials for non-isomorphic, but isospectral graphs. All our calculations are explicit and most of them reproduced here, even if many can also be found in that bibliographic reference in a more general setting. Let \(k\neq\ell\) be natural numbers in \(\{1,\ldots,n\}\). We define the matrix \[U(k,\ell)=(u_{ij})\] Figure 2: A pair of non-isomorphic bridgeless graphs whose first Betti number is \(2\). According to [18], it follows that they are not isospectral. with \[u_{ij}=\begin{cases}1,&i=j=k,\text{ or }i=k=\ell\\ -1,&(i,j)=(k,\ell),\text{ or }(i,j)=(\ell,k)\\ 0,&\text{ otherwise}\end{cases}\] This is the Laplacian of the graph on \(n\) vertices having precisely one undirected edge \((k,\ell)\), since \(U(k,\ell)=U(\ell,k)\). Let \(v=(v_{1},\ldots,v_{n})\in\mathds{R}^{n}\). Then \[v^{\top}U(k,\ell)v=(v_{k}-v_{\ell})^{2}\] as can be verified by a simple calculation. Now, let \(E\) be a symmetric set of pairs \((i,j)\) of numbers \(1,\ldots,n\) with \(i\neq j\). Then \[U(E):=\sum_{(k,\ell)\in E}U(k,\ell)\] and we have \[v^{\top}U(E)v=\sum_{(k,\ell)\in E}(v_{k}-v_{\ell})^{2}\] We can now define \[\left\lVert v\right\rVert_{E}^{2}=v^{\top}U(E)v\] The function \(\left\lVert\cdot\right\rVert_{E}\) is a semi-norm on \(\mathds{R}^{n}\), and we have \[\left\lVert v\right\rVert_{E}^{2}=0\quad\Leftrightarrow\quad\forall\;(i,j) \in E\colon v_{i}=v_{j}\] **Lemma 4.2**.: _Let \(L\) be the Laplacian of a simple graph \(G\) with \(n\) vertices, and let \(E,E^{\prime}\) be two disjoint sets of pairs from \(\{1,\ldots,n\}\). Then there exists an eigenvector \(v\) of \(L\) such that_ \[\left\lVert v\right\rVert_{E}^{2}\neq\left\lVert v\right\rVert_{E^{\prime}}^ {2}\] Proof.: This follows from the fact that \(L\) is symmetric, i.e. from the Spectral Theorem for symmetric real-valued matrices. Namely, if for all eigenvectors \(v\) of \(L\) we had \[\left\lVert v\right\rVert_{E}=\left\lVert v\right\rVert_{E^{\prime}}\] then it would be impossible to generate via linear combination a vector \(x\in\mathds{R}^{n}\) having \(\left\lVert x\right\rVert_{E}\neq\left\lVert x\right\rVert_{E^{\prime}}\), because both seminorms scale identically when an eigenvector is multiplied with a scalar. An equivalence of diffusion pairs \((G_{1},\Delta_{1})\) and \((G_{2},\Delta_{2})\) where \(G_{i}\) are graphs on the same vertex set \(V\), is given by a bijection \(E_{1}\to E_{2}\) between the edge sets of \(G_{1}\) and \(G_{2}\) which takes a weighted edge to an edge having the same weight. If two graphs \(G_{1},G_{2}\) are isospectral, then it is possible to define diffusion parameters \(\Delta_{1},\Delta_{2}\) such that there is an equivalence of diffusion pairs \((G_{1},\Delta_{1})\sim(G,\Delta_{2})\). The following theorem shows that this already allows to distinguish non-isomorphic graphs, if suitable choices are taken. **Theorem 4.3**.: _Assume that \(G_{1},G_{2}\) is a pair of non-isomorphic, but isospectral graphs. Then there exist equivalent diffusion pairs \((G_{1},\Delta_{1})\), \((G_{2},\Delta_{2})\) such that_ \[P_{G_{1}}^{\Delta_{1}}(X,Y)\neq P_{G_{2}}^{\Delta_{2}}(X,Y)\] _as bivariate polynomials._ Proof.: Denote the edge set of graph \(G_{i}\) as \(E_{i}\) for \(i=1,2\), and let \(C=E_{1}\cap E_{2}\), and \(C_{1}=E_{1}\setminus E_{2}\), \(C_{2}=E_{2}\setminus E_{1}\). Since \(G_{1}\) and \(G_{2}\) are isospectral, it follows that \(E_{1}\) and \(E_{2}\) have the same number of elements. 
We are interested in the spectrum of the matrices \[L_{\varepsilon,i}=U(C)+\varepsilon U(C_{i})\] for \(i=1,2\), and with \(\varepsilon>0\). In first order w.r.t. the parameter \(\varepsilon\), we have \[\lambda_{\varepsilon,i}^{(1)}=\lambda+\varepsilon\left\|v\right\|_{C_{i}}^{2} +\mbox{terms of higher order in $\varepsilon$} \tag{2}\] where \(v\in\mathds{R}^{n}\) is an eigenvector of \(U(C)\) associated with eigenvalue \(\lambda\), and \(\lambda_{\varepsilon,i}^{(1)}\) approximates in first order an eigenvalue \(\lambda_{\varepsilon,i}\) of \(L_{\varepsilon,i}\). According to Lemma 4.2, one can find an eigenvector \(v\) of \(U(C)\) such that \[\left\|v\right\|_{C_{1}}\neq\left\|v\right\|_{C_{2}}\] This implies that the first order approximations of the eigenvalues \(\lambda_{\varepsilon,i}\) are different. It follows that the eigenvalues \(\lambda_{\varepsilon,1}\) and \(\lambda_{\varepsilon,2}\) are different for the range of \(\varepsilon>0\) in which an analytic expansion in \(\varepsilon\) is possible and \(\varepsilon>0\) is sufficiently small. Now, we have varied one eigenvalue of \(U(C)\) analytically in two different ways. The other eigenvalues of \(U(C)\) also vary analytically as in (2), except that possibly it could be that some eigenvalues corresponding to \(L_{\varepsilon,1}\) and are equal in this case. In any case, the spectrum of \(U(C)=L_{\varepsilon,0}\) is varied continuously in two different manners. Hence, for \(\varepsilon>0\) small, the discrete subsets \(\operatorname{Spec}(L_{\varepsilon,1})\) and \(\operatorname{Spec}(L_{\varepsilon,2})\) of \(\mathds{R}\) are different. This implies that the corresponding characteristic polynomials \[P_{i}(X,\varepsilon),\quad i=1,2\] are different. But these are evaluations of the spectral polynomials \[P_{i}(X,Y)\] evaluated at \(Y=\varepsilon\) for all \(\varepsilon>0\) sufficiently small, and where these polynomials are given for edge weight maps: \[\Delta_{i}\colon C\cup E_{i}\to\mathds{N},\;e\mapsto\begin{cases}1,&e\in C \\ 0,&e\in E_{i}\end{cases}\] It follows that these two spectral polynomials are different. In order to obtain positive diffusion constants, now look at the spectrum of the matrix \[\varepsilon L_{\varepsilon,i}=\varepsilon U(C)+\varepsilon^{2}U(C_{i})\] which amounts to taking the diffusion parameters as \[\Delta_{i}\colon C\cup E_{i},\;e\mapsto\begin{cases}2,&e\in E_{i}\\ 1,&e\in C\end{cases}\] Hence, there exists a bijection \(E_{1}\to E_{2}\) between sets of weighted edges, i.e. an equivalence of diffusion pairs \((G_{1},\Delta_{1})\sim(G_{2},\Delta_{2})\), such that \(P_{G_{1}}^{\Delta_{1}}(X,Y)\neq P_{G_{2}}^{\Delta_{2}}(X,Y)\) as asserted. ### A Reconstruction Theorem Let \(G=(V(G),E(G))\) be a graph with \(n\) vertices. We begin with the following observation: **Theorem 4.4** (Kel'mans, 1967).: _Let_ \[P(X)=X^{n}-\sum_{i=1}^{n-1}(-1)^{i}c_{i}X^{n-i}\] _be the characteristic polynomial of the graph \(G\). Then_ \[c_{i}=\sum_{S\subset V\atop|S|=n-i}T(G_{S})\] _where \(T(H)\) is the number of spanning trees of graph \(H\), and \(G_{S}\) is the quotient graph obtained by identifying all vertices in \(S\) with a single vertex._ Proof.: [12, 13] We recover these quantities from the spectral polynomial \(P(X,Y)\) as follows: \[c_{i}(G)=a_{n-i}(1)\] for \(i=1,\ldots,n-1\). We define \[\mathcal{F}(G)=\{F\colon\text{$F$ is a forest with $V(F)=V(G)$ and $E(F)\subset E(G)$}\}\] and call \(\mathcal{F}(G)\) the _spanning forest set of \(G\)_. 
The following subsets of \(\mathcal{F}(G)\) are of interest: \[\mathcal{F}^{i}(G)=\{F\in\mathcal{F}(G)\colon b_{0}(F)=i\}\] This allows us to formulate a generalisation of Theorem 4.4, which is also a known result, but formulated here in the guise of spectral curves: **Theorem 4.5** (Buslov, 2014).: _Let \((G,\Delta)\) be a diffusion pair. Then the coefficients of the spectral polynomial_ \[P_{G}^{\Delta}(X,Y)=\sum_{i=1}^{n}a_{i}(Y)X^{i}\] _of a diffusion pair \((G,\Delta)\) are given as_ \[a_{i}(Y)=(-1)^{n-i}\sum_{F\in\mathcal{F}^{i}(G)}\pi_{F},\quad\pi_{F}=\prod_{e\in E(F)}Y^{\alpha_{e}}\] _for \(i=1,\ldots,n\)._ Proof.: [4, Thm. 2]. Let \(G^{\prime}\) be another graph. An isomorphism \[\mathcal{F}(G)\cong\mathcal{F}(G^{\prime})\] between spanning forest sets is given by a bijection \(f\colon\mathcal{F}(G)\to\mathcal{F}(G^{\prime})\) and an isomorphism \(F\cong F^{\prime}\) with \(F\in\mathcal{F}(G)\) and \(F^{\prime}=f(F)\) for all \(F\in\mathcal{F}(G)\). **Lemma 4.6**.: _Let \(G,G^{\prime}\) be two graphs on \(n\) vertices. Then \(G\) and \(G^{\prime}\) are isomorphic if and only if the spanning forest sets \(\mathcal{F}(G)\) and \(\mathcal{F}(G^{\prime})\) are isomorphic._ Proof.: If the graphs are isomorphic, then clearly their spanning forest sets are isomorphic. Assume now that \(G=T\) and \(G^{\prime}=T^{\prime}\) are trees. Since trees are their own spanning trees, we clearly must have that \(T\cong T^{\prime}\). Now, assume that \(G\) is not a tree. If \(G\) is a forest, then we can apply the result for trees to each individual connected component of \(G\). So, we may assume that \(b_{1}(G)>0\), and that \(G\) is connected. If \(\mathcal{F}(G)\cong\mathcal{F}(G^{\prime})\), then w.l.o.g. we may assume that these two sets are equal, and that the vertex sets of the two graphs coincide. Let \(T\) be a spanning tree of \(G\). Then \[\mathcal{F}(T)\subset\mathcal{F}(G)=\mathcal{F}(G^{\prime})\] Hence, by symmetry, each spanning tree of \(G\) is a spanning tree of \(G^{\prime}\) and vice versa. This implies that \(G=G^{\prime}\), as otherwise there is an edge of \(G\) not in \(G^{\prime}\). But then a spanning tree of \(G\) containing that edge is not a spanning tree of \(G^{\prime}\), a contradiction. Hence, \(G\cong G^{\prime}\). **Theorem 4.7** (Reconstruction Theorem).: _For any finite graph \(G\), there exists a \(p\)-adic Laplacian given by diffusion parameters \(\Delta\) such that the diffusion pair \((G,\Delta)\) has a spectral polynomial \(P(X,Y)\) such that for any diffusion pair \((G^{\prime},\Delta^{\prime})\) its spectral polynomial \(P^{\prime}(X,Y)\) satisfies:_ \[P(X,Y)=P^{\prime}(X,Y)\quad\Leftrightarrow\quad(G,\Delta)\cong(G^{\prime},\Delta^{\prime})\] _In other words, the isomorphism class of \(G\) is the unique family of graphs having spectral polynomial \(P(X,Y)\)._ Proof.: We need only prove that if \(P(X,Y)=P^{\prime}(X,Y)\), then \((G,\Delta)\cong(G^{\prime},\Delta^{\prime})\). Let \[\Delta\colon E(G)\to\big{\{}\alpha_{1},\ldots,\alpha_{|E(G)|}\big{\}},e\mapsto\alpha_{e}\] be a bijection. W.l.o.g. we may assume that the edges of \(G\) are numbered as \(\alpha_{e}\), and that \(\Delta\) is the identity map, so that we may write \(Y^{e}\) instead of \(Y^{\alpha_{e}}\) in our polynomials. We further make the following assumption: \[\text{no edge label equals the finite sum of any other edge labels (they are all positive integers).}\tag{3}\] Let \(I\subset E(G)\).
Then we define \[\mathcal{F}^{i}_{I}(G) =\big{\{}F\in\mathcal{F}^{i}(G)\colon E(F)=I\big{\}}\] \[\mathcal{F}^{i}(I) =\big{\{}F\in\mathcal{F}^{i}(G)\colon E(F)\subseteq I\big{\}}\] Then \(\mathcal{F}^{i}(G)\) is the disjoint union of all the sets \(\mathcal{F}^{i}_{I}(G)\) for \(I\subset E(G)\). Also, \(\mathcal{F}^{i}_{I}(G)\) is either empty or consists of precisely one forest. Let \(E(G)=:E\), and let \(H\) be a spanning subgraph of \(G\). Then \[a_{H,i}(Y)=(-1)^{n-i}\sum_{F\in\mathcal{F}^{i}(H)}\prod_{e\in E(F)}Y^{e}\] and for any spanning tree \(T\) of \(G\), we have \[a_{i}(Y)=a_{T,i}(Y)+a_{G\setminus T,i}(Y) \tag{4}\] where \(G\setminus T\) is obtained from \(G\) by removing the edges of \(T\). Now, given a spectral polynomial \(P(X,Y)\), the polynomial \[a_{1}(Y)=\sum_{k=1}^{M}a_{1k}Y^{k}\] with \(M>>0\) recovers the set of spanning trees as follows: a monomial \(a_{1k}Y^{k}\) means that there are \(|a_{1k}|\in\mathds{N}\) spanning trees whose total sum of edge labels equals \(k\). Because of our assumption (3), we have that \[|a_{1k}|\leq 1\] for all \(k=1,2,\dots\). Hence, each non-zero coefficient of \(a_{1}(Y)\) encodes precisely one spanning tree of \(G\). Also, the two parts of the decomposition (4) have no monomials in common. In order to recover the diffusion constants, we look at \[a_{n-1}(Y)\] Again, assumption (3) ensures that all coefficients are either \(1\) or zero. So, the exponents of the non-zero monomials retrieve all the distinct labelled edges of \(G\). We can go now further to extract for every spanning tree with a given set of edge labels, the set of all spanning forests, thereby knowing their numbers of connected components. Beginning with all pairs of distinct edges, we can also extract the sets of edges in each connected component of any spanning forest. The assumption (3) makes this possible. In this way, a unique spanning tree is constructed. Doing this for all spanning trees, we obtain a unique set of spanning forests \(\mathcal{F}(G)\) for some graph \(G\). By Lemma 4.6, the isomorphism class of \(G\) is now uniquely determined. **Remark 4.8**.: _We remark that the spectral polynomial for any diffusion pair satisfying (3) has coefficients \(0,\pm 1\), as has been seen in the proof of the Reconstruction Theorem (Theorem 4.7)._ **Corollary 4.9**.: _Playing Game 1 with choosing diffusion parameters which satisfy (3), allows you to reconstruct the unknown graph \(G\)._ Proof.: This is an immediate consequence of Theorem 4.7, because Game 1 has a winning strategy for any choice of diffusion parameters according to Theorem 3.6. **Remark 4.10**.: _Notice that we are not solving the graph isomorphism problem in polynomial time, as the computation of the coefficients of the spectral polynomial can be expected to be far too time-consuming._ ## 5 Hearing Shapes of \(p\)-adic geometric objects Corollary 4.9 can also be applied to objects of \(p\)-adic geometry which have an underlying graph sructure. The first kind of objects consists of Mumford curves which have reduction graphs whose first Betti number equals the genus of the curve. The second kind are analytic tori which after a base change look like products of Mumford curves of genus 1, so-called _Tate curves_. In order to be able to do this, we will construct embeddings of the sets of \(K\)-rational points of these objects into \(K\). The details of this procedure are presented in the following two subsections. ### Hearing the Shape of a Mumford Curve Mumford curves are explained in some detail in [6, Ch. 5]. 
However, we will not need too much of their construction. All we need is that they are projective algebraic curves admitting a finite cover by holed disks. Certain types of coverings by holed disks in \(K\), called _vertical coverings_, are introduced in [3]. These produce certain types of reduction graphs, also called vertical. Let \(X\) be a Mumford curve. In [3], the concept of vertical covering of the set \(X(K)\) of its \(K\)-rational points was developed. Let such a covering be given, and let \(G\) be the corresponding vertical reduction graph. It is a connected graph whose vertices all have degree at least 2, and its first Betti number equals the genus \(g\) of the curve. We require Mumford curves to have positive genus. The vertical covering of \(X(K)\) consists of holed disks which are in one-to-one correspondence with the vertices of \(G\). The edges of \(G\) correspond to annuli (as rigid analytic spaces) with minimal positive thickness, i.e. they do not contain \(K\)-rational points. These annuli connect two otherwise disjoint holed disks in \(X(K)\) without introducing extra \(K\)-rational points. As a \(p\)-adic manifold, \(X(K)\) is simply a disjoint union of finitely many holed disks. This compact manifold can be embedded into \(K\) as a closed-open subset in such a way that each patch of the embedded vertical covering of \(X(K)\) contains a distinct point representing a class in \(K/O_{K}\). Now, we are in the setting of Section 3.1, and have a \(p\)-adic Laplacian operator acting on the space \(L^{2}(K)^{|G|}\). **Corollary 5.1**.: _Playing Game 1 with diffusion parameters satisfying (3) allows one to reconstruct a vertical reduction graph of a Mumford curve._ Proof.: This is immediate from the construction above and Corollary 4.9. ### Hearing the Shape of a \(p\)-Adic Analytic Torus The theory of \(p\)-adic analytic tori is outlined in [6, Ch. 6]. We will collect the data in what follows, and then proceed with the application. Let \(A\) be an analytic torus \[A=\mathds{G}_{m}/\Lambda\] with multiplicative lattice \(\Lambda\) generated by a basis \[\tilde{q}_{i}=q_{1}^{\alpha_{i1}}\cdots q_{g}^{\alpha_{ig}}\] for \(i=1,\ldots,g\) with \[q_{i}=qe_{i}\] where \(q=p^{f}\), \(e_{i}\) is the unit vector of the \(i\)-th component in the product space \(K^{\times}\times\cdots\times K^{\times}\), and \(\alpha_{ij}\in\mathds{Z}\). The lattice basis yields a decomposition \[K^{g}=\bigoplus_{i=1}^{g}K\tilde{q}_{i}\] On each component \(K\tilde{q}_{i}\), we can define an ultrametric norm as follows: the generator \(\tilde{q}_{i}\) is determined by an integer vector \[\alpha_{i}=(\alpha_{i1},\ldots,\alpha_{ig})\in\mathds{Z}^{g}\] Its associated _primitive vector_ is a vector \[\lambda_{i}\in\mathds{N}^{g}\] such that \[\alpha_{i}=k\cdot\lambda_{i}\] with \(k\in\mathds{Z}\), and \(|k|\) is maximal with this property. If we write \[\tilde{q}=q_{1}\cdots q_{g}\] and \[\tilde{q}^{\beta}=q_{1}^{\beta_{1}}\cdots q_{g}^{\beta_{g}}\] for \[\beta=(\beta_{1},\ldots,\beta_{g})\in\mathds{Z}^{g}\] then we have \[\tilde{q}_{i}=\tilde{q}^{\alpha_{i}}\] Its associated _primitive generator_ of the line \(K\tilde{q}_{i}\) is defined as \[\tilde{b}_{i}:=\tilde{q}^{\lambda_{i}}\] where \(\lambda_{i}\) is the primitive vector associated with \(\alpha_{i}\).
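As a small computational aside (not from the original text), the primitive vector is obtained by dividing out the greatest common divisor of the entries; a minimal sketch, assuming a non-zero vector of non-negative integers as in the setting above:

```python
# Minimal sketch (illustrative): primitive vector of a non-zero vector of
# non-negative integers, i.e. alpha = k * lam with |k| as large as possible.
from functools import reduce
from math import gcd

def primitive_vector(alpha):
    k = reduce(gcd, alpha)
    return k, tuple(a // k for a in alpha)

print(primitive_vector((6, 9, 3)))   # -> (3, (2, 3, 1)), since (6, 9, 3) = 3 * (2, 3, 1)
```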
**Definition 5.2**.: _The ultrametric norm associated with the line \(K\tilde{q}_{i}\) is defined as_ \[\left\|x\right\|_{i}=\left\|\tilde{b}_{i}\right\|_{K^{g}}^{-\log_{q}\left| \lambda\right|_{K}}\] _where \(x=\lambda\tilde{b}_{i}\in K\tilde{q}_{i}\) with primitive generator \(\tilde{b}_{i}\), and \(\lambda\in K\). The Haar measure \(\mu_{i}\) on \(K\tilde{q}_{i}\) is normalised such that the unit ball w.r.t. \(\left\|\cdot\right\|_{i}\) has measure one._ The element \(\tilde{q}_{i}\) defines a Tate curve, whose reduction graph we assume to be simplicial, as follows: as we have \[\alpha_{i}=n_{i}\lambda_{i}\] with natural \(n_{i}>2\), it follows that \[\left\|\tilde{q}_{i}\right\|_{i}=\left\|\tilde{b}_{i}\right\|^{n_{i}}=(q^{-c_ {i}})^{n_{i}}\] with integer \(c_{i}>0\). Hence, the component Tate curve here is \[T_{i}=(K^{\times}\tilde{b}_{i})/\langle\tilde{q}_{i}\rangle\cong K^{\times}/ \left\langle(q^{c_{i}})^{n_{i}}\right\rangle\] However, there is a difference in the reduction graph structures on both sides of the isomorphism: a vertical covering of the curve on the right has \(c_{i}n_{i}\) vertices, whereas anyone on the left has \(n_{i}\) vertices. We have a decomposition \[L^{2}(K^{g})=\bigotimes_{i=1}^{g}L^{2}(K\tilde{q}_{i})\] On each factor, we repeat the construction of the previous subsection and obtain an operator \[\Lambda\colon\bigotimes_{i=1}^{g}L^{2}(K\tilde{q}_{i})^{|G_{i}|}\to\bigotimes _{i=1}^{g}L^{2}(K\tilde{q}_{i})^{|G_{i}|}\] where \(G_{i}\) is a vertical reduction graph of \(T_{i}\). This operator generalises the product space operator from [22] and decomposes as \[\Lambda_{G}=\Lambda_{1}+\cdots+\Lambda_{g}\] where \[\Lambda_{i}=1\otimes\cdots\otimes 1\otimes\Lambda_{G_{i}}^{\Delta_{i}}\otimes 1 \otimes\cdots\otimes 1\] for \(i=1,\ldots,g\). Instead of playing Game 1 for each component graph \(G_{i}\), we recover the product graph for the torus \(A\): **Corollary 5.3**.: _Playing Game 1 is possible for \(p\)-adic analytic tori in a successful way in order to recover the product graph composed of \(G_{1},\ldots,G_{g}\)._ Proof.: Let \(G\) be the product graph of \(G_{1},\ldots,G_{g}\). Then \(\Lambda_{G}\) can be viewed in fact as an operator \[\Lambda_{G}\colon L^{2}(K)^{|G|}\to L^{2}(K)^{|G|}\] by taking suitable isomorphisms. Then play Game 1 using assumption (3). This recovers \(G\). ## Acknowledgements Evgeny Zelenov is thanked for posing this problem to one of the authors. David Weisbart is thanked for valuable discussions. This research is partially supported by the Deutsche Forschungsgemeinschaft under project number 469999674
2308.13283
Tracing obscured galaxy build-up at high redshift using deep radio surveys
A fundamental question of extra-galactic astronomy that is yet to be fully understood, concerns the evolution of the star formation rate (SFR) and supermassive black hole (SMBH) activity with cosmic time, as well as their interplay and how it impacts galaxy evolution. A primary focus that could shed more light on these questions is the study of merging systems, comprising highly star-forming galaxies (SFGs) and active galactic nuclei (AGN) at the earliest stages of galactic formation. However, it is essential to explore complementary selection methods across multiple wavelengths. The primary objective of this study is to conduct a comprehensive analysis of a sample of high-redshift ($z>3$) far-infrared (far-IR) and radio-emitting galaxies in the highest possible spatial resolution. In order to select the galactic population of our interest, we selected galaxies that present relatively compact radio morphologies at 1.4 GHz as well as a far-IR spectrum that peaks in flux at $\lambda \geq 350 \, \mu m$. For these selection criteria, we used the COSMOS and ECDF-S fields, which provide high spectral and spatial resolution at a multi-wavelength scale. We derived a sample of eight galaxies that were identified either photometrically or spectroscopically at $z>3$ from literature studies and by our team. A thorough investigation of available optical, near-IR, and millimetre (mm) imaging reveals a possible merging scenario in five out of eight cases in our sample. Additionally, available multi-wavelength photometry strongly suggests active star formation at the $10^3 \, M_{\odot}/yr$ level in massive systems co-hosting an active SMBH. Comparison of these results with previous studies, suggests that our selection method preferentially identifies galaxies hosting an active SMBH, as well as a strong SFG component, resulting in high SFR and IR luminosity.
Stergios Amarantidis, Jose Afonso, Israel Matute, Duncan Farrah, A. M. Hopkins, Hugo Messias, Ciro Pappalardo, N. Seymour
2023-08-25T10:09:47Z
http://arxiv.org/abs/2308.13283v1
# Tracing obscured galaxy build-up at high redshift using deep radio surveys ###### Abstract Context:A fundamental question of extra-galactic astronomy that is yet to be fully understood, concerns the evolution of the star formation rate (SFR) and supermassive black hole (SMBH) activity with cosmic time, as well as their interplay and how it impacts galaxy evolution. A primary focus that could shed more light on these questions is the study of merging systems, comprising highly star-forming galaxies (SFGs) and active galactic nuclei (AGN) at the earliest stages of galactic formation. However, considering the challenges associated with identifying these objects, it is essential to explore complementary selection methods across multiple wavelengths. Aims:The primary objective of this study is to conduct a comprehensive analysis of a sample of high-redshift (\(z>3\)) far-infrared (far-IR) and radio-emitting galaxies in the highest possible spatial resolution. The aim is to study the properties of this population, such as their morphological characteristics, and to explore the interplay of SFR and SMBH activity at this epoch. Methods:In order to select the galactic population of our interest, we employed two selection criteria that have frequently been used as separate methods in the literature. In more detail, we selected galaxies that present relatively compact radio morphologies at 1.4 GHz (i.e. an angular size smaller than 10 arcsec) as well as a far-IR spectrum that peaks in flux at \(\lambda\geq 350\,\mu\)m (i.e. \(flux_{50\mu m}>flux_{250\mu m}\)). For these selection criteria, we used the COSMOS and ECDF-S fields, two of the most extensively observed astronomical fields currently available, which provide high spectral and spatial resolution at a multi-wavelength scale. By accepting only galaxies that satisfied these selection criteria, we derived a sample of eight galaxies that were identified either photometrically or spectroscopically at \(z>3\) from literature studies and by our team. Results:A thorough investigation of available optical, near-IR, and millimetre (mm) imaging reveals a possible merging scenario in five out of eight cases in our sample. Additionally, available multi-wavelength photometry strongly suggests active star formation at the \(10^{3}\,M_{\odot}/yr\) level in massive systems (stellar masses of \(M_{\star}\sim 10^{11}\,M_{\odot}\)) co-hosting an active SMBH. Conclusions:Comparison of these results with previous studies, suggests that our selection method preferentially identifies galaxies hosting an active SMBH, as well as a strong SFG component, resulting in high SFR and IR luminosity. An additional examination of the efficacy of the radio and far-IR selection criteria provides further support for their combined application in selecting co-evolving AGN and star formation activity at high redshift. In this regard, future use of these selection criteria on radio and far-IR/mm observations of statistically larger galaxy samples is of high interest. ## 1 Introduction One of the main challenges in the field of extra-galactic astronomy is understanding how the star formation rate density (SFRD - \(M_{\odot}/yr/Mpc^{3}\)) evolves with redshift (see Madau & Fragos 2017; Oesch et al. 2018; Tacconi et al. 2020, for a detailed review). While we have accurate estimates of the SFRD at intermediate redshifts (\(1<z<3\)) using various selection methods from deep multi-wavelength data (e.g. Gruppioni et al. 2020; Novak et al. 2017; Ishigaki et al. 2018; Dunlop et al. 
2017, for infrared (IR), radio, ultraviolet-UV and millimetre (mm) criteria, respectively) that agree with results from hydrodynamical simulations (e.g. Pillepich et al. 2018; Lagache 2018), uncertainties regarding higher redshifts (\(z>3\)) still remain (e.g. Magnelli et al. 2019; Gruppioni et al. 2020). For instance, studies of UV-selected galaxy samples (e.g. McLeod et al. 2016; Ishigaki et al. 2018) estimate a steep decline in the SFRD at \(z>3\), while other works employing far-IR radio selection criteria (e.g. Viero et al. 2013; Rowan-Robinson et al. 2016; Novak et al. 2017; Malefahlo et al. 2022) or gamma-ray-burst-selected samples (e.g. Yuksel et al. 2008; Kistler et al. 2009) suggest a flatter behaviour. These uncertainties may originate from our current instrumental limitations, including low sensitivity, coarse spatial resolution, and coverage of small parts of the sky, which hinder our ability to properly characterise distant objects and obtain a statistically unbiased sample at higher redshifts. For instance, observations based on telescopes such as the _Herschel_ Space Observatory (Pilbratt et al., 2010) or the Submillimetre Common-User Bolometer Array (SCUBA; Holland et al., 2013) are bound to limit the redshift detection to \(z_{\rm max}\sim 4\)(e.g. Donevski et al., 2018), and to uncover only the very bright end of the far-IR luminosity function (e.g. Gruppioni et al., 2013; Wang et al., 2021). On the other hand, more sensitive telescopes, such as the Atacama Large Millimeter/submillimeter Array (ALMA; Wilson et al., 2019) or the Northern Extended Millimeter Array (NOEMA; Chenu et al., 2016) are limited to smaller areas. Thereby, these telescopes provide fewer objects (e.g. Dunlop et al., 2017; Faisst et al., 2020; Casey et al., 2021) and reveal a fainter star-forming galaxy (SFG) population (e.g. Pantoni et al., 2021). Nevertheless, recent studies using the capabilities of both single telescopes and interferometers have pushed the limits of our current understanding regarding the star formation (SF) history, exposing a mm dusty galaxy population at \(z>3\)(see e.g. Donevski et al., 2018; Yamaguchi et al., 2019; Gruppioni et al., 2020). Another aspect of our current limitations regarding the accurate estimation of the SFRD at high redshifts might be hidden in the selection criteria that are currently used to detect SFGs. Various methods have been employed over the years in order to detect this galaxy population (e.g. Gruppioni et al., 2013; Bouwens et al., 2015), however, the bulk of the research is focussed on the detection of Lyman-break galaxies in the rest frame UV and the application of dust-correction low-redshift empirical relations for the derivation of their star formation rates (e.g. Bouwens et al., 2015; Oesch et al., 2018). These relations might not be necessarily valid at \(z>3\), where the SFG population remains uncertain. Alternatively, recent studies using sensitive far-IR or mm telescopes (e.g. the ALPINE-ALMA survey; Gruppioni et al., 2020) or radio surveys (e.g. Talia et al., 2021; Enia et al., 2022) aim to reveal a an SFG population that has so far remained undetected in the optical/UV bands, which is estimated to significantly contribute to the SFRD at \(z>3\)(e.g. Gruppioni et al., 2020). In order to obtain accurate SFR estimates, these studies separate the SFG from the AGN population (e.g. Lanzuisi et al., 2017; Gruppioni et al., 2020) by removing the AGN contribution from the IR luminosity. However, this approach can introduce a bias in the sample. 
Interestingly, results from the recent surveys of the _James Webb_ Space Telescope (_JWST_) (e.g. Harikane et al., 2023; Nakajima et al., 2023; Perez-Gonzalez et al., 2023; Rodighiero et al., 2023) reveal a dusty-galaxy population like this at even higher redshifts (i.e. \(z>6\)) that have not been explored before. Previous studies, such as the work by Duncan et al. (2019), indicated that the galaxy pair and merging ratio increases with redshift (from \(\sim 5\) % at \(z<1\) to \(\sim 40\) % at \(z>3\) and for \(M_{\star}>10^{10.3}\,M_{\odot}\)). Therefore, in this work, we question the strategy of removing AGN-contaminated radio sources from selected samples as this approach may overlook merging systems in which the AGN emission conceals an additional SFG population that could significantly contribute to the SFRD at high redshifts. Such galaxies could be detected from the identification of far-IR bright _Herschel_ galaxies along with a radio emission. Even though these selection criteria might favour sources with cold gas and negative k-correction, their application to fields with deep coverage (e.g. the COSMOS field; Scoville et al., 2007) along with photometric data and high spatial resolution information from telescopes such as the _Hubble_ Space Telescope (HST) and ALMA could potentially reveal a population of SFGs that would be omitted by the current selection criteria because of the AGN contribution of a neighbour galaxy. In an attempt to explore new techniques to assist radio surveys reaching \(z>3\), we present a complementary study combining the radio selection and far-IR peak spectrum criteria (e.g. Yan et al., 2022). Furthermore, we suggest that at \(z\sim 3-4\), a substantial amount of star formation occurs that is hardly revealed from optical or near-IR surveys, and which is consequently excluded by the current selection methods due to the AGN contribution in the system. In more detail, we explore a sample of \(z>3\) far-IR radio-emitting galaxies, detected in the COSMOS and ECDF-S fields, that might uncover interacting systems and merging galaxies with a substantial contribution of an AGN component, along with the presence of high star formation activity. If radio emission can help to pinpoint this important'missing' SF component, upcoming radio surveys, such as the Evolutionary Map of the Universe (EMU) survey (e.g. Norris, 2009; Norris et al., 2021) as well as novel selection criteria in the radio (e.g. Talia et al., 2021; Enia et al., 2022) could become even more important to our understanding of the SFRD at \(z>3\). The paper is structured as follows: in Sect. 2 we present the method and criteria that were employed in the selection of the original sample and the subsequent reduction of the sample to the high-redshift far-IR radio candidates. Section 3 briefly highlights the photometric and spectroscopic properties of the sample, along with the results obtained from various properties of the entire sample. In Sect. 4 we compare our work with previous studies and provide a brief discussion of the results, emphasising the importance of the merging population for SFRD estimations. Finally, we provide the conclusions of this study in Sect. 5 and discuss future prospects regarding our method. Throughout this paper, we assume a \(\Lambda\)-CDM cosmology using \(H_{0}=70\,kms^{-1}Mpc^{-1}\), \(\Omega_{\rm m}=0.3\), and \(\Omega_{\Lambda}=0.7\). 
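For reference, the adopted cosmology translates into the distance and angular scales used throughout the analysis; a minimal astropy sketch (illustrative only, with approximate output values noted in the comments):

```python
# Adopted cosmology: flat Lambda-CDM with H0 = 70 km/s/Mpc and Omega_m = 0.3.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

z = 3.3
print(cosmo.luminosity_distance(z))                          # ~29 Gpc
print(cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))   # ~7.5 kpc/arcsec
```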
## 2 Sample selection For our selection, we examined two of the most deeply explored extra-galactic fields that were observed under the projects Extended _Chandra_ Deep Field-South (ECDF-S; Lehmer et al., 2005) and Cosmic Evolution Survey (COSMOS; Scoville et al., 2007). In past years, these two fields have been extensively observed in a wide range of electromagnetic frequencies, ranging from X-rays to radio waves. Radio data were obtained from both fields at 1.4 GHz with the VLA (VLA-COSMOS; the flux density limit is 7 \(\mu Jy/beam\), and VLA-ECDF-S; the flux density limit is 7.4 \(\mu Jy/beam\); Schinnerer et al., 2010; Smolcik et al., 2017; Miller et al., 2013) at an angular resolution of \(\sim 2.5^{\prime\prime}\). The far-IR wavelength regime, which is necessary for our selection, is available through the _Herschel_ Multi-tiered Extragalactic Survey (HerMES; Oliver et al., 2012). This survey covered a total of 380 \(deg^{2}\) using the two main instruments on board _Herschel_, namely the Spectral and Photometric Imaging Receiver (SPIRE; Griffin et al., 2010; Nguyen et al., 2010, ; at 250, 350 and 500 \(\mu m\); the confusion noise level is \(\sim 5.8\), \(\sim 6.3\), and \(\sim 6.8\,mJy\), respectively) and the Photodetector Array Camera and Spectrometer (PACS; Poglitsch et al., 2010, ; at 100 and 160 \(\mu m\); the detection limits are \(\sim 7.7\) and \(\sim 14.5\,mJy\), respectively). The ECDFS and COSMOS fields are also included in the covered area of HerMES, comprising a total of \(\sim 2\,deg^{2}\). Previous studies using HerMES data have demonstrated the efficient use of _Herschel_'s far-IR fluxes for detecting high-redshift (\(z>3\)) dusty galaxies. For instance, Dowell et al. (2014) employed sim ple criteria for the increasing fluxes for the 250, 350, and 500 \(\mu m\) bands, which is indicative of a far-IR peak at longer (redshifted) \(>350\,\mu m\) wavelengths, revealing a dusty SF galaxy population with a mean redshift distribution of \(<z>\sim 4.7\). To identify potential high-redshift hybrid AGN-SFG systems, we conducted a cross-match analysis between the VLA COSMOS and ECDF-S 1.4 GHz catalogues with those of HerMES, applying a maximum separation of 18 arcsec (motivated by the beam size of _Herschel_ at 250 \(\mu m\) of 18.1 arcsec). The resulting cross-matched sample contained 2292 sources for COSMOS, and for the ECDF-S field is provided 769 sources. This sample was further reduced by considering robust radio sources (signal-to-noise ratio; S/N\(>\)5) without an indication for extended or multicomponent radio emission (according to each catalogue; for more details, see Lehmer et al. 2005; Scoville et al. 2007). To ensure the correspondence between radio and far-IR emission, we excluded from the sample sources that present multiple radio detections within one _Herschel_ beam. Furthermore, only sources with 250, 350, and 500 \(\mu m\) flux errors smaller than 50 % were accepted, which ensured that the shape of the far-IR emission was properly determined. This selection method resulted in 219 and 93 sources for COSMOS and ECDF-S, respectively. Furthermore, motivated by the efficacy of studies such as Dowell et al. (2014) and Donevski et al. (2018) in selecting high-\(z\) dusty SF galaxies, we adopted a similar method to identify sources with \(f_{350\mu m}/f_{250\mu m}>1\), which would translate into a potential \(z>3\) redshift. This selection method resulted in a sample of 53 and 30 galaxies (see Fig. 1) for the COSMOS and ECDF-S fields, respectively. 
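The selection steps described above can be summarised schematically. The sketch below is hypothetical: the table and column names (RA, DEC, snr, multi, F250/F350/F500 and their errors) are placeholders, and the removal of _Herschel_ beams containing multiple radio detections is omitted for brevity.

```python
# Hypothetical sketch of the radio/far-IR cross-match and selection, assuming
# astropy tables `radio` and `fir` with the placeholder columns named below.
import astropy.units as u
from astropy.coordinates import SkyCoord

radio_c = SkyCoord(radio['RA'], radio['DEC'], unit='deg')
fir_c = SkyCoord(fir['RA'], fir['DEC'], unit='deg')

idx, sep, _ = radio_c.match_to_catalog_sky(fir_c)
m = sep < 18 * u.arcsec                                  # Herschel 250 um beam size

m &= (radio['snr'] > 5) & (~radio['multi'])              # robust, single-component radio
for band in ('250', '350', '500'):
    m &= fir['e' + band][idx] / fir['F' + band][idx] < 0.5   # <50% flux errors

high_z = m & (fir['F350'][idx] > fir['F250'][idx])       # far-IR peak beyond 250 um
print(high_z.sum(), 'candidate high-z sources')
```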
The radio 1.4 GHz and far-IR (250 \(\mu m\)) images of each of these sources were visually inspected in order to secure the robustness of the matching process and identify potentially uncertain or confused radio/far-IR detections. We confirmed that all 83 selected sources display compact morphologies both in the radio and in the FIR, and that the detected counterparts are robust, with average separations between the radio and far-IR detections below \(\sim 2\) arcsec.

Figure 1: Flux density ratios of 350 and 250 \(\mu m\) vs the ratios of 500 and 350 \(\mu m\) of HerMES in COSMOS (panel a) and ECDF-S (panel b). The sample was selected by cross-matching the HerMES catalogue with the VLA-COSMOS and VLA-ECDFS data. The dashed black lines indicate \(f_{350}/f_{250}\)=1 and \(f_{500}/f_{350}\)=1, while the points surpassing the limit of \(f_{350}/f_{250}\)\(>\)1 are presented with larger markers. The colour bar corresponds to the 350 \(\mu m\) flux of each source, and the legend shows the number of sources for each panel and the selected high-redshift candidates (crosses). The black lines in the subregion of each panel provide information regarding the shape of the far-IR (FIR) emission. For instance, sources with \(f_{500}/f_{350}\)\(<\)1 and \(f_{350}/f_{250}\)\(>\)1 have a peak in flux between 250 and 500 \(\mu m\). The photometric properties of the sources presented in the legend are provided in Table 1.

### High-redshift candidates Application of these far-IR criteria resulted in a sample of \(53+30=83\) sources from both fields for which we retrieved possible photometric or spectroscopic identifications from the literature. For this exploration, we consulted Simbad (Wenger et al. 2000), the NED1 online search tools, and the Cosmos2015 (Laigle et al. 2016) online catalogue2, retrieving redshift values from sources that are within 2 arcsec of our candidates. Footnote 1: [https://ned.ipac.caltech.edu](https://ned.ipac.caltech.edu) Footnote 2: [https://irsa.ipac.caltech.edu](https://irsa.ipac.caltech.edu) From this analysis, we found 18 sources in the ECDF-S and 19 sources in the COSMOS field with spectroscopic redshift determination. Five of these radio sources present a spectroscopic redshift of \(z_{\rm spec}>3\), namely J033140.1-275631 (ECDF-S field; \(z_{\rm spec}\)=5.14; Suh et al. 2015), J100028.71+023203.7 (COSMOS field; \(z_{\rm spec}\)=4.76; Hasinger et al. 2018), [LKB2017] 214 (ECDF-S field; \(z_{\rm spec}\)=3.74; Iwasawa et al. 2012), COSMOSVLA J100256.53+021158.4 (COSMOS field; \(z_{\rm spec}\)=3.503; Paris et al. 2014), and [DSS2017] 554 (ECDF-S field; \(z_{\rm spec}\)=3.1988; Danielson et al. 2017). The reported spectroscopic redshifts of the two highest-\(z\) sources, however, are not reliable. For instance, the candidate with the highest redshift, found in the ECDF-S, reveals contradicting spectroscopic measurements. More specifically, the \(z_{\rm spec}=5.137\) value (derived from near-IR spectroscopy; Suh et al. 2015) contradicts Coppin et al. (2012) and Danielson et al. (2017), who estimated a lower spectroscopic value of \(z_{\rm spec}=1.617\) (with spectra from optical and far-IR wavelengths) for this candidate. Because a high-redshift nature of this source cannot be excluded, we further explored its available spectra. This investigation indicated that the emission line
responsible for the \(z_{\rm spec}\)=5.137 solution can be also interpreted as an emission line at \(z_{\rm spec}\)=1.617, which further supports the notion for the intermediate-redshift solution derived in the literature. Therefore, we considered this source as an intermediate-redshift galaxy and did not explore it further. Likewise, for J100028.71, inspection of its DEIMOS 10k survey spectrum (Hasinger et al. 2018) revealed a very low S/N spectrum without an obvious line detection that could explain the \(z=4.76\) indication. We assumed an erroneous redshift estimate caused by a possible false-positive of the automated feature recognition from the DEIMOS survey. This source is however kept in this work as a high-redshift source, prompted by a photometric redshift determination of \(z_{\rm phot}\sim 3.2\) by Miettinen et al. (2015). Inspection of the remaining three sources with a spectroscopic redshift higher than 3 revealed that the emission lines used to determine the redshifts of these sources have either a high S/N (i.e. S/N\(>10\)) or are denoted with a secure spectroscopic quality flag. This means that these measurements are accurate. Our search identifies 12 sources in the ECDF-S field and 34 sources in the COSMOS field based on photometric redshifts from the literature. These results are depicted in Fig. 2, where the histograms for the redshifts of each survey and the total values are plotted with different colours. This figure reveals that our selection favours radio sources around \(z\sim 2\) with a distribution that extends to higher redshifts. Four sources of this sample in the COSMOS field have photometric redshifts (identified in the Cosmos2015 catalogue) of \(z>3\) (i.e. COSMOSVLA J100233.16+020626.5 with \(z_{\rm phot}\)=3.016, COSMOSVLA J095845.94+024329.2 with \(z_{\rm phot}\)=3.172, COSMOSVLA J100028.75+023203.5 with \(z_{\rm phot}\)=3.175, and COSMOSVLA J100004.81+023045.2 with \(z_{\rm phot}\)=3.76). Similarly, in the ECDF-S sample, we identified one source with \(z_{\rm phot}\)\(>\)3, namely ALESS 303152.49-280319.1 (\(z_{\rm phot}\)=3.56; Wardlow et al. 2011). These five high-redshift candidates, along with the three sources with a spectroscopic redshift above 3, are presented along with their 1.4 GHz and far-IR HerMES flux densities in Table 1. We are particularly interested in sources with the highest redshifts. We therefore decided to further investigate the five sources for which the photo-\(z\) indicates a \(z>3\) nature. Follow-up observations of these sources aimed to improve the redshift determination (for the photometric redshift estimates), and a more detailed analysis of their properties was initiated, using mm observatories from the northern and southern hemispheres (for the COSMOS and ECDF-S fields, respectively), the first results of which are described in the following section. The comparison of our acquired spectroscopic/photometric redshift values for our sample with the corresponding values from alternative photometric catalogues in the literature is a valuable exercise. This comparison is presented in more detail in Appendix A and indicates that the COSMOS and the ECDF-S fields do not contain additional candidates at \(z>3\) that our analysis could have miss-identified at lower \(z\), and the \(z>3\) nature of the five sources with photo-\(z\) considered here is confirmed. 
Therefore, we are more confident that the sources that are presented in the following sections are the only ones with \(z_{\rm phot,spec}>3\) that could be selected with our selection criteria from the two fields of study.

Figure 2: Histograms presenting the number of sources for each redshift bin selected by the criteria for the COSMOS (empty histograms) and ECDF-S (filled histograms). The blue histograms correspond to spectroscopic values derived from the literature, and the orange histograms show the photometric values. The combined sample is depicted with an empty black histogram. The number of sources for each category is presented in the legend.

## 3 Results To characterise the morphological and physical properties of the eight high-redshift galaxies under investigation, a detailed examination of the available photometric and spectroscopic data from the literature was conducted. Simultaneously, follow-up observations were initiated, in particular, using millimetre facilities. The results are described below. During this analysis, a spectral energy distribution (SED) fitting analysis was performed for each galaxy in the sample using CIGALE (Noll et al. 2009; Serra et al. 2011). The resulting fits yield two values for each modelled physical property of a galaxy (e.g. stellar mass and redshift), one corresponding to the best-fit solution and another that is derived through a Bayesian statistical analysis. In the following text, we refer to the best-fit values when presenting the sample, while the Bayesian-derived estimates along with their corresponding uncertainties are introduced in Table 3. The images of the eight sources for four different photometric bands (optical, near-IR, radio, and mm wavelengths) are presented in Fig. 4. For each wavelength regime, the observations with the highest available spatial resolution were selected. ### SED fitting results The SED fitting analysis was conducted using the CIGALE software, employing consistent input models and parameters for the entire sample. The data fluxes from the Cosmos2015 catalogue were used in the analysis, while the radio information as well as any nebular emission component were not taken into consideration. For the sources that have been identified spectroscopically in the literature (i.e. CVLA1002, CVLA958, ALESS33, and ECDFS544), the redshift input was fixed to the corresponding spectroscopic value. Of the remaining galaxies, the redshifts of ALESS14 and CVLA1000 were fixed to the photometric values retrieved from the literature, while for CVLA100 and CVLA028 the best-fitting SED was selected by sampling redshifts in \(\Delta z=0.1\) intervals. The resulting fits, along with the model parameters used in this analysis, are presented in Fig. 3 and Table 2. To probe the significance of the AGN contribution, we applied the same fitting strategy using CIGALE without the AGN component. The updated SED fits revealed that for sources that initially required a lower AGN contribution (e.g. CVLA1000), the \(\chi^{2}\) values changed only marginally, by \(\sim 10\%\). On the other hand, for galaxies such as CVLA100, which presents a substantial AGN contribution, no accurate SED fit could be provided without an AGN component (\(\Delta\chi^{2}>3\)). Consequently, this exercise demonstrates the necessity of incorporating an AGN component in the SED fit for five out of eight sources, generally leading to improved fits as compared to those obtained without an AGN component.
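Throughout the source-by-source discussion below, radio-based SFRs are quoted for comparison with the SED-derived values. A minimal sketch of that estimate, assuming the Bell (2003) calibration for luminous sources and a synchrotron spectral index of 0.8 (the exact numbers quoted in the text may differ slightly owing to calibration details):

```python
# Sketch of the radio-based SFR: 1.4 GHz flux density + redshift -> L_1.4GHz -> SFR.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sfr_radio(flux_mjy, z, alpha=0.8):
    d_l = cosmo.luminosity_distance(z).to(u.m)
    s_nu = (flux_mjy * u.mJy).to(u.W / u.m**2 / u.Hz)
    l_14 = (4 * np.pi * d_l**2 * s_nu * (1 + z)**(alpha - 1)).to(u.W / u.Hz)
    return 5.52e-22 * l_14.value      # Bell (2003), valid for L >> 6.4e21 W/Hz

print(sfr_radio(0.104, 3.503))        # ~4.8e3 Msun/yr, same order as quoted for CVLA1002
```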
### Sample presentation

## COSMOSVLA J100256.53+021158.4

The deep multi-wavelength data of the COSMOS field provide the opportunity to examine the structure of this source, which appears to be composed of two galaxies: an optically bright quasar, and a fainter millimetre source (see the top left panel of Fig. 4). Spectroscopic data from the BOSS survey (Reid et al. 2016) secured the detection of the Ly-\(\alpha\) emission line at a redshift of \(z=3.503\pm 0.0007\). Because the BOSS capabilities cannot resolve the two sources under study here, this archival spectrum corresponds to both galaxies. However, considering that the main source (i.e. the quasar) is the brightest galaxy at optical wavelengths, we assume that the emission line and subsequently derived redshift correspond to this source. Furthermore, inspection of the 3 GHz data reveals that the quasar causes most of the radio flux density emitted from this system with a value of \(f_{\rm 3GHz-QSO}=0.052\pm 0.02\,mJy\) (\(\sim 80\,\%\) of the total), while the second source has a flux density of \(f_{\rm 3GHz-2nd\,source}=0.013\pm 0.009\,mJy\). When we assume a typical radio spectral index of 0.8 for both sources3, the corresponding radio flux density at 1.4 GHz is equal to \(f_{\rm 1.4GHz-QSO}=0.1\pm 0.04\,mJy\) and \(f_{\rm 1.4GHz-2nd\,source}=0.024\pm 0.017\,mJy\). Therefore, if both galaxies are located at \(z\sim 3.5\) and the detected radio emission was generated solely from SF, the corresponding SFR (Bell 2003) would be \(SFR_{\rm radio-QSO}=4246\pm 1633\,M_{\odot}/yr\) and \(SFR_{\rm radio-2nd\,source}=1061\pm 735\,M_{\odot}/yr\) for the quasar and SFG, respectively (\(SFR_{\rm radio-total}=5307\pm 1790\,M_{\odot}/yr\)). According to Lanzuisi et al. (2017), this system of galaxies has a total stellar mass of \(log(M_{\star}/M_{\odot})=11.69\pm 0.01\) and \(SFR_{\rm IR-total}\) of \(1549\,M_{\odot}/yr\). The latter value is derived from the far-IR flux integrated at the position of the quasar and is rather close to the best-fit SFR estimate of \(1721\,M_{\odot}/yr\) from our SED fitting. Curiously, this value is similar to the value derived from the radio for the second source in the system (i.e. \(SFR_{\rm radio-2nd\,source}\)). This is compatible with a system with a powerful quasar and a companion where most of the SF is taking place, and where no AGN is detected. Footnote 3: \(S_{\nu}\propto\nu^{-\alpha}\), where \(S_{\nu}\) corresponds to the flux density, \(\nu\) is the frequency, and \(\alpha\) is the radio spectral index, which is 0.8 for pure radio synchrotron emission. It has to be noted that for most bands, we cannot separate the brightness contribution from each of the two sources because of the sub-arcsec separation and the insufficient angular resolution of telescopes such as _Spitzer_. The SED fit was therefore performed using only total fluxes. A separation of the SED fitting analysis for the two sources would be non-trivial and would yield high uncertainties in the fluxes of each source. In the X-ray regime, COSMOS has been observed both with _XMM-Newton_ (_XMM_-COSMOS survey; Cappelluti et al. 2009) and the _Chandra_ X-ray Observatory (_Chandra_ COSMOS Survey and _Chandra_ COSMOS Legacy Survey; Elvis et al. 2009; Civano et al. 2016), reaching flux limits of \(f_{\rm 0.5-2keV}\sim 1.7\cdot 10^{-15}\,erg/s/cm^{2}\) and \(f_{\rm 0.5-2keV}\sim 2.7\cdot 10^{-16}\,erg/s/cm^{2}\), respectively.
The source CVLA1002 has been observed and detected by the _Chandra_ COSMOS survey with a flux of \(f_{\rm 2-10keV}=6.67\cdot 10^{-15}\,erg/s/cm^{2}\) (see Marchesi et al. 2016). ## 4 LESS J033152.49-280319.1 Based on the current photometric data, this source seems to be a single galaxy, and according to the SED photometric analysis by da Cunha et al. (2015), it has a photometric redshift of \(z\sim 3.38\), an \(SFR_{\rm IR}\) of \(1202\,\ M_{\odot}/yr\), and a stellar mass of \(10^{10.96}\,M_{\odot}\). For our photometric analysis, which takes the possibility of an AGN component into account, the estimated best SFR value has a similar value of \(SFR_{\rm IR}=1188\,\ M_{\odot}/yr\). Assuming that the galaxy is located at \(z=3.38\) and its radio emission is solely due to star formation, the corresponding SFR from the radio emission would be equal to \(SFR_{\rm radio}\sim 4213\pm 757\,M_{\odot}/yr\). A comparison between this value and the value derived from our SED fitting analysis or by da Cunha et al. (2015) suggests that the source hosts intense SF, and an accreting SMBH explains the clear radio excess that is observed. The X-ray energies have been observed in the CDF-S field with _XMM-Newton_ and the _Chandra_ observatories (for more details, see Ranalli et al. 2013; Luo et al. 2017) in deep surveys, resulting in sensitivities of \(6.6\cdot 10^{-16}\,erg/s/cm^{2}\) and \(2.7\cdot 10^{-17}\,erg/s/cm^{2}\) at 2-10 keV, respectively. However, ALESSI4 is not detected in any of these surveys, supporting the view that if an AGN exists, it contributes mostly to the radio emission, and at a much lower level at X-ray and optical/IR wavelengths. This was also suggested by the \(\sim 20\%\) AGN contribution to the IR emission from our CIGALE SED fit. \begin{table} \begin{tabular}{l c c c c c c c} Source name & Abbreviation & RA & DEC & \(f_{\rm 1.4GHz}\) & \(f_{\rm 250}\) & \(f_{\rm 350}\) & \(f_{\rm 900}\) \\ \(-\) & \(-\) & [\(deg\)] & [\(deg\)] & [\(mJy\)] & [\(mJy\)] & [\(mJy\)] & [\(mJy\)] \\ \hline CVLA J100256.53+021158.4 & CVLA1002 & 150.73553 & 2.19963 & \(0.104\pm 0.020\) & \(37.0\pm 3.4\) & \(39.7\pm 6.6\) & \(25.1\pm 5.1\) \\ ALESS J033152.49-280319.1 & ALESSI4 & 52.96871 & -28.05531 & \(0.103\pm 0.016\) & \(13.0\pm 6.2\) & \(27.7\pm 3.9\) & \(26.7\pm 4.5\) \\ CVLA J100004.81+023045.2 & CVLA1000 & 150.0200 & 2.51525 & \(0.096\pm 0.015\) & \(25.1\pm 6.3\) & \(27.8\pm 6.6\) & \(23.1\pm 5.3\) \\ CVLA J095845.94+024329.2 & CVLA958 & 149.69144 & \(2.72479\) & \(0.095\pm 0.018\) & \(27.4\pm 6.3\) & \(27.7\pm 6.6\) & \(28.7\pm 5.4\) \\ CVLA J100233.16+020626.5 & CVLA100 & 150.63817 & \(2.10736\) & \(0.088\pm 0.015\) & \(22.3\pm 6.3\) & \(24.0\pm 6.6\) & \(17.5\pm 5.3\) \\ CVLA J100028.75+023203.5 & CVLA028 & 150.11958 & \(2.53438\) & \(0.057\pm 0.015\) & \(28.6\pm 6.3\) & \(36.7\pm 6.6\) & \(29.3\pm 5.2\) \\ ALESS J033211.34-275211.9 & ALESS33 & 53.04738 & -27.87003 & \(0.043\pm 0.012\) & \(18.6\pm 6.2\) & \(25.6\pm 3.9\) & \(25.4\pm 4.7\) \\ [DSS2017] 554 & ECDFS544 & 53.42108 & -27.92731 & \(0.042\pm 0.014\) & \(24.5\pm 3.4\) & \(27.0\pm 3.9\) & \(23.8\pm 3.9\) \\ \hline \end{tabular} \end{table} Table 1: Names, abbreviations, equatorial coordinates, flux densities at 1.4 GHz, 250 \(\mu m\), 350 \(\mu m\) and 500 \(\mu m\) (along with their total error - instrumental and confusion), and angular diameter (derived from the 1.4 GHz emission) for our high-redshift (\(z_{\rm spec,phot}>3\)) radio candidates, ordered by \(f_{\rm 1.4GHz}\) (descending order). 
The second row of the table corresponds to the units of each parameter.

Figure 3: CIGALE SED fitting results for the eight galaxies of our sample. The redshift values for CVLA1002, CVLA958, ALESS33, ECDFS544, ALESS14, and CVLA1000 were fixed to those derived from the literature, and the redshift for CVLA100 and CVLA028 was estimated from a range of values separated by \(\Delta z=0.1\).

## COSMOSVLA J100004.81+023045.2

According to the HELP catalogues (Shirley et al. 2021), this source is located at a redshift of 3.43 with an \(SFR_{\rm IR}\) of 1519 \(M_{\odot}/yr\), while our SED fitting analysis predicts a lower \(SFR_{\rm IR}\) of 690 \(M_{\odot}/yr\). Based on the SED fitting results, modelling of the IR emission of CVLA1000 requires an AGN contribution. According to the Cosmos2015 and Cosmos2020 (Weaver et al. 2022) catalogues, the photometric redshifts are 3.26 and 3.62, respectively. When we accept an average redshift value of \(\sim 3.3\) for this source, then according to its radio flux density of \(f_{1.4\rm GHz}=0.096\,mJy\), the \(SFR_{\rm radio}\) should be \(\sim 3723\,M_{\odot}/yr\), which in comparison with the previous estimate from the IR emission strongly suggests an active SMBH at the centre of this galaxy. Due to its morphology in the near-IR, radio, and mm wavelengths, this source could be classified as a group of galaxies (see the third row of Fig. 4), however, catalogues with large samples of galaxies (i.e. Cosmos2015 and Cosmos2020) identify this galaxy as a single source due to the lack of resolution. With a radial extent of \(\sim 1.146\) arcsec, measured from the 1.6 \(\mu m\) NICMOS data, and a scaling factor of \(7.524\,kpc/\arcsec\) (at \(z=3.3\)), the actual size of this galaxy can be estimated as \(\sim 8.6\,kpc\). This value is compatible with an ultra-compact dwarf galaxy (e.g. Rodriguez et al. 2021), however, the images reveal a source that resembles a disk rather than a spherical object. Inspection of all the available images cannot exclude the possibility of a merging system, considering that the NICMOS data depict a structure that could resemble a complex of galaxies. No detection in the X-ray bands is reported in the literature for this galaxy. ## COSMOSVLA J095845.94+024329.2 Source CVLA958 was recently spectroscopically observed by our team with the Institut de Radioastronomie Millimetrique (IRAM) 30m antenna (project code: 227-19; for more details regarding these observations, see Appendix B). Unfortunately, its mm-faint nature prevented us from acquiring a secure redshift determination with the IRAM telescope. According to Cosmos2015, the photometric redshift of this galaxy is \(z=3.172\), while our photometric analysis suggests a minimum of \(\chi^{2}\) at 3.21. The latter redshift solution agrees with the only line that is detected from recent ALMA archival spectra (project code: 2019.1.01600.S). The detected emission line at \(\sim 110.5\,GHz\) has a full width at half maximum of \(\sim\)0.4 GHz (\(\sim 70.4\,km/s\)), possibly indicating a merging system or an orderly rotating spiral galaxy (see Chen et al. 2022, for more details). The recent work by Birkin et al.
(2021) also identified the emission line as due to the CO4-3 rotational transition at band 3 of ALMA, locating this source at \(z=3.1755\). According to our photometric SED fitting analysis, this source has a stellar mass of \(10^{1.07}\,M_{\odot}\), an SFR of 1177 \(M_{\odot}/yr\), and a significant AGN contribution. Considering the measured flux density at 1.4 GHz of 90 \(\mu Jy\), we estimate that its \(SFR_{\rm radio}\) (assuming that its radio emission comes purely from star formation) should be \(\sim 3376\,M_{\odot}/yr\). This value is significantly higher than the value determined from the SED fitting (see Table 3 for further details), supporting the presence of an AGN in this source as well. CVLA958 is not detected in any available dataset in the X-ray wavelengths. ## COSMOSVLA J100233.16+020626.5 The source CVLA100 seems to be a single galaxy as well. Our photometric analysis places this source at a redshift of \begin{table} \begin{tabular}{c c c c} Type & Model name & Reference & Parameters used \\ \hline Star formation history & sfhdelayed & (Ciesla et al. 2017) & age\_main = 1000, 2500, 4500, 6000, 8000 \\ & & & tau\_burst = 10000.0 \\ & & & age\_burst = 10, 50, 80, 110 \\ & & & f\_burst = 0.001, 0.01, 0.03, 0.1, 0.2, 0.3 \\ & & & sfr\_A = 10.100,100.0,1000.0,3,000.0 \\ \hline Stellar emission & bc03 & (Bruzual \& Charlot 2003) & imf = 1(Chabrier) \\ & & & metallicity = 0.0001,0.02 \\ & & & separation = 29 10 \\ \hline & & & Av\_BC = 0.3, 0.8, 1.2, 1.7, 2.3, 2.8, 3.3, 3.8 \\ Dust attenuation & dustatt\_2powerlaws & (Charlot \& Fall 2000) & slope\_BC = -0.7 \\ & & & BC\_to\_ISM\_factor = 0.3, 0.5, 0.8, 1.0 \\ & & & slope\_ISM = -0.7 \\ & & & filters = V\_B90, FUV \\ \hline Lyman Continuum escape & \multirow{2}{*}{lyc\_absorption} & \multirow{2}{*}{(Inoue 2011)} & f\({}_{\rm esc}\) = 0.0,0.5 \\ & & & f\({}_{\rm dust}\) = 0.0,0.5 \\ \hline & & & qpah = 0.47, 1.12, 2.5, 3.9 \\ Dust emission & dl2014 & (Dale et al. 2014) & qmin = 5.0, 10.0, 25.0 \\ & & & alpha = 2.0 \\ & & & gamma = 0.02,0.9 \\ \hline & & & r\_ratio=60.0 \\ & & & tau=1.0,6.0 \\ AGN & fritz2006 & (Fritz et al. 2006) & beta=-0.5 \\ & & & opening\_angle=100.0 \\ & & & psy=0.001 \\ & & & fracAGN\_escurg=0.0.0, 1.0,2.0,6,0.8 \\ \hline Redshift & redshifting & (Meiksin 2006) & redshift=0.01-7.5 (or fixed) \\ \hline \end{tabular} \end{table} Table 2: Model parameters of CIGALE that were used for the SED fitting. Figure 4: Multi-wavelength view of the eight high-redshift galaxies of our sample, ordered according to Table 1. From left to right, we present the i- or r-band (HST, VIMOS, and VOICE data), the _Spitzer_ 3.6\(\mu\)m/VIDEO Ks-band/NICMOS 1.6\(\mu\)m images, the radio VLA 1.4/3 GHz and the FIR data from ALMA and _Herschel_, and the red contours correspond to the VLA emission. From top to bottom, the contour levels of the radio image for each source are (\(7.715\cdot 10^{-6}\), \(2.093\cdot 10^{-5}\)), \(2.827\cdot 10^{-5}\), (\(-4.710\cdot 10^{-6}\), \(1.058\cdot 10^{-5}\)), \(5.640\cdot 10^{-6}\), (\(-4.845\cdot 10^{-6}\), \(5.461\cdot 10^{-6}\), \((-5.078\cdot 10^{-6}\), \(1.407\cdot 10^{-5}\)), (\(-4.633\cdot 10^{-6}\), \(1.124\cdot 10^{-5}\), \(2.712\cdot 10^{-5}\)), and \(2.092\cdot 10^{-5}\) Jy/beam. Each panel has a size of 7.2x7.2 _arcsec\({}^{2}\)_, except for the _Herschel_ image, which for visualisation purposes has a larger extent. Additionally, the beam size (shaded light grey area) of each instrument and a 2 arcsec scale bar are added in each panel. \(z_{\rm phot}\sim 3.3\). 
This result agrees with the estimates of other photometric catalogues (see Laigle et al. 2016; Weaver et al. 2022). ALMA spectroscopic data acquired by our team (project code: 2017.1.01713.S; see Appendix B for more details) reveal an emission line at \(\sim 107.3\,GHz\), which corresponds to the CO(4-3) emission line with the galaxy located at \(z=3.29\). For the derived physical properties of this source, our SED fitting analysis indicates a stellar mass of \(10^{10.27}\,M_{\odot}\), an SFR of \(566\,\,M_{\odot}/yr\), and an AGN fraction of \(60\,\,\%\). Application of equation 6 by Bell (2003) provides us with an estimate of the \(SFR_{\rm radio}\) of \(2760\)\(M_{\odot}/yr\). The SED fitting models for this redshift suggest an SFR of \(\sim 500\,M_{\odot}/yr\), which could mean that the observed 1.4 GHz radio flux cannot be obtained solely from SF, but rather from a combination of star formation and AGN activity (see Table 3 for additional information). Similarly to CVLA958, this source is not detected at any high-energy band. ## COSMOSVLA J100028.75+023203.5 As mentioned in the previous section, this source (which seems to be a single galaxy) has a spectroscopic redshift of \(4.76\) (see Hasinger et al. 2018), however, inspection of the DEIMOS spectrum does not reveal a secure emission line (i.e. the Ly-\(\alpha\) line, according to the authors). No further spectroscopic data are available in the literature. Our own photometric SED fitting analysis suggests a redshift of \(3.27\), which is consistent with the redshift provided by the HELP database and Miettinen et al. (2015). When we assume this photo-\(z\) to be the true redshift of this galaxy, our SED fit analysis proposes a stellar mass of \(10^{10.74}\,M_{\odot}\), an SFR of \(2554\,\,M_{\odot}/yr\), and no AGN contribution. The derived SFR from the radio emission of this galaxy (i.e. \(2166\,\,M_{\odot}/yr\)) further supports the argument that no radio emission is added by SMBH activity. This result establishes CVLA028 as the only galaxy in our sample without an indication of an AGN by the SED fit or radio excess. Although this galaxy has been observed by both _XMM-Newton_ and _Chandra_, the images at all bands reveal no clear detection. ## ALESS J033211.34-275211.9 This source is recognised in the literature as an X-ray AGN with faint radio emission. Photometric analysis from various deep surveys (e.g. Guo et al. 2020) reveals a redshift of \(z\sim 3.7\), while ALMA spectroscopic data (see Birkin et al. 2021) show two clear emission lines (namely the CO(4-3) and [CI] lines at \(\sim 98.22\,GHz\) and \(\sim 104.85\,GHz\), respectively). This secures the galaxy at \(z=3.694\). Assuming this spectroscopic redshift and that the radio flux density of \(0.043\,mJy\) originates entirely from star formation, the corresponding SFR is \(\sim 1884\,M_{\odot}/yr\). However, according to Guo et al. (2020), following the SED fitting method using CIGALE, this galaxy is a type II AGN with a stellar mass of \(\sim 10^{11.3}\,M_{\odot}\), an SFR of \(\sim 31\,M_{\odot}/yr\) (our CIGALE SED fitting analysis suggests a higher value of \(1194\,\,M_{\odot}/yr\)), and an AGN fraction of \(72\,\,\%\) (our analysis proposes a lower value of \(10\,\%\)). The source detected in the HST image (see Fig. 4), which is rather close to the ALMA detection (\(\sim 0.5\,arcsec\)), seems to cause the radio and IR emission (see Wardlow et al. 2011).
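As a quick consistency check (illustrative, not part of the original analysis), the two line identifications for ALESS33 follow from \(z=\nu_{\rm rest}/\nu_{\rm obs}-1\), taking rest frequencies of 461.04 GHz for CO(4-3) and 492.16 GHz for [CI](1-0); the same check applies to the CO(4-3) detections of CVLA958 and CVLA100 above.

```python
# Consistency check of the quoted line identifications: z = nu_rest / nu_obs - 1.
lines = {'CO(4-3)': (461.0408, 98.22), '[CI](1-0)': (492.1607, 104.85)}   # GHz
for name, (nu_rest, nu_obs) in lines.items():
    print(name, round(nu_rest / nu_obs - 1, 3))   # both give z ~ 3.694 for ALESS33
```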
Therefore, this source could be a complex system of at least two galaxies possibly interacting, with the presence of a radio-quiet AGN and an SFG. Even though ALESS33 is not detected in any of the X-ray surveys, another source lies at a distance of \(\sim 2\) arcsec (named LBX2017 \(-\) 14; see Ranalli et al. 2013, for more details) that seems to be an X-ray bright AGN with a flux of \(f_{\rm 2-10keV}=6.26\cdot 10^{-15}erg/s/cm^{2}\) and no counterparts. This X-ray source was spectroscopically identified by Ranalli et al. (2013) at \(z=3.74\pm 0.06\), meaning that it could be part of a merging system along with the sub-millimetre and radio galaxy identified with the source ALESS33. ## 5[DSS2017] 554 This source is present in the VIMOS spectroscopic survey presented by Danielson et al. (2017) with a spectroscopic redshift of \(3.1988\), which was used in this work as a fixed value for the subsequent SED fitting analysis. This galaxy is placed at the outskirts of the ECD-S field, meaning that it was not selected in the photometric analysis by Guo et al. (2020). Inspection of the VST Optical Imaging of the CDFS and ES1 (VOICE; Vaccari et al. 2016) data for this source in bands r, g, and i (see the bottom row of Fig. 4) suggests the possibility of multiple components. These observations might signify a merging or interacting system. The predicted SFR derived solely from the radio emission is \(1520\,\,M_{\odot}/yr\), while our SED fitting analysis suggests a value of \(1162\,\,M_{\odot}/yr\) and an AGN contribution of \(20\%\). No clear detection has been reported regarding the possible X-ray nature of this source. \begin{table} \begin{tabular}{c c c c c c c c c} Name & Redshift\({}^{(1)}\) & Stellar Mass & SFR\({}_{\rm IR}\) & SFR\({}_{\rm radio}\) & AGN \({}^{(2)}\) & L\({}_{\rm 1.4GHz}\) & L\({}_{\rm IR}\)\({}^{(3)}\) & \(q_{TIR}^{(4)}\) \\ \(-\) & \(-\) & \([M_{\odot}]\) & \([M_{\odot}/yr]\) & \([M_{\odot}/yr]\) & \([\%]\) & \([10^{24}W/Hz]\) & \([10^{39}W]\) & \(-\) \\ \hline CVLA1002 & \(3.503\pm 0.001^{(a)}\) & \(11.62\pm 0.44\) & \(2496\pm 1266\) & \(5307\pm 1790\) & \(41\pm 22\) & \(8.34\pm 1.60\) & \(3.15\pm 0.01\) & \(2.00\pm 0.08\) \\ \(\rm{ALESS14}\) & \(3.380\pm 0.25^{(p)}\) & \(11.07\pm 0.85\) & \(1123\pm 142\) & \(4213\pm 757\) & \(20\pm 1\) & \(7.63\pm 1.37\) & \(2.19\pm 0.20\) & \(1.88\pm 0.09\) \\ CVLA1000 & \(3.430\pm 0.32^{(p)}\) & \(12.13\pm 1.30\) & \(711\pm 40\) & \(4057\pm 792\) & \(10\pm 2\) & \(7.35\pm 1.44\) & \(2.15\pm 0.25\) & \(1.89\pm 0.10\) \\ CVLA958 & \(3.175\pm 0.001^{(a)}\) & \(11.19\pm 0.44\) & \(1225\pm 546\) & \(3382\pm 641\) & \(20\pm 4\) & \(6.13\pm 1.16\) & \(0.58\pm 0.02\) & \(1.40\pm 0.11\) \\ CVLA100 & \(3.290\pm 0.39^{(p)}\) & \(10.36\pm 0.52\) & \(528\pm 96\) & \(3391\pm 781\) & \(60\pm 3\) & \(6.14\pm 1.41\) & \(1.98\pm 0.32\) & \(1.93\pm 0.10\) \\ CVLA028 & \(3.270\pm 0.24^{(p)}\) & \(11.13\pm 0.30\) & \(1565\pm 753\) & \(2167\pm 602\) & \(3\pm 7\) & \(3.92\pm 1.09\) & \(2.52\pm 0.22\) & \(2.23\pm 0.13\) \\ \(\rm{ALESS33}\) & \(3.694\pm 0.001^{(a)}\) & \(11.27\pm 0.81\) & \(978\pm 113\) & \(2140\pm 597\) & \(10\pm 1\) & \(3.88\pm 1.08\) & \(2.20\pm 0.01\) & \(2.18\pm 0.12\) \\ ECDFS544 & \(3.199\pm 0.001^{(a)}\) & \(11.42\pm 0.80\) & \(1083\pm 470\) & \(1520\pm 507\) & \(20\pm 3\) & \(2.75\pm 0.92\) & \(2.01\pm 0.01\) & \(2.29\pm 0.15\) \\ \hline \end{tabular} \({}^{(1)}\)(\(\equiv\)) spectroscopic redshift: (\(p\)); photometric redshift (accepting the solution with the lowest \(z\) at the available in the literature); \({}^{(2)}\): the AGN 
percentage (Bayesian values) is estimated by our CIGALE fitting and corresponds to the same wavelength range in which the dust luminosity of each source was estimated; \({}^{(3)}\) this infrared luminosity corresponds to the wavelength range \(42\)-\(122\,\mu\)m (rest frame); \({}^{(4)}\) according to equation 5 by Enia et al. (2022): \(q_{\rm TIR}=\log(L_{\rm IR}/3.75\cdot 10^{12})-\log L_{\rm 1.4GHz}\). \end{table} Table 3: Derived and observed properties of the galaxy sample. The SFR values derived from the IR emission are a product of the SED fitting described in the text, while the radio-derived SFRs are obtained by assuming that the radio emission is solely due to star formation without contribution by an AGN. The second row of the table gives the units of each parameter. ## 4 Discussion We have examined a sample of eight high-redshift dusty galaxies with an AGN component (\(3<z_{\rm spec,phot}<4\)). The selection process involved using the 1.4 GHz VLA radio catalogues from the COSMOS and ECDF-S fields in combination with the far-IR peak criterion (\(f_{350\mu{\rm m}}/f_{250\mu{\rm m}}>1\)) using the HerMES survey. Moreover, a compact morphology was considered as an additional criterion. In the following paragraphs, we assess the efficacy of these criteria in selecting these sources. Furthermore, we compare the derived properties of our sample with those obtained in previous studies, focusing on similar redshift ranges. ### Validity of the selection criteria While the compactness criterion merely serves as a method for avoiding local extended galaxies, the far-IR and radio criteria have been employed extensively in the past in the study of the distant Universe. For instance, Amblard et al. (2010) demonstrated that the redshift of a source tends to increase for galaxies that satisfy the criteria \(f_{350\mu{\rm m}}/f_{250\mu{\rm m}}>1\) and \(f_{500\mu{\rm m}}/f_{350\mu{\rm m}}>1\). These results are expected, considering that the far-IR emission of an SED traces the dust content of a galaxy, and more specifically, its blackbody radiation, which is shifted to longer wavelengths with increasing redshift. However, this emission is affected by the temperature of the dust as well, which consequently introduces a degeneracy that is non-trivial to break. Another point of attention is the sensitivity of the _Herschel_ surveys, which are inherently biased towards the detection of the brightest dust-rich far-IR galaxies. In order to evaluate this argument and selection method further, we employed the HELP catalogue, which contains far-IR observational data (i.e. 15747 _Herschel_-detected galaxies) along with SED fitting results from CIGALE (for a detailed description of the CIGALE parameters used in this catalogue, see Malek et al. 2018). The galaxies from this catalogue are presented in the left panel of Fig. 5, in which the blue and orange histograms correspond to the sources that follow the criteria \(f_{350\mu{\rm m}}/f_{250\mu{\rm m}}<1\) and \(f_{350\mu{\rm m}}/f_{250\mu{\rm m}}>1\), respectively. As anticipated, galaxies that adhere to the far-IR criterion are situated at higher redshifts on average. Interestingly, based on the CIGALE fits, a higher percentage (i.e. \(\sim 45\%\)) of the galaxies that follow the \(f_{350\mu{\rm m}}/f_{250\mu{\rm m}}\geq 1\) criterion exhibit SFRs above \(10^{2}\,M_{\odot}/yr\). In contrast, for the rest of the sources, a smaller percentage (\(\sim 22\%\)) demonstrates such high SFR values. 
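As an illustration of how such a split can be performed on a HELP-like catalogue (a toy sketch only; the column names and the synthetic values below are hypothetical and do not reproduce the actual HELP numbers):

```python
# Toy sketch: split a catalogue by the far-IR colour criterion f350/f250 > 1
# and compare how often SFR exceeds 100 Msun/yr in each subset.
# Column names and the synthetic catalogue are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cat = pd.DataFrame({
    "f250_mjy": rng.lognormal(2.5, 0.5, 1000),
    "f350_mjy": rng.lognormal(2.4, 0.6, 1000),
    "sfr_cigale": rng.lognormal(3.5, 1.5, 1000),   # M_sun / yr
})

rising = cat["f350_mjy"] / cat["f250_mjy"] > 1.0
for label, sel in (("rising far-IR SED", rising), ("non-rising", ~rising)):
    frac = (cat.loc[sel, "sfr_cigale"] > 1e2).mean()
    print(f"{label:18s}: {sel.sum():4d} sources, {frac:.0%} with SFR > 100 Msun/yr")
```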
A similar exercise was repeated for the radio criterion by estimating the AGN fraction and dust mass, derived from the CIGALE fits, for each galaxy. Because the HELP catalogues do not consider the radio emission in their fits, we cross-matched this catalogue with Cosmos2015, which includes the 1.4 GHz VLA data. The results of this analysis are displayed in the right panel of Fig. 5, which shows histograms of the dust mass. The shape and mean values of these histograms indicate that the radio selection criterion does not guarantee the preferential selection of highly dusty SFGs. However, upon examining galaxies at redshifts above 3, it becomes evident that the radio selection method identifies a higher proportion of sources (i.e. \(\sim 22\pm 9\%\)) with a substantial AGN fraction (i.e. \(AGN_{\rm frac}>40\%\)) compared to cases without detected radio emission (\(\sim 15\pm 4\%\)). While this percentage difference may not be statistically significant, the combination of the two far-IR and radio selection methods still provides motivation for their application in identifying high-redshift SFGs with an AGN component. Figure 5: Normalised redshift histograms of all galaxies from the HELP catalogue (panel a), separated with the criterion \(f_{350\mu{\rm m}}/f_{250\mu{\rm m}}=1\). Flux densities were estimated using the _Herschel_ observational data as provided in the same catalogue. The vertical dotted coloured lines correspond to the average values of the same histograms, employing the flux densities from the CIGALE best model fit, and are coloured accordingly. The text in the panel denotes the percentage of the sources that present an SFR above \(10^{2}\,M_{\odot}/yr\) and is coloured according to the aforementioned criterion. Panel b illustrates the normalised histograms for the dust content of the galaxies in both the HELP and Cosmos2015 catalogues. The blue and orange histograms are derived from galaxies without and with radio VLA 1.4 GHz emission, respectively. The text of the panel gives the percentage of the galaxies that present an AGN component (\(AGN_{\rm frac}>40\%\)) and are placed at \(z>3\). The detailed analysis of the eight sources in this sample has reinforced these findings and encourages their application in large-scale surveys, such as EMU. ### Comparison with previous work Overall, our sample has a mean redshift of \(<z>=3.37\pm 0.16\), an average stellar mass of \(<log(M_{\star}/M_{\odot})>=11.27\pm 0.47\), \(<L_{\rm IR,42-122\mu m}>=(2.1\pm 0.67)\cdot 10^{39}\,W\) (total IR luminosity of \(<L_{\rm IR,8-1000\mu m}>=(3.9\pm 0.84)\cdot 10^{39}\,W\sim 10^{13}\,L_{\odot}\)), \(<L_{\rm 1.4GHz}>=(5.77\pm 1.9)\cdot 10^{24}\,W/Hz\), and a mean star formation rate (derived from the \(42-122\,\mu m\) rest-frame IR luminosity of each source) of \(<SFR_{\rm IR}>=1213\pm 567\,M_{\odot}/yr\). This average SFR is consistent with previous results that reported an increasing SFR at higher redshift. For instance, Rowan-Robinson et al. (2016) and Gao et al. (2022) showed that _Herschel_-detected samples of hyperluminous infrared galaxies (HLIRGs; \(L\sim 10^{12}-10^{13}\,L_{\odot}\)) at \(z>3\) have SFR values ranging between \(\sim 1000-10000\,M_{\odot}/yr\). Therefore, the inclusion of the 500 \(\mu m\) emission in our selection criteria naturally provides us with the bright tail of the SFR distribution at these redshifts. Similarly, Wang et al. 
(2019) reported that the selection of _Herschel_ galaxies above \(z>3\) revealed dusty SFR galaxies with infrared luminosities of \(\sim 10^{12}-10^{13}\,L_{\odot}\), similar to the derived values of our sample. Furthermore, for 87.5 % of the sources in our sample (seven out of eight), an AGN is indicated either by the radio excess or by the mid-IR signature obtained from the best SED fit. Although the morphology of these sources still remains uncertain, five out of eight (63 %) might consist of multiple galaxies, with significant levels of both SF and AGN activity. In comparison, Gao et al. (2021) and Wang et al. (2021), who analysed a sample of HLIRGs selected using data from _Herschel_ and the Deep Fields of the LOFAR Two-metre Sky Survey (LOTSS; Shimwell et al. 2017). described an AGN contamination of 34-64 % (using six different models to fit the AGN component) for galaxies at the redshift range \(3<z<4\). These values, although model dependent, seem to be lower than the AGN contribution of our sample, which can be attributed to the fact that the LOTSS survey reaches sensitivities of 20-30 \(\mu Jy/beam\) at 150 MHz. When we assume a typical radio synchrotron slope, this value translates into a lower flux density value at 1.4 GHz than the COSMVLA sensitivity (i.e. 7 \(\mu Jy/beam\)). Therefore, it seems that by selecting radio sources from a deeper radio survey, the authors revealed a population of SF galaxies that receive contributions from AGN less frequently. However, it is important to emphasise that a comparison of our results with studies involving larger sample sizes should be approached with caution, considering the limited statistics that our data provides. Additionally, Enia et al. (2022) reported a negligible percentage of AGN for the entire sample (\(0<z<6\)), which was selected using deep radio 1.4 GHz observations from the GOODS-N field (sensitivity of 2.2 \(\mu Jy\)). This can be perceived with the parameter \(\rm q_{\rm TIR}=log(L_{\rm IR}/3.75\cdot 10^{12})-logL_{\rm 1.4GHz}\), which is commonly used as an indicator of the presence of AGN contribution to the radio emission. Figure 6 depicts the \(q_{TIR}\) values for our sample and those of Enia et al. (2022), demonstrating the AGN contamination in the radio emission in most of our galaxies. In more detail, 62.5 % of the sources in our sample have lower values of \(q_{\rm TIR}\) than the grey region of the plot (which denotes the average redshift-dependent \(q_{\rm TIR}\)). These statistics do not fully match the number of 87.5 % of AGN contamination that was derived from the SED fitting and described earlier in the text. However, this discrepancy might originate from the low AGN contribution and uncertainties in the measurements of the radio and infrared fluxes of the SED of some galaxies. In contrast, the sample by Enia et al. (2022) for the redshift bin of \(3<z<4\) indicates an AGN contamination of 37.5 %. This difference can be attributed to the fact that in their selection criteria, they did not require the far-IR emission (or better, far-IR peak spectrum) that we employed. As a result, only 38.4 % of their total sample has a 500 \(\mu m\)_Herschel_ detection. Another comparison can be conducted with the \(q_{TIR}\) relation derived by Delvecchio et al. (2021) and presents a mass-dependent and almost redshift-independent equation (i.e. \(q_{TIR}(M_{\star},z)=(2.646\pm 0.024)(1+z)^{-0.023\pm 0.008}-(logM_{\star}/M_{ \odot}-10)(0.148\pm 0.013)\)). 
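As a quick numerical illustration of how these two quantities compare for our sample (a sketch using only the central values of the relations quoted above, with all uncertainties ignored):

```python
# Sketch of the q_TIR diagnostic and of the mass- and redshift-dependent
# reference relation of Delvecchio et al. (2021), as quoted in the text.
import numpy as np

def q_tir(l_ir_w, l_14_w_hz):
    """q_TIR = log10(L_IR / 3.75e12 Hz) - log10(L_1.4GHz); L_IR in W, L_1.4GHz in W/Hz."""
    return np.log10(l_ir_w / 3.75e12) - np.log10(l_14_w_hz)

def q_tir_delvecchio(logmstar, z):
    """Central values of q_TIR(M*, z) from Delvecchio et al. (2021)."""
    return 2.646 * (1 + z)**-0.023 - (logmstar - 10) * 0.148

# Rough sample averages from Sect. 4.2:
# <L_IR> ~ 2.1e39 W, <L_1.4GHz> ~ 5.8e24 W/Hz, <log M*> ~ 11.3, <z> ~ 3.4
print(q_tir(2.1e39, 5.8e24))        # ~1.98, below the star-forming expectation
print(q_tir_delvecchio(11.3, 3.4))  # ~2.37 expected for a purely star-forming galaxy
```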
According to this relation, all of the galaxies in our sample satisfy the criterion, placing them below the mean of the SFR population. However, none of the sources in our sample follows the relation \(log(L_{\rm 1.4GHz}/SFR_{\rm IR})=21.984\times(1+z)^{0.013}\) that was derived earlier by Delvecchio et al. (2017) and corresponds to the 3\(\sigma\) deviation from the peak distribution of the ratio of \(L_{\rm 1.4GHz}\) and \(SFR_{\rm IR}\) as a function of redshift. In the same manner, Novak et al. (2017), which selected sources from the VLA 3GHz survey without a far-IR emission requirement, similarly presented the trend of a lower contamination of AGN (23 % and 47 % of the total sample and \(3<z<4\) redshift bin, respectively) than in our sample. This comparison favours the inclusion of the criterion of a far-IR rising spectrum to the radio emission when AGN at high redshifts are to be identified and studied. In contrast, an interesting comparison can be performed with studies such as that by Bonzini et al. (2015), in which the authors presented a sample with galactic properties similar to those of our galaxies. This sample was derived from the radio 1.4 GHz ECDF-S catalogues (6 \(\mu Jy\) sensitivity) with the addition of the _Herschel_ detection in all sources at the redshift bin of interest (i.e. \(z=3-4\)). Fifteen of the 28 galaxies presented in this redshift range were categorised as radio-quiet (RQ) AGN while one as a radio-loud (RL) AGN, totalling 16 galaxies, or an AGN contamination of 73 %. This value is close to the value derived from our sample, which further supports the necessity of including the far-IR criterion to detect high-\(z\) AGN. Additionally, the right panel of Fig. 6 shows that our galaxies fall in the RQ region of their work, even though our estimates indicate that at least two of our sources should have a significant AGN contribution. In this sense, our selection method might provide an increased probability for the detection of dusty galaxies at high-\(z\) that present high SFRs with an additional AGN component, with weak radio emission, as part of a merging system. Additionally, our galaxies seem to be a more extreme group of interacting systems, having brighter IR and radio luminosities as well as stellar masses and SFRs as compared, for instance, to the work by Talia et al. (2021). This outcome might originate from the fact that their sample, once again, did not require a far-IR _Herschel_ detection and was originally derived from the 3 GHz - COSMOS survey, which has a higher detection limit than the VLA-1.4 GHz-COSMOS survey we employed. Nevertheless, Novak et al. (2017) also explored the same field at 3 GHz and reached the same flux density sensitivity (rms of 2.3 \(\mu Jy\)), from which they achieved more AGN detections. The main difference lies in their selection criteria, with the latter work requiring an optical/near-IR counterpart, while Talia et al. (2021) excluded these sources. The selection of radio sources with a far-IR peak spectrum and an optical counterpart might therefore reveal merging systems that at high \(z\), present high SFRs (\(\sim 10^{3}\,M_{\odot}/yr\)) and an AGN counterpart in most cases. Nevertheless, selecting galaxies that are considered as ex treme in terms of mass and luminosity will naturally limit the study to a rare sample, hence to a reduced number of selected sources. 
This infrequency, along with the complexity of the adopted selection method, will unavoidably provide a less accurate estimation of the SFRD at \(z>3\), which can be only considered as a lower limit (following the \(V_{\rm max}\) method introduced by Avni & Bahcall 1980), and for our study, this is equal to \(SFRD>8\cdot 10^{-3}\ M_{\odot}/yr/Mpc^{3}\). However, these selection criteria might be quite powerful in the aforementioned identification and in studying the interplay between AGN and SF activity at high redshift, especially with their application to the upcoming next-generation radio surveys (e.g. the EMU survey) and current/future observational data from mm facilities (e.g. Bing et al. 2023). ## 5 Conclusions In this work, we present the exploration of a sample of a high-redshift (\(3<z<4\)) radio galaxy population that was selected with the criterion of a rising far-IR spectrum (i.e. \(f_{\rm 550\mu m}>f_{\rm 250\mu m}\)) along with the detection of non-extended (\(<\)10 arcsec) radio emission at 1.4 GHz. The aim behind this selection method was to investigate the potential of combining these two criteria in identifying high-redshift dusty SFGs that are contaminated by an AGN and might be 'hidden' by other selection methods. A population like this, due to its extremity and low occurrence, might not contribute significantly to the SFRD of the Universe, but it can provide us with further insight into the interplay between SMBHs and SF activity at high \(z\). In order to explore the morphology of our sample and its physical characteristics in depth, we took advantage of the multi-wavelength deep data available in the literature from the COSMOS and ECDF-S fields. These data sets enabled us to identify that five out of eight galaxies in our sample might consist of more than one galaxy with possible interaction/merging. It has to be noted that the low angular extent of these galaxies did not permit confirmation of this statement due to the spatial resolution of the current instrumentation. Nevertheless, future observations with telescopes such as the _James Webb_ Space Telescope (_JWST_) may provide further insights into this argument. Furthermore, we determined the presence of an AGN component in seven out of eight of these sources, which in most cases represents a weak radio emission, identifying these sources as radio-quiet AGN. An additional highly star-forming component seems to be present in all cases, with a typical SFR of \(\sim 10^{3}\ M_{\odot}/yr\). We have demonstrated that the combination of the far-IR and radio emission criteria is highly efficient in selecting merging systems at \(3<z<4\), typically with the presence of a radio-quiet AGN and a highly SFG. With upcoming wide radio surveys (e.g. the Evolutionary Map of the Universe, EMU) and next-generation telescopes such as the SKA, these results motivate further exploration of alternative selection methods. In order to advance the study of systems similar to those in our sample, particularly during the epoch of reionisation (EoR), sources with a mm rise in flux in combination with deep radio surveys might further be studied. A possible rising mm spectrum profile, which could be explored for faint individual sources using interferometric facilities (e.g. ALMA and NOEMA) and single-dish Figure 6: The \(q_{TR}\) parameter over redshift for the sample explored by Enia et al. (2022) and for our 8 galaxies (panel a). This figure was retrieved by Enia et al. 
(2022), published under the CC BY 4.0 international licence and was modified by adding the results from our work (square points). The dashed black line and grey shaded region correspond to the best-fit relation of their total sample and the standard deviation, respectively, while the dashed blue line shows a similar relation by Delhaize et al. (2017). All points in Enia et al. (2022) are coloured according to the contribution of AGN to the radio emission, defined as \(f_{\rm AGN}=1-10^{\,q_{\rm TIR}-q_{\rm SF}}\), with \(q_{\rm TIR}\) and \(q_{\rm SF}\) corresponding to the measured \(q_{\rm TIR}\) and to its expected evolution with redshift, respectively. The star symbols depict what is defined by Enia et al. (2022) as H-dark sources (i.e. not detected at the _Hubble_-WFC3 H-band). Panel b presents the SFRs derived from the IR and radio luminosity for the 8 sources of our sample, along with the results by Bonzini et al. (2015) (radio-quiet AGN: blue squares, radio-loud AGN: red triangles, SF galaxies: green circles) and Talia et al. (2021) (average values of their sample). All data points with redshift below 3 are coloured with transparency, while the ones with \(z>3\) have normal colours. instruments (e.g. the NIKA2 camera of the IRAM 30m antenna, or the ToLTEC of the Large Millimeter Telescope) targeting a brighter population from larger surveys could assist in the identification of the redshifted dust component of SFGs at \(z>6\). Similarly, the morphology of these galaxies could be further understood with infrared facilities, including the _JWST_. ###### Acknowledgements. This work was supported by Fundação para a Ciência e a Tecnologia (FCT) through the research grants PTDC/FIS-AST/29245/2017, 1220UID/FIS/04/34/2019, UIDB/04434/2020, and UIDP/04434/2020. C.P. acknowledges support from DL 57/2016 (P2460) from the 'Departamento de Física, Faculdade de Ciências da Universidade de Lisboa'. This paper makes use of ALMA data. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. This research made extensive use of core PYTHON packages for Astronomy as well as TOPCAT. We thank the anonymous referee for all the valuable comments, which improved the quality of the paper.
2308.00599
QualityBLE: A QoS Aware Implementation for BLE Mesh Networks
Bluetooth Low Energy (BLE) Mesh is widely recognized as a driver technology for IoT applications. However, the lack of quality of service (QoS) in BLE Mesh, represented by packet prioritization, significantly limits its potential. This work implements a quality-of-service (QoS) method for BLE Mesh to prioritize the data packets and provide them with different network transmission settings according to their assigned priority. Unlike existing works on QoS for BLE Mesh, our proposed approach does not require any modifications to the BLE Mesh protocol and can be smoothly adopted in existing BLE Mesh networks. We conducted an extensive measurement campaign to evaluate our solution over a 15-node BLE Mesh network deployed to emulate a smart healthcare scenario where 45 sensors with an assigned priority transmit information over the network. The experiments provide performance results for single and multi channel network scenarios. The obtained results validate our solution, showing the difference between the established priorities and providing insights and guidelines to conduct further research on QoS over BLE Mesh and broadcast-based networks.
Jimmy Fernandez Landivar, Pieter Crombez, Sofie Pollin, Hazem Sallouha
2023-08-01T15:23:02Z
http://arxiv.org/abs/2308.00599v1
# QualityBLE: A QoS Aware Implementation for BLE Mesh Networks ###### Abstract Bluetooth Low Energy (BLE) Mesh is widely recognized as a driver technology for IoT applications. However, the lack of quality of service (QoS) in BLE Mesh, represented by packet prioritization, significantly limits its potential. This work implements a quality-of-service (QoS) method for BLE Mesh to prioritize the data packets and provide them with different network transmission settings according to their assigned priority. Unlike existing works on QoS for BLE Mesh, our proposed approach does not require any modifications to the BLE Mesh protocol and can be smoothly adopted in existing BLE Mesh networks. We conducted an extensive measurement campaign to evaluate our solution over a 15-node BLE Mesh network deployed to emulate a smart healthcare scenario where 45 sensors with an assigned priority transmit information over the network. The experiments provide performance results for single and multi channel network scenarios. The obtained results validate our solution, showing the difference between the established priorities and providing insights and guidelines to conduct further research on QoS over BLE Mesh and broadcast-based networks. ## 1 Introduction The Internet of Things (IoT) applications space is constantly increasing, covering many areas, from home, transportation, and agriculture to healthcare and industry [1, 2]. As a result, the number of IoT devices is rising, setting a diverse range of requirements on quality-of-service (QoS), power consumption, coverage, and spectral efficiency [2]. Bluetooth Low Energy (BLE) is one of the most popular IoT technologies which suffices IoT applications with a limited number of nodes. In order to scale up BLE networks, a multi-hop-based mesh BLE standard was released in 2017 [3]. The BLE Mesh standard, which is built on top of the BLE lower protocol layers, uses BLE's advertising/scanning states to broadcast messages to other devices, allowing them to communicate in a mesh structure [3]. A BLE Mesh network can theoretically include up to 32767 devices, and a BLE Mesh packet can perform up to 127 hops to reach its destination [4]. These characteristics positioned BLE Mesh as a favorable technology for several applications where network power consumption and scalability, in terms of the number of devices and range, are more critical than latency [5]. These applications include home automation [3], healthcare [5], and smart infrastructures [6]. While BLE Mesh meets the connectivity requirements of these applications, it sends all packets without any prioritization, failing to provide any QoS needed in modern IoT networks. For instance, in modern smart hospitals where patients have wearable devices, transmitting important vital signs or nurse calls must have a higher priority than an asset tag that sends a control signal or a command to turn on the illumination system. The use of IoT in the area of healthcare applications has recently attracted considerable research focus. The work presented in [7] showed the benefits of IoT in healthcare, highlighting the necessity of supporting multiple types of packet priorities in the network in big health infrastructures such as hospitals. In [5], the authors demonstrated IoT networks' essential role in monitoring applications in general and for healthcare in particular. 
However, the work was focused on BLE implementations only and did not address the scalability problem faced in massive deployments such as in the case of hospitals. The urgent need for a QoS implementation that can be integrated in the BLE Mesh standard motivates our work in this paper, which is, to the best of our knowledge, an open research question. This paper proposes a QoS implementation for BLE Mesh networks, which can be integrated into the network while staying in line with the BLE Mesh standard. In particular, our QoS implementation uses only 1-Byte of the packet sequence number (SEQ) field on the network Protocol Data Unit (PDU) structure to incorporate a priority class to every packet. Different network transmission parameters are automatically set for each priority on the nodes. We evaluate the performance of our solution in a 15-node BLE Mesh testbed deployed in a full building floor, emulating a smart healthcare infrastructure use case. Our main contributions are summarized as follows: * We introduce a QoS implementation method for the BLE Mesh protocol stack. Specifically, this method adds a priority class (up to 256 values) to the network layer PDU. It is associated with various network transmission configurations that modify the Transmission Power, Time to Live (TTL), Number of rebroadcasts, and packet Advertising Interval. This implementation enables the possibility to differentiate the network transmission services for every priority class. * An extensive measurements campaign is performed on a 15-node (nRF52 System-On-Chip and Zephyr RTOS-based) BLE Mesh network. Each of our nodes virtualizes three sensors, representing a 45-sensor network in total. The rest of this paper is organized as follows: Section 2 defines the related work and state-of-the-art. Section 3 presents the BLE Mesh networking process. Section 4 describes the proposed method for QoS implementation in BLE Mesh. Section 5 details the data collection and experiment campaign. Finally, sections 6 and 7 present the performance evaluation results, conclusions, and future work. ## 2 Related Work Since the BLE Mesh standard's official release, several works have studied its performance regarding scalability, reliability, and latency. The author in [8] implemented a low-cost BLE Mesh testbed with multiple configurable nodes, including the possibility to test flooding and route-based mesh protocols. The work in [9] presented the results of massive experimentation on BLE Mesh, showing results related to the network performance, latency, and size in an environment with coexistence of other wireless technologies. The performance of a BLE Mesh network composed of heterogeneous devices deployed in an area of 1100 m\({}^{2}\) was tested and analyzed in [10]. The work in [11] suggested methods to improve BLE Mesh and its node features by applying self-organizing network concepts to the BLE Mesh node roles. The authors in [12] suggested deployment strategies and node configuration parameters in order to improve the performance of BLE Mesh. However, none of these aforementioned works introduced a QoS solution for BLE Mesh network as they mainly focused on a signle traffic class. Only a few existing works addressed the QoS problem in BLE Mesh. These works either require frequent manual configurations [13], propose different strategies for different network configurations [14], or make changes on the inner default functions on the protocol itself [15]. 
Moreover, a framework for introducing QoS in BLE Mesh by varying some features of the radio standard, such as the backoff time mechanism and channel selection, is proposed in [15]. However, solutions modifying critical elements in the standard limit the possibility of their adoption on a large scale, where following a standard protocol is a key requirement. The urgent need for a QoS implementation in BLE Mesh networks, that stays in line with the technology protocol motivates our work in this paper. ## 3 BLE Mesh Background This section presents the principal characteristics of the BLE Mesh communication process, network architecture, and protocol stack. ### Elements, models, and node roles Packets in BLE Mesh are sent using a controlled flooding protocol where the nodes broadcast the packets using BLE's advertising and scanning channels. BLE Mesh nodes receive the packets and rebroadcast them until reaching the destination or traverse all nodes in the mesh network. This mechanism is called _relaying_ and allows the packets to do multiple hops between devices by being _relayed_. The number of hops a packet can do is controlled by a parameter called Time to Live (TTL) [1, 3]. The TTL decreases with every hop made by the packet, limiting the number of hops a BLE Mesh packet can perform. When TTL reaches 0, the packet will not be relayed [1]. The BLE Mesh network integrates different types of nodes that can send, relay, and receive messages [1]. The standard defines four types based on their role: First, _The Relay Nodes_ can receive and re-transmit messages. Second, _The Proxy Nodes_ enables the communication between BLE devices and the mesh network. Third, _The Low Power (LP) Nodes_ are in energy-saving mode until they awake and communicate with the mesh network through the Friend nodes. Fourth, _The Friend Nodes_ enables the communication between the LP nodes and the mesh network, receiving and sending information collected from the LP nodes and relaying it to the other nodes. Every BLE Mesh node supports the four roles, and they can be set by software according to the network design. Figure 1 depicts the BLE Mesh network architecture. The BLE Mesh nodes can have one or more roles, and every node has a unicast address. In their architecture, the nodes can have multiple constituent parts that establish communication with the mesh network jointly or independently. These parts are called _elements_[16]. A node can have one or more elements, and every element has a different unicast address. A group of nodes including different elements can have a group address [1]. The functions of every element to interact with the mesh network are defined by a structure called a _model_[4, 16], as illustrated in Figure 1. The models communicate with different nodes using mesh messages. The messages are part of the BLE Mesh packet and are transmitted by nodes from and to an address. These addresses could be unicast or group addresses [1]. A node can only send messages to an address it has been registered to publish and receive them only if it has subscribed. Each message is defined by a unique _opcode_ that allows communication between nodes in the mesh network, identifying the type of information and assigning a model to process it [16]. ### BLE Mesh protocol stack architecture The BLE Mesh protocol comprises seven layers on top of the BLE protocol stack. Here, the BLE's physical and link layers establish radio communication through advertising and scanning channels 37, 38, and 39 [4]. 
From the bottom to the top, the seven different layers and their functions are: 1. _The bearer layer_ composed of the advertising bearer and the GATT bearer. The advertising bearer is the one that uses the BLE functions of scanning and advertising to send and receive packets exclusively for BLE Mesh. 2. _The Network Layer_ is in charge of sending the encrypted transport PDUs to the bearer layer, adding the addresses and extra packet information to be sent. The relay process is carried out here. If a packet is received and if it belongs to the node, it goes to the upper layer; otherwise, it is discarded or relayed. 3. _The Lower Transport Layer_ sends the PDU to the lower transport layer in the peer devices. Here, the data is segmented if necessary. 4. _The Upper Transport Layer_ is where the application data is encapsulated and encrypted to be sent to the lower transport layer. 5. _The Access Layer_ verifies the data to be encrypted and sent it to the transport layer. It also verifies that the data coming from the transport layer is correct to be sent to the upper foundation models' and models' layers. 6. _The Foundation Models' Layer_ is where the models and their different states are interpreted and defined. 7. _The Models Layer_ is the one that associates the different models and states with the user applications. ## 4 QoS Implementation for BLE Mesh In this section, we introduce our novel QoS implementation for BLE Mesh to enable priority and differentiate services for the data packets flooding the network. Our method is performed over the nRFConnectSDK1 BLE Mesh protocol stack of the Zephyr RTOS 2. In the following, we detail our approach and the functions we implemented in the BLE Mesh stack to enable QoS in BLE Mesh networks. Footnote 1: [https://www.nordiesemi.com/](https://www.nordiesemi.com/) Footnote 2: [https://www.zephyrproject.org/](https://www.zephyrproject.org/) ### Adding the priority class to mesh packets The proposed method enables QoS by adding the priority value to the mesh packets every time a model sends a message. Here, the unique opcode of each model is used to define the packet's priority. This opcode is sent as an argument through the access and transport layer functions to the network layer, where the opcode is translated to a priority class of 1-byte size, allowing a maximum of 256 priority classes. Once the priority value is received on the network layer, it is added as another field of the network PDU. The extra 8 bits of the priority class are allocated into the sequence number (SEQ) field. Here the 24 bits of the SEQ field are reduced to 16, and the remaining 8 bits are assigned to the priority class. Figure 2 shows the BLE Mesh network PDU structures with the priority field. It is important to note that the 16 bits assigned to the SEQ number provide enough packet counts to not compromise the security of the BLE Mesh standard against replay attacks [4]. If necessary, the SEQ field size could be increased to 20 bits. ### Configuring parameters for QoS The configurable transmission parameters represent a group of parameters located in the different structures of the BLE Mesh protocol. The parameters can be set to optimize the network performance and give personalized transmission characteristics to BLE Mesh packets. Table 1 summarizes each parameter and its range of values. 
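To make the two ingredients of Sections 4.1 and 4.2 concrete, the following sketch shows one possible way of packing an 8-bit priority class into the 24-bit SEQ field and of mapping each class to a set of transmission parameters. This is an illustration only, not the nRF Connect SDK code used in this work; in particular, placing the priority in the most significant byte of SEQ and the exact parameter values are assumptions made for the example (in the spirit of Tables 1 and 2).

```python
# Illustrative sketch only: pack an 8-bit priority class into the upper byte of the
# 24-bit network-PDU SEQ field (16 bits remain for the sequence counter) and look up
# per-priority transmission settings. Values below are assumed for illustration.
PRIORITY_PARAMS = {
    1: {"ttl": 7, "adv_interval_ms": 20,  "tx_power_dbm": 4,   "n_rebroadcast": 2},
    2: {"ttl": 5, "adv_interval_ms": 100, "tx_power_dbm": -8,  "n_rebroadcast": 2},
    3: {"ttl": 3, "adv_interval_ms": 200, "tx_power_dbm": -20, "n_rebroadcast": 2},
}

def pack_seq(seq_counter: int, priority: int) -> int:
    """Return the 24-bit SEQ field as [8-bit priority | 16-bit sequence counter]."""
    assert 0 <= priority <= 0xFF and 0 <= seq_counter <= 0xFFFF
    return (priority << 16) | seq_counter

def unpack_seq(seq_field: int) -> tuple[int, int]:
    """Recover (priority, sequence counter) from the packed SEQ field."""
    return (seq_field >> 16) & 0xFF, seq_field & 0xFFFF

# A relay node would recover the priority before re-broadcasting (cf. Algorithm 1)
# and apply the corresponding transmission settings:
prio, counter = unpack_seq(pack_seq(seq_counter=0x1A2B, priority=2))
print(prio, counter, PRIORITY_PARAMS[prio])
```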
When a new node is added to the BLE Mesh network, the network transmission parameters, the addresses, and different publish/subscription rights are set during a process called _provisioning_. After this process, only the _number of rebroadcasts_, _advertising interval_, and _TTL_ can be changed by executing manual runtime configurations from an external order of the provisioner node. In contrast, the _Transmission Power_ cannot be changed during or after the provisioning process. Thus, changing the network parameters from an external order each time a new packet is sent would be effort- and time-inefficient for the source nodes and impossible for the relay nodes. Accordingly, the first step of our solution is to vary these parameters for each packet directly at the nodes, enabling the possibility of having network transmission settings for each packet according to its given priority. \begin{table} \begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|} \hline **Parameter** & **Description** & **Values** \\ \hline Number of Rebroadcast Repetitions (N.Rep) & Number of times a packet is rebroadcast by a network node & 0 to 1000 \\ \hline Advertising Interval (Adv. Interval) & Time between advertising intervals & 20 ms to 10.24 s \\ \hline Time to Live (TTL) & The max number of hops a packet can perform & 0 to 127 \\ \hline Transmission Power (Tx Power) & The transmission power of the BLE radio connection* & (4, 0, -8, -20, -40) dBm \\ \hline \multicolumn{4}{p{85.4pt}}{*Changing the transmission power is a feature in the Nordic nRF52832 System-On-Chip.} \\ \end{tabular} \end{table} Table 1: BLE Mesh transmission parameters Figure 1: BLE Mesh network architecture. Figure 2: QoS priority class allocated into the sequence field (SEQ) in the BLE Mesh network-layer protocol data unit (PDU). ### Transmitting packets with priority over BLE Mesh networks The Packet priority class determines the network transmission parameters for the source and relay nodes in the mesh network. When a packet is transmitted for the first time, the source node elements transmitting it uses a model for every priority. This model sets the initial network transmission parameters and the message opcode according to the priority. As explained in Subsection 4.1, the packet is broadcast with the priority field added at the network layer. For each packet received at a relay node, the network layer determines if this packet is addressed to the node unicast address or to the group address and needs to be relayed to the next node. In the second case, if the packet needs to be relayed, the implemented functions on the network layer determine its priority, set the correct node transmission parameters, and forward the packet to the next node. The BLE Mesh packets without a priority (default) are set with second-priority transmission parameters. Algorithm 1 summarizes the relay process with our QoS implementation. ``` 1:while True do 2: newPacket \(\leftarrow\) receivePacket() 3: packetAddress = getAddress(newPacket) 4: if packetAddress == nodeAddress then 5: toTransportLayer() 6:else if packetAddress == groupAddress then 7: toTransportLayer() 8: iftoRelays == True then 9: checkPriority() 10: setNetworkPara () 11: relay() 12: else 13: discard() 14:endif 15:endif 16:endwhile ``` **Algorithm 1** Network-Layer packet relay process with QoS. ## 5 Experimental setup and data collection To test the performance of BLE Mesh with the proposed QoS implementation, we deployed an experimental BLE Mesh network. 
This section presents its implementation and the phases of the experiment campaign. ### BLE Mesh experimental network The experimental BLE Mesh network is deployed on a full floor of an office building with a total area of around 400 meters. Here, 15 BLE Mesh nodes are placed in equal distance distribution to represent a smart healthcare sensor network. Every node represents a sensor device generating unsegmented 11-byte BLE Mesh data packets and pushing them in the network. This network setup comprises 15 Nordic Semiconductor nRF52DK development kits with the nRF52832 System-On-Chip3. The nodes are programmed using version 1.8.0 of the nRF Connect SDK with Zephyr RTOS real-time operating system and version 5.2 of the Bluetooth core specification. For the data collection process, 6 Raspberry Pi 4B4 (RPi) single-board computers are placed next to the nodes. The RPis are synchronized using Network-Time-Protocol. They collect the nodes' information through a serial connection and upload it to a server. Figure 3 shows the experimental network. Footnote 3: [https://www.nordicsemi.com/products/nrf52832](https://www.nordicsemi.com/products/nrf52832) Footnote 4: [https://www.raspberrypi.com/products/raspberry-pi-4-model-b/](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/) The nodes are provisioned and set with the relay feature, enabling them to send, receive and relay messages. Each node has three elements, each one with a unicast address. Each element represents a sensor in our experiments, and each of the three sensors will have a different network configuration. This way, the source nodes provide three priority classes, one for each sensor. Table 2 summarizes the network transmission characteristics for each priority. ### Experiment campaign Our experiment campaign consists of two experiments. For every experiment, the data is collected by a Raspberry Pi at the source and the destination nodes. Table 3 illustrates the structure of the database table. The two experiments are detailed as follows: 1. _Experiment 1: Single-channel scenario:_ This experiment aims to evaluate the BLE Mesh network performance in a limited congestion network scenario. Therefore, the designated source node is A, the destination node is H, and the other 13 mesh nodes work as receivers and relays (See Figure 3). A sends to H 6000 unsegmented data packets with random priorities with a packet generation rate of 2 seconds. 2. _Experiment 2: Multi-channel scenario:_ Experiment two aims to evaluate the BLE Mesh network performance in a relatively high congestion network scenario. To this end, we defined 2 traffic channels. The first channel's source node is A, and the destination node is H. The second channel's source node \begin{table} \begin{tabular}{|p{113.8pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Priority** & **N. Rep** & **TTL** & **Adv. Interval** & **Tx Power** \\ \hline \hline 1 - Element 1 - Sensor 1 & 2* & 7 & 20 ms & 4 dBm \\ \hline 2 - Element 2 - Sensor 2 & 2* & 5 & 100 ms & -8 dBm \\ \hline 3 - Element 3 - Sensor 3 & 2* & 3 & 200 ms & -20 dBm \\ \hline \multicolumn{6}{p{113.8pt}}{*The BLE Mesh standard recommends setting the _Number of Rebroadcasts (N.rep)_ to 2 [17].} \\ \end{tabular} \end{table} Table 2: BLE Mesh QoS priority list Figure 3: BLE Mesh experimental network deployment. is N, and the destination node is G (See Figure 3). The other 11 nodes work as receivers and relays. 
Each source node sends 6000 unsegmented data packets with random priorities and a generation rate of 2 seconds. ## 6 Results This section summarizes the most important insights obtained from our experiment campaign. Three Key Performance Indicators (KPIs) are defined to evaluate the performance of our QoS implementation. * _The Packet Delivery Time (PDT)_ measures the elapsed time in milliseconds of a message from the source to the destination. * _The Packet Delivery Rate (PDR)_ indicates the delivered packets over the total number of packets sent. * _The Number of Hops (N.Hops)_ counts the number of hops that a packet performed to reach its destination. ### _Experiment 1: Single-channel scenario._ In this experiment, the three defined priorities can be clearly distinguished. Figure 4 shows the empirical cumulative distribution function (eCDF) for the PDT corresponding to each priority. As shown in the figure, 80% of the first priority data packets were delivered in less than 20 ms. For the second and third priorities, the same 80% of the packets were delivered in less than 100 ms and 300 ms, respectively. These results confirm that our QoS implementation shows a clear difference in terms of PDT between the three priorities considered. Table 4 presents the detailed statistics of the experiment results, clearly showing the difference in performance between the three adopted priority classes. It is worth noting that this experiment analyzes the ideal case, in which the traffic is only generated in one channel without interference from other sources or channels. This ideal traffic scenario is also validated with the PDR analysis. As shown in Table 4, the first and second priorities have a PDR of 1, and the third priority has a PDR of 0.905. The number of hops is also related to the assigned priority, with a minimum average of 1.882 and 3 hops for the first and third priorities, respectively. For the third priority, the PDR (0.905) according to the number of hops can be interpreted as a border effect of the experiment and used to define the correct TTL limit values for each application. ### _Experiment 2: Multi-channel scenario._ The three defined priorities can be clearly distinguished in this experiment for channel 1 and channel 2. The first important point to note is that the average PDT values are higher for the two channels of this experiment (See Table 5). Second, there are differences in the performance between channels 1 and 2. Figures 5 and 6 show the PDT eCDFs of both channels, where for the first priority, the difference between channels in average PDT is 22.536 ms, and for the second and third priorities, the difference is 31.133 and 98.821 ms, respectively. Table 5 shows the detailed statistics of the second experiment's results. As shown in the table, the PDR remains 1 for both channels' first and second priority. However, for the third priority, channel 2 performs 2.3% better. For both channels, the number of hops gradually increases depending on the priority, e.g., in channel 2, from 1.643 to 2.939 hops executed by the first and third priority, respectively. The results of experiment 2 also validate the effectiveness of our QoS implementation. It shows the differences between the channels and the differences between priorities in a more realistic network scenario, where more than one channel is exchanging data packets. ## 7 Conclusions and Future Work This work presented a QoS implementation for BLE Mesh that assigns a priority class to each packet and maps it to different network transmission settings. In both experiments, the PDR remained at 1 for the first and the second traffic classes. 
The PDT also confirmed the priority differentiation, presenting values of approximately 20 ms and 325 ms for the first and third priority classes, respectively, showing a gap of approximately 300 ms between them. Moreover, the number of hops is also associated with the priorities, varying from 1.8 to 3 hops from the first to the third priority. Exploring machine-learning-based parameters selection to improve the parameters selection process for the different QoS classes in BLE Mesh networks is a potential future work. ## 8 Acknowledgments The present work has received funding from the European Union's Horizon 2020 Marie Sklodowska Curie Innovative Training Network Greenedge (GA. No. 953775). The work of Hazem Sallouha was funded by the Research Foundation Flanders (FWO), Postdoctoral Fellowship No. 12ZE22N.
2307.07473
Constraints on the Abundance of PBHs from X-ray Quasar Microlensing Observations: Substellar to Planetary Mass Range
We use X-ray observations of quasar microlensing (sensitive to smaller compact objects than in the optical) to study the possible presence of a population of low mass black holes (from $\sim$ $10^{-3}M_{\odot}$ to $10^{-1}M_{\odot}$) in lens galaxies. We compare these observations with microlensing magnification simulations of a mixed population of stars and black holes (BHs) plus a smooth matter component. We estimate the individual mass fractions of both, stars and BHs, for three different BH masses in the range of substellar to planetary masses. Our Bayesian analysis indicates that the contribution of BHs is negligible in the substellar mass range but that a population of BHs of planetary mass (M $\lesssim$ $10^{-3}M_{\odot}$) could pass unnoticed to X-ray microlensing. We provide new upper limits to the contribution of BHs to the fraction of dark matter based on both, the quasar microlensing data in the X-ray band, and our previous estimates in the optical of intermediate-mass BHs with an additional upper limit at $M=3M_{\odot}$.
A. Esteban-Gutiérrez, E. Mediavilla, J. Jiménez-Vicente, J. A. Muñoz
2023-07-14T16:54:41Z
http://arxiv.org/abs/2307.07473v1
Constraints on the Abundance of PBHs from X-ray Quasar Microlensing Observations: Substellar to Planetary Mass Range ###### Abstract We use X-ray observations of quasar microlensing (sensitive to smaller compact objects than in the optical) to study the possible presence of a population of low mass black holes (from \(\sim 10^{-3}M_{\odot}\) to \(10^{-1}M_{\odot}\)) in lens galaxies. We compare these observations with microlensing magnification simulations of a mixed population of stars and black holes (BHs) plus a smooth matter component. We estimate the individual mass fractions of both, stars and BHs, for three different BH masses in the range of substellar to planetary masses. Our Bayesian analysis indicates that the contribution of BHs is negligible in the substellar mass range but that a population of BHs of planetary mass (M \(\lesssim 10^{-3}M_{\odot}\)) could pass unnoticed to X-ray microlensing. We provide new upper limits to the contribution of BHs to the fraction of dark matter based on both, the quasar microlensing data in the X-ray band, and our previous estimates in the optical of intermediate-mass BHs with an additional upper limit at \(M=3M_{\odot}\). (Primordial Black Holes -- gravitational lensing: micro -- X-rays) ## 1 Introduction Since the first evidences of dark matter in galaxies and clusters of galaxies in the last century, the astrophysical community is still searching for a plausible candidate. Several possibilities have been proposed from different disciplines including elementary particles (Feng et al. 2010), new types of interacting dark matter (e.g. Salucci et al. 2020) or faint compact objects in the halos of galaxies (Alcock et al. 2000), but none of them have yet provided any strong evidence. The discovery by the LIGO/Virgo gravitational wave (GW) collaborations of mergers of BHs in the \(10-50M_{\odot}\) mass range (Abbott et al. 2019a, 2019b, 2021a, 2021b, The Ligo Scientific Collaboration et al. 2021a, 2021b) reopened the interest on primordial black holes (PBHs), theoretically predicted to be formed in the radiation-dominated era (Hawking 1971, Carr 1975) and postulated as a suitable dark matter candidate (Sasaki et al. 2018, Clesse & Garcia-Bellido 2018). Nonetheless, those stellar-mass PBHs are not the only acceptable possibility, but a larger mass range (\(\sim 10^{-12}-10^{3}M_{\odot}\)) should be considered according to models of PBH formation (Carr & Kuhnel 2020, Carr et al. 2021b), with a particular focus on the promising mass window from \(1M_{\odot}\) down to \(10^{-6}M_{\odot}\) (Carr et al. 2021a). Gravitational microlensing, and specifically microlensing of lensed quasars, are ideal astrophysical phenomena to analyze the abundance of compact objects in lens galaxies (Chang & Refsdal 1979, Schneider et al. 2006, Jimenez-Vicente et al. 2015a). During recent years, the fraction of PBHs in the 1-100\(M_{\odot}\) mass range and its possible contribution to dark matter have been strongly constrained by galactic microlensing (Alcock et al. 2001, Tisserand et al. 2007, Griest et al. 2014, Zumalacarregui & Seljak 2018, Wyrzykowski & Mandel 2020, Verma & Rentala 2022, Blaineau et al. 2022), quasar microlensing (Hawkins 2011, Mediavilla et al. 2017, Hawkins 2020, Esteban-Gutierrez et al. 2020, Esteban-Gutierrez et al. 2022a, Esteban-Gutierrez et al. 2022b, Hawkins 2022), GWs (Kavanagh et al. 2018, Vaskonen & Veermae 2020) and galactic radio/X-ray emission (Inoue & Kusenko 2017, Manshanden et al. 2019) studies. 
The majority of those works have estimated low mass fractions for PBHs, which could therefore constitute only a small fraction of the total dark matter content (see, nevertheless, Hawkins 2011, 2020, 2022). On the other hand, the low-mass range (\(M\ll 1M_{\odot}\)) has also been explored using galactic microlensing (e.g. OGLE1, EROS2, HSC3), GWs (e.g. LIGO4, Virgo5) or pulsars (e.g. NANOGrav6). While the majority of these works found very low bounds to the PBH abundances in the substellar or subsolar mass regime (Tisserand et al. 2007, Oguri et al. 2018, Abbott et al. 2018, Niikura et al. 2019, Smyth et al. 2020, Chen et al. 2020, Chen & Huang 2020, Abbott et al. 2022, The Ligo Scientific Collaboration et al. 2022), there are others that claim the possibility that PBHs in the \(\sim 10^{-12}-100M_{\odot}\) mass range (Dror et al. 2019) or in the asteroid to planetary mass range (Miller et al. 2021, 2022, Domenech & Pi 2022) could account for an important fraction of dark matter. Footnote 1: The Optical Gravitational Lensing Experiment, [http://ogle.astrouw.edu.pl](http://ogle.astrouw.edu.pl) Footnote 2: Exépience pour la Recherche d’Objets Sombres, [http://eros.in2p3.fr](http://eros.in2p3.fr) Footnote 3: Hyper Suprime-Cam, [https://www.naoj.org](https://www.naoj.org) Footnote 4: Laser Interferometer Gravitational-Wave Observatory, [https://www.ligo.caltech.edu](https://www.ligo.caltech.edu) Footnote 5: The Virgo interferometer, [https://www.virgo-gw.eu](https://www.virgo-gw.eu) Footnote 6: North American Nanohertz Observatory for Gravitational Waves, [https://nanograv.org](https://nanograv.org) If we want to explore small mass PBHs using quasar microlensing of lensed galaxies, we need to use very compact sources, with sizes smaller than the Einstein radius of the considered microlens masses. This can be achieved by using observations in the X-ray band, which could be sensitive to the effect of the smallest microlenses (e.g. Pooley et al. 2007, Jimenez-Vicente et al. 2015a, 2015b). Thus, this work aims to add constraints to the dark matter fraction in form of compact objects in the substellar to planetary mass range from X-ray observations of quasar microlensing. The article is organized as follows. Section SS2 describes the data used for this analysis and the details of the specific parameters used in the microlensing simulations. The statistical tools applied to compute the probabilities are also defined in this section. In Section SS3, the main results found for the PBH abundances in the planetary to substellar mass range are presented. In Section SS4 we show and discuss our estimates of the contribution of PBHs to the fraction of dark matter in the context of previous works. Finally, the conclusions are outlined in Section SS5. ## 2 X-ray Data and Methodology We use the X-ray differential microlensing magnifications collected by Jimenez-Vicente et al. (2015b, see their Table 1) from the fluxes reported by Schechter et al. (2014) extracted from quasar observations by Pooley et al. (2007) and Blackburne et al. (2011). The macro model magnifications (\(\mu_{i}\)) provided by Schechter et al. (2014) are used as an unmicrolensed baseline. Therefore, the microlensing magnification between an image, \(i\), and the reference image, \(j\), of each system is given by: \[\Delta m_{ij}=m_{i}-m_{j}-(\mu_{i}-\mu_{j})=(\Delta m_{i}-\Delta m_{j}), \tag{1}\] Our selection consists then of a total of 30 quasar image pairs seen through 10 lens galaxies. 
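For reference, equation (1) amounts to the following simple operation on each image pair (a minimal sketch; the magnitudes and macro magnifications below are hypothetical placeholders, with the macro magnifications already expressed in magnitudes):

```python
# Minimal sketch of Eq. (1): differential microlensing magnification of image i
# relative to reference image j, after removing the macro-model contribution.
# The numbers are hypothetical placeholders, not entries of the actual sample.
def delta_m(m_i, m_j, mu_i, mu_j):
    """Observed magnitude difference minus the macro-lens prediction (Eq. 1)."""
    return (m_i - m_j) - (mu_i - mu_j)

# e.g. observed magnitudes and macro magnifications (all in mag):
print(delta_m(m_i=19.0, m_j=18.5, mu_i=-2.0, mu_j=-1.5))  # -> 1.0 mag of microlensing
```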
To compare with these observations, we generate microlensing magnification maps produced by a mixture of stars and BHs, in surface mass density fractions \(\alpha_{stars}=\kappa_{stars}/\kappa\) and \(\alpha_{BH}=\kappa_{BH}/\kappa\), respectively, as we did in Esteban-Gutierrez et al. (2022b). The mass of the stars is fixed to \(0.2M_{\odot}\), which is a representative value of the mean mass of the old stellar population in lens galaxies (see Poindexter & Kochanek 2010, Jimenez-Vicente & Mediavilla 2019). The lowest mass for the BHs that we are able to probe is limited by the typical source size for the X-ray source of \(\sim 1\) lt-day (see Jimenez-Vicente et al. 2015b and references therein). Thus, the lower bound for the BH mass is taken to be the mass with an Einstein radius of size 1 lt-day7, that is, \(M_{BH}=0.0024M_{\odot}(\sim 2.49M_{J})\). On the other side, the upper bound for the BH mass is that of the stars, as we are not able to distinguish BHs from stars above this limit. We therefore take as upper limit the Hydrogen Burning Limit (HBL) of \(M_{BH}\sim 0.08M_{\odot}\). This way, we explore a BH mass range between the lowest suitable mass and the HBL in a logarithmic grid of masses as \(M_{BH}/M_{\odot}=0.0024,0.013,0.082\), and perform three sets of simulations to generate the corresponding magnification maps. Footnote 7: For a typical gravitational lens system with the lens at z=0.5 and the source at z=2. For the surface mass density fraction of the stars we use a linear grid of 7 values in \(\alpha_{star}=\{0,0.05,0.1,0.2,0.3,0.4,0.8\}\). For the fraction of mass in BHs we take a grid of 7 values logarithmically distributed from \(\alpha_{BH}=0.02\) to \(\alpha_{BH}=1\), in addition to the the \(\alpha_{BH}=0\) contribution, resulting in \(\alpha_{BH}=\{0,0.02,0.044,0.096,0.21,0.46,1\}\). The additional contribution to the projected mass to complete the macro convergence, \(\kappa\), is in form of a smooth matter component, with fraction \(\alpha_{smooth}=1-\alpha_{star}-\alpha_{BH}\). The \((\kappa,\gamma)\) values of each macro-image of the sample are taken from Schechter et al. (2014; see their Table 4). In order to minimize the sample variance, we compute a total of 100 magnification maps for each image using the Inverse Polygon Mapping algorithm (Mediavilla et al. 2006, 2011), resulting in a total of 3x7x7x40x100=588000 magnification maps. This procedure demanded a great deal of calculation time (\(\sim\)50000 CPU hours) requiring high-throughput computing services8 for accelerating the map calculation process. The resolution of the maps was conservatively taken as 0.5 lt-day/pixel with a size for the maps of 250x250 pixels. Both, the pixel size and map resolution, were carefully selected and tested9 to provide a fair balance between the two mass components, so that we have a representative number of the heavy stars while keeping a manageable number of the lighter BHs for all the explored range of values of \(\{\alpha_{star},\alpha_{BH}\}\). A convolution with a Gaussian source of 1 lt-day (representative of the size of the X-ray source) is finally applied. The histograms of images \(i\) and \(j\) for each value of the parameters \((\alpha_{star},\alpha_{BH})\) are used to calculate the probability density function (PDF) of observing a microlensing magnification \(\Delta m_{ij}\), \(p_{klij}(\alpha_{BHk},\alpha_{start}|\Delta m_{ij})\) via cross-correlation (see Mediavilla et al. 2009). 
Footnote 8: PROTEUS Scientific Computing Cloud: [https://proteus.ugr.es](https://proteus.ugr.es); HTCondor: [https://htcondor.org](https://htcondor.org) Footnote 9: We tested maps with sizes between 100 and 1500 pixels (aiming at the largest possible size) and found that 250 pixels provided the best balance between execution time and statistical completeness. Finally, to obtain the corresponding PDFs for the abundance of BHs of planetary to substellar mass, we apply a Bayesian inference analysis, as explained in Esteban-Gutierrez et al. (2022b). This method applies a statistical approach which calculates the posterior probability distribution of some parameters given a set of observed variables, which, in our case, are the microlensing magnifications. The global PDF is calculated as the product over image pairs, \[p_{kl}(\alpha_{BHk},\alpha_{starl})\propto\prod_{ij}p_{klij}(\alpha_{BHk}, \alpha_{starl}|\Delta m_{ij}) \tag{2}\] ## 3 Results: Likelihood of the Bimodal Distribution of Abundances The final PDFs and the marginalized PDFs for the surface mass density fraction of stars, \(\alpha_{star}\), and BHs, \(\alpha_{BH}\), are presented in Figure 1 for the three considered BH masses. The behaviour of the 2D PDF is as expected: we see a substantial impact of the BHs with lower masses, while their potential contribution decays as the BH mass increases to values closer to the stellar mass. For the highest considered mass of the BHs, they are not distinguishable from the stars and the joint PDF shows a strong degeneracy. On the other extreme, the lowest mass BHs have more room to "hide", resembling the behaviour of smooth matter, and producing a much flatter distribution. The maximum probability for the fraction of stars is located at \(\sim 0.1\) in all cases, with an expected value of \(\sim 0.12\). A low contribution of the BHs peaking at (or near) \(\alpha_{BH}=0\) is obtained for all the masses considered, with expected values of \(\sim 0.1\) for \(M_{BH}=0.0024M_{\odot}\) and \(\lesssim 0.04\) for the remaining BH masses. For the BHs, we find upper limits of \(\alpha_{BH}<[0.34,0.13,0.09]\) at the 68% confidence level (\(\alpha_{BH}<[0.64,0.27,0.16]\) at the 90%) for \(M_{BH}/M_{\odot}=0.0024,0.013\) and 0.082, respectively. The main result of this paper is shown in Figure 2, where we show the results from the analysis of X-ray data in the present work together with the previous results based on optical observations presented in Esteban-Gutierrez et al. (2022b)10, to provide upper limits to the fraction of PBHs as dark matter, \(f_{PBH}=\Omega_{PBH}/\Omega_{DM}=\alpha_{BH}/(\alpha_{BH}+\alpha_{smooth})\), for the mass range from \(0.0024M_{\odot}\) up to \(60M_{\odot}\) (with a new added point for a BH mass of \(3M_{\odot}\)). Figure 2 shows the "confusion band" for stellar mass BHs between the HBL and \(3M_{\odot}\) for which microlensing cannot discriminate between stars and BHs. Footnote 10: Notice that we have added a new point for a BH mass of \(3M_{\odot}\) to the results obtained from the optical data, in order to fill the large mass gap between the largest mass considered in the present work of \(0.08M_{\odot}\) and the lowest mass considered in Esteban-Gutierrez et al. 2022b, of \(10M_{\odot}\). This new point is also shown in Figure 2. Figure 1: Probability distributions of the fraction of total mass density in stars and BHs of three masses: \(M_{BH}/M_{\odot}=0.0024,0.013\) and \(0.082\), plotted in yellow, turquoise and orange, respectively.
Bottom right: Joint (2D) probability density function, \(p_{ijkl}(\alpha_{BHk},\alpha_{starl}|\{\Delta m_{ij}\})\). Contour levels in steps of \(0.25\sigma\) with thicker lines for \(1\sigma\) and \(2\sigma\). Straight black dashed lines represent constant \(\alpha_{smooth}\) for \(\alpha_{smooth}=0.9,0.8,0.7,0.6,0.5\) (bottom to top). Top right and bottom left: Marginalized (1D) probability density functions, \(p_{k}(\alpha_{BH_{k}}|\{\Delta m_{ij}\})\) and \(p_{l}(\alpha_{starl}|\{\Delta m_{ij}\})\), of the fraction of total mass density in BHs, \(\alpha_{BH}\), and stars, \(\alpha_{star}\), respectively. Upper limits at the 68% and 90% confidence levels for each BH mass are indicated as dotted and dash-dotted lines, respectively. ## 4 Discussion: constraints on the dark matter fraction Our results show that the allowed abundance from quasar microlensing observations of BHs depends strongly on the mass of the microlenses, with a significant increase of the permitted abundance in compact objects for the lowest explored BH mass of \(0.0024M_{\odot}\), corresponding to an Einstein radius comparable to a typical X-ray source (\(\sim 1\) lt-day). This result is not unexpected, since when the ratio between the masses of the two microlens components (stars and BHs) is large enough, there will be a selective washing out of the small mass component if its Einstein radius is close to the source size considered. Below this limiting mass, X-ray microlensing becomes rather insensitive to the presence of BHs. On the other hand, it is evident that in the region of coincidence with typical stellar masses (from the HBL to \(1-3M_{\odot}\)) there is a degeneracy between both populations (stars and BHs) and we are not able to distinguish between them. Nevertheless, as we know from other grounds (Jiang & Kochanek 2007 and references therein) that the mass fraction in stars is about 10%, we can assume that the fraction of BHs must be placed somewhere in this grey band (see Figure 2) most likely close to the red dashed line connecting the points at the HBL and \(M_{BH}=3M_{\odot}\). Figure 2: Upper limits of the contribution of PBHs to the fraction of dark matter from both, X-rays (this study) and the optical (Esteban-Gutierrez et al. 2022b), plus a new extra point at \(3M_{\odot}\). The grey band corresponds to the mass range for which quasar microlensing cannot distinguish between stars and BHs. In order to provide a global comparison with other works regarding the estimation of the dark matter fraction, we use the free Python code PBHbounds11(Kavanagh, 2019) to add our new contributions in optical and X-rays (shown as the grey area in Figure 3) to this collective study12 (shown as light blue, red and orange areas in Figure 3). As we can see from this plot, our studies based on quasar microlensing establish the strongest bounds in the mass range between \(10^{-1}\) and \(10^{2}M_{\odot}\). We also added new constraints to the dark matter fraction in the substellar to planetary mass range (\(10^{-3}-10^{-1}M_{\odot}\)) using the present X-ray quasar microlensing results. Footnote 11: [https://github.com/bradkav/PBHbounds/](https://github.com/bradkav/PBHbounds/) Footnote 12: This corresponds to a selected mass range (\(10^{-12}-10^{9}M_{\odot}\)) from the complete sample of results given by the PBHbounds repository, compatible with constraints from various experiments involving microlensing (HSC, OGLE, EROS), GWs (LIGO-Virgo), pulsars (NANOGrav) and studies in galaxy dynamics. 
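Schematically, the chain from the tabulated per-pair likelihoods to the quoted upper limits and to the dark matter fraction follows Eq. (2) and the definition \(f_{PBH}=\alpha_{BH}/(\alpha_{BH}+\alpha_{smooth})\). The Python sketch below uses random stand-ins for the per-pair likelihoods (so its printed numbers are meaningless) and then, as a rough point-value translation only, converts the 90% limits actually quoted in Section 3 into \(f_{PBH}\) for a stellar fraction of \(\sim 0.1\); the curves shown in Figures 2 and 3 come instead from the full posteriors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grids used in Section 2.
alpha_bh = np.array([0.0, 0.02, 0.044, 0.096, 0.21, 0.46, 1.0])
alpha_star = np.array([0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.8])
n_pairs = 30

# Stand-in per-pair likelihoods p_klij(alpha_BH_k, alpha_star_l | dm_ij);
# the real ones come from the map cross-correlations of Section 2.
p_pairs = rng.random((n_pairs, alpha_bh.size, alpha_star.size)) + 1e-3

# Eq. (2): joint posterior as the product over image pairs (in log space).
log_post = np.sum(np.log(p_pairs), axis=0)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Marginalized PDFs of alpha_BH and alpha_star.
p_bh = post.sum(axis=1)
p_star = post.sum(axis=0)

def upper_limit(grid, prob, level):
    """Smallest grid value at which the cumulative probability reaches `level`."""
    cdf = np.cumsum(prob) / prob.sum()
    return grid[np.searchsorted(cdf, level)]

def f_pbh(a_bh, a_star):
    """f_PBH = alpha_BH / (alpha_BH + alpha_smooth), alpha_smooth = 1 - a_star - a_bh."""
    return a_bh / (a_bh + (1.0 - a_star - a_bh))

a90 = upper_limit(alpha_bh, p_bh, 0.90)
print(f"(stand-in data) alpha_BH < {a90} at 90% CL, E[alpha_star] = {np.sum(alpha_star * p_star):.2f}")

# Rough translation of the 90% limits quoted in Section 3 (point values only).
for m, a in zip([0.0024, 0.013, 0.082], [0.64, 0.27, 0.16]):
    print(f"M_BH = {m} M_sun : alpha_BH < {a}  ->  f_PBH < {f_pbh(a, 0.1):.2f}")
```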
Figure 3: Dark matter fraction of PBHs inferred from microlensing (light blue) from OGLE, EROS and HSC collaborations, GWs/pulsars (red) from LIGO-Virgo and NANOGrav collaborations, and dynamical constraints (orange) from Ultra-faint Dwarf Galaxies and Wide Binaries constraints on MACHOs, in terms of their mass, assuming a monochromatic mass function (data taken from PBHbounds). The grey curve represents the quasar microlensing results for the visible and X-ray measurements, taking into account the upper limits at the 90% confidence level. ## 5 Conclusions We have performed a Bayesian analysis of X-ray microlensing data available in the literature to assess the impact of a bimodal population of stars and BHs with masses in the planetary to stellar range. We can summarize the following conclusions: - Independently of the BH mass considered, the abundance of stars remains almost invariable, with an average value of \(\alpha_{star}=0.12^{+0.05}_{-0.05}\) at the 68% confidence level and \(\alpha_{star}=0.12^{+0.11}_{-0.07}\) at the 90%. This result confirms previous estimates based on optical data (e.g. Jimenez-Vicente et al. 2015a). - In the range of masses considered (from \(0.0024M_{\odot}\) to \(0.08M_{\odot}\)), the expected abundance of BHs is very small (\(<4\%\)), reaching a 10% abundance at the lowest mass. Below this mass, the X-ray microlensing becomes progressively insensitive to the presence of a population of compact objects. - Using jointly the X-ray and optical microlensing data, we have been able to provide limits on the abundance of BHs (and of any other type of compact object) in the planetary to intermediate-mass range (see Figure 3). These limits are the strongest ones available to date in the \(0.01M_{\odot}\) to \(60M_{\odot}\) mass range and indicate that the BH abundance is negligible in the \(\sim 0.005M_{\odot}\) to \(\sim 100M_{\odot}\) mass range. We thank the anonymous referee for valuable comments that helped to improve this paper. This research was supported by the Spanish projects PID2020-118687GB-C33, PID2020-118687GB-C32 and PID2020-118687GB-C31 financed by MCIN/AEI/10.13039/501100011033. J.J.V. is also supported by projects FQM-108, P20_00334 and A-FQM-510-UGR20/FEDER financed by Junta de Andalucia. J.A.M. is also supported by the Generalitat Valenciana with the project of excellence Prometeo/2020/085. AEG thanks the support from grant FPI-SO from the Spanish MINECO (research project SEV-2015-0548-17-4 and predoctoral contract BES-2017-082319) and acknowledges support from ANID Fondecyt Postdoctorado with grant number 3230554.
2306.10226
Skyrmions at Finite Density
In this paper, we will describe recent advances in analytical methods to construct exact solutions of the Skyrme model (and its generalizations) representing inhomogeneous Hadronic condensates living at finite Baryon density. Such novel analytical tools are based on the idea to generalize the well known spherical hedgehog ansatz to situations (relevant for the analysis of finite density effects) in which there is no spherical symmetry anymore. Besides the intrinsic mathematical interest to find exact solutions with non-vanishing Baryonic charge confined to a finite volume, this framework opens the possibility to compute important physical quantities which would be difficult to compute otherwise.
Fabrizio Canfora, Scarlett C. Rebolledo-Caceres
2023-06-17T01:29:36Z
http://arxiv.org/abs/2306.10226v2
# Skyrmions at Finite Density ###### Abstract In this paper, we will describe recent advances in analytical methods to construct exact solutions of the Skyrme model (and its generalizations) representing inhomogeneous Hadronic condensates living at finite Baryon density. Such novel analytical tools are based on the idea to generalize the well known spherical hedgehog ansatz to situations (relevant for the analysis of finite density effects) in which there is no spherical symmetry anymore. Besides the intrinsic mathematical interest to find exact solutions with non-vanishing Baryonic charge confined to a finite volume, this framework opens the possibility to compute important physical quantities which would be difficult to compute otherwise. _Keywords -_ finite density, skyrmions, nuclear pasta ###### Contents * 1 Introduction * 2 The Skyrme-Maxwell Model * 2.1 Field Equation and energy-momentum tensor * 2.2 Topological Charge like a Baryon Number * 2.3 Ansatz for Skyrme Field * 2.3.1 Exponential parametrization * 2.3.2 Euler angles parametrization * 3 Skyrmions at Finite Density * 3.1 The first example: Sine-Gordon layer * 4 Modulated condensates and the appearance of chiral degrees of freedom * 4.1 Finite Density: Metric of a Box * 4.2 Euler Angles parametrization: Hadronic layers * 4.2.1 Physical interpretation of the chiral degrees of freedom * 4.3 Exponential parametrization: Hadronic tubes * 4.3.1 Physical interpretation of the chiral degrees of freedom * 5 Gauged Skyrmions and applications * 5.1 Abelian Higgs Model * 6 Applications to Yang-Mills-Higgs theory * 6.1 Pure Yang-Mills theory * 6.2 Energy-momentum tensor and topological charge * 6.3 Yang-Mills-Higgs theory * 7 Conclusion ## 1 Introduction One of the main open problems in physics is the lack of an analytic understanding of the low energy phase diagram of Quantum Chromodynamics (QCD henceforth). Due to the non-perturbative nature of low energy QCD, many researchers believe that, in this regime, only refined numerical techniques are useful (see [1, 2, 3, 4], and references therein) while analytical tools are not useful. One of the many negative consequences of this fact is that the appearance of the _inhomogeneous Baryonic condensates_ (see [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], and the nice up to date review [16]), which is a very remarkable phenomenon typical of finite Baryon density and low-temperature regime, has no theoretical "first principles" explanation. In such a phase, ordered structures appear, in which most of the Baryonic charge is contained in regular shapes like thick Baryonic layers or thick Baryonic tubes. This phase has many similarities with the non-homogeneous condensates, which have been discovered in integrable field theories in \((1+1)\) dimensions (see [17, 18, 19, 20, 21], and references therein). The numerical analysis of these configurations is quite demanding from the hardware viewpoint (see [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], and references therein). This unfortunate circumstance makes the study of relevant physical quantities difficult. Transport coefficients (such as thermal conductivity, viscosity, and so on) which encode fundamental properties of multi-Baryonic configurations (see [33, 34, 35, 12, 36], and references therein) are especially challenging. It is very complicated to analyze numerically how these coefficients depend on the Baryonic charge, on the coupling constants of the theory and so on.
The present paper reviews a novel approach able to describe analytically these inhomogeneous Baryonic condensates (together with their corresponding electromagnetic fields) appearing in the low energy limit of QCD, opening the possibility of determining exactly relevant physical quantities associated to these configurations. The starting point is the Skyrme theory which (at leading order in the 't Hooft expansion [37, 38], and [39]) represents the low energy limit of QCD. The dynamical field of the Skyrme action [40] is a \(SU(N)\)-valued scalar field \(U\) (here we will consider the two-flavor case \(U(x)\in SU(2)\) although many of the present results can be extended to the generic \(SU(N)\) case). This action possesses both small excitations describing Pions and topological solitons describing Baryons [41, 42, 43, 44, 45, 46], and [47]; the Baryonic charge being a topological invariant (see also [48, 49, 50, 51, 52, 53, 54, 55, 56, 57], and references therein). At first glance, when one tries to analytically describe topological solitons with high topological (Baryonic, in the present case) charge in non-integrable theories (such as the low energy limit of QCD), a huge technical problem appears. Generically, it is not possible to require spherical symmetry for the SU(2)-valued field. In the present review, for the reasons mentioned above, we are interested in finite density effects. This, roughly speaking, means that we need to force the topological solitons to live within a box of finite spatial volume (in which case spherical symmetry is broken). In the usual textbook cases, the requirement of spherical symmetry (in the hedgehog sense) generates an ansatz that both reduces the Skyrme field equations to just one equation and produces a non-vanishing Baryon density. In fact, in the Skyrme case, spherical symmetry is not so welcome1 (from the analytic viewpoint) after all since the remaining field equation can only be solved numerically. The situation becomes even worse if the coupling with the electromagnetic field is taken into account. The obvious reason is that two of the three Pions are charged so that they generate their own electromagnetic field and, at the same time, the electromagnetic field back-reacts on the \(SU(2)\) field. In mathematical terms, when the minimal coupling to the \(U(1)\) gauge field is taken into account, one has to solve a coupled system of 7 non-linear partial differential equations (3 from the gauged Skyrme model and, in principle, 4 from Maxwell equations with the \(U(1)\) current arising from the Skyrme model itself). Consequently, the analytic construction of gauged inhomogeneous condensates in the gauged Skyrme model Maxwell system looks like a hopeless task. Footnote 1: See [105, 106, 107] for an updated discussion on this point. Quite remarkably, as discussed in [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76] the absence of spherical symmetry opens unprecedented possibilities as far as analytic solutions are concerned. Such an approach provides (at least) two explicit cases: Baryonic layers and tubes (together with their modulations and their electromagnetic fields). It is worth emphasizing that, first of all, the plots in [58] and [67] are qualitatively very close to the ones found numerically in the analysis of spaghetti-like configurations (see the plots in [5, 6, 7, 8, 9], and [16]).
Moreover, in [66] the shear modulus of lasagna configurations has been computed, the result being in good agreement with [11] and [15]. A welcome consequence of the above formalism is that it opens the possibility of analyzing the transport properties of these Hadronic tubes and layers explicitly (as we will discuss at the end of the review). The review is divided as follows: In section 2, we give a brief description of the gauged Skyrme Model, its relation with baryons, its field equations, the energy-momentum tensor, and we define the topological charge in the model. Furthermore, we define and explain the most common ansatzes for the fundamental element \(U\). Section 3 summarizes previous contributions on hadronic configurations at finite density; we call these structures Skyrmion crystals. Then, we will present the metrics and their ranges of coordinates. Section 4 presents a novel analytic solution of the Skyrme Model at finite density with a non-vanishing topological charge. Besides, we define the relevant properties of the theory, like the energy-momentum tensor and the topological current, and present an interpretation of the chiral degrees of freedom. Section 5 presents the gauged version of the inhomogeneous condensates. In section 6, we show recent work related to a sector of pure Yang-Mills and Yang-Mills-Higgs theory that exhibits conformal symmetry. We motivate this work with the exciting possibilities of computing and characterizing nuclear matter. Finally, section 7 is dedicated to conclusions and future outlooks. In our convention \(c=\hbar=1\), Greek indices run over the space-time with mostly plus signature, and Latin indices are reserved for those of the internal space. For simplicity, we denote the scalar product with \(\cdot\), for example, we use \((\nabla\alpha\cdot\nabla G)\) for \((\nabla_{\mu}\alpha\nabla^{\mu}G)\). ## 2 The Skyrme-Maxwell Model A proper understanding of how topological solitons react to the presence of non-trivial boundary conditions (such as the ones appearing when the theory is analyzed in a box of finite spatial volume) is the key step to disclose the mechanism behind the appearance of non-homogeneous Baryonic condensates. A fundamental ingredient is that (at leading order in the 't Hooft expansion [37, 38], and [39]) the low energy limit of QCD is equivalent to the gauged Skyrme model [40, 47, 77], and [78]. An intriguing characteristic of Skyrme theory is that it encodes both (topologically trivial) small excitations describing mesons (Pions in the \(SU(2)\) case) and topological solitons describing Baryons [25, 26, 41, 44, 45, 46, 47, 79, 80], and [78]. Moreover, the Baryonic charge can be expressed as a topological invariant. Skyrme theory has been analyzed at finite Baryon density starting from the classic references [23, 81, 82, 83], and [84] using either a variational approach or the so-called rational map approximation. The techniques introduced in [58, 60, 62, 66, 70, 71, 72, 74, 75], and [76] (which will be described in the following sections) provided the first analytic constructions of inhomogeneous Baryonic condensates in the low energy limit of QCD. Needless to say, the gauged Skyrme model is a very important theory in itself, but the 't Hooft large \(N_{c}\) expansion gives rise to very complicated subleading corrections to the Skyrme model (see [85, 86], and [87] and references therein).
Therefore, an important question to address is: Do such subleading terms ruin the approach developed in [58, 60, 62, 66, 70, 71, 72, 74, 75], and [76]? This issue has been discussed in [60, 67], and [88] where it has been shown that such an approach works, and it is almost unchanged even when these subleading terms are included. In the present review we will only explicitly consider the \(SU(2)\) Skyrme model minimally coupled to Maxwell theory. The action for the gauged \(SU(2)\)-Skyrme model is given by \[I[U,A_{\mu}]=\int d^{4}v\left({\cal L}^{\rm SK}+{\cal L}^{\rm U(1)}\right)\, \tag{2.1}\] \[{\cal L}^{\rm SK} = \frac{K}{4}{\rm Tr}\left\{R_{\mu}R^{\mu}+\frac{\lambda}{8}G_{\mu \nu}G^{\mu\nu}\right\}\,\ {\cal L}^{U(1)}=F_{\mu\nu}F^{\mu\nu}\,\] \[F_{\mu\nu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\,\] where \[R_{\mu}=U^{-1}D_{\mu}U=R_{\mu}^{a}t_{a}\,\quad G_{\mu\nu}=\left[R_{\mu},R_{ \nu}\right]\,,\ d^{4}v=\sqrt{-g}d^{4}x\,\] \[D_{\mu}U=\nabla_{\mu}U+A_{\mu}U\hat{O}\,\quad\hat{O}=U^{-1}\left[t_{3},U \right]\,\] \(U(x)\in SU(2)\), \(g\) is the metric determinant, \(\nabla_{\mu}\) is the partial derivative, \(D_{\mu}\) is the covariant derivative associated to the \(U(1)\) gauge field \(A_{\mu}\) and \(t_{a}=i\sigma_{a}\) are the generators of the \(SU(2)\) Lie group, being \(\sigma_{a}\) the Pauli matrices. The Skyrme couplings \(K\) and \(\lambda\) are positive constants that have to be fixed experimentally (see [45, 46] and references therein). The first term in Eq. (2.1) is the non-linear sigma model. Skyrme introduced the second term to avoid Derrick's scaling argument [89] (which prevented the appearance of solitonic configurations in the Non-Linear Sigma Model, NLSM henceforth). It is fair to emphasize that actually, Skyrme understood the essence of Derrick's argument before Derrick himself [40]. ### Field Equation and energy-momentum tensor The field equations for the Skyrme Model are obtained through its variation with respect to the fundamental fields \(U\) and \(A_{\mu}\): \[\nabla_{\mu}\left(R^{\mu}+\frac{\lambda}{4}[R_{\nu},G^{\mu\nu}] \right) = 0\, \tag{2.2}\] \[\nabla_{\mu}F^{\mu\nu} = J^{\nu}, \tag{2.3}\] where the electromagnetic current \(J_{\mu}\) is given by \[J_{\mu}=\frac{K}{2}{\rm Tr}\left\{\hat{O}\left(R_{\mu}+\frac{\lambda}{4}[R^{ \nu},G_{\mu\nu}]\right)\right\}\.\] As has been already emphasized in the introduction, the choice of an effective Ansatz is the most difficult step along the path to finding analytic (gauged) Skyrmions due to the highly non-linear character of the field equations. The contribution of the Skyrme model to the energy-momentum tensor is given by \[T^{\rm SK}_{\mu\nu}=-\frac{K}{2}{\rm Tr}\left\{R_{\mu}R_{\nu}-\frac{1}{2}g_{ \mu\nu}R^{\alpha}R_{\alpha}+\frac{\lambda}{4}\left(g^{\alpha\beta}G_{\mu \alpha}G_{\nu\beta}-\frac{1}{4}g_{\mu\nu}G_{\alpha\beta}G^{\alpha\beta} \right)\right\}\, \tag{2.4}\] while the contribution of the Maxwell theory is \[T^{U_{(1)}}_{\mu\nu}=g^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}-\frac{1}{4}g_{ \mu\nu}F^{\alpha\beta}F_{\alpha\beta}\.\] In most of the present review, we will focus on the energy-momentum tensor of Skyrme Model \(T^{\rm SK}_{\mu\nu}\) because across this work we will use \(A_{\mu}\to 0\). On the other hand, we will mention how the inhomogeneous Baryonic condensates described in the following sections can be generalized with the inclusion of the minimal coupling with Maxwell theory in due time. 
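All the ansatze used below are expanded in the generators \(t_{a}=i\sigma_{a}\). A tiny numerical check (numpy) of the algebra these matrices satisfy, namely \([t_{a},t_{b}]=-2\epsilon_{abc}t_{c}\) and \(\mathrm{Tr}(t_{a}t_{b})=-2\delta_{ab}\), may be a useful reference: identities of this kind are what one uses when expanding \(\mathrm{Tr}\{R_{\mu}R^{\mu}\}\) and \(G_{\mu\nu}G^{\mu\nu}\) into the scalar degrees of freedom of the following subsections.

```python
import numpy as np
from itertools import product

# Pauli matrices and the generators t_a = i * sigma_a used in Eq. (2.1).
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
t = 1j * sigma

def eps(a, b, c):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return int((a - b) * (b - c) * (c - a) / 2)

# Check [t_a, t_b] = -2 eps_abc t_c and Tr(t_a t_b) = -2 delta_ab.
for a, b in product(range(3), repeat=2):
    comm = t[a] @ t[b] - t[b] @ t[a]
    rhs = -2 * sum(eps(a, b, c) * t[c] for c in range(3))
    assert np.allclose(comm, rhs)
    assert np.isclose(np.trace(t[a] @ t[b]), -2.0 * (a == b))
print("t_a = i*sigma_a satisfy [t_a,t_b] = -2 eps_abc t_c and Tr(t_a t_b) = -2 d_ab")
```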
### Topological Charge like a Baryon Number Unlike Noether charges (which are continuous conserved charges associated to symmetries and which are conserved on-shell), topological charges are discrete, not associated to symmetries and invariant under any continuous deformation of the fields (so that these charges are conserved off-shell). In the case of gauged Skyrme theory, the topological charge (which is interpreted as the Baryonic charge of the given configuration) reads \[B=\frac{1}{24\pi^{2}}\int_{\sigma}\rho_{B}, \tag{2.5}\] where \(\sigma\) is any three-dimensional \(t=\text{const.}\) hypersurface, while the Baryon density \(\rho_{B}\) is defined as \[\rho_{B}=\rho^{\text{SK}}+\rho^{\text{U}(1)}\,,\] where \[\rho^{\text{SK}} =\epsilon^{abc}\text{Tr}\left\{\left(U^{-1}\partial_{a}U\right) \left(U^{-1}\partial_{b}U\right)\left(U^{-1}\partial_{c}U\right)\right\},\] \[\rho^{\text{U}(1)} =-\text{Tr}\left\{\partial_{a}\Big{(}3A_{b}t_{3}\big{(}U^{-1} \partial_{c}U+\left(\partial_{c}U\right)U^{-1}\big{)}\Big{)}\right\}\,.\] The topological charge is uniquely linked to the boundary conditions of the fundamental fields \(U\) and \(A_{\mu}\). One of the most important properties of this kind of charge is the fact that infinite energy is needed to change a physical state with a fixed value of the topological charge to a state with a different topological charge. Whenever \(A_{\mu}\to 0\) we get the topological charge associated to the Skyrme model by itself. As it is well known since the introduction of the model (see [40, 77], for a review see [79]) the energy of static configurations is bounded from below by the topological charge: \[E\geq\left|Q\right|.\] Unlike what happens in Yang-Mills-Higgs theory, such a bound cannot be saturated on flat space-time. This fact prevents one from using the common techniques (associated to integrability and self-duality) to find analytic solutions of the Skyrme field equations. In what follows, we will use the terminology "Skyrmions at finite density" and "inhomogeneous Baryonic condensates" to denote analytic solutions of the field equations in Eqs. (2.2) and (2.3) with non-vanishing topological charge satisfying boundary conditions corresponding to a finite spatial volume. ### Ansatz for Skyrme Field In this section, we will show two explicit parametrizations of the \(SU(2)\)-valued Skyrme field \(U\) which are very convenient to construct the sought ansatze. #### 2.3.1 Exponential parametrization The standard exponential form of the field \(U(x)\) is \[U=\cos{(\alpha)}\mathbf{1}_{2\times 2}+\sin{(\alpha)}n_{j}t^{j}, \tag{2.6}\] where \(\mathbb{1}_{2\times 2}\) is the \(2\times 2\) identity matrix and \(t^{i}=i\sigma^{i}\) where \(\sigma^{j}\) are the Pauli matrices. Besides, \[\vec{n}=(\sin(\Theta)\cos(\Phi),\sin(\Theta)\sin(\Phi),\cos(\Theta))\, \tag{2.7}\] where \(\alpha=\alpha(x^{\mu})\), \(\Theta=\Theta(x^{\mu})\) and \(\Phi=\Phi(x^{\mu})\) are the three scalar degrees of freedom of the \(SU(2)\)-valued field \(U(x)\). Then, plugging Eqs. 
(2.6) and (2.7) into the action for the Skyrme model reads \[I(\alpha,\Theta,\Phi)=\frac{K}{4}\int d^{4}v\mathrm{Tr}\bigg{\{} ( \nabla\alpha)^{2}+\sin^{2}(\alpha)((\nabla\Theta)^{2}+\sin^{2}( \Theta)(D\Phi)^{2})+\frac{\lambda}{2}\big{(}\sin^{2}(\alpha)((\nabla\alpha)^{ 2}(\nabla\Theta)^{2}-\] \[-(\nabla\alpha\cdot\nabla\Theta)^{2})+\sin^{2}(\alpha)\sin^{2}( \Theta)((\nabla\alpha)^{2}(D\Phi)^{2}-(\nabla\alpha\cdot D\Phi)^{2})+\] \[+\sin^{4}(\alpha)\sin^{2}(\Theta)((\nabla\Theta)^{2}(D\Phi)^{2}- (\nabla\Theta\cdot\nabla\Phi)^{2})\big{)}\bigg{\}}+I^{(1)},\] where \[D_{\mu}\Phi=\nabla_{\mu}\Phi-2eA_{\mu}.\] It is a trivial computation to check that the covariant derivative in terms of the three scalar degrees of freedom \(\alpha(x^{\mu})\), \(\Theta(x^{\mu})\) and \(\Phi(x^{\mu})\) is equivalent to the following minimal coupling rule \[\nabla_{\mu}\alpha\to D_{\mu}\alpha=\nabla_{\mu}\alpha,\] \[\nabla_{\mu}\Theta\to D_{\mu}\Theta=\nabla_{\mu}\Theta,\] \[\nabla_{\mu}\Phi\to D_{\mu}\Phi=\nabla_{\mu}\Phi-2eA_{\mu}.\] Thus, the scalar degree of freedom \(\Phi\) plays the role of the "\(U(1)\) phase" of the Skyrme field since, under a \(U(1)\) gauge transformation, it transforms as \[A_{\mu}\to A_{\mu}+\nabla_{\mu}\Lambda,\] \[\Phi\to\Phi+2e\Lambda.\] It is also useful to write the full Skyrme-Maxwell equations in terms of the three scalar degrees of freedom \(\alpha\), \(\Theta\) and \(\Phi\): \[-\Box\alpha+\sin(\alpha)\cos(\alpha)\left((\nabla\Theta)^{2}+\sin ^{2}(\Theta)(D\Phi)^{2}\right)+\lambda\Big{(}\sin(\alpha)\cos(\alpha)\big{(}( \nabla\alpha)^{2}(\nabla\Theta)^{2}-(\nabla\alpha\cdot\nabla\Theta)^{2}\big{)} +\] \[+\sin(\alpha)\cos(\alpha)\sin^{2}(\Theta)\big{(}(\nabla\alpha)^{2 }(D\Phi)^{2}-(\nabla\alpha\cdot D\Phi)^{2}\big{)}+2\sin^{3}(\alpha)\cos( \alpha)\sin^{2}(\Theta)\big{(}(\nabla\Theta)^{2}(D\Phi)^{2}-\] \[-(\nabla\Theta\cdot D\Phi)^{2}\big{)}-\nabla_{\mu}\big{(}\sin^{ 2}(\alpha)(\nabla\Theta)^{2}\nabla^{\mu}\alpha\big{)}+\nabla_{\mu}\big{(}\sin ^{2}(\alpha)(\nabla\alpha\cdot\nabla\Theta)\nabla^{\mu}\Theta\big{)}-\] \[-\nabla_{\mu}\big{(}\sin^{2}(\alpha)\sin^{2}(\Theta)(D\Phi)^{2} \nabla^{\mu}\alpha\big{)}+\nabla_{\mu}\big{(}\sin^{2}(\alpha)\sin^{2}(\Theta) (\nabla\alpha\cdot D\Phi)D^{\mu}\Phi\big{)}\Big{)}=0\] \[-\sin^{2}(\alpha)\Box\Theta-2\sin(\alpha)\cos(\alpha)(\nabla\alpha \cdot\nabla\Theta)+\sin^{2}(\alpha)\sin(\Theta)\cos(\Theta)(D\Phi)^{2}+\] \[+\lambda\Big{(}\sin^{2}(\alpha)\sin(\Theta)\cos(\Theta)\big{(}( \nabla\alpha)^{2}(D\Phi)^{2}-(\nabla\alpha\cdot D\Phi)^{2}\big{)}+\] \[+\sin^{4}(\alpha)\sin(\Theta)\cos(\Theta)\big{(}(\nabla\Theta)^{2 }(D\Phi)^{2}-(\nabla\Theta\cdot D\Phi)^{2}\big{)}-\nabla_{\mu}\big{(}\sin^{2}( \alpha)(\nabla\alpha)^{2}\nabla^{\mu}\Theta\big{)}-\] \[-\nabla_{\mu}\big{(}\sin^{4}(\alpha)\sin^{2}(\Theta)(D\Phi)^{2} \nabla^{\mu}\Theta\big{)}+\nabla_{\mu}\big{(}\sin^{4}(\alpha)\sin^{2}(\Theta) (\nabla\Theta\cdot D\Phi)D^{\mu}\Phi\big{)}+\] \[+\nabla_{\mu}\big{(}\sin^{2}(\alpha)(\nabla\alpha\cdot\nabla \Theta)\nabla^{\mu}\alpha\big{)}\Big{)}=0\,\] #### 2.3.2 Euler angles parametrization For the \(SU(2)\) case we chose the following element \(U\) of \(SU(2)\) \[U=\exp\left(F\left(x^{\mu}\right)t_{3}\right)\exp\left(H\left(x^{\mu}\right)t_ {2}\right)\exp\left(G\left(x^{\mu}\right)t_{3}\right), \tag{2.8}\] where \(F\left(x^{\mu}\right)\), \(G\left(x^{\mu}\right)\) and \(H\left(x^{\mu}\right)\) are the three scalar degrees of freedom (traditionally, in this parametrization the field \(H\) is called profile). 
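As a quick sanity check on the two parametrizations (a small numpy/scipy sketch with randomly chosen, constant values of the scalar fields; the one-dimensional sample field used for the derivative is arbitrary), one can verify that Eqs. (2.6)-(2.7) and Eq. (2.8) indeed produce \(SU(2)\) matrices, and that \(U^{-1}\partial U\) computed by finite differences is traceless and anti-Hermitian, i.e. it can be expanded in the generators \(t_{a}\) as assumed throughout.

```python
import numpy as np
from scipy.linalg import expm

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
t = 1j * sigma
I2 = np.eye(2, dtype=complex)

def U_exp(alpha, theta, phi):
    """Exponential parametrization, Eqs. (2.6)-(2.7)."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return np.cos(alpha) * I2 + np.sin(alpha) * np.einsum('a,aij->ij', n, t)

def U_euler(F, H, G):
    """Euler-angle parametrization, Eq. (2.8)."""
    return expm(F * t[2]) @ expm(H * t[1]) @ expm(G * t[2])

rng = np.random.default_rng(2)
for U in (U_exp(*rng.uniform(0, np.pi, 3)), U_euler(*rng.uniform(0, np.pi, 3))):
    assert np.allclose(U.conj().T @ U, I2)      # unitarity
    assert np.isclose(np.linalg.det(U), 1.0)    # unit determinant

# R = U^{-1} dU for a sample field alpha(x) = x, Theta = 0.3, Phi = 2x,
# evaluated by a forward finite difference, lies in the su(2) algebra.
x, h = 0.7, 1e-6
U0 = U_exp(x, 0.3, 2 * x)
U1 = U_exp(x + h, 0.3, 2 * (x + h))
R = np.linalg.inv(U0) @ (U1 - U0) / h
assert abs(np.trace(R)) < 1e-4 and np.allclose(R, -R.conj().T, atol=1e-4)
print("U in SU(2) for both parametrizations; U^{-1} dU is traceless and anti-Hermitian")
```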
It is worth emphasizing that, as it happens for the exponential representation discussed in the previous subsection, any group element can always be written in the Euler angle representation. We will see that, as has been already emphasized, in order to construct a concrete ansatz that can be adapted to the finite density situation it is convenient to write explicitly the Skyrme action in terms of the Euler angle parametrization in Eq. (2.8). A direct computation shows that \[I(H,F,G)= -\frac{K}{2}\int d^{4}v\text{Tr}\bigg{\{}(\nabla H)^{2}+(DF)^{2} +(DG)^{2}+2\cos(2H)(DF\cdot DG)-\] \[-\lambda\big{(}2\cos(2H)((\nabla H\cdot DF)(\nabla H\cdot DG)-( \nabla H)^{2}(DF\cdot DG))+\] \[+4\sin^{2}(H)\cos^{2}(H)((DF\cdot DG)^{2}-(DF)^{2}(DG)^{2})+\] \[+(\nabla H\cdot DF)^{2}+(\nabla H\cdot DG)^{2}-(\nabla H)^{2}(DF )^{2}-(\nabla H)^{2}(DG)^{2}\big{)}\bigg{\}}\.\] The minimal coupling with the Maxwell field in this parametrization is \[D_{\mu}F=\nabla_{\mu}F-eA_{\mu}\,\qquad D_{\mu}G=\nabla_{\mu}G+eA_{\mu}\.\] The field equations in the Euler parameterization are given by \[0= \square H+2\sin(2H)(\nabla F\cdot\nabla G)-\lambda\bigg{\{}2\sin(2H) \Big{(}(\nabla H\cdot\nabla F)(\nabla H\cdot\nabla G)-(\nabla H)^{2}(\nabla F \cdot\nabla G)-\] \[-\cos(2H)\big{(}(\nabla F\cdot\nabla G)^{2}-(\nabla F)^{2}( \nabla G)^{2}\big{)}\Big{)}+\nabla_{\mu}(\cos(2H)(\nabla H\cdot\nabla G)\nabla ^{\mu}F)+\] \[+\nabla_{\mu}(\cos(2H)(\nabla H\cdot\nabla F)\nabla^{\mu}G)-\nabla _{\mu}(2\cos(2H)(\nabla G\cdot\nabla F)\nabla^{\mu}H)+\nabla_{\mu}((\nabla H \cdot\nabla F)\nabla^{\mu}F)+\] \[+\nabla_{\mu}((\nabla H\cdot\nabla G)\nabla^{\mu}G)-\nabla_{\mu} ((\nabla F)^{2}\nabla^{\mu}H)-\nabla_{\mu}((\nabla G)^{2}\nabla^{\mu}H)\bigg{\}}\, \tag{2.9}\] \[0= \square F-e\nabla\cdot A-2\sin(2H)(DG\cdot\nabla H)+\cos(2H)( \square G+e\nabla\cdot A)-\] \[-\lambda\Big{(}\nabla_{\mu}(\cos(2H)(\nabla H\cdot DG)\nabla^{ \mu}H)-\nabla_{\mu}(\cos(2H)(\nabla H)^{2}D^{\mu}G)+\] \[+\nabla_{\mu}(4\sin^{2}(H)\cos^{2}(H)(DF\cdot DG)D^{\mu}G)-\nabla _{\mu}(4\sin^{2}(H)\cos^{2}(H)(DG)^{2}D^{\mu}F)+\] \[+\nabla_{\mu}((\nabla H\cdot DF)\nabla^{\mu}H)-\nabla_{\mu}(( \nabla H)^{2}D^{\mu}F)\Big{)}\, \tag{2.10}\] \[0= \square G+e\nabla\cdot A-2\sin(2H)(DF\cdot\nabla H)+\cos(2H)( \square F-e\nabla\cdot A)-\] \[-\lambda\Big{(}\nabla_{\mu}(\cos(2H)(\nabla H\cdot DF)\nabla^{ \mu}H)-\nabla_{\mu}(\cos(2H)(\nabla H)^{2}D^{\mu}F)+\] \[+\nabla_{\mu}(4\sin^{2}(H)\cos^{2}(H)(DF\cdot DG)D^{\mu}F)-\nabla _{\mu}(4\sin^{2}(H)\cos^{2}(H)(DF)^{2}D^{\mu}G)+\] \[+\nabla_{\mu}((\nabla H\cdot DG)\nabla^{\mu}H)-\nabla_{\mu}(( \nabla H)^{2}D^{\mu}G)\Big{)}. \tag{2.11}\] In the following sections, we will discuss first these field equations in the ungauged case to show how and why effective low energy chiral conformal degrees of freedom appear. ## 3 Skyrmions at Finite Density The first references analyzed the effects of forcing Skyrmions to live at finite Baryon density were [23, 26, 81, 82, 83, 84], and [90], where the variational construction and classifications of ordered arrays of Skyrmions were analyzed. In these references, roughly speaking, each peak of the energy and Baryon densities represent a single unit charge Skyrmion (Baryon). In the present review we are more interested in situations where these peaks (representing charge-1 Skyrmions) can melt into bigger structures with much higher topological charges (such as Hadronic layers and tubes). 
### The first example: Sine-Gordon layer In [76], the first topologically non-trivial analytical solution in flat space-time that describes Hadronic layers in a finite volume was constructed. In order to confine the system to a flat region of finite spatial volume, in that reference the metric in 3 + 1 dimensions was taken2 to be Footnote 2: In the following sections, we will see that actually the metric can be chosen in a slightly more generic way. \[ds^{2}=-dt^{2}+L^{2}(dr^{2}+d\gamma^{2}+d\phi^{2})\,, \tag{3.1}\] where the ranges of the coordinates3 are Footnote 3: The proper range of the spatial coordinate can be determined by taking into account the theory of the Euler angle (see [91, 92, 93], and [94]). \[0\leq r\leq 2\pi\quad,\quad 0\leq\gamma\leq 4\pi\quad,\quad 0\leq\phi\leq 2\pi.\] The ansatz for the Skyrme field was originally given in the exponential parameterization in Eqs. (2.6) and (2.7) as follows \[\Phi=\frac{\gamma+\phi}{2},\] \[\tan\Theta=\frac{\tan H}{\cos A},\] \[\tan\alpha=\frac{\sqrt{1+\tan^{2}\Theta}}{\tan A}\,, \tag{3.2}\] where \(A=(\gamma-\phi)/2\) and \(H=H(t,r)\). The remarkable property of the above ansatz is that it reduces the complete set of field equations to a single equation, given by \[\Box H-\frac{\lambda}{8L^{2}(\lambda+2L^{2})}\sin{(4H)} = 0, \tag{3.3}\] \[\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{L^{2}}\frac{ \partial^{2}}{\partial r^{2}} = \Box\,, \tag{3.4}\] which is the Sine-Gordon equation, the prototype of an integrable PDE! Moreover, the topological charge is non-vanishing and it reads \[\rho_{B}=3\sin{(2H)}dHd\gamma d\phi. \tag{3.5}\] Furthermore, when the profile \(H\) satisfies the following boundary conditions \[H(t,0)=\ 0,\ \ H(t,2\pi)=\ \pm\frac{\pi}{2}, \tag{3.6}\] the topological charge takes the values \(B=\pm 1\). These configurations represent the first analytic examples of a Baryonic layer in a flat box of finite volume. In the following sections, we will discuss the generalizations of these solutions. ## 4 Modulated condensates and the appearance of chiral degrees of freedom In this section, we present the explicit construction of modulated inhomogeneous Hadronic condensates in the ungauged case. These modulations are encoded in emergent chiral conformal degrees of freedom. ### Finite Density: Metric of a Box We are interested in analyzing the intriguing phenomena that occur when a finite amount of Baryonic charge lives within a finite spatial volume. We consider as a starting point a slightly more general metric than the one introduced in the previous section (which allows to describe a flat box with different sizes along the three spatial directions \(\{r,\theta,\phi\}\)): the line element is \[ds^{2}=-dt^{2}+L_{r}^{2}dr^{2}+L_{\theta}^{2}d\theta^{2}+L_{\phi}^{2}d\phi^{2}\, \tag{4.1}\] where \(L_{i}\) are constants representing the length of the box where the solitons are confined. The adimensional coordinates \(\{r,\theta,\phi\}\) have the following ranges \[0\leq r\leq 2\pi\,\quad 0\leq\theta\leq\pi\,\quad 0\leq\phi\leq 2\pi\, \tag{4.2}\] so that the volume available for the solitons is \(V=4\pi^{3}L_{r}L_{\theta}L_{\phi}\). Notice that the coordinates \(\{r,\theta,\phi\}\) are Cartesian; their finite ranges can be fixed for instance using the theory of Euler angles [91, 92, 93], and [94].
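Before moving on to the generalized ansatze of the following subsections, it may help to make the Sine-Gordon reduction of the previous section concrete. The scipy sketch below solves the static version of Eq. (3.3), which with the convention of Eq. (3.4) reads \(H^{\prime\prime}(r)=-\kappa\sin(4H)\) with \(\kappa=\lambda/[8(\lambda+2L^{2})]\), subject to the boundary conditions (3.6); \(\lambda=L=1\) are placeholder values. It also checks that the profile part of the charge density (3.5) integrates to the value needed for \(B=+1\).

```python
import numpy as np
from scipy.integrate import solve_bvp

lam, L = 1.0, 1.0                        # placeholder couplings
kappa = lam / (8.0 * (lam + 2.0 * L**2))

# Static reduction of Eq. (3.3):  H'' = -kappa * sin(4H),
# with H(0) = 0 and H(2*pi) = pi/2 as in Eq. (3.6) (the B = +1 branch).
def rhs(r, y):
    H, dH = y
    return np.vstack([dH, -kappa * np.sin(4.0 * H)])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - np.pi / 2])

r = np.linspace(0.0, 2.0 * np.pi, 200)
guess = np.vstack([r / 4.0, np.full_like(r, 0.25)])   # linear kink as initial guess
sol = solve_bvp(rhs, bc, r, guess, tol=1e-6)

# Profile part of the charge density (3.5): the integral of sin(2H) H' dr equals
# -cos(2H)/2 between the boundaries, i.e. 1; combined with the gamma and phi
# ranges and the 1/(24 pi^2) normalization of Eq. (2.5), this gives B = +1.
H, dH = sol.sol(r)
print("solver converged:", sol.success)
print("integral of sin(2H) H' dr =", round(np.trapz(np.sin(2.0 * H) * dH, r), 4))
```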
### Euler Angles parametrization: Hadronic layers The energy-momentum tensor of the Skyrme model (2.4), in the Euler angle parametrization reads \[T_{\mu\nu}^{SK} =g_{\mu\nu}\mathcal{L}^{SK}+K\Big{\{}\nabla_{\mu}H\nabla_{\nu}H+ D_{\mu}FD_{\nu}F+D_{\mu}GD_{\nu}G+\cos{(2H)}(D_{\mu}FD_{\nu}G+D_{\nu}FD_{\mu}G)-\] \[-\lambda\Big{(}\cos{(2H)}\big{(}(\nabla H\cdot DG)(\nabla_{\mu} HD_{\nu}F+\nabla_{\nu}HD_{\mu}F)+(\nabla H\cdot DF)(\nabla_{\mu}HD_{\nu}G+ \nabla_{\nu}HD_{\mu}G)-\] \[-2(\nabla F\cdot DG)\nabla_{\mu}H\nabla_{\nu}H-(\nabla H)^{2}(D_ {\mu}FD_{\nu}G+D_{\nu}FD_{\mu}G)\big{)}+\] \[+4\sin^{2}{H}\cos^{2}{H}\big{(}(DF\cdot DG)(D_{\mu}FD_{\nu}G+D_{ \nu}FD_{\mu}G)-(DG)^{2}\nabla_{\mu}F\nabla_{\nu}F-(DG)^{2}\nabla_{\mu}G\nabla _{\nu}F\big{)}+\] \[+(\nabla H\cdot DF)(\nabla_{\mu}HD_{\nu}F+\nabla_{\nu}HD_{\mu}F) +(\nabla H\cdot DG)(\nabla_{\mu}HD_{\nu}G+\nabla_{\nu}HD_{\mu}G)-\] \[-\nabla_{\mu}H\nabla_{\nu}H((DF)^{2}+(DG)^{2})-(\nabla H)^{2}(D_ {\mu}FD_{\nu}F+D_{\mu}GD_{\nu}G)\Big{)}\Big{\}}\.\] The analysis of the energy-momentum tensor is relevant to determine the contour plots and the shape of the condensates. The ungauged configurations which represent Hadronic layers of arbitrary Baryonic charge can be generalized by including an arbitrary light-like function as a degree of freedom. The suitable generalization of the ansatz in [66, 70, 71, 72, 74, 75], and [76] is \[H=H(r)\,\quad F=q\theta\,\quad G=G(u)\,\qquad u=\frac{t}{L_{\phi}}-\phi\, \tag{4.3}\] where \(q\) is an integer number. Here \(G\left(u\right)\) is an arbitrary function of the light-like coordinate \(u\). The Eq. (4.3) explicitly avoids the no-go Derrick's theorem [89] due to the time dependence of the \(U\) field. The above ansatz keeps all the nice properties of the one in [66]. In particular, it satisfies the following identities \[\left(\nabla F\cdot\nabla G\right)=\left(\nabla H\cdot\nabla F\right)=\left( \nabla H\cdot\nabla G\right)=\left(\nabla F\right)^{2}=0, \tag{4.4}\] which greatly simplifies the field equations. In fact, the ansatz in Eq. (4.3) reduces the Skyrme field equations in Eqs. (2.9) and (2.11) to a simple linear equation \[\partial_{r}^{2}H(r)=0\quad\Rightarrow H(r)=\kappa r+\kappa_{0}\,\] where \(\kappa_{0}\) can be fixed to zero, the integration constant \(\kappa\) will be determined using appropriate boundary conditions. Plugging the ansatz in Eqs. (2.8), (4.3) into Eq. (2.5), the topological charge density of the matter field reads \[\rho_{B}=-12q\sin(2H)H^{\prime}\partial_{\phi}G,\] where it can be seen that the appropriate boundary conditions for the soliton profile \(H(r)\) and the light-like function \(G(u)\) are the following: \[H(r=2\pi)=\frac{\pi}{2}\,\quad H(r=0)=0\,\quad G(t,\phi=0)-G(t,\phi=2\pi)=(2 \pi)p\, \tag{4.5}\] so that the topological charge takes the value \[B=pq. \tag{4.6}\] Therefore, using the above boundary conditions, the profile becomes \[H(r)=\frac{r}{4}.\] #### 4.2.1 Physical interpretation of the chiral degrees of freedom In order to clarify the physical meaning of the function \(G(u)\) appearing in the ansatz in Eq. 
(4.3), let us consider the slightly different ansatz \[H=\frac{r}{4}\,\quad F=q\theta\,\quad G=G(t,\phi)\.\] With the ansatz here above the Skyrme field equations would reduce to \[\left(\left(\frac{\partial}{\partial t}-\frac{1}{L_{\phi}}\frac{\partial}{ \partial\phi}\right)G\right)\left(\left(\frac{\partial}{\partial t}+\frac{1}{ L_{\phi}}\frac{\partial}{\partial\phi}\right)G\right)=0.\] Thus, the Skyrme field equations force the choice of a chirality: \(G\) can represent either left movers or right movers (but cannot represent both). Let us choose, as in Eq. (4.3), \(G=G(u)\). Then, the boundary conditions in Eqs. (4.5) and (4.6) ensure that \(G(u)\) has the following expression: \[G\left(u\right)=pu+\widetilde{G}\left(u\right)\, \tag{4.7}\] where \(\widetilde{G}\left(u\right)\) is periodic in the coordinate \(\phi\): \[\widetilde{G}\left(u\right)=\sum_{N}a_{N}\cos\left(Nu\right)+b_{N}\sin\left( Nu\right)\,\ N\in\mathbb{N}\,\] where \(a_{N}\) and \(b_{N}\) are real coefficients and the integer \(N\) labels the chiral modes. Thus, the first term (linear in \(u\)) on the right hand side of Eq. (4.7) contributes to the Baryonic charge while \(\widetilde{G}\) does not (being periodic in the coordinate \(\phi\)). In order to interpret \(\widetilde{G}\) it is enough to observe that, when \(\widetilde{G}=0\), the energy-momentum tensor only depends on the coordinate \(r\) (while it is homogeneous in the other two spatial coordinates). Thus, in this case the solution represents a homogeneous Baryonic layer. On the other hand, when we turn on \(\widetilde{G}\), the energy-momentum tensor depends not only on \(r\) but also on \(u\) (through the modes of \(\widetilde{G}\)). In this case, the plots in Figure 1 of the energy density reveal that \(\widetilde{G}\) represents modulations of the layer in the \(\phi\) direction moving at the speed of light. Consequently, the present family of exact analytic solutions of the Skyrme field equations represents Baryonic layers dressed by modulations in the \(\phi\) direction. It is worth emphasizing here the high theoretical interest of revealing the emergence of chiral conformal degrees of freedom living on Hadronic layers. Such chiral conformal degrees of freedom are not only exact solutions of the full Skyrme field equations (as described in detail here above): such a chiral field \(\widetilde{G}\) can also be considered as a solution of the linearized Skyrme field equations on top of the exact solution where \(\widetilde{G}=0\) (which represents a homogeneous Baryonic layer). This observation allows to interpret \(\widetilde{G}\) as a chiral field propagating within a Baryonic medium (provided by the homogeneous Baryonic layer itself). Hence, it is interesting to analyze the transport properties of such a chiral field (as such analysis could reveal how the Baryonic medium affects these important quantities): we hope to come back to this issue in a future publication. ### Exponential parametrization: Hadronic tubes The energy-momentum tensor of the Skyrme Model (2.4), in the exponential parametrization (2.6) with (2.7) reads \[T_{\mu\nu} =g_{\mu\nu}{\cal L}^{\rm SK}-K\Big{\{}\nabla_{\mu}\alpha\nabla_{\nu }\alpha+\sin^{2}\!\alpha\left(\nabla_{\mu}\Theta\nabla_{\nu}\Theta+\sin^{2}\!
\Theta\,D_{\mu}\Phi D_{\nu}\Phi\right)\;+\] \[+\;\lambda\Big{[}\sin^{2}\!\alpha\left((\nabla\Theta)^{2}\, \nabla_{\mu}\alpha\nabla_{\nu}\alpha+(\nabla\alpha)^{2}\,\nabla_{\mu}\Theta \nabla_{\nu}\Theta-(\nabla\alpha\!\cdot\!\nabla\Theta)\,(\nabla_{\mu}\alpha \nabla_{\nu}\Theta+\nabla_{\nu}\alpha\nabla_{\mu}\Theta)\right)\;+\] \[+\sin^{2}\!\alpha\,\sin^{2}\!\Theta\left((D\Phi)^{2}\,\nabla_{ \mu}\alpha\nabla_{\nu}\alpha+(\nabla\alpha)^{2}\,D_{\mu}\Phi D_{\nu}\Phi-( \nabla\alpha\!\cdot\!D\Phi)\,(\nabla_{\mu}\alpha D_{\nu}\Phi+\nabla_{\nu} \alpha D_{\mu}\Phi)\right)\;+\] \[+\sin^{4}\!\alpha\sin^{2}\!\Theta\left((D\Phi)^{2}\,\nabla_{\mu} \Theta\nabla_{\nu}\Theta+(\nabla\Theta)^{2}\,D_{\mu}\Phi D_{\nu}\Phi-(\nabla \Theta\cdot D\Phi)\,(\nabla_{\mu}\Theta D_{\nu}\Phi+\nabla_{\nu}\Theta D_{\mu }\Phi)\right)\Big{]}\Big{\}}\;,\] \({\cal L}^{\rm SK}\) being the Skyrme Lagrangian, while the topological charge density in Eq. (2.5) becomes \[\rho_{B}=12\sin^{2}\!\alpha\sin\Theta d\Phi\wedge d\Theta\wedge d\alpha\;.\] From the above expression, it follows that to have non-trivial topological configurations, we must demand that \(d\Theta\wedge d\Phi\wedge d\alpha\neq 0\). This implies the necessary (but insufficient) condition that \(\alpha\), \(\Theta\), and \(\Phi\) must be three independent functions in order for the Baryon charge to be non-zero. Figure 1: Energy density with and without modulation of hadronic layers, with topological charge \(B=4\). For both cases we have set \(L_{r}=L_{\theta}=L_{\phi}=K=\lambda=1\) and \(p=q=2\). When \(a_{I}=b_{I}=0\), we obtain the left plot for hadronic layers without modulations. However, if we consider \(a_{1}=a_{2}=b_{1}=-b_{3}=0.1\), we obtain the right plot for hadronic layers with modulations. The suitable generalization of the ansatz in [58] and [59] which (as in the case of Hadronic layers) allows to generalize the analytic configurations representing Hadronic tubes to inhomogeneous tubes (namely, Hadronic tubes which are not anymore homogeneous along their axis) is \[\alpha =\alpha(r),\] \[\Theta =Q\theta,\ \ \ \ \ \ \ \ Q=2v+1\,\ \ v\in\mathbb{Z},\] \[\Phi =G\left(u\right),\ \ \ u=\frac{t}{L_{\phi}}-\phi, \tag{4.8}\] where now \(G\left(u\right)\) is an arbitrary function of the light-like coordinate \(u\). It is easy to see that the above ansatz keeps all the nice properties of the one in [58] and [59]. Firstly, it satisfies the following identities \[\left(\nabla\Phi\cdot\nabla\alpha\right)=\left(\nabla\alpha\cdot\nabla\Theta \right)=\left(\nabla\Theta\cdot\nabla\Phi\right)=\left(\nabla\Phi\right)^{2}= 0. \tag{4.9}\] The great usefulness of the above identities is that they allow to decouple the Skyrme field equations for the three degrees of freedom (\(\alpha\), \(\Phi\) and \(\Theta\)) without killing the topological charge. With the ansatz introduced in Eqs. (2.6), (2.7) and (4.8) the Skyrme field equations reduce to a second order ODE for the profile \(\alpha\): \[\alpha^{\prime\prime}+\frac{Q^{2}\sin(\alpha)\cos(\alpha)(\lambda\alpha^{ \prime 2}-L_{r}^{2})}{L_{\theta}^{2}+\lambda Q^{2}\sin^{2}(\alpha)}=0\, \tag{4.10}\] which can be reduced to a first-order ODE that can be conveniently written as \[\frac{d\alpha}{\eta(\alpha,E_{0})}=\pm dr\,\ \ \ \eta\left(\alpha,E_{0} \right)=\left[\frac{E_{0}L_{\theta}^{2}-\frac{1}{2}q^{2}L_{r}^{2}\cos(2\alpha )}{L_{\theta}^{2}+\lambda Q^{2}\sin^{2}(\alpha)}\right]^{\frac{1}{2}}\,, \tag{4.11}\] \(E_{0}\) being an integration constant (fixed by the boundary conditions, as we will see below)4. Footnote 4: Eq.
(4.11) can be solved analytically in terms of Elliptic Functions; however, the explicit solution is not necessary for our purposes. Plugging the ansatz in Eqs. (2.6), (2.7) and (4.8) into Eq. (2.5), the topological charge turns out to be \[\rho_{B}=12q\sin(q\theta)\sin^{2}(\alpha)\alpha^{\prime}\partial_{\phi}G\,\] where it is clearly seen that the appropriate boundary conditions for the soliton profile \(\alpha(r)\) and the light-like function \(G(u)\) are the following: \[\alpha(2\pi)-\alpha(0) =n\pi,\] \[G(t,\phi=2\pi)-G(t,\phi=0) =(2\pi)p, \tag{4.12}\] with \(n\) and \(p\) integer numbers. Therefore, using Eq. (4.12) and integrating with the ranges defined in Eq. (4.2), the topological charge takes the value \[B=np\,.\] We have used that \(q\) is an odd number, as specified in the ansatz in Eq. (4.8). From Eqs. (4.2), (4.11) and (4.12) it follows that the integration constant \(E_{0}\) must satisfy \[n\int_{0}^{\pi}\frac{d\alpha}{\eta\left(\alpha,E_{0}\right)}=2\pi. \tag{4.13}\] Eq. (4.13) is an equation for \(E_{0}\) in terms of \(n\) that always has a real solution when \[E_{0}>\frac{Q^{2}L_{r}^{2}}{2L_{\theta}^{2}}\,\] so that, for given values of \(q\), \(L_{r}\) and \(L_{\theta}\), the integration constant \(E_{0}\) determines the value of the \(\alpha\) profile for the boundary conditions defined in Eq. (4.12). From the above condition, it is clear that for large \(n\), the integration constant \(E_{0}\) scales as \(n^{2}\) \[E_{0}=n^{2}\xi_{0},\qquad\xi_{0}>0,\] where \(\xi_{0}\) can also be interpreted as an integration constant and does not depend on \(n\) for large \(n\). #### 4.3.1 Physical interpretation of the chiral degrees of freedom Additionally, in the present case one can clarify the physical meaning of the function \(G(u)\) appearing in the ansatz in Eq. (4.8) by the slightly more general ansatz \[\alpha =\alpha(r),\ \frac{d\alpha}{\eta(\alpha,E_{0})}=\pm dr\,\] \[\eta\left(\alpha,E_{0}\right) =\left[\frac{E_{0}L_{\theta}^{2}-\frac{1}{2}q^{2}L_{r}^{2}\cos(2 \alpha)}{L_{\theta}^{2}+\lambda Q^{2}\sin^{2}(\alpha)}\right]^{\frac{1}{2}}\,,\] \[\Theta =Q\theta,\qquad\quad Q=2v+1\,\ \ v\in\mathbb{Z},\] \[\Phi =G\left(t,\phi\right),\quad\alpha(2\pi)-\alpha(0)=n\pi\,\] where \(\alpha(r)\) and \(\Theta(\theta)\) are the same as in Eq. (4.8) but \(G\) has been taken as a generic function of \(t\) and \(\phi\) (instead of taking \(G\) as a function of a single light-like coordinate). In this way one can shed light on the true nature of \(G\). As in the case of the Baryonic layers described in the previous sections, with the ansatz here above the Skyrme field equations reduce to \[\left(\left(\frac{\partial}{\partial t}-\frac{1}{L_{\phi}}\frac{\partial}{ \partial\phi}\right)G\right)\left(\left(\frac{\partial}{\partial t}+\frac{1}{ L_{\phi}}\frac{\partial}{\partial\phi}\right)G\right)=0\, \tag{4.14}\] plus \[\left(\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{L_{\phi}^{2}}\frac{ \partial^{2}}{\partial\phi^{2}}\right)G=0\,,\] which is a consequence of Eq. (4.14). Thus, \(G\) can represent either left movers or right movers (but cannot represent both). Let us then choose \(G=G(u)\). The boundary conditions in Eq. (4.12) require that \(G(u)\) has the following expression: \[G\left(u\right)=pu+\widetilde{G}\left(u\right)\, \tag{4.15}\] where \(\widetilde{G}\left(u\right)\) is periodic in the coordinate \(\phi\): \[\widetilde{G}\left(u\right)=\sum_{N}a_{N}\cos\left(Nu\right)+b_{N}\sin\left( Nu\right)\,\ N\in\mathbb{N}\,\] where \(a_{N}\) and \(b_{N}\) are real coefficients. 
Furthermore, in this case the first term (linear in \(u\)) on the right hand side of Eq. (4.15) contributes to the Baryonic charge while \(\widetilde{G}\) does not (being periodic in the coordinate \(\phi\)). Moreover, when \(\widetilde{G}=0\), the stationary energy-momentum tensor only depends on the coordinates \(r\) and \(\theta\) (while it does not depend on \(\phi\)). Thus, in this case the solution represents an ordered array of homogeneous Baryonic tubes. On the other hand, when we turn on \(\widetilde{G}\), the energy-momentum tensor depends not only on \(r\) and \(\theta\) but also on \(u\) (through the modes of \(\widetilde{G}\)). In this case, the plots in Figure 2 of the energy density reveal that \(\widetilde{G}\) describes modulations of the tubes in the \(\phi\) direction moving at the speed of light. These analytic solutions are Baryonic tubes dressed by modulations in the \(\phi\) direction. Figure 2: Energy density with and without modulation of hadronic tubes, with topological charge \(B=8\). For both cases we have set \(L_{r}=L_{\theta}=L_{\phi}=K=\lambda=1\), \(p=Q=2\) and \(n=4\). When \(a_{I}=b_{I}=0\) we obtain the left plot for hadronic tubes without modulations. However, if we consider \(a_{1}=a_{2}=b_{1}=-b_{3}=0.1\) we obtain the right plot for hadronic tubes with modulations. Also in this case, the emergence of chiral conformal degrees of freedom living on Hadronic tubes is a quite remarkable phenomenon. As in the case of the Hadronic layers, such chiral conformal degrees of freedom are not only exact solutions of the full Skyrme field equations: \(\widetilde{G}\) can also be considered as a solution of the linearized Skyrme field equations on top of the exact solution where \(\widetilde{G}=0\) (which represents ordered arrays of homogeneous Baryonic tubes). This observation allows to interpret \(\widetilde{G}\) as a chiral field propagating within the corresponding Baryonic medium. Hence, it is interesting to analyze the transport properties of such a chiral field (as such analysis could reveal how the Baryonic medium affects these important quantities): we hope to come back to this issue in a future publication. ## 5 Gauged Skyrmions and applications In this section, we will discuss how to generalize the modulated inhomogeneous Baryonic condensates constructed in the previous sections in the case in which the minimal coupling with Maxwell theory is taken into account. This issue is extremely important for the following reason. The numerical simulations discussing the appearance of inhomogeneous Baryonic condensates (see [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], and the nice up to date review [16]) usually do not consider explicitly the electromagnetic interactions of the Baryons (which often are modeled as point-like particles). The self-consistent coupling of many Baryons with the electromagnetic field generated by the Baryons themselves would make the numerical simulations considerably heavier. Therefore, to have analytic tools which can help to understand the nature of the electromagnetic field naturally associated to these inhomogeneous Hadronic condensates would be extremely helpful also as a guide for numerical simulations. Here we will describe how one can "dress" the condensates introduced in the previous sections with their own electromagnetic fields. We will try to give a unified description of this construction which is valid both for Hadronic layers and tubes.
The starting point is the observation that the key property of the ansatz in the ungauged cases which allows decoupling the Skyrme field equations without killing the topological charge is the relations in Eqs. (4.4) and (4.9) for layers and tubes respectively. Thus, we have to construct an ansatz for the gauge potential \(A_{\mu}\) which keeps (the covariant version of) Eqs. (4.4) and (4.9) alive. The answer to this question was found in [59, 60, 71], and [72]. We will first describe the method in the case of the Hadronic tube, which is easier to understand. In order to minimally couple the Hadronic tubes discussed in the previous sections with \(U(1)\) Maxwell gauge field let us consider a gauge potential \(A_{\mu}\) with the following characteristics: \[A_{\mu}\partial^{\mu}\alpha=0,\,A_{\mu}\partial^{\mu}\Theta=0,\,A_{\mu} \partial^{\mu}G=0\, \tag{5.1}\] \[A_{\mu}A^{\mu}=0\,\ \ \partial_{\mu}A^{\mu}=0. \tag{5.2}\] ### Abelian Higgs Model Here we will clarify the conditions in Eqs. (5.1) and (5.2) in a simpler case: the Abelian-Higgs model whose action \(S_{AH}\) is: \[S_{AH} = -\frac{1}{2}\int d^{4}x\sqrt{-g}\left[\left(D_{\mu}\Psi^{*}\right) D^{\mu}\Psi+\frac{\gamma}{2}\left(|\Psi|^{2}-v^{2}\right)^{2}+\frac{1}{2}F_{\mu \nu}F^{\mu\nu}\right]\,\] \[D_{\mu}\Psi = \partial_{\mu}\Psi-ieA_{\mu}\Psi\,\ \ F_{\mu\nu}=\partial_{\mu}A_{\nu}- \partial_{\nu}A_{\mu}\,\ \Psi=h\exp\left(iG\right)\,\] where \(\Psi\) is a complex scalar field (which can be parametrized in terms of two real scalar degrees of freedom: the amplitude \(h(x^{\mu})\) and the phase \(G(x^{\mu})\)). The field equations of the Abelian-Higgs model are: \[-D_{\mu}D^{\mu}\Psi+\gamma\left(|\Psi|^{2}-v^{2}\right)\Psi = 0 \tag{5.3}\] \[-\partial^{\mu}F_{\mu\nu}+\mbox{Im}\left(\Psi^{*}D_{\nu}\Psi- \Psi D_{\nu}\Psi^{*}\right) = 0. \tag{5.4}\] In general, the above system of equations is quite complicated (so much so that not even in the BPS limit one can find analytic vortex-like solutions: see [79] and references therein). Nevertheless, one can change a little bit point of view asking the following question: can we choose an ansatz for the Higgs field \(\Psi\) and for the Abelian field \(A_{\mu}\) in such a way to decouple the equation of the Higgs field in Eq. (5.3) from the Maxwell gauge potential \(A_{\mu}\) (keeping alive the field strength \(F_{\mu\nu}\))? In order to answer this question, let us make a list of all the terms which couple the Higgs field \(\Psi\) with \(A_{\mu}\) in Eq. (5.3): such terms are \[A_{\mu}A^{\mu}\Psi\,\ \ A_{\mu}\partial^{\mu}\Psi\,\ \ \Psi\partial^{\mu}A_{\mu}\.\] Thus, if there are non-trivial configurations such that the above terms all vanish then the gauge potential would disappear from the Higgs equation and all the terms that couple the gauge potential to the Higgs field would be left in the Maxwell equation Eq. (5.4). This is a very useful technical achievement since in this way one can solve first the Higgs field equation without worrying about the Maxwell field and then the Maxwell equations can be solved using the Higgs field as input. Examples of configurations with such properties in the Abelian-Higgs model have been found in [95]. The key idea is to choose a gauge potential with two components defining a light-like vector, so that \(A_{\mu}A^{\mu}=0\) (keeping alive \(F_{\mu\nu}\)). 
Moreover, one can choose the phase \(G\) to depend on a light-like variable in such a way that \(A_{\mu}\partial^{\mu}G=0\) and the amplitude \(h(x^{\mu})\) can be chosen to depend on space-like coordinates corresponding to the spatial directions which are absent in \(A_{\mu}\) so that \(A_{\mu}\partial^{\mu}h=0\). Last but not least, the spatial dependence of the gauge potential can be chosen so that \(\partial^{\mu}A_{\mu}=0\). An explicit example of where this decoupling strategy works is \[ds^{2} = -dt^{2}+L_{r}^{2}dr^{2}+L_{\theta}^{2}d\theta^{2}+L_{\phi}^{2}d \phi^{2}\,\] \[A_{\mu} = \left(\varpi,0,0,-L_{\phi}\varpi\right)\qquad\varpi=\varpi(r, \theta)\,\ u=\frac{t}{L_{\phi}}-\phi\,\] \[\Psi = h\exp\left(iG\right)\,\ \ h=h(r,\theta)\,\ G=Pu\,\] where \(P\) is an integer. One can check directly that with the ansatz here above Eq. (5.3) reduces to just one PDE for \(h\) (where the gauge potential does not appear) while the Maxwell equations of the Abelian-Higgs model reduce to just one linear equation for \(\varpi\) where \(h\) plays the role of an effective Schrodinger-like potential. In conclusion, in the Abelian Higgs model it is possible to decouple the Higgs equation from the Maxwell gauge potential by adopting the strategy explained here above. It is worth emphasize here that the requirements \(A_{\mu}A^{\mu}=0\) and \(A_{\mu}\partial^{\mu}\Psi=0\)_are not gauge-fixing choices_ (since we are already requiring \(\partial^{\mu}A_{\mu}=0\) and it is not possible to implement more than one gauge fixing condition at the same time). Instead, the conditions \(A_{\mu}A^{\mu}=0\) and \(A_{\mu}\partial^{\mu}\Psi=0\) must be seen as a guide to find the best possible ansatz which is able to reduce the complicated field equations of the Abelian Higgs model to a solvable system of equations in a consistent way (keeping alive the interactions between the matter field and the gauge field). _A priori_, one could think that to require the three conditions (\(A_{\mu}A^{\mu}=0\), \(A_{\mu}\partial^{\mu}\Psi=0\) and \(\partial^{\mu}A_{\mu}=0\)) at the same time is too restrictive and only trivial configurations can satisfy all of them. In fact, as it has been shown in [95], this is not the case and many examples can be explicitly constructed. The same strategy also works in the more complicated case of the gauged Skyrme Maxwell system as we will now discuss. Let us now go back to the gauged Skyrme model and to the conditions in Eqs. (5.1) and (5.2). One observes that the above conditions for \(A_{\mu}\) are not empty. In order to satisfy both conditions it is enough to consider a gauge potential with components only along the \(t-\)direction and the \(\phi-\)direction, then these two components, \(A_{t}\) and \(A_{\phi}\), must be proportional (in order to satisfy \(A_{\mu}A^{\mu}=0\)) and can depend on \(r\), \(\theta\) and the same null coordinate \(u\) which enters in \(G\): \[A_{\mu}=(\xi,0,0,-L_{\phi}\xi) \xi=\xi(u,r,\theta)\, \tag{5.5}\] \[\Rightarrow A_{\mu}A^{\mu}=0\quad\mbox{ and }\nabla^{\mu}A_{\mu}=0\.\] One can easily check that the field strength of such gauge potential in general is non-vanishing. The above conditions in Eqs. (5.1) and (5.2) complement the condition in Eq. (4.9) for the \(SU(2)\)-valued Skyrme field. From the viewpoint of the gauged Skyrme field equations, Eqs. (5.1) and (5.2) possess the very welcome feature to eliminate all the terms of the gauged Skyrme field equations which could, potentially, mix the \(SU(2)\) degrees of freedom with the gauge potential. 
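For concreteness, the three requirements can be verified symbolically for the explicit Abelian-Higgs example given above. The following short Python sketch (added here purely as an illustration, not part of the original references; the symbol names simply mirror the ansatz \(A_{\mu}=(\varpi,0,0,-L_{\phi}\varpi)\), \(G=Pu\), \(h=h(r,\theta)\)) checks that \(A_{\mu}A^{\mu}=0\), \(\partial^{\mu}A_{\mu}=0\) and \(A^{\mu}\partial_{\mu}G=A^{\mu}\partial_{\mu}h=0\), while the field strength remains non-trivial:

```python
import sympy as sp

# Coordinates and constants (names follow the ansatz quoted above)
t, r, th, ph = sp.symbols('t r theta phi', real=True)
Lr, Lth, Lph, P = sp.symbols('L_r L_theta L_phi P', positive=True)

x = (t, r, th, ph)
g = sp.diag(-1, Lr**2, Lth**2, Lph**2)   # flat metric of Eq. (4.1)
ginv = g.inv()

w = sp.Function('varpi')(r, th)          # varpi(r, theta)
h = sp.Function('h')(r, th)              # Higgs amplitude
G = P * (t / Lph - ph)                   # light-like phase G = P u

A_low = sp.Matrix([w, 0, 0, -Lph * w])   # A_mu = (varpi, 0, 0, -L_phi varpi)
A_up = ginv * A_low                      # A^mu

# 1) A_mu A^mu = 0
print(sp.simplify(sum(A_low[i] * A_up[i] for i in range(4))))

# 2) divergence: the metric components are constant, so nabla^mu A_mu = d_mu A^mu
print(sp.simplify(sum(sp.diff(A_up[i], x[i]) for i in range(4))))

# 3) A^mu d_mu G = 0 and A^mu d_mu h = 0
print(sp.simplify(sum(A_up[i] * sp.diff(G, x[i]) for i in range(4))))
print(sp.simplify(sum(A_up[i] * sp.diff(h, x[i]) for i in range(4))))

# 4) the field strength is not identically zero
F = sp.Matrix(4, 4, lambda m, n: sp.diff(A_low[n], x[m]) - sp.diff(A_low[m], x[n]))
print(F[1, 0])   # F_{r t} = d_r varpi, generically non-vanishing
```

The same bookkeeping is what makes the analogous conditions (5.1) and (5.2) consistent in the gauged Skyrme case discussed next.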
Consequently, with the above ansatz for the gauge potential, the gauged Skyrme field equations remain the same as the ungauged Skyrme field equations corresponding to the ansatz in Eq. (4.8). On the other hand, one may wonder whether the above conditions in Eqs. (5.1) and (5.2) are too restrictive from the viewpoint of the Maxwell equations. In particular, the left hand side of the Maxwell field equations (namely \(\partial^{\mu}F_{\mu\nu}\)) only has components along the \(t-\)direction and the \(\phi-\)direction (due to the form of the gauge potential in Eq. (5.5)). Thus, we have to analyze the \(U(1)\) Skyrme current. A direct computation shows that, in the exponential parametrization, the right hand side \(J_{\mu}\) of the Maxwell equations (2.3) is \[J_{\mu}=-eK\sin^{2}\alpha\sin^{2}\Theta\left\{D_{\mu}\Phi+\frac{\lambda}{2}\Bigl((\nabla\alpha)^{2}D_{\mu}\Phi-(\nabla\alpha\cdot D\Phi)\nabla_{\mu}\alpha+\sin^{2}\alpha\bigl((\nabla\Theta)^{2}D_{\mu}\Phi-(\nabla\Theta\cdot D\Phi)\nabla_{\mu}\Theta\bigr)\Bigr)\right\}\,.\] The terms which could spoil the consistency of the ansatz are all the terms which are not proportional to \(D_{\mu}\Phi\) (such as the terms proportional to \(\nabla_{\mu}\alpha\) and \(\nabla_{\mu}\Theta\)). In fact, all these terms vanish (since the ansatz has the property that \(\nabla\alpha\cdot D\Phi=0=\nabla\Theta\cdot D\Phi\)). Hence, quite remarkably, with the above choice of the gauge potential the field equations of the gauged Skyrme Maxwell theory reduce exactly (no approximation involved here) in a consistent way to Eq. (4.10) for \(\alpha\) and to a single linear Schrodinger-like equation for \(\xi\) (see [59] and [60]). From the mathematical viewpoint, this ansatz for the gauge potential has been chosen to simplify as much as possible the coupled gauged Skyrme Maxwell system while keeping alive the field strength and the interactions. From the physical viewpoint, it turns out that such gauge fields belong to the important class of force free Maxwell fields [63] (which are very important in astrophysics: see [96, 97, 98, 99, 100, 101] and references therein). Hence, the present approach disclosed a relevant property of the inhomogeneous condensates introduced in the previous sections which would have been very difficult to discover with other methods: such condensates are natural sources of force free plasmas. As far as the Hadronic layers are concerned, the story is very similar although slightly more complicated (see [71, 72], and [73]). The ansatz for the Hadronic layers \[H=H(r)\,,\quad F=q\theta\,,\quad G=G(u)\,,\qquad u=\frac{t}{L_{\phi}}-\phi\,,\] which satisfies the useful identities \[(\nabla F\cdot\nabla G)=(\nabla H\cdot\nabla F)=(\nabla H\cdot\nabla G)=(\nabla F)^{2}=0\,,\] can also be complemented with a suitable gauge potential by requiring that the gauge potential does not spoil the solvability of the gauged Skyrme field equations. The only difference is that instead of requiring \(A_{\mu}A^{\mu}=0\) one has to require a suitable quadratic constraint on the components of \(A_{\mu}\) (see [71, 72], and [73]) together with the usual conditions on the orthogonality of the gradients as in the case of Hadronic layers. Furthermore, the final results are similar: it is possible to explicitly construct an ansatz for the gauge potential which allows the gauged Skyrme field equations to be solved, while the Maxwell equations with the \(U(1)\) Skyrme current reduce consistently to a Schrodinger-like equation. 
The case of Hadronic layers is simpler since the Schrodinger-like equation can be reduced to the Mathieu equation which is solvable in terms of special functions. For the Hadronic layers it is also true that they are natural sources of force free plasmas (see [73] for details). ## 6 Applications to Yang-Mills-Higgs theory In this section, we will discuss how the previous results on the Skyrme model can be used in the Yang-Mills-Higgs case to disclose the existence of chiral conformal modes dressing topologically non-trivial configurations (see [67] and [104]). ### Pure Yang-Mills theory The Yang-Mills theory in \(\left(3+1\right)\)-dimensions is described by the action \[I[A] = \frac{1}{2e^{2}}\int d^{4}x\sqrt{-g}\,\mbox{Tr}(F_{\mu\nu}F^{\mu\nu} )\,\] \[F_{\mu\nu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+\left[A_{\mu},A_{\nu} \right],\] where \(e\) is the coupling constant and \(F_{\mu\nu}\) is the field strength. The field equations and the energy-momentum tensor read \[\nabla_{\nu}F^{\mu\nu}+\left[A_{\nu},F^{\mu\nu}\right]=\ 0\,\] \[T_{\mu\nu}=-\frac{2}{e^{2}}\mbox{Tr}\bigg{(}F_{\mu\alpha}F_{\nu} ^{\ \alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\bigg{)}\.\] The experience with the Skyrme model suggests that a very effective way to confine topologically non-trivial configurations to a flat box with finite spatial volume is to use the flat metric defined in Eq. (4.1), with the ranges \[0\leq\theta\leq 2\pi\,\qquad 0\leq\phi\leq\pi\,\qquad 0\leq r\leq 4\pi. \tag{6.1}\] The above ranges for the coordinates \(\theta\), \(\phi\) and \(r\) are related to the Euler angle parameterization for \(SU(2)\) valued fields. The basic building block of the construction in the Yang-Mills-Higgs case is the ansatz which describes Hadronic layers in the Skyrme case. To see how this works, let us consider \(U(x)\in SU(2)\) defined as \[U=\exp\left(p\,\theta\frac{\mbox{\bf t}_{3}}{2}\right)\exp\left(H\left(t,\phi \right)\frac{\mbox{\bf t}_{2}}{2}\right)\exp\left(q\,r\frac{\mbox{\bf t}_{3}}{ 2}\right)\,, \tag{6.2}\] where \(p\) and \(q\) are non-vanishing integers. The theory of Euler angles for \(SU(N)\)[91, 92, 93] dictates the range of \(\theta\), \(r\) and \(H\). As it will be discussed in a moment, when \(H\left(t,\phi\right)\) satisfies either \[H\left(t,\phi=0\right)=0\,,\ \ \ \ H\left(t,\phi=\pi\right)=\pi\, \tag{6.3}\] or \[H\left(t,\phi=0\right)=\pi\,\ \ H\left(t,\phi=\pi\right)=0\,. \tag{6.4}\] The Chern-Simons topological charge of the gauge field (to be constructed in a moment) will be non-zero. To complete the construction of the ansatz for the gauge potential, we will use the following definitions \[A_{\mu}=\sum_{j=1}^{3}\lambda_{j}\Omega_{\mu}^{j}\mbox{\bf t}_{j}\,,\qquad U^ {-1}\partial_{\mu}U=\sum_{j=1}^{3}\Omega_{\mu}^{j}\mbox{\bf t}_{j}\,\] where \[H\left(t,\phi\right)=\arccos\left(G\right)\,,\,\,\,\,G=G\left(t,\phi \right)\,\,,\] \[\lambda_{1}\left(t,\phi\right) = \lambda_{2}\left(t,\phi\right)=\frac{G}{\sqrt{G^{2}+\exp(2\eta)}} \stackrel{{ def}}{{=}}\lambda\left(t,\phi\right)\,\,,\,\,\,\,\, \lambda_{3}\left(t,\phi\right)=1\,\,,\,\,\eta\in\mathbb{R}\,\,,\] \[G\left(t,\phi\right) = \exp(3\eta)\frac{F}{\sqrt{1-\exp(4\eta)\cdot F^{2}}}\,,\,\,\,\,\, \,\,\,F=F\left(t,\phi\right)\,\,.\] The real parameter \(\eta\) will be fixed by requiring that the CS charge is an integer. The option in Eq. (6.4) gives rise to the following boundary condition for \(F\left(t,\phi\right)\), \[F\left(t,\phi=0\right)=F_{0}=F\left(t,\phi=\pi\right)\,\,. 
\tag{6.5}\] In the latter case the CS charge vanishes. On the other hand, the option in Eq. (6.3), in terms of \(F\left(t,\phi\right)\), reads \[F\left(t,\phi=0\right)=-\frac{\exp(-2\eta)}{\sqrt{1+\exp(2\eta)}}\,\,,\qquad F \left(t,\phi=\pi\right)=\frac{\exp(-2\eta)}{\sqrt{1+\exp(2\eta)}}\,\,,\] in order to have a non-zero CS charge. In this case both the CS charge and the CS density will be non-trivial. Then, \(A_{\mu}\) reads \[A_{\mu}= \lambda\left(t,\phi\right)\left[\frac{\mathbf{t}_{1}}{2}\left\{- \sin\left(qr\right)dH+p\cos\left(qr\right)\sin\left(H\right)d\theta\right\}+ \frac{\mathbf{t}_{2}}{2}\{\cos\left(qr\right)dH+p\sin\left(qr\right)\sin\left( H\right)d\theta\}\right]+\] \[+\,\,\frac{\mathbf{t}_{3}}{2}\left[qdr+p\cos(H)d\theta\right]\,,\] \[dH=\frac{\partial H}{\partial t}dt+\frac{\partial H}{\partial\phi}d\phi\,\,.\] With the above, the complete set of \((3+1)\)-dimensional Yang-Mills field equations reduce to \[\Box F\,\equiv\,\left(\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{L_{\phi}^{ 2}}\frac{\partial^{2}}{\partial\phi^{2}}\right)F\,=\,0\,,\] which corresponds to the field equation of a free massless scalar field in two dimensions. ### Energy-momentum tensor and topological charge With the above ansatz the topological density, the on-shell Lagrangian and the energy-momentum tensor read \[\rho_{\rm CS}=\frac{pq\exp(3\eta)}{16\pi^{2}\left(1-\exp(4\eta)F^{2}\right)^{3 /2}}\frac{\partial F}{\partial\phi}\,,\] \[T_{tt}= \frac{p^{2}}{e^{2}L_{\theta}^{2}}\exp(5\eta)\cosh\left(\eta\right) \left[\left(\frac{\partial F}{\partial t}\right)^{2}+\frac{1}{L_{\phi}^{2}} \left(\frac{\partial F}{\partial\phi}\right)^{2}\right]\;,\] \[L_{\text{on-shell}}= \frac{p^{2}}{e^{2}L_{\theta}^{2}}\exp(5\eta)\cosh\left(\eta \right)\left[\left(\frac{\partial F}{\partial t}\right)^{2}-\frac{1}{L_{\phi}^ {2}}\left(\frac{\partial F}{\partial\phi}\right)^{2}\right]\;.\] The full energy-momentum tensor is \[T_{\mu\nu}=\left[\begin{array}{cccc}T_{tt}&0&0&P_{\phi}\\ 0&T_{rr}&0&0\\ 0&0&T_{\theta\theta}&0\\ P_{\phi}&0&0&T_{\phi\phi}\end{array}\right]\,,\] where \[T_{rr}= \frac{p^{2}L_{r}^{2}}{e^{2}L_{\theta}^{2}}\exp\left(5\eta\right) \cosh\left(\eta\right)\left[\left(\frac{\partial F}{\partial t}\right)^{2}- \frac{1}{L_{\phi}^{2}}\left(\frac{\partial F}{\partial\phi}\right)^{2}\right] =-\frac{L_{r}^{2}}{L_{\theta}^{2}}T_{\theta\theta}\;,\] \[T_{\phi\phi}= \frac{p^{2}L_{\phi}^{2}}{e^{2}L_{\theta}^{2}}\exp\left(5\eta \right)\cosh\left(\eta\right)\left[\left(\frac{\partial F}{\partial t}\right) ^{2}+\frac{1}{L_{\phi}^{2}}\left(\frac{\partial F}{\partial\phi}\right)^{2} \right]\,,\] \[T_{t\phi}=P_{\phi}=\frac{2p^{2}\exp\left(5\eta\right)\cosh\left(\eta\right)}{ e^{2}L_{\theta}^{2}}\frac{\partial F}{\partial t}\frac{\partial F}{ \partial\phi}\,.\] Of course, the energy-momentum tensor is traceless; \(g^{\mu\nu}T_{\mu\nu}=0\). Just as in the cases of Hadronic layers and tubes described in previous sections, the effective two-dimensional energy-momentum tensor restricted to the \(t\) and \(\phi\) directions are still traceless: \[T_{ab}=\left(\begin{array}{cc}T_{tt}&P_{\phi}\\ P_{\phi}&T_{\phi\phi}\end{array}\right)\;,\hskip 28.452756pta,b=t,\phi\,.\] On the other hand, the topological \(Q_{CS}\) charge is \[Q_{CS}=\left.\frac{pq\exp\{(3\eta)\}}{2}\left[\frac{F}{\sqrt{1-\exp\{(4\eta) \}F^{2}}}\right]\right|_{F(t,0)}^{F(t,\pi)}.\] We will consider the boundary conditions of (6.5) for \(F(t,\phi)\), because in Eq. (6.5) it was concluded that for values \(F(t,0)=F(t,\pi)\) the topological charge is canceled. 
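Before evaluating \(Q_{CS}\) explicitly, the two-dimensional tracelessness of the block \(T_{ab}\) quoted above can be checked symbolically. The following sketch (illustrative only) simply substitutes the expressions for \(T_{tt}\), \(T_{\phi\phi}\) and \(P_{\phi}\) given in this subsection:

```python
import sympy as sp

t, ph = sp.symbols('t phi', real=True)
p, e, eta, Lth, Lph = sp.symbols('p e eta L_theta L_phi', positive=True)
F = sp.Function('F')(t, ph)

pref = p**2 * sp.exp(5 * eta) * sp.cosh(eta) / (e**2 * Lth**2)
Ft, Fph = sp.diff(F, t), sp.diff(F, ph)

T_tt = pref * (Ft**2 + Fph**2 / Lph**2)
T_phph = pref * Lph**2 * (Ft**2 + Fph**2 / Lph**2)
P_ph = 2 * pref * Ft * Fph

# two-dimensional metric restricted to (t, phi): g_ab = diag(-1, L_phi^2)
g2 = sp.diag(-1, Lph**2)
T2 = sp.Matrix([[T_tt, P_ph], [P_ph, T_phph]])
trace2 = sum((g2.inv() * T2)[i, i] for i in range(2))
print(sp.simplify(trace2))   # -> 0, i.e. the restricted block is traceless
```

With this check in hand, we now return to the topological charge.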
Now, we will introduce an auxiliary function so that \(Q_{CS}\) can be expressed in terms of an integer: \[\Omega(\eta,a,b)\equiv\frac{\exp(3\eta)}{2}\left[\frac{a}{\sqrt{1-\exp(4\eta)a^{2}}}-\frac{b}{\sqrt{1-\exp(4\eta)b^{2}}}\right].\] Then, the topological charge is the following \[Q_{CS}=pq\cdot\Omega(\eta,a=F(t,\pi),b=F(t,0)).\] Since the boundary values \(F(t,0)=F(t,\pi)\) of Eq. (6.5) cancel the topological charge, we consider instead the boundary conditions coming from Eq. (6.3); substituting them into the auxiliary function \(\Omega\) we obtain the following reduced expression for the topological charge \[Q_{CS}=pq.\] Thus \(pq\) must be an integer because \(Q_{CS}\) must be an integer.

### Yang-Mills-Higgs theory

The previous results can be extended to the Yang-Mills-Higgs theory in \((3+1)\)-dimensions (with the Higgs field in the adjoint representation) whose action reads \[I[A,\varphi]=\int d^{4}x\sqrt{-g}\left(\frac{1}{2e^{2}}{\rm Tr}(F_{\mu\nu}F^{\mu\nu})+\frac{1}{4}{\rm Tr}(D_{\mu}\varphi D^{\mu}\varphi)\right).\] The field equations and the energy-momentum tensor are \[\nabla_{\nu}F^{\mu\nu}+[A_{\nu},F^{\mu\nu}]+\frac{e^{2}}{4}[\varphi,D^{\mu}\varphi]\ =\ 0\,,\] \[D_{\mu}D^{\mu}\varphi\ =\ 0\,,\] \[T_{\mu\nu}=-\frac{2}{e^{2}}{\rm Tr}\biggl(F_{\mu\alpha}{F_{\nu}}^{\alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\biggr)-\frac{1}{2}{\rm Tr}\biggl(D_{\mu}\varphi D_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}D_{\alpha}\varphi D^{\alpha}\varphi\biggr)\,.\] Also in the present case, the main building block is the ansatz for the gauge potential "inspired" by the expression for the Hadronic layers of the Skyrme model (described in the previous subsection). On the other hand, the Higgs field can be chosen as \[\varphi=\sum_{j=1}^{3}f_{j}(r)h^{j}(t,\phi){\bf t}_{j}\,,\] \[h_{1}(t,\phi)=\frac{a}{b}h(t,\phi)\,,\quad h_{3}(t,\phi)=a\cot\left(H(t,\phi)\right)\frac{h(t,\phi)}{\lambda(t,\phi)}\,,\quad\lambda_{3}=1\,,\] \[f_{1}(r)=b\cos(qr)f_{3}(r)\,,\quad f_{2}(r)=a\sin(qr)f_{3}(r)\,,\quad f_{3}(r)=f_{0}\,r\,,\] \[h_{2}(t,\phi):=h(t,\phi)\,,\qquad\lambda_{1}(t,\phi)=\lambda_{2}(t,\phi):=\lambda(t,\phi)\,,\] where \(a\), \(b\), and \(f_{0}\) are integration constants. 
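This reduction can be checked directly: substituting the boundary values dictated by Eq. (6.3) into the auxiliary function \(\Omega\) gives \(\Omega=1\), and hence \(Q_{CS}=pq\). A small symbolic sketch (illustrative only) of this computation:

```python
import sympy as sp

eta = sp.symbols('eta', real=True)
a_, b_ = sp.symbols('a b', real=True)

Omega = sp.exp(3*eta)/2 * (a_/sp.sqrt(1 - sp.exp(4*eta)*a_**2)
                           - b_/sp.sqrt(1 - sp.exp(4*eta)*b_**2))

# boundary values dictated by Eq. (6.3), written in terms of F
Fpi = sp.exp(-2*eta) / sp.sqrt(1 + sp.exp(2*eta))   # F(t, pi)
F0 = -Fpi                                           # F(t, 0)

val = Omega.subs({a_: Fpi, b_: F0})
print(sp.simplify(val))                 # symbolic result (expected: 1)
print(val.subs(eta, sp.Rational(7, 10)).evalf())  # numeric spot check -> 1.0
```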
In this way, the Yang-Mills-Higgs equations read \[\Box H=\left(\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{L_{\phi}^{2}}\frac{\partial^{2}}{\partial\phi^{2}}\right)H=0\,,\qquad\Box h=\left(\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{L_{\phi}^{2}}\frac{\partial^{2}}{\partial\phi^{2}}\right)h=0\,,\qquad\Box\lambda=\left(\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{L_{\phi}^{2}}\frac{\partial^{2}}{\partial\phi^{2}}\right)\lambda=0\,,\] together with \[\left(\frac{\partial H}{\partial t}\right)^{2}-\frac{1}{L_{\phi}^{2}}\left(\frac{\partial H}{\partial\phi}\right)^{2}=\left(\frac{\partial H}{\partial t}-\frac{1}{L_{\phi}}\frac{\partial H}{\partial\phi}\right)\left(\frac{\partial H}{\partial t}+\frac{1}{L_{\phi}}\frac{\partial H}{\partial\phi}\right)=0\,,\] \[\left(\frac{\partial h}{\partial t}\right)^{2}-\frac{1}{L_{\phi}^{2}}\left(\frac{\partial h}{\partial\phi}\right)^{2}=\left(\frac{\partial h}{\partial t}-\frac{1}{L_{\phi}}\frac{\partial h}{\partial\phi}\right)\left(\frac{\partial h}{\partial t}+\frac{1}{L_{\phi}}\frac{\partial h}{\partial\phi}\right)=0\,,\] \[\left(\frac{\partial\lambda}{\partial t}\right)^{2}-\frac{1}{L_{\phi}^{2}}\left(\frac{\partial\lambda}{\partial\phi}\right)^{2}=\left(\frac{\partial\lambda}{\partial t}-\frac{1}{L_{\phi}}\frac{\partial\lambda}{\partial\phi}\right)\left(\frac{\partial\lambda}{\partial t}+\frac{1}{L_{\phi}}\frac{\partial\lambda}{\partial\phi}\right)=0\,,\] \[\lambda=\pm\frac{\cos(H)}{\sqrt{\exp(2\lambda_{0})+\cos^{2}(H)}}\,.\] Summarizing, with the ansatz presented above, the complete set of field equations of the Yang-Mills-Higgs theory has been reduced to the field equations of three chiral massless scalar fields in \((1+1)\)-dimensions plus a non-linear constraint between two of them. Consequently, the techniques that have been developed to analyze the "dressing" of Hadronic tubes and layers with chiral modes (allowing, in principle, the computation of relevant transport coefficients) also work in the Yang-Mills-Higgs case.

## 7 Conclusion

This review describes a proper analytic framework to construct inhomogeneous Baryonic condensates in the gauged Skyrme Maxwell theory. This approach can not only produce exact solutions with high Baryonic charges but also gives considerable physical insight into the nature of the configurations (such as the fact that the Hadronic layers and tubes constructed in the previous sections are natural sources of force free plasmas). Another characteristic of the present technique is that it discloses the appearance of chiral conformal degrees of freedom which describe modulations of the condensates. Finally, we have discussed that a similar strategy also works in the case of Yang-Mills-Higgs theory in \((3+1)\) dimensions, where the insights from the gauged Skyrme model help to construct novel analytic solutions (which can also be dressed with chiral conformal degrees of freedom).

## Acknowledgements

F. C. has been funded by Fondecyt Grant 1200022. S. R. has been funded by ANID-Beca de Magister Nacional 2022-22221100, FONDECYT Grant 221504, and 1200022. The Centro de Estudios Cientificos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of ANID.
2306.06444
On the $D_ω$-classical orthogonal polynomials
We wish to investigate the $D_{\omega}$-classical orthogonal polynomials, where $D_{\omega}$ is a special case of the Hahn operator. For this purpose, we consider the problem of finding all sequences of orthogonal polynomials such that their $D_{\omega}$-derivatives are also orthogonal polynomials. To solve this problem we adopt a different approach to those employed in this topic. We first begin by determining the coefficients involved in their recurrence relations, and then providing an exhaustive list of all solutions. When $\omega=0$, we rediscover the classical orthogonal polynomials of Hermite, Laguerre, Bessel and Jacobi. For $\omega=1$, we encounter the families of discrete classical orthogonal polynomials as particular cases.
Khalfa Douak
2023-06-10T13:48:50Z
http://arxiv.org/abs/2306.06444v1
# On the \(D_{\omega}\)-classical orthogonal polynomials

###### Abstract

We wish to investigate the \(D_{\omega}\)-classical orthogonal polynomials, where \(D_{\omega}\) is a special case of the Hahn operator. For this purpose, we consider the problem of finding all sequences of orthogonal polynomials such that their \(D_{\omega}\)-derivatives are also orthogonal polynomials. To solve this problem we adopt a different approach to those employed in this topic. We first begin by determining the coefficients involved in their recurrence relations, and then providing an exhaustive list of all solutions. When \(\omega=0\), we rediscover the classical orthogonal polynomials of Hermite, Laguerre, Bessel and Jacobi. For \(\omega=1\), we encounter the families of discrete classical orthogonal polynomials as particular cases.

_Keywords_: Classical orthogonal polynomials; discrete orthogonal polynomials; recurrence relations; difference operator; difference equations.

**AMS Classification.** 33C45; 42C05.

## 1 Introduction and preliminary results

The orthogonal polynomials are characterized by the fact that they satisfy a second-order recurrence relation. They are said to be classical if their derivatives also form a sequence of orthogonal polynomials [1]. Hahn generalized the classical orthogonal polynomials by generalizing their characteristic properties (see [2, 3] for more details). For this, he considered the linear operator [4] \[\left(H_{q,\omega}f\right)(x):=\frac{f(qx+\omega)-f(x)}{(q-1)x+\omega}, \tag{1.1}\] for every polynomial \(f\), where \(q\) and \(\omega\) are two fixed complex numbers. Hahn showed that there is no loss of generality in assuming \(\omega\) to be zero, so that in what follows \(q\) may be taken to be \(1\) or different from \(1\). For \(q\neq 1\) and \(\omega=0\), we obtain the \(q\)-difference operator (also known as Jackson's \(q\)-operator) which we write \(\left(\mathscr{D}_{q}f\right)(x):=\left(H_{q,0}f\right)(x)\). When \(q=1\) with \(\omega\neq 0\) we get the discrete operator \(\left(D_{\omega}f\right)(x):=\left(H_{1,\omega}f\right)(x)\), that is, \[\left(D_{\omega}f\right)(x)=\frac{f(x+\omega)-f(x)}{\omega}. \tag{1.2}\] For \(\omega=1\), we meet the finite (or forward) difference operator \(\Delta f(x)=f(x+1)-f(x)\). The limiting case \(\omega\to 0\) (resp. \(q\to 1\)) of \(D_{\omega}\) (resp. \(\mathscr{D}_{q}\)) gives rise to the derivative operator \(D=d/dx\), giving \((Df)(x):=f^{\prime}(x)\). Because it is always possible to take such a limit, this point is not really important at this stage. It will be dealt with later when necessary. Motivated by the several properties common to all of the classical orthogonal polynomials, Hahn [4] posed and solved five (equivalent) problems related to the operator \(\mathscr{D}_{q}\) and found that all possible solutions lead to the same orthogonal polynomial sequences (OPS), namely the so-called classical \(q\)-orthogonal polynomials. Later on, the study of such polynomials has attracted increasing interest (see for instance [5] and the references therein). The first problem studied by Hahn is the following: Find all OPS \(\{P_{n}\}_{n\geqslant 0}\) such that \(\{\mathscr{D}_{q}P_{n}\}_{n\geqslant 0}\) is also an OPS. For more details about the solutions of these problems we refer the reader to [1, 2, 3, 6]. In [7] Douak and Maroni considered the problem of finding all OPS such that their \(D\)-derivatives are also OPS. 
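As a small computational aside (a sketch added for illustration, not part of the original treatment), the operator \(D_{\omega}\) of Eq. (1.2) acts on a polynomial as a divided difference that lowers the degree by one, reduces to the forward difference \(\Delta\) for \(\omega=1\), and recovers the ordinary derivative in the limit \(\omega\to 0\):

```python
import sympy as sp

x, w = sp.symbols('x omega')

def D(f, omega=w):
    """Divided-difference operator of Eq. (1.2): (D_omega f)(x) = (f(x+omega) - f(x)) / omega."""
    return sp.expand((f.subs(x, x + omega) - f) / omega)

f = x**3
print(D(f))                  # 3*x**2 + 3*omega*x + omega**2 : again a polynomial, of degree 2
print(D(f, 1))               # forward difference: Delta x**3 = 3*x**2 + 3*x + 1
print(sp.limit(D(f), w, 0))  # omega -> 0 recovers the ordinary derivative 3*x**2
```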
Instead of basing the study of this problem on the various properties of orthogonal polynomials, the authors of [7] based their exposition on a purely algebraic point of view, focusing primarily on the explicit calculation of the recurrence coefficients. The identified polynomials are none other than the classical orthogonal polynomials of Hermite, Laguerre, Jacobi and Bessel with the usual restrictions on the parameters. Referring back to the operator \(D_{\omega}\), we will pose the analogous problem:

\((\mathbf{P})\) _Find all OPS \(\{P_{n}\}_{n\geqslant 0}\) such that \(\{D_{\omega}P_{n}\}_{n\geqslant 0}\) is also an OPS._

Note that in this field a general method of studying the classical orthogonal polynomials of a discrete variable as solutions of a second-order difference equation of hypergeometric type was considered by Nikiforov _et al._[8]. This approach was also adopted by Lesky in [9, 10]. This work is mainly devoted to constructing the \(D_{\omega}\)-classical orthogonal polynomials by proceeding as in [7]. Such an approach is rather new and, of course, different from those previously used in several studies dedicated to this topic (see for instance [1, 8, 11, 12] and the references therein). After determining the recurrence coefficients, we proceed to the identification of the resulting polynomials. Under some restrictions on the parameters, we establish that these polynomials can be reduced to one of the well-known families of discrete classical orthogonal polynomials. The same method was also used within the \(d\)-orthogonality context (\(d\geqslant 1\)) to provide many extensions of the classical orthogonal polynomials (see, e.g. [7, 13, 14, 15] and the references therein). In an earlier survey, Abdelkarim and Maroni [12] investigated the problem \((\mathbf{P})\) using a functional approach. The authors established various equivalent properties characterizing the resulting polynomials. In particular, they showed that those polynomials satisfy the so-called functional Rodrigues formula (1.14). Based on this last characterization, up to a linear transformation of the variable, they found that there are four classes of \(D_{\omega}\)-classical orthogonal sequences satisfying the Rodrigues formula, including the Charlier, Meixner, Krawtchouk and Hahn polynomials as special cases. Let \(\mathscr{P}\) be the vector space of polynomials of one variable with complex coefficients and let \(\mathscr{P}^{\prime}\) be its algebraic dual. We denote by \(\left\langle\cdot\,,\,\cdot\right\rangle\) the duality brackets between \(\mathscr{P}^{\prime}\) and \(\mathscr{P}\). Let us denote by \(\{P_{n}\}_{n\geqslant 0}\) a polynomial sequence (PS), \(\deg P_{n}=n\), and by \(\{u_{n}\}_{n\geqslant 0}\) its associated dual sequence (basis) defined by \(\left\langle u_{n},P_{m}\right\rangle=\delta_{nm}\); \(n,m\geqslant 0\), where \(\delta_{nm}\) is the Kronecker delta symbol. The first element \(u_{0}\) of the dual sequence is said to be the _canonical_ form associated to the PS \(\{P_{n}\}_{n\geqslant 0}\). Throughout this article, we will always consider sequences of _monic_ polynomials, i.e. the leading coefficient of each polynomial \(P_{n}\) is one (\(P_{n}(x)=x^{n}+\cdots\)). Given a form \(u\in\mathscr{P}^{\prime}\), the sequence of complex numbers \((u)_{n},\ n=0,1,2,\dots\), defined by \((u)_{n}:=\left<u,x^{n}\right>\), gives the moments of \(u\) with respect to the sequence \(\{x^{n}\}_{n\geqslant 0}\). 
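Before setting up the functional framework, here is a concrete numerical instance of problem \((\mathbf{P})\) (a sketch for illustration; the recurrence data \(\beta_{n}=a+n\), \(\gamma_{n+1}=a(n+1)\) with \(\omega=1\) are those of the monic Charlier polynomials, which reappear in Subcase \(\mathbf{A_{1b}}\) below). Writing \(Q_{n}:=(n+1)^{-1}D_{1}P_{n+1}\), as done in the sequel, one checks that \(Q_{n}=P_{n}\), so the \(\Delta\)-derivatives form the same orthogonal sequence:

```python
import sympy as sp

x, a = sp.symbols('x a')

def charlier_monic(N):
    """Monic Charlier polynomials via the recurrence beta_n = a + n, gamma_{n+1} = a*(n+1)."""
    P = [sp.Integer(1), x - a]
    for n in range(N - 1):
        P.append(sp.expand((x - (a + n + 1)) * P[-1] - a * (n + 1) * P[-2]))
    return P

P = charlier_monic(6)

# Q_n := (n+1)^{-1} * Delta P_{n+1}; for Charlier one finds Q_n = P_n (Subcase A_1b, omega = 1)
for n in range(5):
    Q_n = sp.expand((P[n + 1].subs(x, x + 1) - P[n + 1]) / (n + 1))
    print(n, sp.simplify(Q_n - P[n]) == 0)   # prints True for every n
```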
The form \(u\) is called _regular_ (or _quasi-definite_) if we can associate with it a PS \(\{P_{n}\}_{n\geqslant 0}\) such that \[\left<u,P_{n}P_{m}\right>=k_{n}\delta_{n,m},\,n,m\geqslant 0\ ;\ k_{n}\neq 0,\ n \geqslant 0.\] In this case \(\{P_{n}\}_{n\geqslant 0}\) is an orthogonal polynomials sequence (OPS) with respect to (w.r.t.) \(u\). As an immediate consequence of the regularity of \(u\), we have \((u)_{0}\neq 0\) and \(u=\lambda u_{0}\) with \(\lambda\neq 0\). Furthermore, the elements of the dual sequence \(\{u_{n}\}_{n\geqslant 0}\) are such that \[u_{n}=\left(\left<u_{0},P_{n}^{2}\right>\right)^{-1}P_{n}u_{0},\ \ n=0,1,2,\dots. \tag{1.3}\] So, in all what follows, we consider the orthogonality of any PS w.r.t. its canonical form \(u_{0}\). First, let us introduce the two operators \(h_{a}\) and \(\tau_{b}\) defined for all \(f\in\mathscr{P}\) by \[(h_{a}f)(x)=f(ax)\ \ \text{and}\ \ (\tau_{b}f)(x)=f(x-b),\quad a\in\mathbb{C}^{ *}:=\mathbb{C}\backslash\{0\},\ b\in\mathbb{C}. \tag{1.4}\] On the other hand, for any functional \(u\), we can write by transposition \[\left<\tau_{-b}u,f(x)\right>:=\left<u,\tau_{b}f(x)\right>=\left<u, f(x-b)\right>, f\in\mathscr{P}, \tag{1.5}\] \[\left<h_{a}u,f(x)\right>:=\left<u,h_{a}f(x)\right>=\left<u,f(ax) \right>, f\in\mathscr{P}. \tag{1.6}\] For further formulas and other properties fulfilled by the operator \(D_{\omega}\) see [12]. We now consider the sequence of monic polynomials \(\{Q_{n}(x):=(n+1)^{-1}D_{\omega}P_{n+1}(x)\}_{n\geqslant 0}\), with its associated dual sequence denoted by \(\{v_{n}\}_{n\geqslant 0}\) and fulfilling \[D_{-\omega}\left(v_{n}\right)=-(n+1)u_{n+1},\ n\geqslant 0, \tag{1.7}\] where by definition \[\left<D_{-\omega}u\,,\,f\right>=-\left<u\,,\,D_{\omega}f\right>,\,u\in\mathscr{ P}^{\prime},\,f\in\mathscr{P}. \tag{1.8}\] Next, in the light of the so-called Hahn property [1], we give the following definition. **Definition 1.1**: _The OPS \(\{P_{n}\}_{n\geqslant 0}\) is called "\(D_{\omega}\)-classical" if the sequence of its derivatives \(\{Q_{n}\}_{n\geqslant 0}\) is also a OPS._ Thus, \(\{P_{n}\}_{n\geqslant 0}\) is orthogonal w.r.t. \(u_{0}\) and satisfies the second-order recurrence relation \[P_{n+2}(x)=(x-\beta_{n+1})P_{n+1}(x)-\gamma_{n+1}P_{n}(x),\ n \geqslant 0, \tag{1.9a}\] \[P_{1}(x)=x-\beta_{0},\ P_{0}(x)=1, \tag{1.9b}\] and \(\{Q_{n}\}_{n\geqslant 0}\) is orthogonal w.r.t. \(v_{0}\) and satisfies the second-order recurrence relation \[Q_{n+2}(x)=(x-\tilde{\beta}_{n+1})Q_{n+1}(x)-\tilde{\gamma}_{n+1 }Q_{n}(x),\ n\geqslant 0, \tag{1.10a}\] \[Q_{1}(x)=x-\tilde{\beta}_{0},\ Q_{0}(x)=1, \tag{1.10b}\] with the regularity conditions \(\gamma_{n}\neq 0\) and \(\tilde{\gamma}_{n}\neq 0\) for every \(n\geqslant 1\). Finally, we will summarise in the following proposition the important properties characterizing the \(D_{\omega}\)-classical orthogonal polynomials as stated in [12, Propositions 2.1-2.3]. 
**Proposition 1.2**: _For any \(\mathrm{OPS}\ \{P_{n}\}_{n\geqslant 0}\), the following are equivalent statements:_ (a) _The sequence_ \(\{P_{n}\}_{n\geqslant 0}\) _is_ \(D_{\omega}\)_-classical._ (b) _The sequence_ \(\{Q_{n}\}_{n\geqslant 0}\) _is orthogonal._ (c) _There exist two polynomials_ \(\Phi\) _monic_ (_\(with\ \deg\Phi=t\leqslant 2\)_) _and_ \(\Psi\) _(_\(with\ \deg\Psi=1\)_), and a sequence_ \(\{\lambda_{n}\}_{n\geqslant 0}\)_,_ \(\lambda_{n}\neq 0\) _for all_ \(n\)_, such that_ \[\Phi(x)\left(D_{\omega}D_{-\omega}P_{n+1}\right)(x)-\Psi(x)\left(D_{-\omega}P_ {n+1}\right)(x)+\lambda_{n}P_{n+1}(x)=0,\ n\geqslant 0. \tag{1.11}\] (d) _The sequences_ \(\{Q_{n}\}_{n\geqslant 0}\) _and_ \(\{P_{n}\}_{n\geqslant 0}\) _are interlinked via the differentiation formula_ \[\Phi(x)Q_{n}(x)=\alpha_{n+2}^{2}P_{n+2}(x)+\alpha_{n+1}^{1}P_{n+1}(x)+\alpha_ {n}^{0}P_{n}(x),\ n\geqslant 0,\quad(\alpha_{n}^{0}\neq 0). \tag{1.12}\] _This identity is referred to as the first structure relation of the OPS_ \(\{P_{n}\}_{n\geqslant 0}\)_._ (e)_The form_ \(u_{0}\) _is_ \(D_{\omega}\)_-classical, say, it is regular and satisfies the functional equation_ \[D_{-\omega}\left(\Phi u_{0}\right)+\Psi u_{0}=0. \tag{1.13}\] (f) _There exist a monic polynomial_ \(\Phi\)_,_ \(\deg\Phi\leqslant 2\)_, and a sequence_ \(\{\lambda_{n}\}_{n\geqslant 0}\)_,_ \(\lambda_{n}\neq 0\) _for all_ \(n\)_, such that the canonical form_ \(u_{0}\) _satisfies the so-called functional Rodrigues formula_ \[P_{n}u_{0}=\lambda_{n}D_{-\omega}^{n}\Big{\{}\Big{(}\prod_{\nu=0}^{n-1}\tau_{ -\nu\omega}\Phi\Big{)}u_{0}\Big{\}},\ n\geqslant 0,\quad\text{ with }\prod_{\nu=0}^{-1}:=1. \tag{1.14}\] Here we will add a new characterization to those established in this proposition. This will be proved once we give the lemma below. To begin with, apply the operator \(D_{\omega}\) to (1.9a)-(1.9b), taking into account (1.10a)-(1.10b), we obtain \[P_{n+2}(x) =Q_{n+2}(x)+\tilde{\alpha}_{n+1}^{1}Q_{n+1}(x)+\tilde{\alpha}_{n} ^{0}Q_{n}(x),\ n\geqslant 0, \tag{1.15a}\] \[P_{1}(x) =Q_{1}(x)+\tilde{\alpha}_{0}^{1},\ P_{0}(x)=Q_{0}(x)=1. \tag{1.15b}\] We will refer to this relation as the _second structure relation_ of the polynomials \(P_{n},\,n\geqslant 0\). This will be instrumental in Section 2 to derive the system connecting the coefficients \(\alpha_{n+2}^{2}\), \(\alpha_{n+1}^{1}\) and \(\alpha_{n}^{0}\), for \(n\geqslant 0\), and the recurrence coefficients \(\beta_{n}\) and \(\gamma_{n+1},\ n\geqslant 0\). Combining (1.15a)-(1.15b) with (1.9a)-(1.9b) and then use (1.10a)-(1.10b), we infer that \[\tilde{\alpha}_{n}^{1}=(n+1)(\beta_{n+1}-\tilde{\beta}_{n}-\omega),\ n \geqslant 0,\ \ \text{and}\ \ \tilde{\alpha}_{n}^{0}=(n+1)\gamma_{n+2}-(n+2)\tilde{\gamma}_{n+1},\ n \geqslant 0, \tag{1.16}\] Reminder that, for the classical orthogonal polynomials (\(\omega=0\)), the first structure relation was given by Al Salam and Chihara [16], and the second one was established by Maroni [17]. When \(D_{\omega}\) is replaced by the finite difference operator \(\Delta\), Garcia _et al._[11] proved that the structure relations (1.12) and (1.15a)-(1.15b), as well as the functional Rodrigues formula (1.14) characterize the discrete classical polynomials of Charlier, Meixner, Krawchuk and Hahn. We will now return to the operator \(D_{\omega}\) and show that (1.15a)-(1.15b) also characterize the \(D_{\omega}\)-classical orthogonal polynomials. To do so, we need the following lemma. 
**Lemma 1.3** ([18]): _Let \(\{P_{n}\}_{n\geqslant 0}\) be a sequence of monic polynomials and let \(\{u_{n}\}_{n\geqslant 0}\) be its associated dual sequence. For any linear functional \(u\) and integer \(m\geqslant 1\), the following statements are equivalent:_ (i) \(\ \big{\langle}u\,,\,P_{m-1}\big{\rangle}\neq 0\ ;\ \big{\langle}u\,,\,P_{n} \big{\rangle}=0,\ n\geqslant m;\)__ (ii) \(\ \exists\ \lambda_{\nu}\in\mathbb{C},\ 0\leqslant\nu\leqslant m-1,\ \lambda_{m-1}\neq 0,\ \ \text{such that}\ u=\sum_{\nu=0}^{m-1}\lambda_{\nu}u_{\nu}.\) **Proposition 1.4**: _Let \(\{P_{n}\}_{n\geq 0}\) be an OPS satisfying (1.9a)-(1.9b). The sequence \(\{P_{n}\}_{n\geq 0}\) is \(D_{\omega}\)-classical if and only if it fulfils (1.15a)-(1.15b)._ _Proof_ The proof is similar in spirit to that of [11, Proposition 2.10]. The necessary condition has already been shown. Conversely, suppose that the OPS \(\{P_{n}\}_{n\geq 0}\) fulfils (1.15a) with (1.15b). The action of the functional \(v_{0}\) on both sides of the aforementioned identities gives rise to \[\big{\langle}v_{0}\,,\,P_{0}\big{\rangle}=1\,\ \big{\langle}v_{0}\,,\,P_{1} \big{\rangle}=\tilde{\alpha}_{0}^{1}\,\ \big{\langle}v_{0}\,,\,P_{2}\big{\rangle}=\tilde{\alpha}_{0}^{0}\ \text{ and }\ \big{\langle}v_{0}\,,\,P_{n}\big{\rangle}=0,\ \text{ for }\ n\geq 3.\] Application of Lemma 1.3 shows that \[v_{0}=\lambda_{0}u_{0}+\lambda_{1}u_{1}+\lambda_{2}u_{2}, \tag{1.17}\] with \(\lambda_{0}=1\), \(\lambda_{1}=\tilde{\alpha}_{0}^{1}=\beta_{1}-\tilde{\beta}_{0}-\omega\) and \(\lambda_{2}=\tilde{\alpha}_{0}^{0}=\gamma_{2}-2\tilde{\gamma}_{1}\). Now, the use of (1.3) enables us to write \(u_{1}=\big{(}\big{\langle}u_{0},P_{1}^{2}\big{\rangle}\big{)}^{-1}\,P_{1}u_{0}\) and \(u_{2}=\big{(}\big{\langle}u_{0},P_{2}^{2}\big{\rangle}\big{)}^{-1}\,P_{2}u_{0}\). In (1.17) we replace \(u_{1}\) and \(u_{2}\) by their respective expressions given above, to deduce that there exists a polynomial \(\Phi\), with \(\deg\Phi\leqslant 2\), such that \(v_{0}=\Phi u_{0}\). On the other hand, setting \(n=0\) in (1.7), it follows immediately that \[D_{-\omega}\left(v_{0}\right)=-u_{1}=-\left(\big{\langle}u_{0},P_{1}^{2} \big{\rangle}\right)^{-1}P_{1}u_{0}:=-\Psi u_{0}.\] Combining these last results, we deduce that \[D_{-\omega}\left(\Phi u_{0}\right)+\Psi u_{0}=0,\ \text{ with }\ \deg\Phi\leqslant 2\ \text{ and }\ \deg\Psi=1.\] By Proposition 1.2, we easily conclude that the orthogonal polynomials sequence \(\{P_{n}\}_{n\geqslant 0}\) is \(D_{\omega}\)-classical, and the proof is complete. At the end of this section, let us remember the definition of the _shifted_ polynomials denoted \(\{\hat{P}_{n}\}_{n\geqslant 0}\) corresponding to the PS \(\{P_{n}\}_{n\geqslant 0}\). For all \(n,n=0,1,\dots\), we have \[\hat{P}_{n}(x):=\hat{a}^{-n}P_{n}(\hat{a}x+\hat{b}),\text{ for }(\hat{a};\hat{b}) \in\mathbb{C}^{*}\times\mathbb{C}. \tag{1.18}\] Since the classical character of the considered polynomials is preserved by any linear change of the variable, for the OPS \(\{P_{n}\}_{n\geqslant 0}\) satisfying (1.9a)-(1.9b), we obtain that the polynomials \(\hat{P}_{n},n=0,1,\dots\), satisfy also the second-order recurrence relation \[\hat{P}_{n+2}(x)=(x-\hat{\beta}_{n+1})\hat{P}_{n+1}(x)-\hat{ \gamma}_{n+1}\hat{P}_{n}(x),\ n\geqslant 0, \tag{1.19a}\] \[\hat{P}_{1}(x)=x-\hat{\beta}_{0},\ \hat{P}_{0}(x)=1, \tag{1.19b}\] with \[\hat{\beta}_{n}=\frac{\beta_{n}-\hat{b}}{\hat{a}},\,n\geqslant 0,\ \text{ and }\ \hat{\gamma}_{n+1}=\frac{\gamma_{n+1}}{\hat{a}^{2}},\,n\geqslant 0\ \ \ (\hat{a}\neq 0). 
\tag{1.20}\] Furthermore, from (1.5)-(1.6) we readily see that (1.18) becomes \(\hat{P}_{n}(x)=\hat{a}^{-n}\big{(}h_{\hat{a}}\circ\tau_{-\hat{b}}P_{n}\big{)}(x)\). In addition, if \(u_{0}\) is \(D_{\omega}\)-classical, then Equation (1.13) leads to \[D_{-\frac{\omega}{\hat{a}}}\big{(}\hat{\Phi}\hat{u}_{0}\big{)}+\hat{\Psi}\hat{ u}_{0}=0, \tag{1.21}\] where \(\hat{\Phi}(x)=\hat{a}^{-t}\Phi(\hat{a}x+\hat{b})\), \(\hat{\Psi}=\hat{a}^{1-t}\Psi(\hat{a}x+\hat{b})\) and \(\hat{u}_{0}=(h_{\hat{a}^{-1}}\circ\tau_{-\hat{b}})u_{0}\). The paper is organized as follows. In the next section we pose and solve two nonlinear systems. The first and most fundamental one relates the recurrence coefficients \(\beta_{n}\), \(\gamma_{n+1}\) with \(\tilde{\beta}_{n}\), \(\tilde{\gamma}_{n+1}\). The second system combines the coefficients \(\alpha_{n}^{i},\ i=0,1,2\ ;\ \tilde{\alpha}_{n}^{j},\ j=0,1\), \(\tilde{\beta}_{n}\) and \(\tilde{\gamma}_{n+1}\) with those of the polynomial \(\Phi\) (Proposition 2.2). This allows to express the coefficients \(\alpha_{n}^{i}\), in terms of \(\beta_{n}\) or \(\gamma_{n+1}\). In Section 3, we investigate the canonical families of \(D_{\omega}\)-classical orthogonal polynomials which we identify after assigning particular values to the free parameters. The last section is devoted to the sequences of higher order derivatives. We give, principally, the explicit expressions of their recurrence coefficients in terms of the coefficients \(\big{(}\beta_{n},\gamma_{n+1}\big{)}_{n\in\mathbb{N}}\). When \(\omega=0\), we rediscover the link between every higher order derivative sequences for the classical polynomials of Hermite, Laguerre, Bessel and Jacobi with each of these families. ## 2 Computation of the related coefficients In order to compute the various related recurrence coefficients, the first step is to establish the main system connecting the recurrence coefficients \(\{\beta_{n}\}_{n\geqslant 0},\,\{\gamma_{n+1}\}_{n\geqslant 0}\) with \(\{\tilde{\beta}_{n}\}_{n\geqslant 0}\), \(\{\tilde{\gamma}_{n+1}\}_{n\geqslant 0}\). To do this, we proceed as follows. Substituting in (1.9a), \(P_{n+2},P_{n+1}\) and \(P_{n}\) by their expressions provided in (1.15a) we derive a relation in terms of the polynomials \(Q_{k}\) for \(k=n+2,\ldots,n-2\). Now, in this new relation we replace \(xQ_{n+1}\), \(xQ_{n}\) and \(xQ_{n-1}\) by their respective expressions derived from the recurrence relation (1.10a) obtaining an expansion depending only on the polynomials \(Q_{n+2},\ldots,Q_{n-2}\). After rearranging the terms in the resulting expansion and making some simplifications, the next system (valid for all \(n\geqslant 1\)) follows by identification \[(n+2)\tilde{\beta}_{n}-n\tilde{\beta}_{n-1}=(n+1)\beta_{n+1}-(n-1 )\beta_{n}-\omega;\] \[2\tilde{\beta}_{0}=\beta_{1}+\beta_{0}-\omega,\] \[(n+3)\tilde{\gamma}_{n+1}-(n+1)\tilde{\gamma}_{n}\!=\!(n\!+\!1) \gamma_{n+2}\!-\!(n\!-\!1)\gamma_{n+1}\!+\!(n\!+\!1)\big{(}\beta_{n+1}\!-\! 
\tilde{\beta}_{n}\big{)}\big{(}\beta_{n+1}\!-\!\tilde{\beta}_{n}-\omega\big{)};\] \[3\tilde{\gamma}_{1}=\gamma_{2}+\gamma_{1}+\big{(}\beta_{1}- \tilde{\beta}_{0}\big{)}\big{(}\beta_{1}-\tilde{\beta}_{0}-\omega\big{)},\] \[(n+1)\tilde{\gamma}_{n}\big{(}2\beta_{n+1}-\tilde{\beta}_{n}- \tilde{\beta}_{n-1}-\omega\big{)}-n\gamma_{n+1}\big{(}\beta_{n+1}+\beta_{n}-2 \tilde{\beta}_{n-1}-\omega\big{)}=0,\] \[(n+2)\tilde{\gamma}_{n}\tilde{\gamma}_{n+1}-2(n+1)\tilde{\gamma} _{n}\gamma_{n+2}+n\gamma_{n+1}\gamma_{n+2}=0.\] We should mention here the important role played by the Hahn property (Definition 1.1) to establish such a system, since we have used only the fact that these polynomials as well as their \(D_{\omega}\)-derivatives are orthogonal w.r.t. regular forms. In other words, each sequence satisfies a second order recurrence relation, as it is shown in Sec. 1. When \(\omega=0\), we recover the system initiated and solved by Douak and Maroni [7] whose the solutions provide the classical OPS of Hermite, Laguerre, Jacobi and Bessel, after assigning particular values to the free parameters. We will encounter these families again in this paper. To solve the above system, let us introduce the auxiliary coefficients \(\delta_{n}\) and \(\theta_{n}\) by writing \[\tilde{\beta}_{n} =\beta_{n+1}+\delta_{n}\,,\ n\geqslant 0, \tag{2.1}\] \[\tilde{\gamma}_{n} =\frac{n}{n+1}\gamma_{n+1}\theta_{n}\,,\ n\geqslant 1,\quad \big{(}\theta_{n}\neq 0\big{)}. \tag{2.2}\] With these considerations, the two qualities (1.16) take the form \[\tilde{\alpha}_{n}^{1}=-(n+1)(\delta_{n}+\omega),\ \ n\geqslant 0\,;\ \ \tilde{\alpha}_{n}^{0}=(n+1)\gamma_{n+2}(1-\theta_{n+1}),\ \ n \geqslant 0. \tag{2.3}\] ### The coefficients of the recurrence relations Our main objective here is to initially compute the auxiliary coefficients \(\delta_{n}\) and \(\theta_{n}\), and then give the explicit expressions of the coefficients \(\beta_{n}\) and \(\gamma_{n}\). This in turn allows to determine the coefficients \(\tilde{\beta}_{n}\) and \(\tilde{\gamma}_{n}\) and write significantly better each of the coefficients \(\alpha_{n}^{i}\) and \(\tilde{\alpha}_{n}^{j}\). Under the formulas (2.1)-(2.2), it is easy to see that the above system can be transformed into \[\beta_{n+1}-\beta_{n}=n\delta_{n-1}-(n+2)\delta_{n}-\omega,\ n \geqslant 0,\quad(\delta_{-1}=0), \tag{2.4}\] \[\big{[}(n+3)(\theta_{n+1}-1)+1\big{]}\frac{\gamma_{n+2}}{n+2}- \big{[}n(\theta_{n}-1)+1\big{]}\frac{\gamma_{n+1}}{n+1}=\delta_{n}(\delta_{n}+ \omega),\ n\geqslant 1,\] (2.5) \[\big{(}3\theta_{1}-2)\gamma_{2}-2\gamma_{1}=2\delta_{0}(\delta_{ 0}+\omega),\] (2.6) \[\big{[}(n+3)(\theta_{n}-1)+1\big{]}\delta_{n}-\big{[}(n-1)(\theta _{n}-1)+1\big{]}\delta_{n-1}+2(\theta_{n}-1)\omega=0,\ n\geqslant 1,\] (2.7) \[(\theta_{n+1}-2)\,\theta_{n}+1=0,\ n\geqslant 1, \tag{2.8}\] To solve this system, we begin with the Riccati equation (2.8) whose solutions are \[\mathbf{A}.\quad\theta_{n}=1,\ \ n\geqslant 1,\] \[\mathbf{B}.\quad\theta_{n}=\frac{n+\theta+1}{n+\theta},\ \ n\geqslant 1,\quad\theta\neq-1,-2,\dots.\] Hence, the first three equations must be examined in the light of these solutions. **Case A.** For \(\theta_{n}=1\), the above system reduces to \[\beta_{n+1}-\beta_{n}=n\delta_{n-1}-(n+2)\delta_{n}-\omega,\ n \geqslant 0, \tag{2.9}\] \[\frac{\gamma_{n+2}}{n+2}-\frac{\gamma_{n+1}}{n+1}=\delta_{n}( \delta_{n}+\omega),\ n\geqslant 0,\] (2.10) \[\delta_{n+1}-\delta_{n}=0,\ n\geqslant 0. 
\tag{2.11}\] Equation (2.11) clearly shows that \(\delta_{n}=\delta_{0},\ n\geqslant 0\), giving \(\delta_{n}(\delta_{n}+\omega)=\delta_{0}(\delta_{0}+\omega),\ n\geqslant 0\). Thus it is quite natural to single out the two statements \(\delta_{0}(\delta_{0}+\omega)=0\) and \(\delta_{0}(\delta_{0}+\omega)\neq 0\). But right now we go back to the two first equations from which we readily deduce that \[\beta_{n}=\beta_{0}-(2\delta_{0}+\omega)n,\ n\geqslant 0, \tag{2.12}\] \[\gamma_{n+1}=(n+1)\big{(}\delta_{0}(\delta_{0}+\omega)n+\gamma_{1 }\big{)},\ n\geqslant 0. \tag{2.13}\] If we take \(\omega=0\), we recover the recurrence coefficients of the Hermite or Laguerre polynomials as shown in [7]. This will be made more precise in the subcases \(\mathbf{A_{1}}\) and \(\mathbf{A_{2}}\) below. **Case B.** For \(\theta_{n}=(n+\theta+1)/(n+\theta)\), the system (2.4)-(2.8) becomes \[\beta_{n+1}-\beta_{n}=n\delta_{n-1}-(n+2)\delta_{n}-\omega,\ n \geqslant 0, \tag{2.14}\] \[\frac{(2n+\theta+4)}{(n+\theta+1)}\frac{\gamma_{n+2}}{n+2}-\frac{ (2n+\theta)}{(n+\theta)}\frac{\gamma_{n+1}}{n+1}=\delta_{n}(\delta_{n}+\omega ),\ n\geqslant 0,\] (2.15) \[\big{(}2n+\theta+3\big{)}\delta_{n}-\big{(}2n+\theta-1\big{)} \delta_{n-1}=-2\omega,\ n\geqslant 1, \tag{2.16}\] unless, of course, \(\theta\) happen to be zero for the index \(n=0\). So, we will first discuss the solution of the above system when \(\theta\neq 0\). The case \(\theta=0\) is special, it will be considered separately. When \(\omega=0\), with appropriate choices of the parameters, the only orthogonal polynomials obtained as solutions of this problem are those of Bessel and Jacobi (see [7] for more details). We now return to seeking solutions for the equations (2.14)-(2.16). Observe first that the RHS of Equation (2.15) vanishes if and only if \(\delta_{n}=-\omega\) or \(\delta_{n}=0\). Each of these is possible. It is easy to check that the former statement is dismissed, since it contradicts Equality (2.16). For the latter, if we replace \(\delta_{n}=0\) in (2.16), we immediately see that this leads to \(\omega=0\). Straightforwardly from Equations (2.14) and (2.15), one has \[\beta_{n}=\beta_{0}\ ;\ \gamma_{n+1}=\gamma_{1}\frac{(\theta+2)(n+1)(n+\theta)}{(2 n+\theta+2)(2n+\theta)},\ n\geqslant 0. \tag{2.17}\] If we set \(\theta=2\lambda\) and make a linear transformation with \(\hat{a}^{2}=2(\lambda+1)\gamma_{1}\ ;\ \hat{b}=\beta_{0}\), then \[\hat{\beta}_{n}=0\ ;\ \hat{\gamma}_{n+1}=\frac{(n+1)(n+2\lambda)}{(n+\lambda+1) (n+\lambda)},\ n\geqslant 0. \tag{2.18}\] We thus meet the Gegenbauer polynomials which will reappear again in Subcase \(\mathbf{B_{21}}\). From now on we assume that \(\delta_{n}(\delta_{n}+\omega)\neq 0,\ n\geqslant 0.\) Starting from Equation (2.16), multiply both sides by \(2n+\theta+1\), after summation we get \[\delta_{n}=\frac{\delta_{0}(\theta+3)(\theta+1)-2\omega n(n+\theta+2)}{(2n+ \theta+3)(2n+\theta+1)},\,n\geqslant 0. 
\tag{2.19}\] Use a division to obtain \[\delta_{n} =\frac{2\mu}{\big{(}2n+\theta+3\big{)}\big{(}2n+\theta+1\big{)}}- \frac{1}{2}\omega,\,n\geqslant 0, \tag{2.20a}\] \[\delta_{n} =\mu\big{(}\vartheta_{n}-\vartheta_{n+1}\big{)}-\frac{1}{2}\omega,\,n\geqslant 0, \tag{2.20b}\] where we have written \(\mu:=\frac{1}{4}(2\delta_{0}+\omega)(\theta+3)(\theta+1)\) and \(\vartheta_{n}=(2n+\theta+1)^{-1},\ n\geqslant 0.\) Thanks to the identity (2.20b), we can write \[(2n+\theta+2)\,\delta_{n}(\delta_{n}+\omega)=\mu^{2}\big{(}\vartheta_{n}^{2}- \vartheta_{n+1}^{2}\big{)}-\frac{1}{4}\omega^{2}(2n+\theta+2),\ n\geqslant 0. \tag{2.21}\] The objective of course is to incorporate this new expression into (2.15) to derive the coefficients \(\gamma_{n},\ n\geqslant 1,\) which will in fact be processed in a next step. We first calculate the coefficients \(\beta_{n}\). For this, observe that the RHS of (2.14) may be rewritten using Equation (2.16) in the form \[n\delta_{n-1}-(n+2)\delta_{n}-\omega=\frac{1}{2}(\theta-1)(\delta_{n}-\delta_ {n-1}),\ n\geqslant 1. \tag{2.22}\] It is easily seen that, if \(\theta\) assumes the value \(1\), the equation (2.22) provides \(\delta_{n}=-\frac{1}{2}\omega,\,n\geqslant 0.\) As a straightforward consequence of this last result, one sees immediately that (2.14), (2.15) respectively provides \[\beta_{n} =\beta_{0},\ n\geqslant 0, \tag{2.23}\] \[\gamma_{n+1} =-\frac{1}{4}\frac{(n+1)^{2}\left(\omega^{2}n(n+2)-12\gamma_{1} \right)}{(2n+3)(2n+1)},\ n\geqslant 0. \tag{2.24}\] When \(\omega=0\), under the transformation \(\hat{a}^{2}=3\gamma_{1},\hat{b}=\beta_{0}\), we meet the Legendre polynomials. We now turn to the case \(\theta\neq 1\). Using the identity (2.22), Equation (2.14) gives rise to \[\beta_{n+1}-\beta_{n}=\frac{1}{2}(\theta-1)(\delta_{n}-\delta_{n-1 }),\ n\geqslant 1, \tag{2.25}\] \[\beta_{1}-\beta_{0}=-(2\delta_{0}+\omega), \tag{2.26}\] with \(\delta_{n}\) is given by (2.19). From this, it may be concluded that \[\beta_{n}=\beta_{0}-\frac{(2\delta_{0}+\omega)(\theta+3)n(n+\theta)}{(2n+ \theta+1)(2n+\theta-1)},\ n\geqslant 0. \tag{2.27}\] We can now proceed to compute the coefficients \(\gamma_{n+1},n\geqslant 0\). To do so, multiply both sides of Equation (2.15) by \(2n+\theta+2\) and set \[\Theta_{n+1}=\frac{(2n+\theta+2)\left(2n+\theta\right)}{(n+\theta)}\frac{ \gamma_{n+1}}{n+1},\ n\geqslant 0. \tag{2.28}\] Then, taking into consideration (2.21), we easily check that (2.15) takes the form \[\Theta_{n+2}-\Theta_{n+1}=\mu^{2}\big{(}\vartheta_{n}^{2}-\vartheta_{n+1}^{2} \big{)}-\frac{1}{4}\omega^{2}(2n+\theta+2),\,n\geqslant 0.\] By summation, we deduce that \[\Theta_{n+1}=\Theta_{1}+\mu^{2}\big{(}\vartheta_{0}^{2}-\vartheta_{n}^{2} \big{)}-\frac{1}{4}\omega^{2}n(n+\theta+1),\ n\geqslant 0. \tag{2.29}\] Substituting (2.28) into (2.29) yields \[\gamma_{n+1}=-\frac{(n+1)(n+\theta)\Big{\{}\!\big{[}\frac{1}{4}\omega^{2}n \left(n+\theta+1\right)\!-\!\big{(}\mu^{2}\vartheta_{0}^{2}\!+\!(\theta\!+\!2 )\gamma_{1}\big{)}\big{]}(2n+\theta+1)^{2}\!+\!\mu^{2}\Big{\}}}{(2n+\theta+2)( 2n+\theta+1)^{2}(2n+\theta)}. 
\tag{2.30}\] It is possible to write the expression between braces in the numerator of (2.30) in the form \[\big{(}\omega n(n+\theta+1)+\varrho n+\rho_{1}\big{)}\big{(}\omega n(n+ \theta+1)-\varrho n+\rho_{2}\big{)},\] where the three parameters \(\rho_{1}\), \(\rho_{2}\) and \(\varrho\) are such that \[(\theta+1)\varrho^{2}+(\rho_{2}-\rho_{1})\varrho=0, \tag{2.31}\] \[\varrho^{2}-(\rho_{2}+\rho_{1})\omega=\big{(}(\theta+3)\delta_{0} +(\theta+2)\omega\big{)}\big{(}(\theta+3)\delta_{0}+\omega\big{)}+4(\theta+2 )\gamma_{1},\] (2.32) \[\rho_{2}\rho_{1}=-(\theta+1)^{2}(\theta+2)\gamma_{1}. \tag{2.33}\] The roots of the quadratic equation (2.31) are \(\varrho=0\) and \(\varrho=(\rho_{1}-\rho_{2})/(\theta+1)\) for \(\rho_{2}\neq\rho_{1}\), with the root \(\varrho=0\) being double, if \(\rho_{2}=\rho_{1}\). The last equation clearly shows that \(\rho_{2}\rho_{1}\neq 0\). All these parameters will be well specified when dealing with the canonical families. But, in any way, we have to consider the following two cases. **1.** For \(\varrho=0\), we obtain \[\gamma_{n+1}=-\frac{(n+1)(n+\theta)\big{(}\omega n(n+\theta+1)+\rho_{1}\big{)} \big{(}\omega n(n+\theta+1)+\rho_{2}\big{)}}{(2n+\theta+2)(2n+\theta+1)^{2}(2 n+\theta)},\ n\geqslant 0. \tag{2.34}\] In the particular case \(\rho_{2}=\rho_{1}:=\rho\), (2.34) simplifies to \[\gamma_{n+1}=-\frac{(n+1)(n+\theta)\big{(}\omega n(n+\theta+1)+\rho \big{)}^{2}}{(2n+\theta+2)(2n+\theta+1)^{2}(2n+\theta)},\ n\geqslant 0. \tag{2.35}\] **2.** For the general case \(\varrho\neq 0\), we have \[\gamma_{n+1}= -\frac{(n+1)(n+\theta)\big{(}\omega n(n+\theta+1)+\varrho n+ \rho_{1}\big{)}\big{(}\omega n(n+\theta+1)-\varrho n+\rho_{2}\big{)}}{(2n+ \theta+2)(2n+\theta+1)^{2}(2n+\theta)},\,n\geqslant 0. \tag{2.36}\] We now turn to the special case \(\theta=0\). Substituting this in (2.4)-(2.7) yields \[\beta_{n+1}-\beta_{n}=n\delta_{n-1}-(n+2)\delta_{n}-\omega,\ n \geqslant 0,\] \[\gamma_{n+2}-\gamma_{n+1}=\frac{1}{2}(n+1)\delta_{n}(\delta_{n} +\omega),\ n\geqslant 1,\] \[\gamma_{2}-\frac{1}{2}\gamma_{1}=\frac{1}{2}\delta_{0}(\delta_{0 }+\omega),\] \[(2n+3)\delta_{n}-(2n-1)\delta_{n-1}=-2\omega,\ n\geqslant 1.\] The same reasoning applies to this case gives \[\delta_{n}=\frac{3\delta_{0}-2\omega n(n+2)}{(2n+3)(2n+1)},\ n \geqslant 0.\] \[\beta_{n}=\beta_{0}-\frac{3(2\delta_{0}+\omega)n^{2}}{(2n+1)(2n-1 )},\ n\geqslant 0.\] \[\gamma_{n+1}=-\frac{\big{(}\omega n(n+1)+\tau n+\tau_{1}\big{)} \big{(}\omega n(n+1)-\tau n+\tau_{2}\big{)}}{4(2n+1)^{2}},\,n\geqslant 1,\] where \(\tau_{1}\), \(\tau_{2}\) and \(\tau\) are such that \[\tau^{2}+(\tau_{2}-\tau_{1})\tau=0,\] \[\tau^{2}-(\tau_{2}+\tau_{1})\omega=(3\delta_{0}+2w)(3\delta_{0}+ \omega)+8\gamma_{1},\] \[\tau_{2}\tau_{1}=-2\gamma_{1}.\] When \(\omega=0\), if moreover \(\delta_{0}=0\), which we may assume, it follows that \[\delta_{n}=0,\ n\geqslant 0,\ \beta_{n}=\beta_{0},\ n\geqslant 0,\ \gamma_{n+1}=\frac{1}{2}\gamma_{1},\,n\geqslant 1.\] Thus, choosing \(\beta_{0}=0\) and \(\gamma_{1}=\frac{1}{2}\), we meet the Tchebychev polynomials of the first kind. After having finished solving the first system, we now proceed to the determination of the coefficients \(\tilde{\alpha}_{n}^{j}\) and \(\alpha_{n}^{j}\) in terms of \(\beta_{n}\) and \(\gamma_{n+1}\), \(n\geqslant 0\). This will be done in the next subsection. ### The coefficients of the structure relations **Proposition 2.1**: _Let \(\Phi\) be the polynomial arising in_ Proposition 1.2_. 
We let the degree of \(\Phi\) to be two \(2\) and write \(\Phi(x)=a_{2}x^{2}+a_{1}x+a_{0}\). Then the coefficients implicated in the two structure relations (1.12) and (1.15a)-(1.15b) are interlinked through the following system_ \[\alpha^{2}_{n+2}=a_{2},\ n\geqslant 0, \tag{2.37}\] \[\alpha^{1}_{n+1}+a_{2}\tilde{\alpha}^{1}_{n+1}=a_{2}(\tilde{\beta} _{n+1}+\tilde{\beta}_{n})+a_{1},\ n\geqslant 0,\] (2.38) \[\alpha^{1}_{n+1}\tilde{\alpha}^{1}_{n}+a_{2}\tilde{\alpha}^{0}_{n }+\alpha^{0}_{n}=a_{2}(\tilde{\gamma}_{n+1}+\tilde{\gamma}_{n}+\tilde{\beta}^ {2}_{n})+a_{1}\tilde{\beta}_{n}+a_{0},\ n\geqslant 0,\] (2.39) \[\alpha^{1}_{n+1}\tilde{\alpha}^{0}_{n-1}+\alpha^{0}_{n}\tilde{ \alpha}^{1}_{n-1}=a_{2}\tilde{\gamma}_{n}(\tilde{\beta}_{n}+\tilde{\beta}_{n-1 })+a_{1}\tilde{\gamma}_{n},\ n\geqslant 1,\] (2.40) \[\alpha^{0}_{n+1}\tilde{\alpha}^{0}_{n-1}=a_{2}\tilde{\gamma}_{n+ 1}\tilde{\gamma}_{n},\ n\geqslant 1, \tag{2.41}\] _where we have adopted the convention that \(\tilde{\gamma}_{0}:=0\) so that (2.39) remains valid for \(n=0\)._ _Proof_ As for the preceding system, we give only the main ideas of the proof. First, comparison of coefficients in (1.12) shows that \(\alpha^{2}_{n+2}=a_{2}\). Now, use the recurrence relation (1.10a) twice to write the product \(\Phi(x)Q_{n}\), which is the LHS of (1.12), in terms of the polynomials \(Q_{n+2},\ldots,Q_{n-2}\). Then replace in the RHS of (1.12) \(P_{n+2},P_{n+1}\) and \(P_{n}\) by their expressions provided in (1.15a) to obtain another expansion in terms of the polynomials \(Q_{n+2},\ldots,Q_{n-2}\). After some simplifications, and by identification the equations (2.38)-(2.41) follow. As far as the author knows, the technique used here to find explicit expressions for the coefficients involving in (1.15a) with (1.15b) is new, and the results obtained still unknown. So, the solution of the above system brings an answer to this question. But before doing so, recall that when the form \(u_{0}\) is \(D_{\omega}\)-classical, namely, it is regular and satisfies (1.13), the polynomials \(\Phi\) and \(\Psi\) necessarily satisfy (see [12, p.7] for further details): \[\kappa\Phi(x) =(1\!-\!\theta_{1})x^{2}\,-\big{(}(1\!-\!\theta_{1})(\beta_{1}+ \beta_{0})+\delta_{0}+\omega\big{)}x+\big{(}(1\!-\!\theta_{1})\beta_{1}+\delta _{0}+\omega\big{)}\beta_{0}+\theta_{1}\gamma_{1}, \tag{2.42}\] \[\kappa\Psi(x) =P_{1}(x)=x-\beta_{0}. \tag{2.43}\] Note that the expression in the RHS of (2.42) is slightly transformed in accordance with the relations (2.1) and (2.2). The coefficient \(\kappa\) is to be chosen later so that \(\Phi(x)\) is being monic. Since we are proceeding following the two situations Case **A** and Case **B**, we see that the leading coefficient \(a_{2}\) of \(\Phi(x)\) assumes the value \(0\), when \(\theta_{1}=1\), or is such that \(\kappa a_{2}=-(\theta+1)^{-1}\), when \(\theta_{1}=(\theta+2)/(\theta+1)\). To get \(a_{2}=1\) in the latter case, we choose \(\kappa=-(\theta+1)^{-1}\) which, in turn, allows to write the polynomial \(\Phi(x)\) in one of the four standard forms \[\Phi(x)=1,\ \Phi(x)=x,\ \Phi(x)=x^{2}\ \text{and}\ \Phi(x)=(x+1)(x-c),\ c\in \mathbb{C}\backslash\{-1\}.\] The classification achieved according to the degree of \(\Phi\) as in [12] is of course exhaustive and is equivalent to that based on the values of \(\theta_{n}\). It is this second alternative that we will retain in the next section to go over the diverse families of \(D_{\omega}\)-classical orthogonal polynomials or some relevant cases. 
But before doing so, let us discuss the solutions of the system (2.37)-(2.41) when \(a_{2}\) takes one of the values \(0\) or \(1\), with use of (2.3). **I.** For \(a_{2}=0\), since \(\alpha^{0}_{n}\neq 0\) for each \(n\), Equation (2.41) readily gives \(\tilde{\alpha}^{0}_{n}=0,n\geqslant 0\), so, due to (2.3), we necessarily have \(\theta_{n}=1\) for all \(n\geqslant 1\). It turns out that \(\delta_{n}=\delta_{0},n\geqslant 0\), and so \(\tilde{\alpha}^{1}_{n}=-(n+1)(\delta_{0}+\omega),n\geqslant 0\). In this case, it is easily seen that the coefficients \(\beta_{n}\) and \(\gamma_{n+1}\) are given by (2.12)-(2.13). This actually happens in the case **A** when the polynomial \(\Phi\) takes the form \[\kappa\Phi(x)=-(\delta_{0}+\omega)x+(\delta_{0}+\omega)\beta_{0}+\gamma_{1}. \tag{2.44}\] Using the fact that the polynomial \(\Phi(x)\) is monic, we are brought to consider the following two situations according with the degree of this polynomial. * \(\Phi(x)\) is constant. In this case we have \(\delta_{0}+\omega=0\) which leads to \(a_{2}=a_{1}=0\) and \(a_{0}=1\). It follows that \(\kappa=\gamma_{1}\), \(\Phi(x)=1\) and \(\Psi(x)=\gamma_{1}^{-1}P_{1}(x)\). Therefore \(\tilde{\alpha}_{n}^{1}=0,\ n\geqslant 0\), and the above system readily gives \(\alpha_{n+2}^{2}=\alpha_{n+1}^{1}=0\ ;\ \alpha_{n}^{0}=1,\,n\geqslant 0\). Thus, \(Q_{n}=P_{n}\), \(n\geqslant 0\), and so the two structure relations coincide. * \(\Phi(x)\) is linear. By setting \(\kappa=-(\delta_{0}+\omega)\neq 0\) and \((\delta_{0}+\omega)\beta_{0}+\gamma_{1}=0\), we conclude that \(a_{2}=a_{0}=0\) and \(a_{1}=1\) and so \(\Phi(x)=x\) and \(\Psi(x)=\gamma_{1}^{-1}\beta_{0}P_{1}(x)\). We thus have \[\alpha_{n+2}^{2}=0\ ;\ \alpha_{n+1}^{1}=1\ \text{and}\ \alpha_{n}^{0}=\beta_{0}-\delta_{0}n,\,n \geqslant 0.\] Accordingly, the two structure relations may be written as follows \[xQ_{n}(x) =P_{n+1}(x)+(\beta_{0}-\delta_{0}n)P_{n}(x),\,n\geqslant 0,\] \[P_{n}(x) =Q_{n}(x)-(\delta_{0}+\omega)nQ_{n-1}(x),\,n\geqslant 0,\ (Q_{-1}:=0).\] For \(n=1\) in the first relation, we recover the equality \((\delta_{0}+\omega)\beta_{0}+\gamma_{1}=0\), that is, \(\kappa\beta_{0}=\gamma_{1}\). This interconnection between \(\beta_{0}\) and \(\gamma_{1}\) will prove useful in Subcase \(\mathbf{A_{2}}\). **II.** For \(a_{2}\neq 0\), \(\Phi(x)\) is then quadratic and so \(\kappa=1-\theta_{1}=-(\theta+1)^{-1}\), since we take \(a_{2}\) to be \(1\). Changing \(n\) into \(n+1\) in (2.41) yields \(\alpha_{n+2}^{0}\tilde{\alpha}_{n}^{0}\neq 0,\ n\geqslant 0\), from which we see that the coefficients \(\tilde{\alpha}_{n}^{0}\) are not identically \(0\) for all \(n\), and hence, due to (2.3), we conclude that \(\theta_{n}\neq 1,\ n\geqslant 1\). This in fact shows that the case \(\mathbf{B}\) is the one to be naturally considered here. We thus have \[\tilde{\alpha}_{n}^{1}=-(n+1)(\delta_{n}+\omega)\ \ \text{and}\ \ \tilde{\alpha}_{n}^{0}=-\frac{n+1}{n+\theta+1}\gamma_{n+2},\ n \geqslant 0.\] Additionally, the coefficients \(\beta_{n}\) are given by (2.31), while the \(\gamma_{n}\) are generated either by (2.34) or by (2.36). Moreover, taking into account the position of the zeros of the polynomial \(\Phi\), we have to consider again two subcases. * \(\Phi(x)=x^{2}\). Since \(a_{2}=1\) and \(a_{1}=a_{0}=0\), (2.42) leads to \(\beta_{1}+\beta_{0}=(\theta+1)(\delta_{0}+\omega)\) and \(\beta_{0}^{2}=-(\theta+2)\gamma_{1}\). 
Analogously to the former case, taking into consideration (2.3), the system (2.37)-(2.41) readily provides \[\alpha_{n+2}^{2}=1\ ;\ \alpha_{n+1}^{1}=\beta_{n+1}+\beta_{n}+n(\delta_{n-1}+ \omega)\ \ \text{and}\ \ \alpha_{n}^{0}=-\frac{n+\theta+1}{n+1}\gamma_{n+1},\,n \geqslant 0.\] * \(\Phi(x)=(x+1)(x-c),\ c\neq-1\). We have \(a_{2}=1\), \(a_{1}=1-c\) and \(a_{0}=-c\). Therefore, (2.42) gives rise to \(\beta_{1}+\beta_{0}=(\theta+1)(\delta_{0}+\omega)+c-1\) and \((\beta_{0}+1)(\beta_{0}-c)=-(\theta+2)\gamma_{1}\). In the same manner we can see that \[\alpha_{n+2}^{2}=1\ ;\ \alpha_{n+1}^{1}=\beta_{n+1}+\beta_{n}+n(\delta_{n-1}+ \omega)-c+1\ \text{and}\ \alpha_{n}^{0}=-\frac{n+\theta+1}{n+1}\gamma_{n+1},\,n \geqslant 0.\] ## 3 The canonical families of \(D_{\omega}\)-classical polynomials In order to present an exhaustive classification of those polynomials, we will examine different situations in both cases \(\mathbf{A}\) and \(\mathbf{B}\). Under certain restrictions on the parameters, we rediscover the well-known families of discrete classical orthogonal polynomials or some particular cases. Since we are only interested in regular OPS, the finite sequences are not considered here. We often use the linear transformation (1.18) with (1.20) to provide the desired results. This is also achieved on account of the specific conditions observed in Subsection 2.2. For each situation, we summarise the relevant properties of the corresponding family of polynomials. \(\triangleright\)**Case A.** We first investigate the two main subcases, namely, \(\delta_{0}+\omega=0\) and \(\delta_{0}+\omega\neq 0\). Then, we consider the particular subcase when \(\delta_{0}\) assumes the value \(0\). \(\mathbf{A_{1}:\delta_{0}+\omega=0}\). From (2.12)-(2.13), taking into account (2.1)-(2.2), we immediately obtain \(\hat{\beta}_{n}=\beta_{n}=\beta_{0}+\omega n\ ;\ \tilde{\gamma}_{n+1}=\gamma_{n+1}= \gamma_{1}(n+1),\ n\geqslant 0\). It follows that \(Q_{n}=P_{n}\), \(n\geqslant 0\), and hence the PS \(\{P_{n}\}_{n\geqslant 0}\) belongs to the class of the so-called \(D_{\omega}\)-Appell sequences. \(\mathbf{A_{1a}}\). With the choice \(\hat{a}^{2}=2\gamma_{1}\) and \(\hat{b}=\beta_{0}\), we easily get \(\hat{\beta}_{n}=\frac{\omega}{a}n\ ;\ \hat{\gamma}_{n+1}=\frac{1}{2}(n+1),\ n \geqslant 0\). Now, replacing \(\omega\) by \(\hat{a}\omega\), we obtain \(\hat{\beta}_{n}=\omega n\ ;\ \hat{\gamma}_{n+1}=\frac{1}{2}(n+1),\ n \geqslant 0\). When \(\omega=0\), and so \(\delta_{0}=0\), we meet again the Hermite polynomials. \(\mathbf{A_{1b}}\). The choice \(\hat{a}=\omega\) and \(\hat{b}=\beta_{0}-\omega a\) with \(\gamma_{1}=a\omega^{2}\) gives \(\hat{\beta}_{n}=a+n\ ;\ \hat{\gamma}_{n+1}=a(n+1),\ n\geqslant 0\). We thus encounter the Charlier polynomials [19]. \(\mathbf{A_{2}:\delta_{0}+\omega\neq 0}\). We can assume without loss of generality that \(\delta_{0}(\delta_{0}+\omega)=1\Leftrightarrow\omega=\delta_{0}^{-1}-\delta_ {0}\). Use of (2.12)-(2.13) then shows that \(\ \beta_{n}=\beta_{0}-(\delta_{0}^{-1}+\delta_{0})n\ ;\ \gamma_{n+1}=(n+1) \big{(}n+\gamma_{1}\big{)},\ n\geqslant 0\), where the parameters \(\beta_{0}\) and \(\gamma_{1}\) are related via \(\beta_{0}=-\delta_{0}^{-1}\gamma_{1}\) as in the statement **I-(ii)** above. By setting \(\gamma_{1}:=\alpha+1\), we can write \(\beta_{0}=-\delta_{0}^{-1}(\alpha+1)\). If we take now \(\omega=0\) which yields \(\delta_{0}^{2}=1\), we recover the Laguerre polynomials for \(\delta_{0}=-1\), and the shifted Laguerre polynomials for \(\delta_{0}=1\). 
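This identification can be cross-checked with a computer algebra system. The following sympy sketch (added here for illustration, not part of the original text) specializes the coefficients of Subcase \(\mathbf{A_{2}}\) at \(\delta_{0}=-1\), \(\omega=0\), \(\beta_{0}=\gamma_{1}=\alpha+1\) and compares the resulting monic polynomials with the monic normalization \((-1)^{n}n!\,L_{n}^{(\alpha)}\) of the associated Laguerre polynomials:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')

def monic_from_recurrence(n, beta, gamma):
    """Monic OPS built from P_{m+1} = (x - beta(m)) P_m - gamma(m) P_{m-1}."""
    p_prev, p = sp.Integer(1), x - beta(0)
    for m in range(1, n):
        p_prev, p = p, sp.expand((x - beta(m))*p - gamma(m)*p_prev)
    return p_prev if n == 0 else p

# Subcase A_2 with delta_0 = -1 (so omega = 0) and gamma_1 = alpha + 1:
# beta_n = beta_0 + 2n with beta_0 = alpha + 1, gamma_{n+1} = (n+1)(n+alpha+1).
beta = lambda m: alpha + 1 + 2*m
gamma = lambda m: m*(m + alpha)

for n in range(5):
    P_n = monic_from_recurrence(n, beta, gamma)
    monic_laguerre = sp.expand((-1)**n * sp.factorial(n) * sp.assoc_laguerre(n, alpha, x))
    assert sp.simplify(P_n - monic_laguerre) == 0
print("delta_0 = -1, omega = 0 reproduces the monic Laguerre polynomials")
```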
For \(\delta_{0}=-1\), we have \(\ \beta_{n}=2n+\alpha+1\ ;\ \gamma_{n+1}=(n+1)\big{(}n+\alpha+1\big{)},\ n \geqslant 0\). From now on, we assume that \(\delta_{0}\neq-1\) and set \(\gamma_{1}:=\alpha+1\) again. We wish to examine two interesting situations already investigated in [12] with slight differences in notation. The first one occurs for \(\delta_{0}:=-e^{-\varphi},\varphi\neq 0\), so that \(\omega=-2\sinh\varphi\). It follows that \[\beta_{n}=e^{\varphi}(\alpha+1)+2n\cosh\varphi\ ;\ \gamma_{n+1}=(n+1)(n+\alpha+1), \ n\geqslant 0.\] Afterwards, choose \(\hat{a}=\omega\ ;\ \hat{b}=0\) and put \(c:=e^{2\varphi}\), to get \[\hat{\beta}_{n}=\frac{c}{1-c}(\alpha+1)+\frac{1+c}{1-c}n\ ;\ \hat{\gamma}_{n+1}= \frac{c}{(1-c)^{2}}(n+1)(n+\alpha+1),\ n\geqslant 0.\] When the parameter \(c\in\mathbb{R}\backslash\{0,1\}\), we obtain the Meixner polynomials of the first kind [20]. The Krawtchouk polynomials are a special case of the Meixner polynomials of the first kind. The second situation appears for \(\delta_{0}:=e^{i\phi},\ 0<\phi<\pi\) by taking \(2\lambda=\gamma_{1}:=\alpha+1\). After making a linear transformation via the changes \(\hat{a}=i\omega\ ;\ \hat{b}=-\lambda\omega\), we obtain that \[\hat{\beta}_{n}=-(n+\lambda)\cot\phi\ ;\ \hat{\gamma}_{n+1}=\frac{1}{4}\frac{(n+1) (n+2\lambda)}{\sin^{2}\phi},\ n\geqslant 0.\] From this, we conclude that the resulting polynomials are those of Meixner-Pollaczek [21]. \(\mathbf{A_{3}:}\) For \(\delta_{0}=0\), one sees immediately that \(\beta_{n}=\beta_{0}-\omega n\ ;\ \gamma_{n+1}=\gamma_{1}(n+1),\ n\geqslant 0\). From this, taking into consideration (2.1)-(2.2), we check at once that \(\hat{\beta}_{n}=\beta_{n}-\omega\) and \(\tilde{\gamma}_{n+1}=\gamma_{n+1},\ n\geqslant 0\). In accordance with (1.18), it is easily seen that \(Q_{n}(x)=\tau_{-\omega}P_{n}(x)=P_{n}(x+\omega)\) for all \(n\geqslant 0\). It is worth pointing out the following two subcases for which we can proceed analogously to the generation of \(\mathbf{A_{1a}}\) and \(\mathbf{A_{1b}}\). **A3a**. Similarly to \(\mathbf{A_{1a}}\), the choice \(\hat{a}^{2}=2\gamma_{1}\;;\;\hat{b}=\beta_{0}\) yields \(\hat{\beta}_{n}=-\frac{\omega}{a}n\;;\;\hat{\gamma}_{n+1}=\frac{1}{2}(n+1),\;n\geqslant 0\). Replacing \(\omega\) by \(-\hat{a}\omega\), we get \(\hat{\beta}_{n}=\omega n\;;\;\hat{\gamma}_{n+1}=\frac{1}{2}(n+1),\;n\geqslant 0\), which for \(\omega=0\) gives rise to the Hermite polynomials. **A3b**. If we take \(\hat{a}=-\omega\;;\;\hat{b}=\beta_{0}+\omega a\) with \(\gamma_{1}=a\omega^{2}\), we get the Charlier polynomials again. Note that, with \(\hat{a}=i\omega\;;\;\hat{b}=\beta_{0}+i\omega b\) and \(\gamma_{1}=-a\omega^{2}\), a specific case relative to this situation have been mentioned in [12], where \(\hat{\beta}_{n}=b+in\;;\;\hat{\gamma}_{n+1}=-a(n+1),\;n\geqslant 0\). Note that the resulting polynomials encountered here are those obtained in \(\mathbf{A_{1a}}\) and \(\mathbf{A_{1b}}\). \(\triangleright\)**Case B.** Two main situations will be also investigated with some of their special subcases. The remarks referred to in the statements **II-(iii)** and **II-(iv)** above must of course be taken into account to choose more precise certain parameters involved in the recurrence coefficients. In what follows, unless otherwise stated, we assume that \(\theta\neq 1\). \(\mathbf{B_{1}}\)**: \(\theta=2\alpha-1\). 
From (2.27) and (2.30), if we choose \(\beta_{0}=\frac{1}{2}\alpha\omega-\mu_{1}\), we get \[\beta_{n} =\frac{1}{2}\alpha\omega-\frac{\alpha(\alpha-1)\mu_{1}}{(n+ \alpha)(n+\alpha-1)},\;n\geqslant 0, \tag{3.1}\] \[\gamma_{n+1} =-\frac{(n+1)(n+2\alpha-1)\big{(}\omega(n+\alpha)^{2}-2\alpha\mu _{1}\big{)}^{2}}{(2n+2\alpha+1)(2n+2\alpha)^{2}(2n+2\alpha-1)},\;n\geqslant 0, \tag{3.2}\] where we have set \(\mu_{1}:=-\frac{1}{2}(\alpha+1)(2\delta_{0}+\omega)\). Note that (3.1) is valid for \(n=0\), except that it becomes worthless if concurrently \(\alpha=1\). For \(\alpha=\frac{1}{2}\), and so \(\theta=0\), the coefficients \(\beta_{n}\) and \(\gamma_{n+1}\) coincide with those previously established in Subsection 2.1. \(\mathbf{B_{11}}.\) For \(\mu_{1}\neq 0\), the choice \(\hat{a}=\alpha\mu_{1}=1\;;\;\hat{b}=0\), leads to \[\hat{\beta}_{n} =\frac{1}{2}\alpha\omega+\frac{1-\alpha}{(n+\alpha)(n+\alpha-1)},\;n\geqslant 0, \tag{3.3}\] \[\hat{\gamma}_{n+1} =-\frac{(n+1)(n+2\alpha-1)\big{(}\frac{1}{2}\omega(n+\alpha)^{2} -1\big{)}^{2}}{(2n+2\alpha+1)(n+\alpha)^{2}(2n+2\alpha-1)},\;n\geqslant 0. \tag{3.4}\] When \(\omega=0\), we clearly encounter the Bessel polynomials. \(\mathbf{B_{12}}.\) For \(\omega\neq 0\) with the choice \(\hat{a}=i\omega\;;\;\hat{b}=\frac{1}{2}\alpha\omega\), another specific case was discovered in [12]. If on top of that we take \(\mu_{1}=0\) and \(\alpha>0\), the corresponding orthogonal polynomials are symmetric and their associated form is positive definite. \(\mathbf{B_{2}}:\theta=\nu-1\). Let \(c\in\mathbb{C}-\{-1\}\), thus from and, we obtain \[\beta_{n} =\frac{1}{4}\nu\omega-\frac{1}{2}(1-c)+\frac{\nu(\nu-2)\mu_{2}}{(2 n+\nu)(2n+\nu-2)},\;n\geqslant 0, \tag{3.5}\] \[\gamma_{n+1} =-\frac{(n+1)(n+\nu-1)}{(2n+2\nu+1)(2n+2\nu)^{2}(2n+2\nu-1)}\times\] \[\big{[}\omega n(n+\nu)+(1+c)n+\nu(\beta_{0}+1)\big{]}\big{[} \omega n(n+\nu)-(1+c)n+\nu(\beta_{0}-c)\big{]},\;n\geqslant 0, \tag{3.6}\] where we have put \(\mu_{2}:=\frac{1}{4}(\nu+2)(2\delta_{0}+\omega)\) and chosen \(\beta_{0}:=\frac{1}{4}\nu\omega-\frac{1}{2}(1-c)+\mu_{2}\). \(\mathbf{B_{21}}.\) For \(\omega=0\) and \(c=1\), we may write \(\nu(1+\beta_{0})=2(\alpha+1)\) and \(\nu(1-\beta_{0})=2(\beta+1)\), so that \[\nu=\alpha+\beta+2\;\text{ and }\;\beta_{0}=\frac{\alpha-\beta}{\alpha+\beta+2}.\] Hence, we rediscover the Jacobi polynomials whose coefficients are \[\beta_{n} =\frac{\alpha^{2}-\beta^{2}}{(2n+\alpha+\beta+2)(2n+\alpha+\beta)},\ n \geqslant 0,\] \[\gamma_{n+1} =4\frac{(n+1)(n+\alpha+\beta+1)(n+\alpha+1)(n+\beta+1)}{(2n+ \alpha+\beta+3)(2n+\alpha+\beta+2)^{2}(2n+\alpha+\beta+1)},\ n\geqslant 0.\] For \(\alpha=\beta=\lambda-\frac{1}{2}\), we meet again the Gegenbauer polynomials. \(\mathbf{B_{22}}.\) When \(\omega\neq 0\), we first write the two expressions between square brackets in (3.6) as follows \[\omega n(n+\nu)+(1+c)n+\nu(\beta_{0}+1) =\big{[}\omega(n+\beta+1)+(1+c)\big{]}(n+\alpha+1),\] \[\omega n(n+\nu)-(1+c)n+\nu(\beta_{0}-c) =\big{[}\omega(n+\alpha+1)-(1+c)\big{]}(n+\beta+1).\] If we set \(\eta:=\alpha-(1+c)/\omega\) and apply a linear transformation with \(\hat{a}=i\omega\ ;\ \hat{b}=\frac{1}{4}\nu\omega-\frac{1}{2}(1-c)\), we may rewrite (3.5)-(3.6) as \[\hat{\beta}_{n} =\frac{1}{2}i(\alpha^{2}-\beta^{2})\frac{\eta-\frac{1}{2}( \alpha+\beta)}{(2n+\alpha+\beta+2)(2n+\alpha+\beta)}, \tag{3.7}\] \[\hat{\gamma}_{n+1} =\frac{(n+1)(n+\alpha+\beta+1)(n+\alpha+\beta+1-\eta)(n+\eta+1)(n +\alpha+1)(n+\beta+1)}{(2n+\alpha+\beta+3)(2n+\alpha+\beta+2)^{2}(2n+\alpha+ \beta+1)}. 
\tag{3.8}\] Observe that if \(\alpha^{2}-\beta^{2}=0\) or \(\eta=\frac{1}{2}(\alpha+\beta)\), the corresponding polynomials are symmetric. In consequence, it is worth while to discuss the following subcases mentioned in [12]. \(\mathbf{B_{22a}}\). For \(\alpha-\beta=0\), we obtain \[\hat{\gamma}_{n+1}=\frac{1}{4}\frac{(n+1)(n+2\alpha+1)(n+2\alpha+1-\eta)(n+ \eta+1)}{(2n+2\alpha+3)(2n+2\alpha+1)},\ n\geqslant 0. \tag{3.9}\] When \(-1<\eta<2\alpha+1\) or when \(\alpha\in\mathbb{R}\) and \(\eta+\bar{\eta}=2\alpha,\ \alpha+1>0\), the obtained polynomials are orthogonal with respect to a positive definite form. - For \(\alpha=0\), the resulting polynomials are related to Pasternak polynomials [22] having the coefficients \[\hat{\gamma}_{n+1}=\frac{1}{4}\frac{(n+1)^{2}(n+\eta+1)(n-\eta+1)}{(2n+3)(2n +1)},\ n\geqslant 0. \tag{3.10}\] - For \(\eta=0\), the resulting polynomials are the Touchard ones [23] which are themselves particular cases of the continuous Hahn polynomials [24]. In this case we have \[\hat{\gamma}_{n+1}=\frac{1}{4}\frac{(n+1)^{2}(n+\alpha+1)(n-\alpha+1)}{(2n+3)( 2n+1)},\ n\geqslant 0. \tag{3.11}\] \(\mathbf{B_{22b}}\). For \(\alpha+\beta=0\), we get \[\hat{\gamma}_{n+1}=\frac{1}{4}\frac{(n+\alpha+1)(n-\alpha+1)(n+\eta+1)(n-\eta +1)}{(2n+3)(2n+1)},\ n\geqslant 0. \tag{3.12}\] Again, observe that the associated form to these orthogonal polynomials is positive definite when \(-1<\eta<2\alpha+1\) or when \(\alpha\in\mathbb{R}\) and \(\eta+\bar{\eta}=2\alpha,\ \alpha+1>0\). - When \(\alpha=0\), this leads to the Pasternak polynomials. - When \(\eta=0\), we meet again the Touchard polynomials. \(\mathbf{B_{22c}}\). For \(\eta=\frac{1}{2}(\alpha+\beta)\), we get \[\hat{\gamma}_{n+1}=\frac{1}{4}\frac{(n+1)(n+\alpha+\beta+1)(n+ \alpha+1)(n+\beta+1)}{(2n+\alpha+\beta+3)(2n+\alpha+\beta+1)},\ n\geqslant 0. \tag{3.13}\] For \(\alpha+1>0\) and \(\beta+1>0\), we have \(\hat{\gamma}_{n+1}>0\), for all \(n\geqslant 0\). The form associated to these polynomials is then positive definite. On the other hand, if both \(\alpha\) and \(\beta\) are assumed to be real numbers, and so \(\eta+\bar{\eta}=\alpha+\beta\), it is possible to find two numbers \(a\) and \(b\), such that \[a+\bar{a}=\alpha+1,\,b+\bar{b}=\beta+1,\,a+\bar{b}=\eta+1\text{ and }\bar{a}+b= \alpha+\beta+1-\eta.\] In addition, under the conditions \(\Re a>0\) and \(\Re b>0\), it was shown in [12, p.18] that the resulting polynomials are orthogonal w.r.t. a positive definite form and coincide with the continuous Hahn polynomials. For more details we refer the reader to the aforementioned paper. ## 4 The recurrence coefficients of the higher order derivatives Let \(k\) be a positive integer and let \(\{P_{n}\}_{n\geqslant 0}\) be a \(D_{\omega}\)-classical OPS. The sequence of the normalized higher order \(D_{\omega}\)-derivatives, denoted as \(\{P_{n}^{[k]}\}_{n\geqslant 0}\), is recursively defined by \[P_{n}^{[k]}(x):=\frac{1}{n+1}D_{\omega}P_{n+1}^{[k-1]}(x),\ \ k\geqslant 1,\] (4.1a) or, equivalently, \[P_{n}^{[k]}(x):=\frac{1}{(n+1)_{k}}D_{\omega}^{k}P_{n+k}(x),\ \ k\geqslant 1, \tag{4.1b}\] where \((\mu)_{n}=\mu(\mu+1)\cdots(\mu+n-1)\), \((\mu)_{0}=1,\ \mu\in\mathbb{C},\,n\in\mathbb{N}\), is the Pochhammer symbol. Application of Definition 1.1, with the special notations \(P_{n}^{[1]}:=Q_{n}\) and \(P_{n}^{[0]}:=P_{n}\), enables one to write \(\beta_{n}^{[1]}:=\bar{\beta}_{n}\), \(\gamma_{n}^{[1]}:=\tilde{\gamma}_{n}\) and \(\beta_{n}^{[0]}:=\beta_{n}\), \(\gamma_{n}^{[0]}:=\gamma_{n}\), respectively. 
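For readers who wish to experiment with these definitions, the following sympy sketch (an illustration added here, not taken from the paper) computes the normalized \(D_{\omega}\)-derivatives of (4.1a)-(4.1b) for the Charlier-type sequence of Subcase \(\mathbf{A_{1b}}\). It assumes the standard divided-difference definition \(D_{\omega}f(x)=\big(f(x+\omega)-f(x)\big)/\omega\) and the step \(\omega=1\); in this \(D_{\omega}\)-Appell case one indeed finds \(P_{n}^{[1]}=P_{n}\):

```python
import sympy as sp

x, a = sp.symbols('x a')

def d_omega(p, omega):
    """Divided-difference operator D_w f(x) = (f(x + w) - f(x)) / w
    (assumed definition; it reduces to d/dx as w -> 0)."""
    return sp.expand((p.subs(x, x + omega) - p) / omega)

def monic_charlier(n):
    """Monic Charlier-type polynomials from the A_1b recurrence data:
    beta_m = a + m, gamma_m = a*m."""
    p_prev, p = sp.Integer(1), x - a
    for m in range(1, n):
        p_prev, p = p, sp.expand((x - (a + m))*p - a*m*p_prev)
    return p_prev if n == 0 else p

# (4.1a)/(4.1b) with w = 1 and k = 1: the normalized D_w-derivative of P_{n+1}
# coincides with P_n, i.e. the sequence is D_w-Appell (Q_n = P_n, case A_1).
for n in range(5):
    lhs = sp.expand(d_omega(monic_charlier(n + 1), 1) / (n + 1))
    assert sp.simplify(lhs - monic_charlier(n)) == 0
print("Charlier: P_n^[1] = P_n for n = 0..4")
```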
The following corollary, which is in fact an immediate consequence of Proposition 1.2, plays an important role in establishing our results. **Corollary 4.1** ([12]): _If the OPS \(\{P_{n}\}_{n\geqslant 0}\) is \(D_{\omega}\)-classical, then the sequence \(\{P_{n}^{[k]}\}_{n\geqslant 0}\) is also \(D_{\omega}\)-classical OPS for any \(k\geqslant 1\)._ By an application of this corollary, if we denote by \(\big{(}\beta_{n}^{[k]},\gamma_{n+1}^{[k]}\big{)}_{n\in\mathbb{N}}\) the recurrence coefficients corresponding to the OPS \(\{P_{n}^{[k]}\}_{n\geqslant 0}\), with \(k\geqslant 1\), then \[P_{n+2}^{[k]}(x)=(x-\beta_{n+1}^{[k]})P_{n+1}^{[k]}(x)-\gamma_{n +1}^{[k]}P_{n}^{[k]}(x),\ n\geqslant 0, \tag{4.2a}\] \[P_{1}^{[k]}(x)=x-\beta_{0}^{[k]},\ P_{0}^{[k]}(x)=1. \tag{4.2b}\] Our objective here is to express the coefficients \(\beta_{n}^{[k]}\) and \(\gamma_{n+1}^{[k]}\) in terms of the corresponding coefficients of the OPS \(\{P_{n}\}_{n\geqslant 0}\), namely, \(\beta_{n}\) and \(\gamma_{n+1}\) obtained either in Case **A** or in Case **B**. This will be stated in the next proposition. **Proposition 4.2**: _Let \(\{P_{n}\}_{n\geq 0}\) be a \(D_{\omega}\)-classical OPS. Then, for every \(k\in\mathbb{N}\), we have_ \[\beta_{n}^{[k]} =\beta_{n+k}+k\delta_{n+k-1},\ \ n\geqslant 0, \tag{4.3}\] \[\gamma_{n}^{[k]} =\frac{n}{n+k}\gamma_{n+k}\big{(}k(\theta_{n+k-1}-1)+1\big{)},\ \ n \geqslant 1, \tag{4.4}\] _where \(\delta_{n}\) and \(\theta_{n}\) are solutions for the equations (2.7) and (2.8), respectively._ _Proof_ From Corollary 4.1 it follows that each sequence \(\{P_{n}^{[k]}\}_{n\geqslant 0}\), \(k\geqslant 1\), is also \(D_{\omega}\)-classical. Accordingly, both of the OPS \(\{P_{n}^{[k]}\}_{n\geqslant 0}\) and \(\{P_{n}^{[k+1]}\}_{n\geqslant 0}\) are characterized by the fact that they satisfy a second structure relation of type (1.15a)-(1.15b). For \(\{P_{n}^{[k]}\}_{n\geqslant 0}\), this is given by \[P_{n+2}^{[k-1]} =P_{n+2}^{[k]}+\alpha_{n+1}^{k,1}P_{n+1}^{[k]}+\alpha_{n}^{k,0}P_ {n}^{[k]},\ n\geqslant 0, \tag{4.5a}\] \[P_{1}^{[k-1]} =P_{1}^{[k]}+\alpha_{0}^{k,1},\ \ P_{0}^{[k-1]}=P_{0}^{[k]}=1, \tag{4.5b}\] where \[\alpha_{n}^{k,1}=(n+1)\left(\beta_{n+1}^{[k-1]}-\beta_{n}^{[k]}- \omega\right)\ \ \text{and}\ \ \alpha_{n}^{k,0}=(n+1)\gamma_{n+2}^{[k-1]}-(n+2)\gamma_{n+1}^{[k]}. \tag{4.6}\] Likewise, for \(\{P_{n}^{[k+1]}\}_{n\geqslant 0}\), an equivalent relation may be written as \[P_{n+2}^{[k]}=P_{n+2}^{[k+1]}+\alpha_{n+1}^{k+1,1}P_{n+1}^{[k+1]} +\alpha_{n}^{k+1,0}P_{n}^{[k+1]},\ n\geqslant 0, \tag{4.7a}\] \[P_{1}^{[k]}=P_{1}^{[k+1]}+\alpha_{0}^{k+1,1},\ \ P_{0}^{[k]}=P_{0}^{[k+1]}=1, \tag{4.7b}\] with \[\alpha_{n}^{k+1,1}=(n+1)\left(\beta_{n+1}^{[k]}-\beta_{n}^{[k+1]}-\omega \right)\ \ \text{and}\ \ \alpha_{n}^{k+1,0}=(n+1)\gamma_{n+2}^{[k]}-(n+2)\gamma_{n+1}^{[k+1]}. \tag{4.8}\] To prove the equalities (4.3) and (4.4), we can proceed by induction on \(k\). For this, let \(\mathrm{P}(k)\) be the proposition that these two equalities hold. Observe first that the assertion \(\mathrm{P}(1)\) is trivial. Let us check that \(\mathrm{P}(2)\) is true. Setting \(k=1\) in (4.5a)-(4.5b) we get \[P_{n+2}=P_{n+2}^{[1]}+\alpha_{n+1}^{1,1}P_{n+1}^{[1]}+\alpha_{n} ^{1,0}P_{n}^{[1]},\ n\geqslant 0, \tag{4.9a}\] \[P_{1}=P_{1}^{[1]}+\alpha_{0}^{1,1},\ \ P_{0}=P_{0}^{[1]}=1, \tag{4.9b}\] where \(\alpha_{n}^{1,1}:=\tilde{\alpha}_{n}^{1}\) and \(\alpha_{n}^{1,0}:=\tilde{\alpha}_{n}^{0}\), since \(P_{n}^{[1]}:=Q_{n}\) and \(P_{n}^{[0]}:=P_{n}\). 
Roughly speaking, in this special case, the formulas (4.9a)-(4.9b) and (1.15a)-(1.15b) coincide. Similarly, if we take \(k=2\) in (4.5a)-(4.5b), we have \[P_{n+2}^{[1]}=P_{n+2}^{[2]}+\alpha_{n+1}^{2,1}P_{n+1}^{[2]}+ \alpha_{n}^{2,0}P_{n}^{[2]},\ n\geqslant 0, \tag{4.10a}\] \[P_{1}^{[1]}=P_{1}^{[2]}+\alpha_{0}^{2,1},\ \ P_{0}^{[1]}=P_{0}^{[2]}=1. \tag{4.10b}\] Replace \(n\) by \(n+1\) in (4.9a) and then apply the operator \(D_{\omega}\) yields \[(n+3)P_{n+2}^{[1]}=(n+3)P_{n+2}^{[2]}+(n+2)\alpha_{n+2}^{1,1}P_{n+1}^{[2]}+(n+ 1)\alpha_{n+1}^{1,0}P_{n}^{[2]},\ n\geqslant 0. \tag{4.11}\] Multiply both sides of (4.10a) by \((n+3)\) and compare this with (4.11) readily gives \[(n+3)\alpha_{n+1}^{2,1} =(n+2)\alpha_{n+2}^{1,1}, \tag{4.12a}\] \[(n+3)\alpha_{n}^{2,0} =(n+1)\alpha_{n+1}^{1,0}. \tag{4.12b}\] By (4.6), for \(k=2\) and \(k=1\), it is easy to check that (4.12a) and (4.12b), respectively, lead to \[(n+3)(n+2)\big{[}\beta_{n+2}^{[1]}-\beta_{n+1}^{[2]}-\omega\big{]} =(n+3)(n+2)\big{[}\beta_{n+3}-\beta_{n+2}^{[1]}-\omega\big{]},\] \[(n+3)\big{[}(n+1)\gamma_{n+2}^{[1]}-(n+2)\gamma_{n+1}^{[2]}\big{]} =(n+1)\big{[}(n+2)\gamma_{n+3}-(n+3)\gamma_{n+2}^{[1]}\big{]}.\] Thanks to (2.1)-(2.2), we respectively deduce that \[\beta_{n}^{[2]} =\beta_{n+2}+2\delta_{n+1},\ \ n\geqslant 0,\] \[\gamma_{n}^{[2]} =\frac{n}{n+2}\gamma_{n+2}\big{(}2\theta_{n+1}-1\big{)},\ \ n\geqslant 1,\] which is precisely the desired conclusion, that is, \(\mathrm{P}(2)\) is true. Now, we must show that the conditional statement \(\mathrm{P}(k)\to\mathrm{P}(k+1)\) is true for all positive integers \(k\). We can proceed analogously to the proof of \(\mathrm{P}(2)\). Starting from (4.5a), replacing \(n\) by \(n+1\) and then apply the operator \(D_{\omega}\) we find \[(n+3)P_{n+2}^{[k]}=(n+3)P_{n+2}^{[k+1]}+(n+2)\alpha_{n+2}^{k,1}P_{n+1}^{[k+1]} +(n+1)\alpha_{n+1}^{k,0}P_{n}^{[k+1]},\ n\geqslant 0. \tag{4.13}\] Multiply both sides of (4.7a) by \((n+3)\) and compare this with (4.13) readily gives \[(n+3)\alpha_{n+1}^{k+1,1} =(n+2)\alpha_{n+2}^{k,1}, \tag{4.14a}\] \[(n+3)\alpha_{n}^{k+1,0} =(n+1)\alpha_{n+1}^{k,0}. \tag{4.14b}\] On account of (4.6)-(4.8), we have \[(n+3)(n+2)\big{[}\beta_{n+2}^{[k]}-\beta_{n+1}^{[k+1]}-\omega\big{]} =(n+3)(n+2)\big{[}\beta_{n+3}^{[k-1]}-\beta_{n+2}^{[k]}-\omega \big{]},\] \[(n+3)\big{[}(n+1)\gamma_{n+2}^{[k]}-(n+2)\gamma_{n+1}^{[k+1]}\big{]} =(n+1)\big{[}(n+2)\gamma_{n+3}^{[k-1]}-(n+3)\gamma_{n+2}^{[k]}\big{]}.\] From this, we deduce that \[\beta_{n+1}^{[k+1]} =2\beta_{n+2}^{[k]}-\beta_{n+3}^{[k-1]}, \tag{4.15}\] \[\gamma_{n+1}^{[k+1]} =2\frac{n+1}{n+2}\gamma_{n+2}^{[k]}-\frac{n+1}{n+3}\gamma_{n+3}^ {[k-1]}. \tag{4.16}\] On the other hand, according to the induction hypothesis we may write \[\beta_{n+2}^{[k]} =\beta_{n+k+2}+k\delta_{n+k+1}, \beta_{n+3}^{[k-1]}=\beta_{n+k+2}+(k-1)\delta_{n+k+1},\] \[\gamma_{n+2}^{[k]}\!=\!\frac{n\!+\!2}{n\!+\!k\!+\!2}\gamma_{n+k+2} \big{(}k(\theta_{n+k+1}\!-\!1)\!+\!1\!\big{)}, \gamma_{n+3}^{[k-1]}\!=\!\frac{n\!+\!3}{n\!+\!k\!+\!2}\gamma_{n+k+2} \big{(}\!(k\!-\!1)(\theta_{n+k+1}\!-\!1)\!+\!1\!\big{)}.\] Substituting these into (4.15)-(4.16), and then changing \(n\) into \(n-1\), it follows that \[\beta_{n}^{[k+1]} =\beta_{n+k+1}+(k+1)\delta_{n+k},\ \ n\geqslant 0,\] \[\gamma_{n}^{[k+1]} =\frac{n}{n+k+1}\gamma_{n+k+1}\big{(}(k+1)(\theta_{n+k}-1)+1\big{)},\ \ n\geqslant 1.\] These last equalities show that \(\mathrm{P}(k+1)\) is also true, which completes the proof. 
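As an independent check of (4.3)-(4.4) (a sketch added for illustration, not part of the original), one can take the Laguerre family of Subcase \(\mathbf{A_{2}}\), for which \(\theta_{n}=1\), \(\delta_{n}=\delta_{0}=-1\) and \(\omega=0\). The proposition then predicts \(\beta_{n}^{[k]}=2n+\alpha+k+1\) and \(\gamma_{n}^{[k]}=n(n+\alpha+k)\), i.e. \(P_{n}^{[k]}(\cdot;\alpha)=P_{n}(\cdot;\alpha+k)\), which the following sympy snippet verifies by direct differentiation:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')

def monic_laguerre(n, a):
    """Monic Laguerre polynomials: beta_m = 2m + a + 1, gamma_m = m(m + a)."""
    p_prev, p = sp.Integer(1), x - (a + 1)
    for m in range(1, n):
        p_prev, p = p, sp.expand((x - (2*m + a + 1))*p - m*(m + a)*p_prev)
    return p_prev if n == 0 else p

# Normalized k-th derivatives (4.1b) versus the recurrence data predicted by
# Proposition 4.2: they coincide with the Laguerre family of parameter alpha + k.
for k in (1, 2, 3):
    for n in range(4):
        derived = sp.diff(monic_laguerre(n + k, alpha), x, k) / sp.rf(n + 1, k)
        assert sp.simplify(sp.expand(derived) - monic_laguerre(n, alpha + k)) == 0
print("Laguerre: normalized k-th derivatives match alpha -> alpha + k")
```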
**Remark.** Due to (4.3) and (4.4), from (4.6), it is seen that the two coefficients \(\alpha_{n}^{k,1}\) and \(\alpha_{n}^{k,0}\) involving in the structure relation (4.5a)-(4.5b) can be rewritten as \[\alpha_{n}^{k,1}=-(n+1)\left(\delta_{n+k-1}+\omega\right)\,;\,\alpha_{n}^{k,0} =\frac{(n+1)(n+2)}{n+k+1}\gamma_{n+k+1}\left(1-\theta_{n+k}\right),\ n \geqslant 0. \tag{4.17}\] For \(k=1\), this reduces to (2.3) providing the coefficients of (1.15a)-(1.15b). **Application.** When \(\omega=0\), the identities (4.5a)-(4.5b) consist of the structure relation characterising the higher order derivatives sequence of the classical orthogonal polynomials. In this case, a direct application of the preceding proposition enables us to express the recurrence coefficients of the sequence \(\{P_{n}^{[k]}\}_{n\geqslant 0}\) in terms of the recurrence coefficients for each of the four classical families. To this purpose, if we denote by \(\{\hat{H}_{n}\}_{n\geqslant 0}\), \(\{\hat{L}_{n}(.;\alpha)\}_{n\geqslant 0}\), \(\{\hat{B}_{n}(.;\alpha)\}_{n\geqslant 0}\) and \(\{\hat{J}_{n}(.;\alpha,\beta)\}_{n\geqslant 0}\), respectively, the (monic) Hermite, Laguerre, Bessel and Jacobi polynomials, and by \(\hat{H}_{n}^{[k]}(x)\), \(\hat{L}_{n}^{[k]}(x;\alpha)\), \(\hat{B}_{n}^{[k]}(x;\alpha)\) and \(\hat{J}_{n}^{[k]}(x;\alpha,\beta)\) their corresponding sequences of derivatives of order \(k\), then application of Formulas (4.3) and (4.4) successively give: **Case A.** For \(\theta_{n}=1\), \(n\geqslant 1\), and \(\delta_{n}=\delta_{0}\), \(n\geqslant 1\), we have \[\hat{\beta}_{n}^{[k]} =\hat{\beta}_{n+k}+k\delta_{0},\ \ n\geqslant 0, \tag{4.18}\] \[\hat{\gamma}_{n}^{[k]} =\frac{n}{n+k}\hat{\gamma}_{n+k},\ \ n\geqslant 1. \tag{4.19}\] _Hermite case_ : When \(\delta_{0}=0\), \(\hat{\gamma}_{1}=\frac{1}{2}\), this yields \[\hat{\beta}_{n}^{[k]}=0,n\geqslant 0,\ \ \text{and}\ \ \hat{\gamma}_{n+1}^{[k]}=\frac{1}{2}(n+1),n\geqslant 0.\] _Laguerre case_ : When \(\delta_{0}=-1\), \(\hat{\gamma}_{1}=\hat{\beta}_{0}=\alpha+1\), we get \[\hat{\beta}_{n}^{[k]}=2n+\alpha+k+1,n\geqslant 0,\ \ \text{and}\ \ \hat{\gamma}_{n+1}^{[k]}=(n+1)(n+\alpha+k+1),n\geqslant 0.\] **Case B.** For \(\theta_{n}=\frac{n+\theta+1}{n+\theta}\), \(n\geqslant 1,\ \text{and}\ \delta_{n}=\frac{\delta_{0}(\theta+3)(\theta+1)}{\big{(}2n+\theta+3\big{)} \big{(}2n+\theta+1\big{)}}\), \(n\geqslant 0\), we deduce that \[\hat{\beta}_{n}^{[k]} =\hat{\beta}_{n+k}+\frac{k\delta_{0}(\theta+3)(\theta+1)}{\big{(} 2(n+k)+\theta+1\big{)}\big{(}2(n+k)+\theta-1\big{)}},\ \ n\geqslant 0, \tag{4.20}\] \[\hat{\gamma}_{n}^{[k]} =\frac{n(n+\theta+2k-1)}{(n+k)(n+\theta+k-1)}\hat{\gamma}_{n+k}, \ \ n\geqslant 1. 
\tag{4.21}\] _Bessel case_ : When \(\theta=2\alpha-1\), \(\delta_{0}=-1/(\alpha+1)\alpha\), we obtain \[\hat{\beta}_{n}^{[k]} =\frac{1-(\alpha+k)}{(n+\alpha+k)(n+\alpha+k-1)},\ n\geqslant 0,\] \[\hat{\gamma}_{n+1}^{[k]} =-\frac{(n+1)(n+2(\alpha+k)-1)}{(2n+2(\alpha+k)+1)(n+\alpha+k)^{2}(2n+2(\alpha+k)-1)},\ n\geqslant 0.\] _Jacobi case_ : When \(\theta=\alpha+\beta+1\), \(\delta_{0}=2(\alpha-\beta)/(\alpha+\beta+4)(\alpha+\beta+2)\), we get \[\hat{\beta}_{n}^{[k]} =\frac{(\alpha-\beta)(\alpha+\beta+2k)}{(2n+\alpha+\beta+2k+2)(2n+\alpha+\beta+2k)},\ n\geqslant 0,\] \[\hat{\gamma}_{n+1}^{[k]} =4\frac{(n+1)(n+\alpha+\beta+2k+1)(n+\alpha+k+1)(n+\beta+k+1)}{(2n+\alpha+\beta+2k+3)(2n+\alpha+\beta+2k+2)^{2}(2n+\alpha+\beta+2k+1)},\ n\geqslant 0.\] We thus rediscover the well-known relations \(\hat{H}_{n}^{[k]}(x)=\hat{H}_{n}(x)\), \(\hat{L}_{n}^{[k]}(x;\alpha)=\hat{L}_{n}(x;\alpha+k)\), \(\hat{B}_{n}^{[k]}(x;\alpha)=\hat{B}_{n}(x;\alpha+k)\) and \(\hat{J}_{n}^{[k]}(x;\alpha,\beta)=\hat{J}_{n}(x;\alpha+k,\beta+k)\). The results presented above are of course known; they clearly show that the sequences of higher-order derivatives of the classical orthogonal polynomials belong to the same class as the original families, provided that the parameters \(\alpha\) and \(\beta\) take values in the range of regularity. **Conclusion.** We studied the \(D_{\omega}\)-classical orthogonal polynomials using a new method in this domain. The results obtained in Section 3 are expected and are consistent with those found in [12], where four different families are pointed out together with some of their special cases. The recurrence coefficients of the resulting orthogonal polynomials are explicitly determined. Proposition 1.4 established a new characterization of these polynomials via a structure relation, and Proposition 4.2 provided relations connecting the recurrence coefficients of each sequence of polynomials with those of its higher-order derivatives. For \(\omega=0\), the classical orthogonal polynomials are rediscovered.
2304.07123
Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which helps get rid of the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
Jiahua Dong, Guohua Cheng, Yue Zhang, Chengtao Peng, Yu Song, Ruofeng Tong, Lanfen Lin, Yen-Wei Chen
2023-04-14T13:39:39Z
http://arxiv.org/abs/2304.07123v1
# Tailored Multi-Organ Segmentation with Model Adaptation and Ensemble ###### Abstract Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which helps get rid of the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy. keywords: model adaptation, model ensemble, unsupervised learning, multi-organ segmentation + Footnote †: journal: Computer Vision and Image Processing ## 1 Introduction Multi-organ segmentation delineates important organs (e.g., the liver, spleen, and kidney) from abdominal medical images, which is essential for various clinical applications including computer-aided surgery, computer-aided diagnosis, radiation therapy, etc [1]. Recently, deep learning-based methods, such as U-Net [2] and nnU-Net [3], have shown promising results in multi-organ segmentation, which rely heavily on large-scale annotated data with strong supervision. However, constrained by labor costs and required expertise, the limited availability of multi-organ annotations presents a challenge in acquiring sufficient data for supervised learning [4]. This limitation impedes the wider adoption of deep learning-based multi-organ segmentation in clinical applications [5]. With the development of the open source community, many researchers released codes and well-trained medical image segmentation models to the public, but usually without providing training/testing datasets due to privacy concerns [6; 7; 8]. We have observed that most of such released models are trained for single organ segmentation and retain specific knowledge distilled from their training data. Inspired by this, in this paper, we aim to explore obtaining a multi-organ segmentation model on the target dataset from a union of public single-organ segmentation models (termed as **Multi-Model Adaptation** problem), which has the potential to get rid of the dependence on annotated data for multi-organ segmentation. Fig. 1 illustrates a conceptual diagram of integrating three models, trained respectively for segmenting the liver, spleen, and kidney, into a single multi-organ segmentation model. To solve the Multi-Model Adaptation problem, there are two main challenges that need to be considered. 
As public single-organ segmentation models are trained using different datasets across various institutions, directly applying such models to out-of-distribution target data in unseen domains usually leads to poor generalization [9; 10; 11]. Thus, the first challenge is _how to adapt source segmentation models to the target domain without access to the source data_, which is also known as model adaptation. Once we have obtained adapted single-organ models that generalize well on the target domain, the second challenge is _how to aggregate multiple single-organ segmentation models into a desired multi-organ segmentation model_. To tackle the aforementioned issues, in this paper, we propose a dual-stage method for Multi-Model Adaptation, which comprises a Model Adaptation stage and a Model Ensemble stage. The **Model Adaptation stage** aims to enhance the generalization of each off-the-shelf segmentation model on the target domain, without access to either source data or target labels. To achieve this, we introduce a Label Refinement Module (LRM) to produce reliable pseudo-labels for the target data and a Feature Generalization Module (FGM) to ensure the learned feature space of the target domain is robust and compact. Figure 1: An example of Multi-Model Adaptation, which integrates three single-organ segmentation models into a target multi-organ segmentation model. The **Model Ensemble stage** aims to distill and integrate knowledge from multiple adapted single-organ segmentation models. To that end, we propose a novel certainty-aware ensemble function, which can dynamically compose the teacher model from adapted single-organ models through a teacher selection map. Then, the information is distilled and transferred from the teacher model to the target multi-organ segmentation model. To verify the performance of our framework, we conduct extensive experiments on four abdominal datasets. Experimental results demonstrate that our approach can effectively leverage public single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy. In summary, our contributions are threefold: * We investigate an unexplored problem: how to obtain a multi-organ segmentation network by combining off-the-shelf single-organ segmentation models. To address this challenge, we propose a baseline approach that can serve as a starting point for further research in this field. * We propose a novel model adaptation strategy for adapting off-the-shelf models to target domains, without requiring access to source data or altering the structures of source models. Our adaptation strategy mainly consists of a Label Refinement Module and a Feature Generalization Module, which enhances the model generalization from the perspective of pseudo-labeling and feature space optimization. * We propose a novel model ensemble method for aggregating multiple single-organ segmentation models into a tailored multi-organ segmentation model. For this goal, we devise a certainty-aware ensemble function to dynamically compose the teacher model from multiple single-organ segmentation models, and then transfer knowledge from the teacher model to the target multi-organ segmentation model. ## 2 Related Work In this section, we briefly review the most related works in the literature: **(1)** medical image segmentation with model adaptation (Sec. 2.1) and **(2)** distillation and ensemble (Sec. 2.2).
### Medical image segmentation with model adaptation In recent years, several studies proposed the concept of source-free domain adaptation (SFDA) [12; 13], which adapts models without access to source data. These methods have been widely used in medical scenarios due to the constraints of data availability. For example, Bateson et al. [14] proposed AdaMI, which extracted prior knowledge by adding auxiliary branches in the source models to refine the task. Stan et al. [15] encoded the source samples into a prototypical distribution, and aligned the distribution to the target domain. However, these SFDA methods need to alter the structure of the source model with auxiliary branches or additional training tasks, which could not adapt off-the-shelf models that are already well-trained on the source domains. To address this limitation, some researchers focused on improving segmentation performance only using an off-the-shelf segmentation model, which is known as model adaptation. Existing model adaptation methods reduce domain shifts through either pseudo-labeling or distribution alignment. In terms of pseudo-labeling methods, Cheng et al. [16] proposed the denoised pseudo-labeling (DPL), which filtered out noisy pseudo labels and re-trained the model based on the preserved pseudo labels. In terms of distribution alignment methods, Liu et al. [17] proposed an adaptive batch-wise normalization statistics adaptation framework to adapt off-the-shelf segmentation models. Bruggemann et al. [18] leverages unlabeled pairs of adverse- and normal- condition images to learn condition-invariant features via contrastive learning. In this paper, we argue that pseudo-labeling and distribution alignment methods are valuable complements of each other. Our experiments have shown that distribution alignment is effective at adapting the feature distribution, while pseudo-label refinement is better at refining the decision boundary. Therefore, we propose to integrate both pseudo-labeling and distribution alignment to improve model adaptation. ### Distillation and ensemble Distillation and ensemble is extended from knowledge distillation, which transfers knowledge from teachers to a compact student model. Hinton et al.[19] were the first to investigate transferring knowledge from a teacher model ensemble to the student. They simply combined soft predictions of teacher models using the averaging operation and adopted KL-divergence loss for knowledge transfer. Malinin et al.[20] argued that using the averaging operation for combination harmed the diversity of the models in an ensemble. To address this issue, they used a prior network to estimate output uncertainties of different teacher models, and fused predictions with the guidance of uncertainty. However, the above approaches still depend on labeled datasets, which cannot meet many real-world scenarios where manual labels are unavailable. To solve this problem, Chao et al.[21] introduced an unsupervised domain adaptation-based ensemble-distillation framework, which was robust to the inconsistency in the scale of the output certainty values and the performance variations among the members in an ensemble. However, this method needed to use adapted models instead of directly using off-the-shelf models, and still had the strict requirement for the label consistency, which is not available for real scenarios. ## 3 Method In this section, we introduce the proposed Multi-Model Adaptation method from three aspects: First, we give an overview of the framework in Section. 
3.1, including the problem definition and overall pipeline; Then, we delve into the details of the model adaptation strategy in Section. 3.2; Finally, we provide details of the model ensemble strategy in Section. 3.2. ### Overview #### 3.1.1 Problem Definition Assuming there exist a pool of off-the-shelf single-organ segmentation models, denoted as \(\{T_{1},T_{2},...,T_{n}\}\). These models were trained for distinct single-organ segmentation tasks, denoted as \(\{\mathcal{K}_{1},\mathcal{K}_{2},...,\mathcal{K}_{n}\}\), and were optimized using different datasets with varying distributions. Let \(\mathcal{K}=\bigcup_{i}K_{i}\) denote the union of segmentation tasks covered by all models. Our goal is to combine information from all the off-the-shelf models to build a target multi-organ segmentation model for tasks \(\mathcal{K}\) that can perform well on an unseen target dataset with a new distribution. Figure 2: A schematic view of our Multi-Model Adaptation framework, which mainly experiences the Model Adaptation and Model Ensemble stage. #### 3.1.2 Overall Pipeline The overview of our framework is illustrated in Fig. 2. To clearly introduce our method, we consider the simplest scenario: obtaining a dual-organ segmentation model (also referred to as the student net) from two single-organ segmentation models (also referred to as teacher nets). However, our method is not limited to just two teacher models; it can be extended to aggregate any number of single-organ segmentation models. As shown in Fig. 2, the overall training process experiences two stages: the Model Adaptation stage and the Model Ensemble stage. In the Model Adaptation stage, we adapt teacher nets to fit the target data distribution. These adapted models, which generalize well on the target dataset, are called adapted teacher nets. In the Model Ensemble stage, we obtain a tailored student net from the union of adapted teacher nets that can complete two segmentation tasks (the union of tasks of all teacher nets). Below, we delve into the specifics of each stage. ### Stage-I: Model Adaptation The Model Adaptation stage adapts each teacher net to make it fit the target data distribution. Fig. 3 (a) illustrates the feature distribution and decision boundary on the target domain generated by the teacher net before adaptation. Due to domain shifts between the source dataset and target dataset, the dense classification fails in many pixel locations. Our ultimate objective is to optimize the feature distribution and obtain an accurate decision boundary (see Fig. 3 (d)), such that the teacher net is adapted well to the target domain. To achieve this, we introduce the label refinement module (LRM) and feature generalization module (FGM). Specifically, LRM produces reliable pseudo-labels for the target dataset, which avoids having decision boundaries in high-density areas (see Fig. 3(b)). Meanwhile, FGM enforces robust and compact feature distribution by leveraging a consistency loss and information maximization loss (see Fig. 3(c)). By combining both two modules, we finally obtain a well-adapted teacher net (see Fig. 3 (d)). Below, we elaborate on the details of LRM and FGM. #### 3.2.1 Label Refinement Module Common pseudo-labeling methods directly used network predictions as pseudo-labels, which inevitably generate error-prone labels for the target domain. As training progresses, these errors accumulate and could significantly degrade model performance. 
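As a point of reference (an illustrative sketch added here, not the authors' released code), this naive scheme amounts to keeping the per-pixel argmax of the frozen source model and re-training on it:

```python
import torch
import torch.nn.functional as F

def naive_pseudo_labels(logits):
    """Naive self-training targets: the per-pixel argmax of the softmax output
    of a frozen source model, together with its confidence."""
    probs = logits.softmax(dim=1)              # (B, C, H, W)
    confidence, labels = probs.max(dim=1)      # both (B, H, W)
    return labels, confidence

# Toy usage: re-train the model on target images with these (possibly noisy) labels.
logits = torch.randn(2, 3, 64, 64)             # pretend target-domain predictions
labels, confidence = naive_pseudo_labels(logits)
self_training_loss = F.cross_entropy(logits, labels)
```

The LRM described next replaces exactly this step for pixels whose prediction entropy is high.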
Considering this, the LRM is proposed to preserve confident pseudo-labels and refine low-confidence ones based on class prototypes. Generally, a class prototype is the most representative feature vector that characterizes its category. For each class \(c\) in the target domain, its class prototype \(proto_{c}\) is calculated by combining the feature vector of each pixel: \[proto_{c}=\frac{\sum_{x\in X}\delta_{c}(x)f(x)}{\sum_{x\in X}\delta_{c}(x)} \tag{1}\] where \(x\) is a pixel location, \(\delta_{c}(x)\) is the probability score for class \(c\) (yielded by the Softmax layer of the teacher net), and \(f(x)\) is the decision feature of \(x\) yielded by the second-to-last convolutional layer. Figure 3: The schematic of different stages of model adaptation. Panel (a) shows the feature space and decision boundary of the target data before model adaptation. Panels (b) and (c) show the refined decision boundary and feature distribution through LRM and FGM, respectively. By incorporating LRM and FGM, we can obtain the ideal feature space and boundary in panel (d). Given that a reliable pseudo-label should be similar to its class prototype [22, 23], we measure the similarity between the feature of each pixel and the prototype using cosine similarity: \[Sim_{c}(x)=\frac{f(x)\cdot proto_{c}}{\left\|f(x)\right\|_{2}\left\|proto_{c}\right\|_{2}} \tag{2}\] where \(Sim_{c}(x)\) is the similarity between pixel \(x\) and the prototype of class \(c\), and the operator \(\|*\|_{2}\) denotes the L2 norm. So far, we can refine pseudo-labels according to the following equations: \[\hat{y}(x)=\begin{cases}\arg\max_{c}Sim_{c}(x),&\lambda>\lambda_{th}\\ y^{\prime}(x),&\lambda\leq\lambda_{th}\end{cases} \tag{3}\] where \(\lambda\) is the entropy measuring the confidence of the network output, \(\lambda_{th}\) is the entropy threshold which is set according to different segmentation tasks (_e.g._, 0.1 for liver, 0.4 for spleen, and 0.2 for kidney segmentation considering different organ characteristics), and \(y^{\prime}(x)\) is the prediction of the source model. For such confidently classified pixels (with \(\lambda\leq\lambda_{th}\)), we preserve the original network prediction. Meanwhile, for the remaining low-confidence pixels (with \(\lambda>\lambda_{th}\)), we update their pseudo-labels to the class that is most similar to them using Eq. 3. Then, the refined pseudo-labels are used to fine-tune the teacher net using a cross-entropy loss: \[L_{lrm}(x)=L_{ce}(\hat{y}(x),\delta(x)) \tag{4}\] where \(L_{ce}\) denotes the cross-entropy loss; \(\delta(x)\) is the Softmax output; and \(\hat{y}(x)\) is the refined pseudo-label. Intuitively, the LRM modifies the original decision boundaries (Fig. 3 (a)) by updating the classification of each pixel. This avoids placing the decision boundary in high-density regions on the target domain (Fig. 3 (b)) and reduces the ambiguity of predictions. #### 3.2.2 Feature Generalization Module The FGM further improves the model adaptation from the perspective of feature space optimization. It encourages a more compact feature distribution and reduces the class-conditional distribution overlap (see Fig. 3(c)). We achieve this by introducing two loss functions, including a consistency loss and an information maximization loss. Specifically, the consistency loss aims to enforce the invariance of the model's predictions over small perturbations introduced to the network.
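Before describing how these perturbations are realized, the LRM refinement of Eqs. (1)-(3) can be condensed into the following PyTorch-style sketch (tensor shapes and the per-pixel entropy gate are assumptions made for illustration; this is not the released implementation):

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(probs, feats, entropy_threshold):
    """Label Refinement Module (Eqs. 1-3), sketched for a single image.

    probs: (C, H, W) softmax scores of the teacher net
    feats: (D, H, W) features from its second-to-last convolutional layer
    """
    C, H, W = probs.shape
    p = probs.reshape(C, -1)                          # (C, N) with N = H*W
    f = feats.reshape(feats.shape[0], -1)             # (D, N)

    # Eq. (1): probability-weighted class prototypes.
    protos = (f @ p.t()) / p.sum(dim=1).clamp_min(1e-8)           # (D, C)

    # Eq. (2): cosine similarity between pixel features and prototypes.
    sim = F.normalize(f, dim=0).t() @ F.normalize(protos, dim=0)  # (N, C)

    # Eq. (3): keep confident predictions, re-assign uncertain pixels to the
    # class whose prototype is most similar; entropy acts as the gate lambda.
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=0)           # (N,)
    source_label = p.argmax(dim=0)
    prototype_label = sim.argmax(dim=1)
    refined = torch.where(entropy > entropy_threshold, prototype_label, source_label)
    return refined.reshape(H, W)
```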
To apply perturbations to the hidden representation, we introduce multiple auxiliary decoders that use dropout, while the original main decoder does not use dropout [24]. The consistency constraint is then imposed between predictions of the main decoder and auxiliary decoders: \[\mathcal{L}_{con}(x)=\frac{1}{K}\sum_{k=1}^{K}\left(\hat{\delta}^{k}(x)-\delta (x)\right)^{2} \tag{5}\] where \(K\) is the number of auxiliary decoders that is set to 4, \(\delta(x)\) is the output of the main decoder, and \(\hat{\delta}^{k}(x)\) is the output of \(k^{th}\) auxiliary decoder. The information maximization (IM) loss is designed to reduce uncertainty while ensuring the diversity of categories [13], which consists of an entropy loss item \(L_{ent}\)[25] and a diversity loss item \(L_{div}\)[26]: \[\mathcal{L}_{im}(x)=\mathcal{L}_{ent}(x)+\beta\mathcal{L}_{div}(x) \tag{6}\] where the trade-off coefficient \(\beta\) is impirically set to 0.1. Finally, the overall objective of the Model Adaptation stage is formulated as: \[L_{ma}(x)=L_{lrm}(x)+\lambda_{con}L_{con}(x)+\lambda_{im}L_{im}(x) \tag{7}\] where \(\lambda_{con}\) and \(\lambda_{im}\) are trade-off coefficients, which are set to 0.1 and 1 respectively. ### Stage-II: Model Ensemble The Model Ensemble stage aims to obtain a tailored student net for multi-organ segmentation from the union of adapted teacher nets, whose architecture is shown in Fig. 2. In this stage, constructing an ensemble function that indicates the aggregation criteria is the key issue, which can greatly affect the effectiveness of the model ensemble. Then, we can aggregate multiple teacher models into a student model based on the ensemble function from two aspects: label aggregation and feature aggregation. #### 3.3.1 Ensemble Function To combine effective information from adapted teacher nets, we propose the certainty-aware ensemble function, which dynamically selects the best teacher model for each pixel position by computing a certainty-based teacher selection map. _Definition of Model Certainty:_ Generally, entropy can be used to quantify the certainty of teacher models, in which lower entropy values indicate higher certainty. To ensure comparable ranges of entropy values across different teacher models, we regularize the entropy values using mean normalization, which is formulated as: \[\begin{array}{l}H(x;t_{i})=-\sum_{c}p_{c}(x;t_{i})logp_{c}(x;t_{i})\\ H_{norm}(x;t_{i})=\frac{H(x;t_{i})-\mu_{y(x;t_{i}),t_{i}}}{\sigma_{y(x;t_{i}),t _{i}}}\end{array} \tag{8}\] where \(t_{i}\) is the \(i^{th}\) teacher model, \(c\) is the class, \(p_{c}(x;t_{i})\) is the probability of pixel \(x\) belonging to class \(c\) yielded by the \(i^{th}\) teacher model, \(y(x;t_{i})\) is the prediction result of \(i^{th}\) teacher model for pixel \(x\), \(\mu\) and \(\sigma\) is the mean and standard deviation. The normalized entropy value \(H_{norm}\) is regarded as the certainty of a model. _Certainty-Based Teacher Selection Map:_ Considering each teacher model for single-organ segmentation cannot provide decision-making information for other segmentation tasks that have not been learned, we exclude this part of the teacher net before selecting the best teacher net. 
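A possible implementation of the class-conditional normalization in Eq. (8) is sketched below (tensor shapes and numerical safeguards are assumptions; lower scores indicate a more certain teacher):

```python
import torch

def normalized_entropy(probs):
    """Per-pixel (un)certainty of one teacher net, normalized as in Eq. (8).

    probs: (C, H, W) softmax output of a single teacher.
    The raw entropy is standardized with the mean and standard deviation taken
    over all pixels that this teacher assigns to the same class, so that the
    scores of different teachers live on a comparable scale.
    """
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=0)   # (H, W)
    prediction = probs.argmax(dim=0)                              # (H, W)
    h_norm = torch.empty_like(entropy)
    for c in prediction.unique():
        mask = prediction == c
        mu = entropy[mask].mean()
        sigma = entropy[mask].std(unbiased=False).clamp_min(1e-8)
        h_norm[mask] = (entropy[mask] - mu) / sigma
    return h_norm  # lower value = more certain teacher at that pixel
```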
For each pixel \(x\), we prepare the pre-selected teacher models and define the set as \(A(x)\), which is formulated as: \[A(x)=\begin{cases}(i)A_{o},&\text{if }\forall t_{i},y(x;t_{i})=0\\ (ii)A_{o}\backslash\bigcup A_{c^{\prime}},c^{\prime}\in D_{c},&\text{ otherwise}\end{cases} \tag{9}\] where \(A_{c}\) is the set of teacher nets that have no information about class \(c\), \(A_{o}\) is the set of all teacher nets and \(D_{c}=\{c|\forall t_{i}\in A_{c},y(x;t_{i})=0\}\) is the set of categories that can not be identified as \(c\) (all teacher models \(t_{i}\) in \(A_{c}\) predict pixel \(x\) as background). In Eq. 9, (i) is the condition when pixel \(x\) is unlabeled by all teacher models and (ii) is the condition when there exist conflicts and we need to discard the teacher nets that lack information about the category of pixel \(x\). Then, the teacher model with the highest certainty among \(A(x)\) is chosen to generate the teacher selection map: \[t_{se}(x)=\operatorname*{arg\,min}_{t_{i}\in A(x)}H_{norm}(x;t_{i}) \tag{10}\] where \(t_{se}\) denotes the selected teacher model for a pixel location \(x\). For an image sample, different pixel locations correspond to different selected teachers. #### 3.3.2 Label Aggregation To aggregate knowledge from multiple adapted teacher nets, we first aggregate single-organ predictions into multi-organ pseudo-labels based on the teacher selection map for self-training. We denote \(y_{aggr}\) as the aggregated multi-organ pseudo-label and express it as follows: \[y_{aggr}(x)=y(x;t_{se}(x)) \tag{11}\] Then, the aggregated pseudo-labels are used to train the student net using a cross-entropy loss: \[L_{la}=L_{ce}(y_{aggr}(x),p_{s}(x)) \tag{12}\] where \(p_{s}(x)\) is the output of student model at the pixel \(x\). #### 3.3.3 Feature Aggregation To help the student net learn the advantages of each network, we also aggregate the knowledge from multiple adapted teachers under the guidance of the teacher selection map. To achieve this, we first project the teacher net and student net into the same feature space, which is implemented by a learnable convolution layer with \(3\times 3\) kernel. The projection can be formulated as: \[F^{\prime}_{s}=Conv_{3\times 3}(F_{s}),\ F^{\prime}_{t_{i}}=Conv_{3\times 3}(F_{t_{i}}) \tag{13}\] where \(F_{s}\) (\(F_{t_{i}}\)) represent the original features of student model \(s\) (teacher model \(t_{i}\)) and \(F^{\prime}_{s}\) (\(F^{\prime}_{t_{i}}\)) represent the projected features of student model \(s\) (teacher model \(t_{i}\)). After that, we distill the knowledge from the selected teacher net to the student net. Specifically, we minimize the distance between the projected feature of the student net and the selected teacher net (based on the teacher selection map, Eq. 10) via the L2 loss. And the feature distillation loss \(L_{fa}\) of learned teacher model \(i\) is formulated as: \[\begin{split}\mathcal{L}^{(i)}_{fa}(x)=M_{i}(x)*\left\|Upsample[ F^{\prime}_{s}](x)-Upsample[F^{\prime}_{t_{i}}](x)\right\|^{2}_{2}\\ M_{i}(x)=\begin{cases}1,&\text{ if }t_{se}(x)=t_{i}\\ 0,&\text{ otherwise}\end{cases}\end{split} \tag{14}\] where \(Upsample[F^{\prime}_{s}(x)]\) (\(Upsample[F^{\prime}_{t_{i}}(x)]\)) means up-sampling the projected features of student model \(s\) (teacher model \(t_{i}\)) to the size of teacher selection mask. In our experiments, all the teacher nets and the student net follow the architecture of DeepLabv3+ [27]. 
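Putting Eqs. (9)-(11) together, a simplified sketch of the certainty-aware aggregation is given below for the common case where every teacher is a binary organ-versus-background segmenter. It reuses the `normalized_entropy` helper from the sketch above, and the reduction of the set construction in Eq. (9) to "prefer teachers that detect an organ at the pixel" is our simplifying assumption:

```python
import torch

def aggregate_teachers(probs_list, organ_ids):
    """Certainty-aware aggregation of binary single-organ teachers (Eqs. 9-11).

    probs_list: list of (2, H, W) softmax maps, channel 1 = the teacher's organ.
    organ_ids:  label id of each teacher's organ in the multi-organ label space.
    Returns the aggregated pseudo-label map y_aggr (H, W) and the per-pixel index
    of the selected teacher, from which the masks M_i of Eq. (14) follow.
    """
    preds = torch.stack([p.argmax(dim=0) for p in probs_list])          # (T, H, W)
    scores = torch.stack([normalized_entropy(p) for p in probs_list])   # (T, H, W)

    # Eq. (9), simplified: if at least one teacher detects its organ at a pixel,
    # teachers that only see background there are excluded from the competition.
    any_foreground = preds.bool().any(dim=0)                   # (H, W)
    eligible = preds.bool() | ~any_foreground                  # (T, H, W)
    scores = scores.masked_fill(~eligible, float("inf"))

    # Eq. (10): pick the most certain eligible teacher at every pixel.
    selected = scores.argmin(dim=0)                            # (H, W)

    # Eq. (11): the selected teacher's prediction, mapped to multi-organ ids.
    organ_ids = torch.as_tensor(organ_ids)
    chosen_pred = preds.gather(0, selected[None]).squeeze(0)   # 0 or 1
    y_aggr = organ_ids[selected] * chosen_pred
    return y_aggr, selected
```

The returned `selected` map plays the role of the teacher selection map \(t_{se}\) and directly yields the masks \(M_{i}\) used in Eq. (14).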
And we conduct feature aggregation at the fourth encoding block (low-level feature) and the last convolutional layer (high-level feature) of DeepLabv3+ [27]. Finally, the overall loss function for the Model Ensemble stage is defined as follows: \[L_{me}(x)=L_{la}(x)+\lambda_{fa}\sum_{i=1}^{n}L^{(i)}_{fa}(x) \tag{15}\] where \(n\) is the number of teacher models (\(n=2\) for the simplest scenario), \(\lambda_{fa}\) is the trade-off coefficient that is set to 0.001. ## 4 Experiments and Results ### Experimental Setup #### 4.1.1 Materials In this study, to validate the effectiveness of our method, we conducted extensive experiments on four abdominal segmentation datasets, including the LiTS dataset, RS dataset, BTCV dataset, and SRRSHA dataset. The **LiTS dataset**[28] is a public dataset consisting of 131 CT scans that have been annotated to identify both liver and liver tumors. To create a dataset that exclusively focuses on the liver, we merged the liver and tumor labels by adjusting the label values of the tumor areas to match those of the liver areas. The **RS dataset**[29] is an in-house spleen segmentation dataset collected by Ritsumeikan University, which includes 51 CT scans with spleen delineations. The **BTCV dataset**[30] is a public dataset including 30 CT scans with 13 organs annotated. In our experiments, we only use the liver and spleen labels. The **SRRSHA dataset** is an in-house dataset collected by Sir Run Run Shaw Hospital, which includes 277 MRI scans with 4 abdomen organs annotated. In this study, only liver and spleen labels are used. Table 1 shows the details of the four datasets. LiTS and RS datasets are reserved for source datasets. We randomly split each of them into training/validation/testing sets with a fixed ratio of 60%:10%:30% and use them to train the single-organ segmentation teacher nets. Meanwhile, BTCV and SRRSHA datasets are reserved for target datasets. We randomly split each of them into training/testing sets with a fixed ratio of 70%:30% and use them to test the student net. (Note that there is no validation set for target datasets since our task is unsupervised model adaptation.) For preprocessing, all CT slices are truncated into the range of [-250, 250] Hu to eliminate irrelevant tissues; Meanwhile, for MR images, we clipped the 35% highest intensity values to eliminate irrelevant details. #### 4.1.2 Competing Methods Since we are the first to solve the Multi-Model Adaptation problem, there exists no similar work that learns a multi-organ segmentation model from the union of off-the-shelf single-organ segmentation models. Therefore, we compare our method with relevant domain adaptation methods: **(1) AdvEnt**[31], a popular unsupervised domain adaptation benchmark approach that encourages entropy consistency between the source domain and the target domain; **(2) AdaMI**[14], a source-free domain adaptation approach that generates class-ratio prior via an auxiliary network in the source domain and refines the segmentation mask guided by the class-ratio prior in the target domain; **(3) DPL**[16], a model adaptation approach that generates more discriminative and less noisy supervision by pixel-level denoising with uncertainty estimation and class-level denoising with prototype estimation; **(4) MDAN**[32], a multi-source domain adaptation approach that distinguishes the pair of the source domain and target domain by adding k domain classifiers. 
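For completeness, the intensity preprocessing described in Sec. 4.1.1 can be written as the following sketch (the exact percentile convention for the MR clipping is our assumption):

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
    """Truncate CT intensities to the [-250, 250] HU window used in the paper."""
    return np.clip(volume_hu, -250.0, 250.0)

def preprocess_mr(volume: np.ndarray, top_fraction: float = 0.35) -> np.ndarray:
    """Clip the brightest `top_fraction` of MR intensities, assumed to mean
    clipping everything above the (1 - top_fraction) quantile."""
    upper = np.quantile(volume, 1.0 - top_fraction)
    return np.clip(volume, volume.min(), upper)
```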
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Dataset** & **Modality** & **Scans** & **Organ** & **Axis** & **Size** & **Spacing** \\ \hline \multirow{3}{*}{LiTS} & \multirow{3}{*}{CT} & \multirow{3}{*}{131} & \multirow{3}{*}{Liver} & X & 512 & 0.63–1.00 \\ & & & & Y & 512 & 0.63–1.00 \\ & & & & Z & 75–987 & 0.70–5.00 \\ \hline \multirow{3}{*}{RS} & \multirow{3}{*}{CT} & \multirow{3}{*}{51} & \multirow{3}{*}{Spleen} & X & 512 & 0.63–1.00 \\ & & & & Y & 512 & 0.63–1.00 \\ & & & & Z & 75–987 & 0.70–5.00 \\ \hline \multirow{3}{*}{BTCV} & \multirow{3}{*}{CT} & \multirow{3}{*}{30} & Liver, spleen and & X & 512 & 0.63–1.00 \\ & & & other 11 organs & Y & 512 & 0.63–1.00 \\ & & & & Z & 75–987 & 0.70–5.00 \\ \hline \multirow{3}{*}{SRRSHA} & \multirow{3}{*}{MRI} & \multirow{3}{*}{277} & Liver, spleen and & X & 512 & 0.63–1.00 \\ & & & other 2 organs & Y & 512 & 0.63–1.00 \\ \cline{1-1} & & & & Z & 75–987 & 0.70–5.00 \\ \hline \hline \end{tabular} \end{table} Table 1: A summary description of the used datasets. Among these competing methods, only MDAN can obtain a multi-organ segmentation model from single-organ segmentation models, while the other methods merely obtain adapted single-organ segmentation models. It should be noted, however, that MDAN needs access to source domain data. Besides, AdvEnt, AdaMI, and MDAN are strong competing methods since they utilize more source domain information by either accessing source domain data or altering the structures of source models. However, our method is completely based on off-the-shelf model adaptation, which neither uses source domain data nor alters source model structures. #### 4.1.3 Evaluation Metrics To quantitatively evaluate the methods, we adopt the Dice similarity coefficient (DSC) and average surface distance (ASD) as the metrics. Specifically, DSC is for pixel-wise accuracy measurement, and ASD is for boundary segmentation evaluation. A higher DSC and a lower ASD indicate better performance. #### 4.1.4 Implementation Details In this study, all methods are implemented using PyTorch 1.5.0 and deployed on an NVIDIA GTX 2080 GPU. To ensure the fairness of the comparison, all methods adopt DeepLabv3+ [27] with MobileNetV2 [33] as the network backbone. We adopt the Adam optimizer for training with momentum parameters of 0.9 and 0.99, and a learning rate of 2e-3. The training epoch number of the Model Adaptation stage is set to 100 with a batch size of 4; meanwhile, the training epoch number of the Model Ensemble stage is set to 200 with a batch size of 2. ### Results of the Proposed Method In this section, we present qualitative and quantitative results produced by our proposed method. Fig. 4 shows examples of results produced by different stages of our method on different datasets. As observed, although they perform well on the source LiTS and RS datasets (as shown in the first row in Fig. 4), these well-trained models suffer a clear performance drop on the target BTCV and SRRSHA datasets (see the second row in Fig. 4). This phenomenon is caused by the domain shifts between the data acquired under different conditions. In terms of quantitative analysis, the well-trained liver segmentation model achieves 94.95% in DSC when tested on the source LiTS dataset, and the well-trained spleen segmentation model achieves 96.70% in DSC when tested on the source RS dataset.
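For reference, the DSC and ASD values quoted throughout can be computed as in the following sketch (the symmetric surface-distance variant is an assumption, since the paper does not spell out its exact definition):

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt, label):
    """Dice similarity coefficient for one organ label in two label maps."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

def average_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric average surface distance between two binary masks."""
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)
    sp_, sg_ = surface(pred.astype(bool)), surface(gt.astype(bool))
    dist_to_gt = ndimage.distance_transform_edt(~sg_, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~sp_, sampling=spacing)
    return np.concatenate([dist_to_gt[sp_], dist_to_pred[sg_]]).mean()
```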
However, the liver model only achieves 75.51% in DSC (-19.44%) on the SRRSHA dataset and 89.69% (-5.26%) on the BTCV dataset, and the spleen model only achieves 34.84% (-61.86%) in DSC on the SRRSHA dataset and 69.73% (-26.97%) on the BTCV dataset (see Table 2). Note that a more severe performance drop is observed in cross-modal (CT to MRI) segmentation, i.e., LiTS to SRRSHA and RS to SRRSHA, due to larger domain gaps. Therefore, the first step in employing off-the-shelf models is to adapt each well-trained model to the target datasets. Figure 4: Visual results produced by each stage of our method on the four abdomen datasets. The ground truths are delineated in red, the prediction results of liver segmentation are delineated in green, and the prediction results of spleen segmentation are delineated in yellow. To solve this problem, our model adaptation (MA) stage fine-tunes the models without access to corresponding source data. Comparing the second row (before adaptation) and third row (after adaptation) in Fig. 4, obvious false positives in the first column and false negatives in the sixth column are corrected after model adaptation. More specifically, on the SRRSHA dataset, the dice scores are improved from 75.51% to 85.16% (outperforming the unadapted teacher net by +9.65%) for liver segmentation, and from 34.84% to 79.91% (+45.07%) for spleen segmentation. On the BTCV dataset, the dice scores are improved from 89.69% to 90.64% (+0.95%) for liver segmentation, and from 69.73% to 85.79% (+16.06%) for spleen segmentation. Finally, to learn a single multi-organ segmentation model from the union of adapted single-organ segmentation models, we introduce the Model Ensemble stage. In this stage, the student net can segment the liver and spleen simultaneously without any harm to performance on either the liver or the spleen segmentation task. Quantitatively, the whole framework achieves a DSC of 88.59% (outperforming the adapted teacher net by +3.43%) and 80.78% (+0.87%) for the liver and spleen segmentation tasks on the SRRSHA dataset, and achieves 91.67% (+1.03%) and 89.69% (+3.90%) for liver and spleen segmentation on the BTCV dataset. The improved results demonstrate that our method can effectively acquire essential knowledge from two single-organ segmentation models and resolve the conflicts between the two models. The visual results shown in the fourth row in Fig. 4 are consistent with the quantitative results. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Source** & **Target** & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} \\ **Domain** & **Domain** & DSC & ASD & DSC & ASD \\ \hline \multirow{4}{*}{LiTS} & LiTS & 0.9495 & 2.93 & - & - \\ & SRRSHA & 0.7551 & 4.89 & - & - \\ & BTCV & 0.8969 & 4.20 & - & - \\ \hline \multirow{4}{*}{RS} & RS & - & - & 0.9670 & 0.18 \\ & SRRSHA & - & - & 0.3484 & 5.86 \\ \cline{1-1} & BTCV & - & - & 0.6973 & 10.11 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative analysis and comparison of well-trained models (w/o domain adaptation) verified on different domains. ### Comparison with Domain Adaptation Methods In this section, we conduct comprehensive comparison experiments between our method and the competing methods. Table 3 shows the quantitative results of our proposed framework and other methods, in which we also include the lower-bound method (teacher nets before domain adaptation) and the upper-bound method (fully supervised segmentation).
The “Source Access” column indicates what type of source domain information is used by each method. Specifically, “✓” means the method accesses the source dataset; “✗” means the method alters the structures of the source teacher models; “\(\times\)” means the method only uses the off-the-shelf source models, which neither needs the source data nor alters the source model structures. It is observed that all the competing methods can improve the performance over the lower-bound method, and our proposed method achieves the best performance in most segmentation tasks. Note that our method does not need to access source data, yet it performs even better than AdvEnt and MDAN, which use information from the source domains (Fig. 5, Table 3). The underlying reason is that AdvEnt and MDAN mainly focus on feature-level alignment, which only aligns high-level information but ignores pixel-level information. Figure 5: Visual comparison results between our proposed MA and different domain adaptation methods verified on the abdomen datasets. The ground truths are delineated in red, the prediction results of liver segmentation are delineated in green and the prediction results of spleen segmentation are delineated in yellow. ### Ablation study In this section, we specifically discuss the rationality of each module in our proposed method on the BTCV dataset. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline & & & & \multicolumn{4}{c}{**SRRSHA**} & \multicolumn{4}{c}{**BTCV**} \\ & **Method** & **Source Access** & **Source Domain** & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} \\ & & & & DSC & ASD & DSC & ASD & DSC & ASD & DSC & ASD \\ \hline **Upper-Bound** & Supervised & \(\times\) & SRRSHA / BTCV & 94.41 & 1.38 & 87.67 & 0.56 & 95.93 & 1.46 & 95.38 & 0.28 \\ \hline **Lower-Bound** & Teacher nets & \(\times\) & LiTS & 75.51 & 4.89 & - & - & 89.69 & 4.20 & - & - \\ & & & RS & - & - & 34.84 & 5.86 & - & - & 69.73 & 10.11 \\ \hline **SOS** & AdvEnt [31] & ✓ & LiTS & 84.38 & 3.70 & - & - & 91.54 & 2.62 & - & - \\ & & & RS & - & - & 76.77 & 1.91 & - & - & 85.49 & 2.72 \\ & AdaMI [14] & ✗ & LiTS & 62.92 & 5.68 & - & - & 89.64 & 4.50 & - & - \\ & & & RS & - & - & 55.04 & 4.66 & - & - & 83.60 & 3.84 \\ & DPL [16] & \(\times\) & LiTS & 80.85 & 4.21 & - & - & 89.97 & 4.91 & - & - \\ & & & RS & - & - & 73.79 & 2.47 & - & - & 71.65 & 5.35 \\ & MA (Ours) & \(\times\) & LiTS & 85.16 & 3.21 & - & - & 90.64 & 3.65 & - & - \\ & & & RS & - & - & 79.91 & **1.51** & - & - & 85.79 & 2.14 \\ \hline **MOS** & MDAN [32] & ✓ & both & 88.24 & 3.73 & **81.34** & 2.61 & **93.27** & 3.43 & 80.03 & 5.08 \\ & Ours & \(\times\) & both & **88.59** & **2.03** & 80.78 & 1.74 & 91.67 & **2.14** & **89.69** & **1.46** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative analysis and
comparison of different methods verified on different datasets. “SOS” represents single-organ segmentation, and “MOS” represents multi-organ segmentation. “✓” means the method accesses the source dataset; “✗” means the method alters the structures of the source teacher models; “\(\times\)” means the method neither needs the source data nor alters the source model structures. #### 4.4.1 Contribution of Each Component To validate the effectiveness of each component of our framework, we start from the baseline (single-organ segmentation using DeepLabV3+) and successively add our framework components. Specifically, our Model Adaptation (MA) stage includes two main components: the label refinement module (LRM) and the feature generalization module (FGM). Our Model Ensemble (ME) stage includes two components: label aggregation (LA) and feature aggregation (FA). As shown in Table 4, the performance increases when adding each component and our proposed full method achieves the best results. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multirow{2}{*}{**Method**} & **Source** & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} \\ & & **Domain** & DSC & ASD & DSC & ASD \\ \hline \multirow{2}{*}{**Teacher Net**} & Baseline & LiTS & 0.8969 & 4.20 & - & - \\ & (w/o domain adaptation) & RS & - & - & 0.6973 & 10.11 \\ \hline \multirow{4}{*}{**Adapted Teacher Net**} & LRM & LiTS & 0.9016 & 3.97 & - & - \\ & & RS & - & - & 0.8391 & 4.20 \\ & MA & LiTS & 0.9064 & 3.65 & - & - \\ & (LRM+FGM) & RS & - & - & 0.8579 & 2.14 \\ \hline \multirow{3}{*}{**Student Net**} & LA & Both & 0.9067 & 3.87 & 0.7305 & 9.18 \\ & ME & Both & 0.9092 & 4.05 & 0.7340 & 8.42 \\ & Ours (MA+ME) & Both & **0.9167** & **2.14** & **0.8969** & **8.14** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative analysis of each component verified on the BTCV dataset. The whole method experiences two stages: the Model Adaptation (MA) stage and the Model Ensemble (ME) stage. “LRM” represents the label refinement module, “FGM” represents the feature generalization module, “LA” represents the label aggregation, and “FA” represents the feature aggregation. #### 4.4.2 Investigation on Pseudo Label Refinement To verify the effectiveness of the pseudo label refinement strategy used in the Model Adaptation stage, we first present some examples of refined labels generated by LRM in Fig. 6. It is shown that our module is effective in reducing pixel-wise noises, especially when the predicted results are not confident. Besides, we also compare the LRM with the conventional pseudo-labeling method, and the quantitative results are shown in Table 5. Specifically, “Prediction” means directly employing network predictions as pseudo labels. From Table 5, we can see that our LRM attains better performance and is effective in refining pseudo labels. #### 4.4.3 Investigation on Ensemble Function To demonstrate the rationality of the ensemble function used in the Model Ensemble stage, we first provide the visual results of our ensemble function in Fig. 7. It is observed that our ensemble function resolves conflicts between multiple teacher nets and correctly segments the organs. We also replace our ensemble function with the average ensemble (directly averaging the softmax outputs of multiple teacher models) and the certainty-aware ensemble without entropy normalization (w/o Eq. 8). The comparison results are displayed in Table 6 and Fig. 8. By comparison, our proposed certainty-aware ensemble with entropy normalization exhibits the best performance.
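Since Eq. (8) is not reproduced in this excerpt, the following PyTorch sketch shows only one plausible reading of the certainty-aware ensemble: each teacher's per-pixel confidence is taken as one minus its normalized prediction entropy, and conflicting foreground claims are resolved in favour of the more certain teacher. Function and variable names are illustrative, not the authors' implementation.

```python
import math
import torch

def certainty_aware_ensemble(liver_prob, spleen_prob, eps=1e-8):
    """Fuse two single-organ teachers into one multi-organ label map.

    liver_prob, spleen_prob: (2, H, W) softmax maps (channel 0 = background,
    channel 1 = organ). Returns an (H, W) map: 0 background, 1 liver, 2 spleen.
    """
    def certainty(p):
        # pixel-wise entropy, scaled to [0, 1] by log(num_classes)
        entropy = -(p * torch.log(p + eps)).sum(dim=0) / math.log(p.shape[0])
        return 1.0 - entropy

    cert_liver, cert_spleen = certainty(liver_prob), certainty(spleen_prob)
    liver_fg = liver_prob[1] > 0.5
    spleen_fg = spleen_prob[1] > 0.5

    label = torch.zeros(liver_prob.shape[1:], dtype=torch.long)
    label[liver_fg] = 1
    label[spleen_fg] = 2
    # where both teachers claim foreground, keep the more certain teacher
    conflict = liver_fg & spleen_fg
    label[conflict & (cert_liver >= cert_spleen)] = 1
    return label
```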
#### 4.4.4 Investigation on Network Backbones To further verify the robustness of our method, we replace the network backbone from DeepLabV3+ [27] to Unet [2], and the results are shown in Table 7. The experimental results demonstrate that our proposed method is robust to different backbones, implying the stability and generalization of our method. Meanwhile, the results of the multi-source combinations (“D+D”, “U+U”, and “U+D”) also show that our method can flexibly merge knowledge from teacher models with different network structures, which suggests that our framework has good application prospects. #### 4.4.5 Extension to New Single-Organ Segmentation Model To further verify the applicability and adaptability of our proposed framework, we extend our model to a more complex scenario in which a new single-organ segmentation model (kidney) is introduced. The new single-organ segmentation model is trained on the Kidney tumor segmentation challenge (KiTS) dataset [34], which contains 210 CT scans (size range: \([29\sim 1059]\times[512\sim 796]\times[512\sim 796]\) voxels) with annotations for the kidney and kidney tumors. The in-plane spacing of CT slices varies from 0.44mm to 1.04mm, and the slice thickness varies from 0.5mm to 5.0mm. Figure 7: Examples of ensemble results produced by the Certainty-Aware Ensemble Function. The **green areas** are liver segmentation results and the yellow areas are spleen segmentation results. The **red areas** are the selection areas of the liver teacher model and the blue areas are the selection areas of the spleen teacher model. Figure 8: Examples of ensemble results produced by the Certainty-Aware Ensemble Function. The **green areas** are liver segmentation results and the yellow areas are spleen segmentation results. In the new Multi-Model Adaptation scenario, we aim to aggregate three single-organ models into one multi-organ model. The results are shown in Table 8, in which “Ours (two-to-one)” represents combining two source models into one target model while “Ours (three-to-one)” represents combining three source models into one target model. Experimental results demonstrate that our framework is still effective when introducing a new dataset. Besides, we also show that our framework is flexible enough to be extended to combine any number of source models. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Ensemble Function**} & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} \\ & DSC & ASD & DSC & ASD \\ \hline Average Ensemble & 0.9048 & 2.37 & 0.8419 & 1.90 \\ Certainty-Aware Ensemble (w/o normalization) & 0.9136 & 3.07 & 0.8575 & 2.22 \\ Certainty-Aware Ensemble & **0.9167** & **2.14** & **0.8969** & **1.46** \\ \hline \hline \end{tabular} \end{table} Table 6: Different ensemble functions verified on the BTCV dataset.
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Backbone**} & \multicolumn{2}{c}{**Source**} & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} \\ & & & **Domain** & DSC & ASD & DSC & ASD \\ \hline \multirow{4}{*}{**Teacher**} & Baseline & \multirow{4}{*}{DeeplabV3+} & LiTS & 0.8969 & 4.20 & - & - \\ & w/o & & RS & - & - & 0.6973 & 10.11 \\ & domain & UNet & LiTS & 0.9067 & 3.58 & - & - \\ & adaptation & & RS & - & - & 0.7438 & 7.96 \\ \hline \multirow{2}{*}{**Adapted**} & \multirow{4}{*}{DeeplabV3+} & LiTS & 0.9064 & 3.65 & - & - \\ & & RS & - & - & 0.8579 & 2.14 \\ \cline{1-1} & MA & & LiTS & 0.9111 & 2.95 & - & - \\ \cline{1-1} & & UNet & RS & - & - & 0.7675 & 6.78 \\ \hline \multirow{2}{*}{**Student**} & \multirow{2}{*}{Ours} & D+D & Both & 0.9126 & 2.14 & 0.8969 & 1.46 \\ & & U+U & Both & 0.9156 & 2.58 & 0.7781 & 7.21 \\ \cline{1-1} & & U+D & Both & 0.9173 & 2.07 & 0.8714 & 2.98 \\ \hline \hline \end{tabular} \end{table} Table 7: Quantitative analysis and comparison of different network backbones verified on the BTCV Dataset. U is short for Unet backbone, and D is short for DeeplabV3+ Backbone. U+D represents combining a teacher net with the Unet backbone and a teacher net with the DeeplabV3+ Backbone. ## 5 Discussion and Conclusions In this paper, we present a novel approach for multi-organ segmentation that addresses the challenge of limited annotated data. Specifically, we propose a Multi-Model Adaptation framework that leverages off-the-shelf single-organ segmentation models to learn a multi-organ segmentation model. The framework comprises two stages: Model Adaptation and Model Ensemble. In the Model Adaptation stage, we fine-tune each single-organ segmentation model to improve its generalization on unseen target datasets. In the Model Ensemble stage, we aggregate the knowledge from multiple adapted models to obtain a robust multi-organ segmentation model. We conducted extensive experiments on four abdominal image datasets, which demonstrated the feasibility and effectiveness of our approach. Despite advancements achieved, our method still falls short in terms of segmentation accuracy compared with supervised segmentation methods, which may limit its clinical applicability. Future research could focus on improving the segmentation performance by incorporating more knowledge about the organs, such as using shape adversarial prior[35] and active learning in such setting[36], and exploring the possibility of combining partial annotated models not limited to single-organ models. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & **Source** & \multicolumn{2}{c}{**Liver**} & \multicolumn{2}{c}{**Spleen**} & \multicolumn{2}{c}{**Kidney**} \\ & **Domain** & DSC & ASD & DSC & ASD & DSC & ASD \\ \hline **Baseline (w/o adaptation)** & KiTS & - & - & - & - & 0.8497 & 2.46 \\ **MA (ours)** & KiTS & - & - & - & - & 0.8829 & 1.34 \\ \hline **Ours (two-to-one)** & LiTS+RS & 0.9126 & 2.14 & 0.8969 & 1.46 & - & - \\ **Ours (three-to-one)** & LiTS+RS+KiTS & **0.9182** & **2.07** & **0.8984** & **1.51** & **0.9282** & **1.27** \\ \hline \hline \end{tabular} \end{table} Table 8: Quantitative analysis of introducing a new single-organ model verified on the BTCV dataset. “Ours (two-to-one)” represents combining two source models into one target model while “Ours (three-to-one)” represents combining three source models into one target model. 
## Acknowledgment This work was supported in part by the Major Scientific Research Project of Zhejiang Lab under Grant 2020ND8AD01, and by the Jiangsu Funding Program for Excellent Postdoctoral Talent.
2301.02685
Gauss-Manin equations for propagators in the case of arbitrary masses
We derive the complete list of singularities of propagators in theories with arbitrary (complex) masses and for an arbitrary diagram. We derive, in closed form, differential equations for the propagator as a function of the momentum and masses.
S. Srednyak
2023-01-06T19:05:56Z
http://arxiv.org/abs/2301.02685v2
# Gauss-Manin equations for propagators in the case of arbitrary masses ###### Abstract We derive the complete list of singularities of propagators in theories with arbitrary (complex) masses and for an arbitrary diagram. We derive, in closed form, differential equations for the propagator as a function of the momentum and masses. ###### Contents * 1 Introduction * 2 Acknowledgements * 3 Preliminaries * 4 Results * 5 Proofs * 5.1 Proof of Th1 * 5.2 Apparent singularities and non-Fuchsian regular singular points * 5.3 Proof of Th2 * 6 Discussion * 7 Comparison with the known literature * 8 Conclusion ## 1 Introduction In this paper we derive the Gauss-Manin connection for propagators in the case of arbitrary masses. Differential equations for perturbative amplitudes play an important role in their theory. They can be used for numerical evaluation of the amplitudes as well as for analytic study. There has been considerable interest in this topic for the case of propagators. In particular, the papers [1, 2, 3] address the case of sunrise graphs. More recently, there has been further work on the banana family [4, 5]. The first step in all these analyses is the derivation of differential equations for the function represented by the diagram. We observe that singularities of propagators can be completely analysed in closed form. In particular, we derive explicit equations for all of the singularities of the diagram function, relating them to the combinatorics of the diagram. Our derivation is based on the observation that in appropriate coordinates the Landau polynomials are linear functions of the coordinates. This result is non-trivial because it is known [6] that systems with regular singularities can possess higher order poles and apparent singularities. We use tools from the theory of hypergeometric functions and holonomic D-modules to obtain our results. ## 2 Acknowledgements The author was supported in part by BNL LDRD 21-045S NPP. ## 3 Preliminaries We consider the standard propagators in massive theories [7] that can be written in the form \[J_{m}(p,m_{i})=\int\frac{1}{\prod((q_{i}+\delta_{i}p)^{2}+m_{i}^{2})}q^{m} \prod_{a=1}^{L}d^{d}q_{a} \tag{1}\] where \(\delta_{i}=0,1\), in general depending on the choice of momentum flow. Here \[q_{i}=\sum_{a=1}^{L}l_{i,a}q_{a} \tag{2}\] \[l_{i,a}=0,\pm 1 \tag{3}\] where \(q_{a}\) is a basis of loop momenta and \(l_{i,a}\) are combinatorial coefficients defined by the momentum flow. In particular, this family includes the unequal mass sunrise integrals considered in [3]. These integrals have the form \[I=\int d^{d}q_{1}d^{d}q_{2}\frac{1}{q_{1}^{2}+m_{1}^{2}}\frac{1}{q_{2}^{2}+m_ {2}^{2}}\frac{1}{(q_{1}+q_{2}+p)^{2}+m_{3}^{2}} \tag{4}\] ## 4 Results In this section we formulate our main results. **Th1** _(Characterization of the singularity locus of the propagator). The singularity locus of the propagator is given by the set_ \[x=r_{i_{1}}m_{i_{1}}+r_{i_{2}}m_{i_{2}}+...+r_{i_{s}}m_{i_{s}} \tag{5}\] _where_ \(r_{i_{*}}\) _are rational numbers that depend on the diagram. There is a finite set of such numbers, the cardinality of which grows exponentially with the number of loops._ \(\square\)__ **Prop.**_The singularities of the propagator in the masses are located on the set_ \[\sum r_{i}^{\prime D}m_{i}=0 \tag{6}\] _for some integer numbers_ \(r_{i}^{\prime D}\) _that depend on the diagram._ **Th2** _(Differential equations for the propagator)._
The propagator satisfies the following system of equations_ \[\frac{\partial f^{D}}{\partial z_{k}}=(\sum_{R=\{r_{i_{1}}...r_{i_{s}}\}}\frac{A_{k,R}^{D}}{x+r_{i_{1}}m_{i_{1}}+r_{i_{2}}m_{i_{2}}+...+r_{i_{s}}m_{i_{s}}})f^{D} \tag{7}\] _where the sum is extended over the set described in Th1. Here_ \(D\) _denotes a diagram and_ \(z_{k}\in\{\sqrt{p^{2}},m_{1},...,m_{I}\}\) _is any of the variables on which the diagram function depends. The matrices_ \(A_{k,I}^{D}\) _depend only on the coupling, the dimension and the diagram topology, and have no dependence on the masses or the momentum._ _An analogous formula holds for the sum over all diagrams._ \(\square\)__ ## 5 Proofs ### Proof of Th1 We will carry out the proof only for leading singularities. The analysis of subleading singularities reduces to the case of subdiagrams. For a leading singularity, there is a set of propagators \(D_{i_{s}}(p,q_{i})\) that develop a vanishing cycle in their intersection \[D_{I}=\cap_{i_{s}\in I}D_{i_{s}} \tag{8}\] We will consider the vanishing cycles at finite distance in q-space. The emergence of vanishing cycles corresponds to degeneracy of the system of normals to the varieties \(\{q:D_{i}(p,q)=0\}\). Note that these varieties are highly degenerate. Each of them has as singularity locus a \((L-1)d\)-dimensional plane in q-space.1 Nonetheless, our criterion still works. This can be seen by applying the general theory of vanishing cycles as applied to singular schemes [8, 9]. Footnote 1: This is most easily seen in an example. The quadric \(x_{3}^{2}+x_{4}^{2}=0\) in \(\mathbb{C}^{4}\) with coordinates \(x_{1},x_{2},x_{3},x_{4}\) has the plane \((x_{1},x_{2},0,0)\) as its singularity. The criterion formulated above results in the equations \[\sum b_{i}(\delta_{s,i}p+q_{s,i,1}+...+q_{s,i,r_{i}})=0,\ s\in\{0,...,L\} \tag{9}\] for some \(b_{i}\) that are not all zero. These equations can be solved for \(q_{i}\) as \[q_{a}=\alpha_{a}p \tag{10}\] for certain coefficients \(\alpha_{a}\). The singularities develop only when all vectors are collinear. Then the equations \(D_{s}(p,q)=0\) can be rewritten in the form \[D_{s}=(\delta_{s}p+q_{s,1}+...+q_{s,r_{s}})^{2}-m_{s}^{2}=(\delta_{s}+a_{s,1}+...+a_{s,r_{s}})^{2}p^{2}-m_{s}^{2}=0 \tag{11}\] or \[\delta_{s}+a_{s,1}+...+a_{s,r_{s}}=\pm m_{s}/x \tag{12}\] After elimination of the variables \(a_{s,i}\) we obtain our claim. As a byproduct of our proof, we obtain the following condition on the singularities \[\begin{vmatrix}l_{i_{1},1}&l_{i_{1},2}&...&l_{i_{1},L}&1-\frac{m_{i_{1}}}{x}\\ l_{i_{2},1}&l_{i_{2},2}&...&l_{i_{2},L}&1-\frac{m_{i_{2}}}{x}\\...&&&\\ l_{i_{L+1},1}&l_{i_{L+1},2}&...&l_{i_{L+1},L}&1-\frac{m_{i_{L+1}}}{x}\\ \end{vmatrix}=0 \tag{13}\] The case of the vanishing cycle at infinity is simpler. In this case, we simply drop the terms \(m_{i}^{2}\) and consider \(q_{i}\) as homogeneous coordinates on the projective space \(\mathbb{CP}^{Ld-1}\). Considerations similar to the above lead to the equation \[p^{2}=0 \tag{14}\] which is the only leading Landau singularity at infinity. ### Apparent singularities and non-Fuchsian regular singular points In this subsection we discuss Th2 from the point of view of the general theory of regular singularities. The existence of the Gauss-Manin connection for the periods of rational forms was established in [10]. The block structure of the connection is discussed in [11].
The statement of our theorem does not follow from general principles concerning bundles with a given set of regular singular points because: * There can be higher order poles that nonetheless lead to regular solutions. * There can be apparent singularities. Case 1 is documented in [12], chapter 2. The study of the reduction of the order of poles by algebraic gauge transformations can be very fruitful and has led to computational algorithms, see [13, 14, 15]. Case 2 is the subject of numerous works [6, 16, 17, 18, 19, 20, 21]. It has an interesting relation to isomonodromy [22, 23] and to transcendental number theory [20, 21]. These examples preclude the conclusion of our statement from general principles and necessitate the use of algebraic methods. From general principles, we could only conclude the following form of the connection \[\frac{dJ}{dx}=\sum_{k}\sum_{I_{1},\ldots,I_{k}}\frac{A}{\prod_{I}(x-x_{I})^{n_{I}}}J \tag{15}\] where the \(x_{I}\) include the singularities given by the vanishing cycle condition and perhaps some additional singularities. Some of the singularities resulting from the vanishing cycle condition can actually be apparent singularities. The decision process that would allow one to find out which of these singularities are apparent proceeds through the analysis of the action of the intersection pairing matrix for middle-dimensional cycles on the hypersurface \(\{q|P(q)=0\}\) on the original cycle of integration, and determining which elements of the orbit of such action correspond to long cycles at each of the components of the vanishing loci. ### Proof of Th2 To prove our theorem, we embed the integral (1) into a family of integrals, generically deforming its coefficients (a versal deformation in the terminology of [10]). That is, we consider the integral \[I(L)=\int_{\Delta}(L(x))^{a}D(x)^{b}d^{d}x \tag{16}\] where the coefficients of the polynomial \(L\) are now generic complex numbers. For such integrals, the existence of the Gauss-Manin connection was established in [11]. An alternative, constructive approach can be obtained as follows. First, according to [25] there exists a holonomic D-module that this hyperfunction \(I(L)\) satisfies. It is given, constructively, by the ideal generated by \[\sum_{\omega}\omega_{\mu}L_{\omega}\frac{\partial}{\partial L_{\omega}}-\beta_{\mu} \tag{17}\] \[\prod_{\omega\in\Omega^{\prime}}\frac{\partial^{a_{\omega}}}{\partial L_{\omega}^{a_{\omega}}}-\prod_{\omega\in\Omega^{\prime\prime}}\frac{\partial^{a_{\omega}}}{\partial L_{\omega}^{a_{\omega}}} \tag{18}\] This holonomic D-module is equivalent, on the generic stratum, to the Gauss-Manin connection (see [26, 27]). This equivalence is obtained by the choice of a basis of integrals. One such choice is provided by \(I_{m}(L)=\int_{\Delta}(L(x))^{a}D(x)^{b}x^{m}d^{d}x\). The Gauss-Manin connection can be written as \[\frac{\partial}{\partial L_{\omega}}I_{m}(L)=\sum_{i,m^{\prime}}\frac{A_{\omega,i,m,m^{\prime}}(L)}{D_{i}(L)}I_{m^{\prime}}(L) \tag{19}\] where \(D_{i}(L)\) are the equations for the components of the discriminantal locus of \(L\). Our theorem then follows by restricting the Gauss-Manin connection to a 1-dimensional subspace of the deformed parameter space \(\{L\}\) [26]. ## 6 Discussion Our result fits propagators of quantum field theories into the family of equations \[\frac{\partial f(x,l)}{\partial x_{k}}=(\sum\frac{A_{k,I}}{l_{I}})f \tag{20}\] where \(A_{k,I}\) are constant matrices and \(l_{I}\) are functions linear in the variables \(x_{i}\).
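As a concrete illustration of Th1 and of systems of the type of Eq. (20), the sketch below first enumerates the finite-distance singular points of the unequal-mass sunrise integral of Eq. (4) by the elimination described in the proof of Th1, and then shows how a solution of such a Fuchsian system could be transported numerically along a path avoiding the singular points. The residue matrices and pole locations in the second part are placeholders, not values computed for any particular diagram.

```python
import itertools
import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

# --- Th1 for the sunrise integral of Eq. (4): singularities at x = +-m1 +-m2 +-m3
x = sp.symbols('x')
m1, m2, m3 = sp.symbols('m1 m2 m3', positive=True)
singular_points = set()
for s1, s2, s3 in itertools.product((1, -1), repeat=3):
    # a_1 = s1*m1/x and a_2 = s2*m2/x from Eq. (12) with delta = 0,
    # inserted into the third propagator relation, which has delta = 1
    eq = sp.Eq(1 + s1 * m1 / x + s2 * m2 / x, s3 * m3 / x)
    singular_points.update(sp.expand(sol) for sol in sp.solve(eq, x))
print(singular_points)  # the eight combinations +-m1 +- m2 +- m3, i.e. r_i = +-1

# --- transporting a solution of a Fuchsian system of the form dJ/dx = sum_I A_I/(x-x_I) J
x_poles = np.array([1.0, 2.5, 4.0])                  # hypothetical singular points x_I
A = [np.array([[0.0, 1.0], [0.0, -0.5]]),
     np.array([[0.2, 0.0], [1.0, 0.0]]),
     np.array([[-0.1, 0.3], [0.0, 0.4]])]            # hypothetical residue matrices A_I

def rhs(t, J):
    M = sum(Ai / (t - xi) for Ai, xi in zip(A, x_poles))
    return M @ J

sol = solve_ivp(rhs, (0.0, 0.9), np.array([1.0, 0.0]), rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])                                  # value of the transported solution at x = 0.9
```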
This family of functions provides a natural generalization of the Grassmannian hypergeometric functions considered in [27, 28]. It is desirable to obtain a combinatorial characterization of their solutions. To solve this problem, it is necessary to consider their dependence on the parameters \(l_{I,i}\). This dependence leads to the study of irregular singularities [29, 30]. A full solution of this problem would involve quantum groups [31, 32]. ## 7 Comparison with the known literature In the papers [3, 1] deep properties of the sunrise family were studied. Sunrise graphs fall inside the class of diagrams that we consider. While the equations derived in [3, 1] are seemingly more complicated, they were in fact derived earlier in [33]. The examination of formulas (7) and (12) of [33] shows that the equations of [33] can be transformed into the form stated in our theorem. The polynomial D in this paper can be decomposed into a product of linear forms, after which a partial fractioning procedure can be applied. Our method can be applied to the family of banana graphs [5, 4]. ## 8 Conclusion In this paper we obtained the Gauss-Manin connection for propagators that depend on an arbitrary set of masses. We hope our results can be useful for the numerical computation of propagators and self-energies.
2306.09777
Smart Sentiment Analysis-based Search Engine Classification Intelligence
Search engines are widely used for finding information on the internet. However, there are limitations in the current search approach, such as providing popular but not necessarily relevant results. This research addresses the issue of polysemy in search results by implementing a search function that determines the sentimentality of the retrieved information. The study utilizes a web crawler to collect data from the British Broadcasting Corporation (BBC) news site, and the sentimentality of the news articles is determined using the Sentistrength program. The results demonstrate that the proposed search function improves recall value while accurately retrieving nonpolysemous news. Furthermore, Sentistrength outperforms deep learning and clustering methods in classifying search results. The methodology presented in this article can be applied to analyze the sentimentality and reputation of entities on the internet.
Mike Nkongolo
2023-06-16T11:27:00Z
http://arxiv.org/abs/2306.09777v1
# Smart Sentiment Analysis-based Search Engine Classification Intelligence ###### Abstract Search engines are widely used for finding information on the Internet. However, there are limitations in the current search approach, such as providing popular but not necessarily relevant results. This research addresses the issue of polysemy in search results by implementing a search function that determines the sentimentality of the retrieved information. The study utilizes a web crawler to collect data from the British Broadcasting Corporation (BBC) news site, and the sentimentality of the news articles is determined using the Sentistrength program. The results demonstrate that the proposed search function improves recall value while accurately retrieving nonpolysemous news. Furthermore, Sentistrength outperforms deep learning and clustering methods in classifying search results. The methodology presented in this article can be applied to analyze the sentimentality and reputation of entities on the Internet. Web mining, NLTK, BM25, Word2vec, inverted index, search engine optimization, sentiment analysis, tokenization, sentistrength ## I Introduction Nowadays, popular news sites like China News and BBC News incorporate search functions, but these functions often lack advanced options like sentiment analysis. Users typically search for news using keywords, and search engines prioritize results based on relevance and popularity, rather than the user's specific expectations [1, 2]. This leads to a problem when a keyword has multiple meanings, as the desired results may not be obtained. While search engines like Google PageRank prioritize recent and reliable results, they may not always be relevant to the user's query [1]. For instance, a search for _apples_ might yield results related to both fruits and phones, highlighting the issue of polysemy. To address this, users should be able to provide additional information to refine the search scope and improve accuracy. In this research, we focused on a problem that has received limited attention in existing articles. For instance, a previous study [3] manually collected search results and employed anomaly detection and neural networks for classification. In contrast, we implemented a search function on the search engine itself to improve the classification of search results. The proposed approach allowed for automatic and efficient data collection compared to the manual approach used in [3]. Furthermore, the method incorporates sentiment analysis, aiming to provide a framework that quickly retrieves desired internet data, classifies patterns, and assesses the online presence of entities. By applying the search function to BBC news and employing the Sentistrength algorithm, we were able to detect the polarity (negative, neutral, or positive) of the news content. This research investigates whether the classified search results impact the accuracy of the Sentistrength algorithm in sentiment analysis. We constructed a database containing diverse categories of news, although the search function was not capable of searching the entire Internet. Nonetheless, it collaborated with the search engine to gather data automatically. Each news article in the database was labeled, enabling the application of the Sentistrength computation. Furthermore, we implemented fixed tags that were categorized to enhance the organization of the data. 
The structure of this article consists of the background, which is discussed in Section II, followed by the research methodology presented in Section III. The results obtained are elaborated upon in Section IV, and finally, the conclusion and recommendations are provided in Section V. ## II Background The authors in [4] utilized deep learning and machine learning techniques to perform sentiment analysis on COVID19 data. The experimental results demonstrated that their proposed approach achieved an accuracy ranging from 93% to 95%, outperforming other techniques examined in the study. Another study conducted by [5] employed a Natural Language Processing algorithm to segment a collection of medical publications. In their methodology, both the words and the information contained within these segmented words were analyzed. The methodology focused on evaluating the effectiveness of the Natural Language Processing algorithm in the field of medicine. However, it should be noted that in this particular study, the algorithm was solely applied to process the content of headlines without utilizing pattern matching. The benefits of employing a search engine optimization approach were discussed in [6]. The study utilized Google to extract valuable information such as website browsing duration and user location, which are critical factors for optimizing search engine performance. These features can be utilized to analyze user behavior and improve the overall search engine experience. In our methodology, when users input keywords on the search engine's homepage, only a predefined number of classified BBC news articles are displayed. By incorporating users' behaviors into our search function, it is possible for the search engine to make predictions about users' intended meanings when they enter polysemous keywords. For instance, if a user frequently searches for animal-related information, and they input the keyword _Jaguar_, the search engine can anticipate that the user is likely looking for information about the animal rather than the car. This represents the proposed optimal classification solution for addressing the polysemic problem discussed in this article. In a related study, [7] introduced an inverted index method that enables rapid execution of temporal queries. The applicability of this method was demonstrated using a COVID19 dataset. To enhance COVID-19 research, a novel approach called reverse sort-index was implemented, enabling real-time querying. This approach included various query categories, such as non-temporal, relative temporal, and absolute temporal. Experimental results demonstrated the effectiveness of the reverse sort indexing method compared to existing techniques, particularly in facilitating fast query execution for search engines. The inverted index proved to be efficient in retrieving nonpolysemic big data. In our methodology, we focused on searching and storing 800 BBC news articles in the database. However, the proposed method can be further applied to other search engines to extract a larger volume of data. It is worth noting that sorting indexes, as discussed in [7], is also an important aspect to consider. In their study, Xu et al. [8] developed a text-driven framework for aircraft fault diagnosis. The framework incorporated Word2vec and Convolution Neural Network (CNN) techniques. Numerous text files were utilized in the experiments, where Word2vec was employed to retrieve textual pattern vectors. 
These vectors were then passed to the CNN, which made the final decision regarding aircraft fault diagnosis. The CNN model incorporated a Cloud Similarity Measure (CSM) to enhance the performance of the classifier and provide support for aircraft maintenance. By combining unstructured and structured patterns, [8] successfully determined the cause of the aircraft fault. In our approach, we employed Word2vector to extract the content of search results. Based on the similarity of vectors, we recommended search results to users. By employing this approach, users have the ability to view search results based on the keywords they entered and the similarity of the search results. Many existing search engines utilize similar methodologies but with variations in their optimization techniques. For instance, Reddy et al. [5] applied Natural Language Processing to extract nonpolysemic information from a vast collection of medical data. Similarly, in our research, we utilized Natural Language Processing to select relevant information from BBC news. The significance of search engine optimization was emphasized in Pawade et al. [6], where they employed an inverted index search method on a large dataset to enhance search engine performance. Similarly, to the approach described in [6], we have employed an inverted index technique in our research. However, our implementation consists of a smaller number of entries compared to the study mentioned. This highlights the potential of our search function to be applied in searching for large volumes of data. ## III Research Methodology To enhance the accuracy of the search function and address the problem of polysemy, a search feature was developed specifically for categorizing and identifying BBC news based on user-defined categories. In order to collect the necessary data, a web mining methodology was employed to extract the content from web documents. The data retrieval process involved crawling the BBC news website using a web crawler, as described in [9]. Following the pre-processing and inverted index stages using the NLTK toolkit, the obtained data was stored in a database for further analysis. ### _The BBC data_ Due to its status as the largest news broadcaster in the world [10], we chose to crawl data from the BBC website, which ensured the reliability and credibility of the news content examined in this research [11, 10]. Furthermore, the BBC news website covers a wide range of topics, making it well-suited for our search function. For our data sample, we randomly selected 800 BBC news articles, ensuring a stochastic selection process to maintain the reliability of the search results. These articles were stored in a database table called _news_, containing fields such as the Uniform Resource Locator (URL), content, date (dt), title, and label. The label field indicates the classified information associated with each BBC news article. Finally, we performed sentiment analysis on the news table to determine the overall tone of the content, which can be valuable in assessing the influence and impact of BBC news on the web. ### _Crawling_ Web crawling involves the automated search for information on the Internet using predefined rules or conditions. In our research, we employ a web crawler due to its numerous advantages in efficiently downloading web pages and enabling multi-threading. For web crawling, we utilize a Python-based Google App Engine (PGAE), as introduced in [12]. 
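The news table described above can be sketched with SQLite as follows; the column types, and the choice of SQLite itself, are assumptions, since the paper only lists the field names (URL, content, dt, title, label).

```python
import sqlite3

conn = sqlite3.connect("bbc_news.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS news (
           url     TEXT PRIMARY KEY,   -- Uniform Resource Locator of the article
           content TEXT,               -- cleaned article body
           dt      TEXT,               -- publication date
           title   TEXT,               -- headline
           label   TEXT                -- category used by the classified search
       )"""
)
conn.execute(
    "INSERT OR REPLACE INTO news (url, content, dt, title, label) VALUES (?, ?, ?, ?, ?)",
    ("https://www.bbc.com/news/example", "article text ...", "2023-01-01",
     "Example headline", "travel"),    # illustrative values only
)
conn.commit()
conn.close()
```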
The configuration container includes a core file that encompasses essential parameters such as the interval time between two fetches, the directory path for storing the retrieved data, and the designated start and end times for the crawling process. The interval time between two fetches regulates the frequency at which the website is crawled to retrieve data. If the crawling frequency is set too high, there is a risk of the internet Protocol (IP) address being blocked. In the data engineering process, we made use of HyperText Markup Language (HTML) div and span tags to identify the structure of the documents. Additionally, HTML components such as paragraph (p) and emphasis (em) were utilized to accurately represent the semantics of the web content. However, the inclusion of div and span tags can hinder the accessibility of the web content. To address this, we cleaned the crawled features by removing any div tags from the web pages. The resulting cleaned data was then stored in a JSON file, which will subsequently be transferred and stored in a database in the form of a table. In this table (news), the headers of the web pages were defined to store the BBC news. Refer to Fig. 1 for an illustration of the table storing the BBC news. ### _Search function implementation_ We chose the NLTK (Natural Language Toolkit) to implement the search function, which involved tokenizing sentences into individual words. By utilizing the inverted index, we were able to enhance the retrieval of pertinent information. Additionally, the Word2vec model was employed to compute the similarity between news articles. The Sentistrength algorithm was applied to perform sentiment analysis on the collected data, as described in [13]. ### _The NLTK and inverted index_ In this study, the Python NLTK library was utilized, specifically employing two methods: NLTK tokenize and NLTK stem. The NLTK tokenize function was employed to segment sentences into individual words, with spaces serving as the delimiters. Similarly, the NLTK stem function was applied to normalize words by transforming them into an acceptable format, such as converting past tense to present tense. For the purpose of this research, the NLTK tool was used to tokenize the search queries entered by users in the search engine. The proposed search function then performs searches based on the tokenized words derived from the search results. During the data cleaning phase, irrelevant words, known as stop words, were eliminated from the dataset. Stop words are commonly occurring words in a language that have little semantic significance. In this study, a list of six common determiners (a, that, the, an, and, those) was identified as irrelevant words in the web text. These determiners assist in describing nouns and expressing concepts related to localization or numbers. To enhance computational efficiency, stop words were removed from the dataset. By eliminating stop words, the number of indexes in the corpus was reduced, resulting in improved retrieval efficiency. Ignoring stop words in this research aimed to enhance the accuracy of the search function. For example, when searching for _apples_, the search engine may display 100 search results. However, if the search query includes _bananas_ and _apples_, the search function would tokenize the sentence into three parts: (banana), (and), and (apples). Without removing the stop word, this approach could lead to errors and potentially retrieve more than 100 search results. 
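A condensed sketch of the crawling and pre-processing steps described above: fetch a page, unwrap the layout-only div and span tags, keep the headline and paragraph text, then lowercase, strip punctuation, remove stop words, and stem the remaining tokens. The URL is illustrative; plain whitespace splitting stands in for the NLTK tokenizer, while the stop-word list and stemmer come from NLTK as in the paper.

```python
import json
import re
import requests
from bs4 import BeautifulSoup
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def fetch_article(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for tag in soup.find_all(["div", "span"]):   # drop layout-only wrappers
        tag.unwrap()
    title = soup.find("h1").get_text(strip=True) if soup.find("h1") else ""
    content = " ".join(p.get_text(strip=True) for p in soup.find_all("p"))
    return {"url": url, "title": title, "content": content}

def preprocess(text):
    text = re.sub(r"[^\w\s]", " ", text.lower())   # remove punctuation, lowercase
    return [stemmer.stem(t) for t in text.split() if t not in stop_words]

article = fetch_article("https://www.bbc.com/news/example")   # hypothetical URL
article["tokens"] = preprocess(article["content"])
with open("bbc_news.json", "a", encoding="utf-8") as f:
    f.write(json.dumps(article) + "\n")
```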
To improve the accuracy of computation, stop words were removed from the dataset in this research. The Python library used in the implementation of the search function included pretrained models and corpora. Inverted indexing was adopted to facilitate information retrieval based on attribute values. Each entry in the index table contained a specific attribute and the address of records with that attribute value. The position of the record was determined by its attribute value. An inverted index file was computed and created, based on tokenizing and cleaning the words from the BBC articles using NLTK. These words were then indexed and stored in the database for sentiment analysis. When a user searches for these words, the index allows for rapid retrieval of relevant search results. The table depicted in Fig. 2 illustrates the storage of these indexes. Fig. 1: The BBC news table Fig. 2: The database inverted index table In the table shown in Fig. 2, the first column stores the search keywords entered by the user. The second column indicates the number of indexes that match the keywords, and the third column stores the specific keywords that the index corresponds to. For example, if a user searches for BBC, the table would display 697 indexes and articles that match that keyword. This implementation of inverted indexing enhances concurrency and automates the generation of attribute values, thereby determining the location of the corresponding records. ### _The BM25 and Sentistrength algorithms_ The BM25 technique is a ranking method commonly used by search engines to estimate the relevance of a document based on a specific search query. In this approach, the user's keywords are tokenized into individual words, and these words are matched with the index file to determine their occurrences. A score is then calculated to measure the similarity of each web article to the search query. The search results are subsequently sorted based on this score. Compared to the traditional Term Frequency Inverse Document Frequency (TFIDF) approach, BM25 incorporates adjustable parameters that enhance its power and flexibility. These parameters allow for more fine-tuning of the ranking process, resulting in improved search accuracy and effectiveness. The BM25 algorithm also incorporates the concept of average document length, which accounts for the length of a document relative to the average length of the documents in the collection. Sentistrength, on the other hand, is a sentiment strength detection program that utilizes non-lexical linguistic rules and information. Previous experiments have demonstrated the reliable performance of Sentistrength in sentiment analysis for web mining tasks. In the Sentistrength approach, each textual pattern is assigned three scores: negative, positive, and neutral. The overall polarity of the textual data is then computed by considering the emotional valence indicated by these scores. Equation 1 represents the calculation of the overall polarity as the average of the three scores. \[\text{polarity}=\frac{\text{positive}+\text{neutral}+\text{negative}}{3} \tag{1}\] Consequently, by averaging the polarity scores on a scale ranging from -5 (indicating negativity) to 5 (indicating positivity), the final polarity of the text can be determined. A score of 0 corresponds to a neutral polarity assignment. Fig. 3 illustrates the framework of our search function. The NLTK is utilized to tokenize the BBC news into words, where spaces are used as separators.
Punctuation and full stops are removed using Regular Expressions, and all words are converted to lowercase. Irrelevant words with no meaning are eliminated, and word normalization is performed through stemming. The resulting words are restricted to their primitive form to ensure complete search functionality. The inverted index enables quick and efficient keyword searches, as each cleaned word has an index that corresponds to specific BBC articles. The search results are ranked using the BM25 algorithm, which calculates scores based on the similarities between keywords and web articles. The search function module searches for users' keywords in the index table and retrieves the corresponding indexes to find matching news. In cases where entered keywords are not found in the index table, a Regular Expression matching process is applied to identify the most similar indexes and words. The interface module serves as the front end of the framework, displaying the search results to users based on their input keywords. The content page is used to present the search results to the user. Fig. 4 illustrates the process of the search function, which incorporates sentiment analysis. The data collection phase involves using a web crawler to gather news from the BBC website, which is then stored in a JSON file and transferred to the database. Fig. 4: Search function and sentiment analysis process Fig. 3: The search function framework The pre-processing module automates the extraction of relevant patterns from the stored news data in the database. This includes tokenizing the news features into words using the NLTK, removing punctuation and converting to lowercase, removing stop words, and normalizing the text through stemming. The cleaned words are then converted into indexes that represent the matched BBC articles, and these indexes are stored in a posting table in the database. During the search process, users input keywords through the web interface, which are searched against the database index table. The matching keywords are then used to retrieve the corresponding web articles, and in some cases, the results are also passed to the BM25 tool. The sorted search results are displayed on the web page for the users to see. Lastly, sentiment analysis is implemented on the database to evaluate the polarity of the results associated with the input keywords. ### _Evaluation metrics_ The search method is evaluated by comparing the proposed search function with a normal search function. Recall and precision rates are computed to assess the performance of the proposed search function. The sentiment analysis is also evaluated using the Sentistrength algorithm and BBC news data, with recall and precision metrics [14]. Finally, the recall and precision rates of both search functions are compared to the performance of Sentistrength. Precision (P) is calculated as the percentage of relevant news (RN) among the total number of retrieved news (TRN), as shown in Equation 2. \[P=\frac{RN}{TRN}\times 100 \tag{2}\] The recall (R) is a metric to measure the importance of the data retrieval module [15]. It is expressed in Equation 3. \[R=\frac{RN}{TNS}\times 100 \tag{3}\] Where TNS denotes the total amount of relevant news. ### _Experimental results_ In this section, we will calculate the recall and precision rates for the data using both the normal and classified search functions (Fig. 12). Additionally, we evaluate the performance of the Sentistrength algorithm using the same metrics [3, 16]. 
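To make the retrieval and evaluation steps concrete, the sketch below builds the inverted index over pre-processed articles, ranks candidate articles for a query with BM25 (using the rank_bm25 package as a stand-in for the paper's implementation), and computes the precision and recall of Equations (2) and (3). The example articles and relevance judgments are illustrative only.

```python
from collections import defaultdict
from rank_bm25 import BM25Okapi

# pre-processed (tokenized, cleaned, stemmed) stand-ins for the 800 BBC articles
docs = [["covid", "vaccin", "rollout", "continu"],
        ["travel", "restrict", "eas", "case", "fall"],
        ["covid", "travel", "rule", "chang"]]

# inverted index: token -> ids of the articles containing it
inverted_index = defaultdict(set)
for doc_id, tokens in enumerate(docs):
    for tok in tokens:
        inverted_index[tok].add(doc_id)

def search(query_tokens, top_k=10):
    candidates = set()
    for tok in query_tokens:
        candidates |= inverted_index.get(tok, set())
    scores = BM25Okapi(docs).get_scores(query_tokens)      # BM25 relevance scores
    return sorted(candidates, key=lambda i: scores[i], reverse=True)[:top_k]

def precision_recall(retrieved, relevant):
    rn = len(set(retrieved) & set(relevant))                        # relevant retrieved news (RN)
    precision = 100.0 * rn / len(retrieved) if retrieved else 0.0   # Eq. (2), TRN = |retrieved|
    recall = 100.0 * rn / len(relevant) if relevant else 0.0        # Eq. (3), TNS = |relevant|
    return precision, recall

results = search(["covid", "vaccin"])
print(results, precision_recall(results, relevant=[0]))
```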
We also compare the obtained results with existing methodologies, such as deep learning and clustering. The following section provides a detailed analysis of the recall and precision rates of the classified search function in comparison to the normal search (Fig. 12). The Sentistrength algorithm will be evaluated using the normal and classified search results, utilizing the same metrics. Furthermore, a comparative analysis will be conducted, considering the results obtained from deep learning and clustering methodologies. Fig. 8: Searching for Covid vaccine news. In Fig. 12, the classified search function achieved a recall rate of 99% and a precision of 50% for the Covid category. For the vaccine category, the classified search function achieved a recall rate of 90% and a precision of 33.8%. Additionally, in the travel category, the recall rate of the search function was 95% with a precision of 45%. These experimental findings highlight that the classified search function improves the recall rate by sacrificing precision. Therefore, the proposed search function approach has the potential to enhance the recall rate for internet news searches. It should be noted that polysemous words were utilized to test the search function, as they possess different meanings in various contexts, such as vaccine and travel. The classified search function effectively addresses the challenge of polysemy. The sentiment analysis computation, based on the data presented in Fig. 12, is visualized in Fig. 13. The results indicate a neutral polarity for BBC news (Fig. 13). Moreover, the Sentistrength algorithm significantly improves the recall and precision rates of the classified search function, achieving 100% and 75%, respectively. In comparison, deep learning and clustering methods achieved a precision of 100%. ## V Conclusion and discussion This study focuses on improving the precision of the classified search function using the Sentistrength program. Although the experiments were conducted on a single database, the results can serve as a reference for search engine optimization studies. The proposed search function includes options for users to select different categories based on their specific news preferences. However, there are limitations to consider. The methodology cannot filter out fake news sites from the search results, and the number of categories may not cover all relevant groups for users. Expanding the list of categories can enhance search efficiency. Additionally, the search function algorithm can be enhanced by incorporating advanced classification schemes and intelligent tagging using machine learning. Future work could involve the automatic creation of search categories using artificial intelligence.